I don't really believe you ran either the Mojo or the Julia code. There's no way your single-threaded C code outperformed multi-threaded, SIMD-optimized Julia or Mojo. It's flat-out impossible.
The only other explanation is if you ran the non-simd Julia version under a single thread.
I did. Running with threads improves performance by 50%, but is still nowhere near C performance. My machine only has two cores so threading doesn't help much.
That's interesting. It makes sense that a two-core machine doesn't benefit much from multithreading, but "nowhere near C performance" is pretty surprising. I'll try out both programs this weekend on my own fairly anaemic machine and see how they fare for me. Thanks for responding!
Cool. If Julia runs much faster for you than for me I'd be interested in hearing it. I was honestly surprised the performance was so bad so perhaps I did something wrong.
Would the C compiler automatically exploit vectorized instructions on the CPU, or loop/kernel fusion, etc? It’s unclear otherwise how it would be faster than Julia/Mojo code exploiting several hardware features.
In an HLL like Julia or Mojo you use special types and annotations to nudge the compiler into emitting the right SIMD instructions. In C the instructions are directly usable via intrinsics. Julia's and Mojo's advantage is that the same code is portable across many SIMD instruction sets like SSE, AVX2, AVX-512, etc. But you generally never get close to the performance hand-optimized C code gets you.
That is not "the same as C", and you certainly do not achieve the same performance as you do with C. Furthermore, my point, which you missed, was that developers typically use different methods to vectorize performance-sensitive code in different languages (even Python has a SIMD wrapper, but most people would use NumPy instead).
What's the difference? An LLVM (or assembly) intrinsic called from Julia and one called from C will have exactly the same performance. C isn't magic pixie dust that makes your CPU faster.
That SIMD.jl doesn't give you direct control over which SIMD instructions are emitted, and that SIMD code generated with that module is awful compared to what a C compiler would emit. The Mandelbrot benchmark is there. Prove me wrong by implementing it using SIMD.jl and achieving performance rivaling C. Bet you can't.
I wasn't talking about using SIMD.jl. I was talking about the implementation of the package (which is why I linked to a specific file in the package), which does directly (with some macros) generate SIMD intrinsics. As for the performance difference per core you're seeing, it's only because your C code is using 32-bit floats compared to the 64-bit floats that Julia is using here.
He has a point. Currently there is no way in Julia of checking which CPU instructions are available. So in practice, it's impossible to write low-level assembly code in Julia.
IIUC, SIMD.jl only works because it only provides what is guaranteed by LLVM to work cross-platform, which is quite far from being able to use AVX2, for example.
IIRC it relies on HostCPUFeatures.jl, which parses output from LLVM. However, this means it just crashes when used on a different CPU than the one it was compiled on (which can happen on compute clusters), and it crashes if the user sets JULIA_CPU_TARGET.