Impressive. I also like his approach of prototyping it in C first (I chuckled at the thought of C as a prototyping language). That reference is also in the repo, and I hope to at least read it :)
Everybody I know who writes assembly language in practice now, self included, prototypes most of it in C first -- why not take the huge head start from the compiler-generated asm?
In those cases, are you writing in asm for fun? If not, what advantages do you see in taking this approach over just writing most of the project in C and using inline asm where necessary?
It usually is inline assembly, but you write it in C first and copy the disassembly as a starting point. We do this in the embedded world a lot because we're constantly switching architectures, and nobody memorizes the instruction set of every architecture :)
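For anyone who hasn't tried this workflow, a minimal sketch (the function name and the buffer-summing task are invented purely for illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* Step 1: write the routine in plain C. */
    uint32_t sum16(const uint16_t *buf, size_t n)
    {
        uint32_t acc = 0;
        for (size_t i = 0; i < n; i++)
            acc += buf[i];
        return acc;
    }

    /* Step 2: ask the compiler for its assembly, e.g. `cc -O2 -S sum16.c`,
     * then paste the generated .s into your project as the starting point
     * for hand-tuning, or as the body of an inline-asm block. */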
The most common reason is using architecture-dependent instructions that the compiler doesn't generate well, or doesn't generate at all. Examples are SIMD (auto-vectorization is nice, but far from perfect) and DSPs that have specific multiply-and-accumulate instructions or flags that change the behavior of the accumulate register.
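To make the DSP case concrete, a hedged sketch with invented names: the inner loop below is what a multiply-and-accumulate instruction maps onto. A DSP toolchain would ideally keep acc in the hardware accumulator and issue one MAC per iteration (possibly saturating, depending on a mode flag); a generic compiler often emits a separate multiply and add instead.

    #include <stddef.h>
    #include <stdint.h>

    /* Fixed-point FIR-style accumulation: one multiply-accumulate per tap. */
    int64_t fir_acc(const int16_t *x, const int16_t *h, size_t n)
    {
        int64_t acc = 0;                  /* would live in the DSP accumulator */
        for (size_t i = 0; i < n; i++)
            acc += (int32_t)x[i] * h[i];  /* the MAC step */
        return acc;
    }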
In a project I'm currently working on, inlining was still inferior to fully native asm: LLVM generated unnecessary stack loads in the prologue, and those completely unused memory accesses cost something like a 4% speed penalty.
Would you please file a bug report [0] with a reproduction of the missed optimizations that you see? I'm interested in taking a look at it. Just today, LLVM trunk enabled the vectorizer for -O2 and -Os.
A lot of missed optimization opportunities come from the impracticability of communicating to the compiler information known to the developer. Basic examples include “the two lvalues here do not alias” and “the int variable x here is always positive, so x/8 can be compiled as a straightforward right shift instead of a more complex sequence of instructions”. There are various source-level workarounds, including using restrict when applicable, copying lvalues to local variables whose address is never taken to make the lack of aliasing clearer, and casting to unsigned int before dividing. In the worst cases, you have to make the program less readable in order to improve the generated code, with no guarantee that the change makes the generated code better for all compilers (some compilers might still miss the optimization opportunity and generate additional instructions for the copies/casts).
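Sketches of the two workarounds mentioned, with invented function names:

    #include <stddef.h>

    /* restrict promises the compiler that dst and src never alias, so it can
     * keep values in registers (and vectorize) instead of conservatively
     * reloading src[i] after every store to dst. */
    void scale(float *restrict dst, const float *restrict src, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * 2.0f;
    }

    /* Signed x/8 must round toward zero, so the compiler emits a shift plus
     * a fix-up for negative x; if the developer knows x is never negative,
     * casting to unsigned reduces it to a single right shift. */
    int div8(int x)
    {
        return (int)((unsigned)x / 8);
    }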
On a related note, I dream of one day discovering a real example where undefined behavior can be used constructively, as a license for the compiler to optimize. The following post alludes to this idea, but the assembly dumps at the bottom show that the compiler is not taking advantage of the information encoded in the undefined behavior:
More seriously, an annotation language for expressing properties that are supposed to hold at various points of the program can be useful for transmitting information from the programmer to the compiler, enabling optimizations that would otherwise require difficult whole-program analysis. And these annotations can be used to analyze the program too!
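A crude form of this already exists in GCC and Clang via the __builtin_unreachable() extension; a sketch (whether a given compiler actually exploits the hint is another matter, as the dumps mentioned above show):

    /* Reaching __builtin_unreachable() is undefined behavior, so the
     * compiler is licensed to assume cond is true past this point. */
    static inline void assume(int cond)
    {
        if (!cond)
            __builtin_unreachable();
    }

    int div8_fast(int x)
    {
        assume(x >= 0);   /* x/8 may now compile to a plain right shift */
        return x / 8;
    }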
Yes, although I make fun of LLVM for the non-optimal code, I really doubt anyone would consider them 'bugs' -- and I did not. They aren't bugs, they are just optimizations that we haven't found a good way to automatically identify yet.
Though, it does seem to always store the old stack pointer in r7, even though it doesn't restore from r7, and even though my inline assembly block specifies r7 on the clobber list. That might be a bug, but it's a single 'add', so who cares.
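For readers who haven't met clobber lists, roughly what that looks like in GCC/Clang extended asm. This is a made-up fragment, not the actual code from my project; note also that on Thumb targets r7 often doubles as the frame pointer, which may be related to the behavior above.

    /* The "r7" clobber tells the compiler this block trashes r7, so it
     * must not keep anything live there across the asm statement. */
    static inline unsigned bump(unsigned x)
    {
        unsigned out;
        __asm__ volatile (
            "mov  r7, %1      \n\t"
            "adds r7, r7, #1  \n\t"
            "mov  %0, r7      \n\t"
            : "=r" (out)      /* output operand */
            : "r" (x)         /* input operand */
            : "r7", "cc"      /* clobbers: r7 and the condition flags */
        );
        return out;
    }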
When you're writing on an architecture with only poor C compilers, it makes a lot of sense to prototype in C and hand compile it. For example, as a hobby, I write on a platform where program memory is limited to 16k and the only available C compiler is sdcc, which outputs relatively large binaries.
So writing in C is fine for smaller projects, but those are easy enough in assembly anyway, especially on old architectures designed for human-written asm.
Back when I started programming (1986), BASIC, C, Pascal and Forth were seen as prototyping languages, before coding the real applications in Z80 or 6502 assembly.
My major introduction to C was hand-optimizing a large C library compiled to M68000 assembly, while complaining incessantly about how shitty the compiler was, as I could at the time easily strip the code down to <40% of its original size almost solely by removing unnecessary use of temporary registers... But these days C compilers generate tolerable code in most cases.
I'm always amazed by the strange assembly generated by modern compilers. LLVM will auto-vectorize a routine into this awesome NEON SIMD masterpiece... and then throw in a bunch of completely redundant stack loads/stores and refuse to use any registers except r0 and r1! 0_o
You can tell where the devs with Ph.Ds in compiler design are putting all of their effort :)
That's true. I guess my perspective has also changed in that I don't need every cycle to be used carefully any more -- on that 7.16 MHz Amiga, deleting those instructions mattered a lot more... :)
I think that's also affecting where the optimization effort goes, to a great extent: it's more likely to be invested in the kind of code people put in critical inner loops, running in places that might saturate large numbers of cores...