{n} times faster than C (owen.cafe)
447 points by 414owen on July 6, 2023 | 237 comments



I'm not so sure that the right take-away is "hand-written assembler is 6x faster than C." It's more like "jumps are a lot slower than conditional arithmetic." And that can [edit:often] be achieved easily in C by simply not using switch statements when an if statement or two will work fine.

Rewriting the C function as follows got a 5.5x speedup:

    int run_switches(char *input) {
        int r = 0;
        char c; 
        while (1) {
            c = *input++;
            if (c == 's') r++;
            if (c == 'p') r--;
            if (c == '\0') break;
        }
        return r;
    }
Results:

    [16:50:14 user@boxer ~/looptest] $ gcc -O3 bench.c loop1.c -o lone
    [16:50:37 user@boxer ~/looptest] $ gcc -O3 bench.c loop2.c -o ltwo
    [16:50:47 user@boxer ~/looptest] $ time ./lone 1000 1
    449000
    ./lone 1000 1  3.58s user 0.00s system 99% cpu 3.589 total
    [16:50:57 user@boxer ~/looptest] $ time ./ltwo 1000 1
    449000
    ./ltwo 1000 1  0.65s user 0.00s system 99% cpu 0.658 total


Nice! There's a part two in which I rewrote the C. I got a 12x speedup :)

https://owen.cafe/posts/the-same-speed-as-c/

And as others have pointed out, you can tweak the input, then vectorize the algo, if you want to go that route.

I considered this a pedagogical exercise and I sincerely hope nobody will start dropping down to assembly without a very good reason to.


Wondering how res += (c=='s')-(c=='p') might do. I'm sure there is some C undefined behaviour relevant there. Curious but too lazy to check it myself!


While `false` evaluates to 0, not sure `true` always evaluates to 1 in C... maybe compiler dependent. Maybe add `? 1 : 0`


C didn't even originally have true/false; I think you may be conflating the two concepts that "any nonzero int is truthy" and "boolean expressions evaluate to ints". The standard mandates that boolean expressions like equality always evaluate to 0/1.


The `true` constant is always 1. C11 §7.18 (3):

> true which expands to the integer constant 1,

And equality yields a 1 or 0. C11 §6.5.9 (3):

> The == (equal to) and != (not equal to) operators are analogous to the relational operators except for their lower precedence. Each of the operators yields 1 if the specified relation is true and 0 if it is false.
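
A tiny self-contained check (my own snippet, relying only on the guarantees quoted above) that comparison results can be used directly in arithmetic, as suggested upthread:

    #include <stdio.h>

    int main(void) {
        int r = 0;
        const char *input = "spspss";
        for (char c; (c = *input++) != '\0'; )
            r += (c == 's') - (c == 'p');   /* each comparison is exactly 0 or 1 */
        printf("%d\n", r);                  /* prints 2: four 's', two 'p' */
        return 0;
    }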


I've seen people doing += !!(c=='s')-!!(c=='p') for that


I'm sure people do that (even though it's not necessary in any version of the C standard) but generally the pattern is actually for converting things which are not already 0 or 1 into 0 or 1. For example, you might want to use it here:

    int num_empty_strings = !!(strlen(s1)) + !!(strlen(s2)) + !!(strlen(s3))
which is equivalent to:

    int num_empty_strings = (strlen(s1) != 0) + (strlen(s2) != 0) + (strlen(s3) != 0)
Which you use is really a matter of coding style.


If we are being cryptic already, why not

    int num_empty_strings = !!*s1 + !!*s2 + !!*s3;


That isn't only more cryptic, it's also potentially a lot more efficient -- strlen takes time proportional to the length of the string, which of course you don't need to do if you only care whether or not the length is zero. You shouldn't use strlen for empty-string tests.


In practice, GCC and Clang don't seem to have any issues inlining the necessary part of strlen at -O1 or higher (https://godbolt.org/z/rM198aYea). But MSVC inlines the empty-string case, while still calling out for nonempty strings, probably since it doesn't realize that the returned length will be nonzero.


I guess since strlen uses an unsigned size, which has specified overflow behavior, the compiler not only has to prove the initial iteration, but also all the ULLONG_MAX+1 multiples, which of course refer to the same memory address. But maybe it's harder for the optimizer to see.


Note that your code computes the number of nonempty strings.


That is entirely unnecessary. An == expression will always evaluate to 1 or 0. The !!x trick can be useful in some other situations, though.

Here’s a thing you could do (but I don’t know why):

+= !(c-'s') - !(c-'p')


Great post!

One thought: If the code is rewritten using bit arithmetic, then potentially the result could be even faster as there need not be a pointer look-up.

A bit arithmetic solution would create a mask for the characters 'p' and 's', AND it against the input, and then use a little more bit arithmetic to collapse that all-1s value to a 1 if and only if all the bits are 1. After that there would be no conditional check at all: just an add and a subtract, where the value added is 1 only if the 's' mask matches and the value subtracted is 1 only if the 'p' mask matches. I'm not fully sure this would necessarily be faster than the pointer look-up solution, but it would be interesting to try this version of the code and see how fast it performs.

Update: The bit arithmetic could also be done with an XOR against the mask, after which the 'popcnt' x86 instruction could be used to figure out if all the bits are 0.
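
For what it's worth, here is one way that idea could look in plain C (my reading of the description above, not benchmarked, and using `!` rather than popcnt since both just test for all-zero bits):

    int run_switches_xor(const char *input) {
        int r = 0;
        for (unsigned char c; (c = (unsigned char)*input++) != '\0'; ) {
            r += !(c ^ 's');   /* XOR is zero only when every bit matches 's' */
            r -= !(c ^ 'p');   /* XOR is zero only when every bit matches 'p' */
        }
        return r;
    }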


Thank you for your post and reply but I fear with a post + title like this you may just be chumming the waters.


What do you mean by "chumming the waters" in this context?


People who just skim the headline and article will come away convinced that dropping to assembly is the “way to go fast” even if they never actually do it.


Anyone with a passing understanding of Assembly or compilers would find that idea laughable. As for the others, it turns out not knowing what you don’t know can be very expensive.


"Clickbait"


> jumps are a lot slower than conditional arithmetic.

This statement is true if the jumps are unpredictable. If the jumps are predictable, then jumps will be faster.

Linus had a whole rant about this back in the day, arguing that cmov is not useful if branches are predictable: https://yarchive.net/comp/linux/cmov.html
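
A rough source-level way to poke at this yourself (my own sketch; whether each form actually compiles to a branch or to cmov/sete-style code is up to the compiler, so check the generated assembly):

    #include <stddef.h>
    #include <stdint.h>

    /* branchy: cheap when the condition is predictable, painful when it's random */
    int64_t sum_branch(const uint8_t *v, size_t n) {
        int64_t s = 0;
        for (size_t i = 0; i < n; i++)
            if (v[i] & 1) s += v[i];
        return s;
    }

    /* branchless: roughly constant cost regardless of how predictable the data is */
    int64_t sum_branchless(const uint8_t *v, size_t n) {
        int64_t s = 0;
        for (size_t i = 0; i < n; i++)
            s += (v[i] & 1) ? v[i] : 0;
        return s;
    }

Feeding both a constant buffer and a rand()-filled buffer reproduces, in C, the same experiment the sibling comments run in assembly.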


I haven't run any benchmarks, but jump-if-equal and set-if-equal would seem to have the same level of predictability.

My naive, untested intuition is that there's only one meaningful difference: the former has to dump the entire pipeline on a miss, and the latter only has to nop a single instruction on a miss.

But maybe I'm missing something. I'll re-read his rant.

EDIT:

Linus rants a lot, but makes one concrete claim:

    You can always replace it by
    
      j<negated condition> forward
      mov ..., %reg
     forward:
    
    and assuming the branch is AT ALL predictable (and 95+% of all branches
    are), *the branch-over will actually be a LOT better for a CPU.*
So, I decided to test that.

    [18:50:14 user@boxer ~/src/looptest] $ diff -u loop2.s loop4.s
    --- loop2.s 2023-07-06 18:40:11.000000000 -0400
    +++ loop4.s 2023-07-06 18:46:58.000000000 -0400
    @@ -17,11 +17,15 @@
      incq %rdi
      xorl %edx, %edx
      cmpb $115, %cl
    - sete %dl
    + jne _run_switches_jmptgt1
    + mov $1,   %dl
    +_run_switches_jmptgt1:  
      addl %edx, %eax
      xorl %edx, %edx
      cmpb $112, %cl
    - sete %dl
    + jne _run_switches_jmptgt2
    + mov $1,   %dl
    +_run_switches_jmptgt2:  
      subl %edx, %eax
      testb %cl, %cl
      jne LBB0_1
    [18:50:29 user@boxer ~/src/looptest] $ gcc -O3 bench.c loop2.s -o l2
    [18:50:57 user@boxer ~/src/looptest] $ gcc -O3 bench.c loop4.s -o l4
    [18:51:02 user@boxer ~/src/looptest] $ time ./l2 1000 1
    449000
    ./l2 1000 1  0.69s user 0.00s system 99% cpu 0.697 total
    [18:51:09 user@boxer ~/src/looptest] $ time ./l4 1000 1
    449000
    ./l4 1000 1  4.53s user 0.01s system 99% cpu 4.542 total
I feel pretty confident that Linus has made a poor prediction about poor prediction here. Jumps are indeed slower.

To be fair to Linus, since Clang and I are using sete here, not cmov, I also tested cmov, and the difference was insignificant:

    [19:53:12 user@boxer ~/src/looptest] $ time ./l2 1000 1            
    449000
    ./l2 1000 1  0.69s user 0.00s system 99% cpu 0.700 total
    [19:53:15 user@boxer ~/src/looptest] $ time ./l5 1000 1            
    449000
    ./l5 1000 1  0.68s user 0.00s system 99% cpu 0.683 total
Jumps are slower.


> jump-if-equal and set-if-equal would seem to have the same level of predictability.

The difference is that branches have dedicated hardware (branch predictors) that will speculatively execute subsequent instructions based on their best guess about which way the branch will go. Whereas conditional moves cannot execute any subsequent instructions until the correct value is available.

Put another way, CPUs have control flow speculation, but not conditional move speculation. I don't know if conditional move speculation would be a feasible thing to implement or not, but I'm pretty sure that no mainstream CPUs have such a feature.


> Whereas conditional moves cannot execute any subsequent instructions until the correct value is available.

That is incorrect. Super-scalar processors have no problem executing subsequent instructions before the cmov writebacks. However, the register cmov writes to can of course not be read before cmov has passed the execution unit. But that's not different from other arithmetic instructions.


You are correct, I should have clarified: subsequent instructions that depend on the result of the cmov cannot execute until the cmov has executed. Whereas subsequent instructions that depend on the result of the branch instruction can be speculatively executed even before the branch condition has been evaluated.


True, but independently of whether "cmov rax, ..." or "jnz L; mov rax, ...; L:" is used, subsequent instructions that read rax need to stall until rax has been written to (or at least until cmov/mov has executed, if bypasses are used).


The difference is that, in the case where the condition is false and predicted false, the jump variant will not stall if the value being moved into rax is delayed, while the cmov variant will. Effectively that value becomes a false dependency.

As best I can tell this case is rare enough that one shouldn't generally be afraid of cmov, and probably compiler authors should consider using it more frequently.

What one shouldn't do is to load values, that are likely in memory or L3, unnecessarily in order to be able to use cmov. It is the case that runs the greatest risk of degrading performance, and it puts extra load on resources that are shared between cores.


There is also the issue of the branch predicate itself. It is always a true dependency, but when is its value actually needed? For cmov, it is needed before dependent instructions can even be executed. For branch instructions, it is only needed before they can be retired. Speculative execution can keep the pipeline full in the meantime.


Oh, right! I totally forgot about that. I guess it (at least theoretically) could make a big difference in code for the abs function if the noop is the common case and also easily predictable.


I'd be curious to learn why CPUs don't have conditional move speculation.


Because modern CPUs as a rule don't speculate on values to arithmetic, only on control flow, and CMOV acts like arithmetic.

That is, if there is an add instruction on rax and rbx, no matter what, the add instruction will not execute until both rax and rbx are available. If the result went into rax, and there is another instruction that uses that as a source, no matter what that instruction will not execute until the add has completed.

CMOV is implemented as an ALU instruction that always writes into its output, and either writes the value that is already in there (which is why it depends on the value of its output) or the value provided, depending on flags.


I'm not saying you're wrong — I'm completely ignorant at the microcode level — but it seems to me like between

    cmp x, y
    je z
and

    cmp x, y
    sete z
the actual speculative part is the same: speculating as to the result of cmp x, y

If that's true, why would it not simply pipeline sete and the following instructions and simply execute (or not execute) sete according to its prediction, and then double check itself and reverse (or apply) the operation if the prediction was wrong?

I probably just have a bad mental model of what's going on under the (under the) hood, so whatever patience you have to deal with my stupid questions would be greatly appreciated.


The two sequences look very similar, and could be implemented the same way, but the actual implementation could not be more different.

> If that's true, why would it not simply pipeline sete and the following instructions and simply execute (or not execute) sete according to its prediction, and then double check itself and reverse (or apply) the operation if the prediction was wrong?

You cannot just reverse or apply one operation. The way speculation works, when the frontend encounters a conditional jump, the entire architectural state of the current thread is stored, and all future memory writes are held in the store buffer and not written out. Then a long time, potentially dozens of cycles later, after the je is executed in the backend either the old state is restored and the pending writes are discarded, or the saved state is discarded and the pending writes are released.

In contrast, in ALUs, the inputs for instructions are always available before the instructions are scheduled to execute. It would be possible to implement sete like je, but this would imply significant changes to how and where it is executed. ALU ops cannot trigger speculation because there is no machinery for storing state at that part of the pipeline.

And no one is ever going to implement cmov or sete like a jump. Moving the op from being an ALU op to one that is speculatively executed in the frontend like jmp would have both positive and negative effects, and on balance it would be a significant pessimization of existing software: for decades cmovs have been used precisely for unpredictable conditions, where sequencing and waiting for the real value is a better idea than speculating and failing half the time. Using a cmov serializes execution for any following operations that use the value, but independent work after it can always execute successfully. Speculating at an unpredictable CMOV would cause that work to be thrown away uselessly half the time.


Taking the example:

      cmpb $115, %cl
      sete %dl
      addl %edx, %eax
vs

      cmpb $115, %cl
      jne _run_switches_jmptgt1
      mov $1,   %dl
     _run_switches_jmptgt1:  
      addl %edx, %eax
The argument about why `jne` might be faster is that in the former case, the CPU always executes a dependency chain of length 3: `cmpb` -> `sete` -> `addl`. Each of these instructions has to be computed one after the other, as `sete` depends on the result of `cmpb`, and `addl` depends on the result of `sete`.

With `jne`, the CPU might predict the branch is not taken, in which case, the dependency chain is `mov` -> `addl` (the `mov` of an immediate might be handled by register renaming?).

Or that it is taken, in which case the dependency chain is just `addl`.

I guess you're arguing that the CPU should handle `sete` the same way? That is, instead of treating `addl` as dependent on the result, predict what `sete` does and start executing `addl` before `sete` finishes, rewinding if that went wrong?


Yeah, or at least I don't understand why that wouldn't be possible.

Microcode can set the EIP register based on its prediction of what the result of cmpb $115, %cl will be.

Why can't it set the EDX register based on its prediction of what the result of cmpb $115, %cl will be?


In principle it is perfectly possible to speculatively execute cmov (and vice versa, to change a jump-over-one-instruction into conditional execution).

But Intel historically didn't do it, as programs tend to use cmov when the condition is unpredictable, so there was little reason to optimize it.

After Spectre, I believe Intel has given an architectural guarantee that cmov is never speculated, so it can be used as part of speculation attack prevention.
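
As a concrete illustration of that last use (a hypothetical sketch of mine, not from the comment): clamp an index with data movement instead of a branch, so a mispredicted bounds check can't steer a speculative out-of-bounds access. Whether the compiler really emits cmov for the ternary is not guaranteed, which is why hardened code usually forces the masking explicitly in asm.

    #include <stddef.h>

    /* returns idx when idx < size, otherwise 0, ideally without a branch;
       verify in the disassembly that no conditional jump was generated */
    size_t clamp_index(size_t idx, size_t size) {
        return idx < size ? idx : 0;
    }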


The purpose of control flow speculation is to avoid stalling the pipeline.

If each instruction was executed in one single clock cycle, the cost of executing a branch would be one cycle and that's it.

However since there is a maximum speed at which operations can happen in hardware, the period of such a clock cycle that can execute a whole instruction would be very long and so the amount of "instructions per second" the CPU could execute would be low.

Now, if you can break up each instruction into smaller steps and execute the smaller steps in an overlapping manner, such that while you're executing the second step of the first instruction you're executing the first step of the next instruction and so on (like on an assembly line in a factory), you can have a much shorter clock period for each of these steps, and at the end of each clock tick an instruction will complete execution. The CPU will still be running one instruction per clock cycle, but since each clock period is shorter the overall instructions-per-second rate will be higher.

But for this to work the next instruction you want to execute must be known in advance so that at each clock cycle the CPU can start step 1 of a new instruction.

That's easy when the program is executing sequentially but when there are branches involved it's more tricky.

And that's tricky also if the branch is not conditional! If the instruction execution is broken into many small steps, it may take one or more steps before figuring out that you have a branch in the first place, let alone decoding where you need to branch to. In the meantime the CPU will have happily started to execute the first "steps" of the next instruction.

This is called a "branch hazard"

Early CPU implementations handled branch hazards by just throwing away the intermediate state of the few instructions that were halfway through the pipeline and calling it a day (stalling the pipeline).

Early RISC CPUs attempted to be clever and use a trick called "delay slots": the instruction(s) already in the pipeline will continue to execute as if they were logically before the branch. This puts the onus on the programmer (or the compiler) to make sure that only instructions that are safe to execute regardless of whether the branch is taken or not are actually put after the branch instruction (otherwise you can just write nops).

But branch delay slots are not a panacea. As pipelines got deeper it became impractical to have a large number of delay slots, and even a small number of delay slots was often just filled with nops anyway.

Improving on UNconditional branches was done by "looking ahead" in the instruction stream for branch instructions. When the instructions are all of the same size it's easy to quickly look a few instructions ahead and tell when you found a branch. You also need an instruction encoding scheme that is relatively fast to decode, at the very least it should be fast to decode branches (the more complicated the logic to decode a branch is, the farther ahead you'd have to look in the instruction stream, which in turn would limit the size of the sequence of instructions you can fill your pipeline with between subsequent branches).

To further complicate the matter, even if you found the branch instruction and you decoded it, it doesn't mean you yet know where it will branch to!

Indirect jumps (where the address is in a register) are similar to conditional jumps in that you don't know the address you're jumping to by merely looking ahead in the instruction stream and noticing the branch instruction. You need to either wait until you execute the branch and stall the pipeline in the meantime, or keep them in the pipeline and flush the pipeline once you know the target of the branch.

The next trick that CPU designers came up way before speculative execution is "branch target prediction".

The CPU keeps a little associative memory that maps addresses of branch instructions to branch targets. When the lookahead logic spots a branch instruction it looks in this map, gets a guess of the branch target, and uses that immediately as the next instruction so that the pipeline is kept fed with something.

If by the time the branch instruction is executed the guess turned out to be wrong, the pipeline is flushed in the same way it would have to be flushed anyway if we had no clever branch lookahead in the first place. But if the guess was right we paid only one cycle to execute the branch.

This works for indirect unconditional branches and also for conditional branches! The prediction logic can be more subtle and complicated, and many, many things have been attempted, but this is the general idea.
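
For the curious, here's a toy software model of that "little associative memory" (purely illustrative, my own; real predictors are far more elaborate, with set associativity, history bits, and so on):

    #include <stdint.h>

    #define BTB_ENTRIES 256

    typedef struct { uint64_t tag; uint64_t target; } btb_entry;
    static btb_entry btb[BTB_ENTRIES];

    /* lookahead logic: guess where the branch at `pc` goes, before executing it */
    static uint64_t btb_predict(uint64_t pc, int *hit) {
        btb_entry *e = &btb[pc % BTB_ENTRIES];
        *hit = (e->tag == pc);
        return *hit ? e->target : pc + 4;   /* on a miss, assume fall-through */
    }

    /* once the branch actually executes, remember where it really went */
    static void btb_update(uint64_t pc, uint64_t actual_target) {
        btb[pc % BTB_ENTRIES] = (btb_entry){ .tag = pc, .target = actual_target };
    }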


I hope you work on compiler backends.


With all due respect this is quite literally the level of stuff covered in an undergrad EE architecture course and is covered in an elementary text like Patterson and Hennessy.


> With all due respect

> quite literally

You could have conveyed close to the same thing by saying, "things like this are covered in Patterson and Hennessy"

> elementary text

Jesus, do you even lift? The rest of the discussion is amazing.


For those not aware Patterson and Hennessy is elementary (“relating to the basic elements of a subject.”) because it is often used in an introductory course of computer architecture. This isn’t a slight.


Speculative execution is all about control flow. It's about what value is in the instruction pointer at some nebulous point in the future.

A conditional jump can put one of two values into the instruction pointer, they will either increment the instruction pointer (jump not taken) or put the immediate value into the instruction pointer. (jump taken)

cmov/sete are utterly deterministic; they always increment the instruction pointer. There's nothing to speculate on, there's nothing to predict. They just go to the next instruction.


> Speculative execution is all about control flow

It's murkier than that. Speculation also deals with the order in which instructions can be executed. Take for example memory ordering (discussed in a mini essay elsewhere here): we typically speculate that all loads are unrelated to any other older in-flight stores with unresolved addresses so that we can optimistically launch them. This is not a control flow issue but it is something we both speculate and predict (memory dependence predictors!) despite the next PC being essentially deterministic.


> Speculative execution is all about control flow. It's about what value is in the instruction pointer at some nebulous point in the future.

.. and all about what we can wheedle out of all the background speculation that will help us get root on this box.


One other perspective is that by speculating the outcomes of conditional instructions, you naturally open yourself up to mispeculating them. This sounds obvious but the consequences for the uarch are quite severe. This is because anytime you mispeculate an instruction, most (all?) contemporary CPUs throw out all younger speculative progress (even if it is unrelated!) and restart at the instruction it originally mispeculated. Throwing out all this work is both i) a waste of power/cycles (you did all this speculative work for nothing!) and ii) quite an expensive operation because you either have to iteratively roll back the state (slow!) or take a snapshot of the state on every conditional instruction (expensive from a power/area perspective).

A similar idea to what you're proposing (and a possible solution to the above issue) does come up in another part of the processor however! Specifically, high performance processors launch loads very aggressively and often times return data as soon as the address is known. This is because memory is often the bottleneck for performance. This, unfortunately, has some challenges. Namely, memory ordering violations. Take for example the following snippet (ARMv8):

    mov x1, #1    
    udiv x3, x2, x1
    str x2, [x3]
    ldr x4, [x2]
    add x5, x4, x4
This is a silly and somewhat contrived code sequence, but note here that both str x2 and ldr x4 access the same address and thus the value in x4 should be x2. Note, however, that since str x2's address (x3) is produced by a slow division operation but ldr x4's address (x2) is available much more quickly, ldr x4 will likely launch before the CPU even knows that str x2 conflicts with it. Thus, the data returned by the load will be whatever random old stale data is in the cache rather than the correct value that is currently sitting in x2. This means that the subsequent add which consumes this data will produce an incorrect value, leading the whole program to derail.

Once the CPU detects this issue, it has to throw away all the state and restart execution of the program at ldr x4 in order to fix its mistake and fix up the memory ordering violation. In essence, the CPU is speculating that str x2 and ldr x4 are unrelated because doing so is very important for performance. Unfortunately, however, memory ordering violations are actually somewhat common, and constantly having to restart execution has negative performance implications.

Now, this is actually a very similar problem as we'd see with conditional instruction speculation! So how do we solve this issue for memory ordering violations? Well, we predict which pairs of stores and loads are dependent and block the load from launching until the address of its supposed dependent store resolves. If this predictor is functioning well, we are able to both aggressively launch loads while also avoiding many costly fixups!

So, how would we translate this to conditional instruction speculation? Well, one idea is that we could predict both whether a given instruction is predictable and, if so, which way we should predict it. If a conditional instruction is predicted as unpredictable, its result will not be speculated (thereby avoiding frequent costly restarts) but if it is predicted to be predictable, we can try to predict which one to take.

Would this work? Maybe. Will anyone actually do this? Likely not. As others have suggested, conditional instructions are almost exclusively used for hard to predict conditions specifically because CPUs don't speculate them. Thus, in most existing code the predictor would just say "yep can't predict it" and we'd just have ended up wasting a bunch of area and power on a predictor that never gets used.

If you're really dedicated to this cause though, feel free to write a paper on it. Spitballing performance numbers is easy but often wrong in quite surprising ways, so maybe this might just work for some weird reason I've missed :)


Linus' post is 15+ years old. Much has changed in Intel hardware since then. He was probably right on the money re the hardware available at the time.


> I don't know when the change was made, but conditional moves are fast and efficient on the last several generations of AMD and Intel processors. Usually, you are trading 1 or 2 extra cycles of latency against the chance of a ~15 cycle mispredicted branch penalty. If your branch cannot be predicted correctly ~85% of the time, this can be a significant win.

https://news.ycombinator.com/item?id=10749195


I read the rant. He is talking about Pentium 4.


The inputs here are random, which is the problem and why this isn't demonstrating that. Create an input of all 's' and compare it.


Better than random input, but still only ~half as fast as using sete

    [19:13:34 user@boxer ~/src/looptest] $ diff -u bench.c bench-alls.c      
    --- bench.c 2023-07-06 16:04:16.000000000 -0400
    +++ bench-alls.c 2023-07-06 19:13:34.000000000 -0400
    @@ -17,7 +17,7 @@
       int num_rand_calls = number / CHAR_BIT + 1;
       unsigned char *buffer = malloc(num_rand_calls * CHAR_BIT);
       for (int i = 0; i < num_rand_calls; i++) {
    -    buffer[i] = rand();
    +    buffer[i] = 's'; //rand();
       }
       return buffer;
     }
    [19:13:37 user@boxer ~/src/looptest] $ gcc -O3 bench-alls.c loop2.s -o l2
    [19:13:42 user@boxer ~/src/looptest] $ gcc -O3 bench-alls.c loop4.s -o l4
    [19:13:47 user@boxer ~/src/looptest] $ time ./l2 1000 1
    250001000
    ./l2 1000 1  0.69s user 0.00s system 99% cpu 0.699 total
    [19:13:55 user@boxer ~/src/looptest] $ time ./l4 1000 1
    250001000
    ./l4 1000 1  1.28s user 0.00s system 99% cpu 1.290 total
Jumps are slower.


Microbenchmarks are hard. You aren't doing any meaningful work that could benefit from speculatively executing instead of stalling for the conditional value.

Similarly you might be busting the pipeline by chaining together the jumps so close together.

Not saying your point is wrong, just saying your proof isn't super solid.


In this benchmark the only loop-carried dependency is over the res variable (edit: and of course the index). The jump doesn't break these dependencies, so for this specific problem, the additional latency of the cmov doesn't matter as it is always perfectly pipelined, and cmov will always come out on top. But if the input of cmov depended on a previous value, then potentially a branch could be better given a high enough prediction rate.


Jumps are slower on completely random input. If I understand Linus’s point correctly, he is suggesting that random inputs like this are unusual (although a good way to measure worst case performance)


Did you test this on a Pentium 4, the processor that Linus is talking about?


Is this the reason I don't usually see any speedup if I eliminate array bounds checking in C#? The jump condition is almost always false; is this what "predictable" means?


Indeed.

The cost of bounds checking is second-order effects like making vectorization harder, slightly higher instruction (and possibly data) cache pressure, or requiring higher decode bandwidth. For the vast majority of programs these bottlenecks do not really matter.


I mean, if the innermost loop is something like 3 assembly instructions, two extra instructions cmp and jg do not make any difference, if jg never executes?


If they are not in the critical path, it doesn't matter. There are no instruction cache issues as the loop is tiny. Also, as the loop is tiny, it will fit in the u-op cache (or even in the loop cache), so decoding is not an issue either. The only problem is potential lack of vectorization, but a good vector ISA can in principle handle the bounds checking with masked reads and writes (though then the check is no longer a predictable branch and might end up on the critical path, although it is not necessarily a big cost, or even measurable, anyway).


I thought I was beginning to understand something, but your rant proved me wrong :) Thanks anyway :)


Forget about the second-order effects. The reason the extra instructions, to a first approximation, do not matter is that loops are typically limited by loop-carried dependencies.

Think about this: a machine with infinite execution units and memory bandwidth, potentially could execute all iterations of a loop at the same time, in parallel.

Unless each loop iteration depends somehow on the result of the previous iteration. Then only independent instructions of that iteration can execute in parallel and the loop is latency-chain bound (especially when it involves memory accesses). This is often the case. Because branch prediction breaks dependencies, bounds checking is never part of a dependency chain, so it is often free or nearly so. For more optimized code, the assumption of infinite resources is of course not warranted, and execution bandwidth and possibly even memory bandwidth need to be taken into consideration.
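
To make the dependency-chain point concrete, here's a standard illustration (my own sketch, not from the article): splitting one accumulator into several shortens the loop-carried chain, so independent adds can overlap in the pipeline even without vectorization.

    #include <stddef.h>
    #include <stdint.h>

    /* one chain: every add has to wait for the previous add's result */
    int64_t sum_one_chain(const int32_t *a, size_t n) {
        int64_t s = 0;
        for (size_t i = 0; i < n; i++) s += a[i];
        return s;
    }

    /* four chains: the four adds in each iteration are independent of each other */
    int64_t sum_four_chains(const int32_t *a, size_t n) {
        int64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += a[i]; s1 += a[i + 1]; s2 += a[i + 2]; s3 += a[i + 3];
        }
        for (; i < n; i++) s0 += a[i];   /* leftover elements */
        return (s0 + s1) + (s2 + s3);
    }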


I am by no means an expert, but I believe what you have in mind would likely fit in i-cache without a problem, so you wouldn’t see a significant difference.

There is an interesting talk titled 'the death of optimizing compilers' that argues that for most code these optimizations are almost completely meaningless, and in the hot loops where it actually matters, compilers are not good compared to humans (and sometimes 100x or more improvements are possible and left on the table). While I don't completely agree with its points, it is a good talk/slides to read through.


It's not that compilers are stupid; they just don't know what humans know about their data: its ranges, invariants, symmetries, etc. They work on the most general case, which can be horribly inefficient.


What version of GCC are you using? For me both versions perform the same, both on Ubuntu and Windows:

    $ time ./lone 1000 1
        851000

        real    0m3.578s
        user    0m3.574s
        sys     0m0.004s
        
    $ time ./ltwo 1000 1
        851000

        real    0m3.583s
        user    0m3.583s
        sys     0m0.000s

    $ gcc --version
        gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
        Copyright (C) 2019 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions.  There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


Sorry, I write 'gcc' purely out of force of habit. I'm using Clang/LLVM.

    [17:23:00 user@boxer ~/looptest] $ uname -a
    Darwin boxer.local 21.6.0 Darwin Kernel Version 21.6.0: Thu Jun  8 23:57:12 PDT 2023; root:xnu-8020.240.18.701.6~1/RELEASE_X86_64 x86_64
    [17:23:47 user@boxer ~/looptest] $ cc -v
    Apple clang version 14.0.0 (clang-1400.0.29.202)
    Target: x86_64-apple-darwin21.6.0
    Thread model: posix
    InstalledDir: /Library/Developer/CommandLineTools/usr/bin
Clang generates the sete instruction for me with the above code:

    [17:23:49 user@boxer ~/looptest] $ gcc -c -O3 loop2.c   
    [17:25:00 user@boxer ~/looptest] $ objdump -d --symbolize-operands --x86-asm-syntax=intel --no-show-raw-insn loop2.o
    
    loop2.o: file format mach-o 64-bit x86-64
    
    Disassembly of section __TEXT,__text:
    
    0000000000000000 <_run_switches>:
           0:       push rbp
           1:       mov rbp, rsp
           4:       xor eax, eax
           6:       nop word ptr cs:[rax + rax]
    <L0>:
          10:       movzx ecx, byte ptr [rdi]
          13:       add rdi, 1
          17:       xor edx, edx
          19:       cmp cl, 115
          1c:       sete dl
          1f:       add eax, edx
          21:       xor edx, edx
          23:       cmp cl, 112
          26:       sete dl
          29:       sub eax, edx
          2b:       test cl, cl
          2d:       jne  <L0>
          2f:       pop rbp
          30:       ret


Is rewriting switch statements to a bunch of ifs always faster? Or is there some number of cases where the switch is faster? Seems like it should be added as a compiler optimization if it's consistent.


It shouldn't be.

If the fastest way to implement a particular `switch` in assembly is with the equivalent of a set of `if`s, a reasonably smart compiler "should" be able to output the assembly to do that. And I thought that gcc and clang at least have actually been smart enough to do that for a while now.

But if the number of `if`s is high and the distribution sufficiently dense, where a jump table is better than a bunch of `if`s, then a `switch` should output that.

OTOH, a sufficiently smart compiler could theoretically turn a bunch of `if`s into a `switch`-like jump table - but it's much harder to reason that case through correctly than it is the other way, so I'm not sure any current compilers are sufficiently smart to do that.


On x64, GCC is supposed to use a jump table with more than 4 cases when the cases are dense enough (gaps of 9 or less) to minimize wasted memory; otherwise it generates sequential comparisons. Testing on Godbolt, it looks like GCC 13.1 uses 10 cases as the jump table threshold for ARM64.
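
A quick pair of functions to paste into Godbolt if you want to see the heuristic on your own compiler (the exact thresholds vary by version and target, so treat the numbers above as approximate):

    /* dense cases: typically lowered to a bounds check plus a jump table */
    int dense(int x) {
        switch (x) {
            case 0: return 10; case 1: return 11; case 2: return 12;
            case 3: return 13; case 4: return 14; case 5: return 15;
            default: return -1;
        }
    }

    /* sparse cases: typically lowered to a chain (or tree) of compares */
    int sparse(int x) {
        switch (x) {
            case 0: return 10; case 100: return 11; case 200: return 12;
            case 300: return 13; case 400: return 14; case 500: return 15;
            default: return -1;
        }
    }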


It shouldn't if the control flow is actually identical.

E.g. note how both the switch- and if-based functions generate the same code using a lookup table here:

https://godbolt.org/z/qoT3M7r5G


Shouldn't the compiler be able to do that, too?


It's not true in general.

In general branching code is faster than branchless code, and there are many, many places that will demonstrate this with a quick Google. You know how many cycles a correctly predicted branch takes? 0.

On the other hand, branchless code has to wait for each calculation to reach a certain stage in the pipeline, since the thing to be output is dependent on the result. The CPU will have a whole lot of stalls.

So why is this faster? Because the input is literally random(). The branch predictor will be wrong. This isn't normal though. The compiler is creating code that will be faster in most normal use cases.

It's an artificial benchmark that works against the output the compiler produces.


You're saying that humans have information and context that allow them to apply use-case-specific optimizations which a compiler that must anticipate general usage couldn't. That's what profile-guided optimizers are for though, right?


I tried https://godbolt.org/, and neither Clang nor GCC trunk give the same code for the two programs.

Pretty shocking for such a simple program.


Yes, there’s always the “sufficiently smart compiler” that can generate this code. Question is, does that compiler exist?


I sure hope so. The semantics are trivially identical, the optimizations should be as well, by default - they should depend on semantics, not syntax. And GCC in another comment under this thread seems to be doing similar or identical optimizations in both cases.

I wholly admit that this implies nothing about all optimizers. But it's a pretty reasonable one to expect.


>does that compiler exist?

and if so are the compile times worth it


These days, I'd put my money on zig cc, i.e. zig cc -Os or zig cc -O3


Does the zig compiler have many fancy bits? I was under the impression the c support was a "nothing special" compiler that punted to llvm for optimizations.


Correct, zig currently does not do any of its own optimizations. It won't necessarily have identical results to clang because it'll generate equivalent but not identical IR and the optimizer passes may happen to do something different as a result, but it's not going to be consistently different.


Yes, but this will backfire on ARM, where jumps are roughly as fast as conditional arithmetic.

The whole point of using C is not to think about the underlying architecture. As soon as you start taking "jumps are a lot slower than conditional arithmetic on x86" into account, you're not writing in C, you're writing in assembly with extra steps :-)


Note, that's only ARMv7; ARMv8 dropped most of the conditionally executed instruction stuff. And so it isn't even jumps that's fast on ARMv7, it's specifically cases that can be (and are) converted to predicated instrs; jumps are still gonna be slow in general on anything high-perf enough that it needs speculation, which can include the actual jumps of ARMv7.

If a compiler can convert jumpy code to the predicated instrs, it should be able to trivially convert conditional arith to such too (even easier & more consistently than branches I'd say).


Is that because cjumps on ARM are faster, or cmovs on ARM are slower?


Is there any good article comparing performance across programming languages? Seems like every time I see one they're broken because the tested logic is poorly implemented in language(s) XYZ.


You can shorten the loop to just three conditionals:

  while (c = *input++) {
      if (c == 's') r++;
      if (c == 'p') r--;
  }


IMHO the original code wasn't written in a way that's particularly friendly to compilers. If you write it like this:

    int run_switches_branchless(const char* s) {
        int result = 0;
        for (; *s; ++s) {
            result += *s == 's';
            result -= *s == 'p';
        }
        return result;
    }
...the compiler will do all the branchless sete/cmov stuff as it sees fit. It will be the same speed as the optimized assembly in the post, +/- something insignificant. However it won't unroll and vectorize the loop. If you write it like this:

    int run_switches_vectorized(const char* s, size_t size) {
        int result = 0;
        for (; size--; ++s) {
            result += *s == 's';
            result -= *s == 'p';
        }
        return result;
    }
It will know the size of the loop, and will unroll it and use AVX-512 instructions if they're available. This will be substantially faster than the first loop for large inputs, although I'm too lazy to benchmark just how much faster it is.

Now, this requires knowing the size of your string in advance, and maybe you're the sort of C programmer who doesn't keep track of how big your strings are. I'm not your coworker, I don't review your code. Do what you want. But you really really probably shouldn't.

https://godbolt.org/z/rde51zMd8


The version that's friendly to the compiler is described in part two: https://owen.cafe/posts/the-same-speed-as-c/

It achieves 3.88GiB/s

I intentionally didn't go down the route of vectorizing. I wanted to keep the scope of the problem small, and show off the assembly tips and tricks in the post, but maybe there's potential for a future post, where I pad the input string and vectorize the algorithm :)


So I downloaded your code. On my desktop, with loop-9 gcc I got ~4.5GB/s, and with loop-7 I got ~4.4GB/s. With the following code:

    #include <stddef.h>
    
    int run_switches(const char *s, size_t n) {
      int res = 0;
      for (; n--; ++s)
        res += (*s == 's') - (*s == 'p');
      return res;
    }
I got ~31GB/s in GCC and ~33GB/s in Clang. This is without any padding, or SIMD intrinsics, or any such nonsense. This is just untying the compiler's hands and giving it permission to do its job properly.

Don't want to pass the string length? That's fine, we can figure that out for ourselves. This code:

    #include <stddef.h>
    #include <string.h>

    int run_switches(const char *s) {
      int res = 0;
      for (size_t n = strlen(s); n--; ++s)
        res += (*s == 's') - (*s == 'p');

      return res;
    }
Is 27GB/s. With a little bit of blocking:

    #include <stddef.h>
    
    int run_switches(const char *s, size_t n) {
      int res = 0;
      char tmp = 0;
      for (size_t i = n & 63; i--; ++s)
        tmp += (*s == 's') - (*s == 'p');
      res += tmp;
    
      for (n >>= 6; n--;) {
        tmp = 0;
        for (size_t i = 64; i--; ++s)
          tmp += (*s == 's') - (*s == 'p');
        res += tmp;
      }
    
      return res;
    }
That's ~55GB/s.

Anyway, the point is, you're pretty far from the point where you ought to give up on C and dive into assembly.


Indeed. I suppose the two lessons are, stick with C, and don't forget the semantics of your original problem when optimizing.

    int run_switches(const char *s) {
      int res = 0;
      uint8_t tmp = 0;
      size_t n = strlen(s);
      for (size_t i = n & 127; i--; ++s)
        tmp += (*s == 's');
      res += tmp;
    
      for (size_t j = n >> 7; j--;) {
        tmp = 0;
        for (size_t i = 128; i--; ++s)
          tmp += (*s == 's');
        res += tmp;
      }
    
      return 2 * res - n;
    }


Neat! Although you'll need to make a copy of `n`. The second loop will reduce the value of n to zero.

Edit: Also, there's an off by one error. should be:

    #include <stddef.h>
    #include <stdint.h>
    
    int run_switches(const char *s, const size_t n) {
      int res = 0;
      uint8_t tmp = 0;
      for (int i = n & 127; i--; ++s)
        tmp += *s == 's';
      res += tmp;
    
      for (int size = n >> 7; size--;) {
        tmp = 0;
        for (int i = 128; i--; ++s)
          tmp += *s == 's';
        res += tmp;
      }
    
      return 2 * res - n + 1;
    }
~90GB/s on my machine, compared to 4.5GB/s for his best effort on his blog. So 20x as fast.


This is a wonderful thread.

Which tricks in there are worth playing around with more widely?

Is the uint8_t just "no point in using something bigger" or does it likely help the compiler? Does/can the signedness matter as well as the size?

Ditto looping downwards -- is it often likely to improve things? Can it generalize to pointer/iterator ranges, or is it often worth trying to phrase them in terms of array/index accesses instead?

I guess the compiler's unrolling heuristics generally aren't as good as that blocking "mod then div" alternative to Duff's device? Obviously taking `s` out of the loop condition is part of the magic.

Not checking the 'p' character by comparison is an easy optimization to understand.

Any places to read about this sort of thing, or any tricks or guidelines that come to mind? I write a fair bit of performance-sensitive code but it's all probably 20x slower than it could be because I have no intuition for what transformations compilers will do beyond "this prob gets inlined" etc.


> I guess the compiler's unrolling heuristics generally aren't as good as that blocking "mod then div" alternative to Duff's device? Obviously taking `s` out of the loop condition is part of the magic.

The magic in this case is the compiler autovectorizer. Making the length of the loop a loop invariant allows the autovectorizer to kick in.

The reason "blocking" by accumulating on uint8_t helps further is that it allows the compiler to accumulate on 8 bit SIMD lanes, instead 32 bit SIMD lanes. The same operation on 8 bit SIMD lanes will, to a first approximation, do x4 the work per cycle.


> Is the uint8_t just "no point in using something bigger" or does it likely help the compiler? Does/can the signedness matter as well as the size?

In a good world you could use just uint_fast8_t and the compiler would optimize this question for you. In the real world I don't think compilers are smart enough, or there are too many other constraints limiting them :(


Replying to my own post: The off by 1 error was incorrect. It's because I was calling the function wrong. I had been giving it the size of the buffer, not the size of the string.

Also, someone else figured out that we can just use an and instruction instead of cmp. That gives us this version:

    #include <stddef.h>
    #include <stdint.h>

    int run_switches(const char *s, const size_t n) {
      int res = 0;
      uint8_t tmp = 0;
      for (int i = n & 127; i--; ++s)
        tmp += 1 & *s;
      res += tmp;

      for (int i = n >> 7; i--;) {
        tmp = 0;
        for (int j = 128; j--; ++s)
          tmp += 1 & *s;
        res += tmp;
      }

      return 2 * res - n;
    }
This is 111GB/s, up from 4.5GB/s in the blog. I'm going to try really hard to put this problem down now and work on something more productive.


ANDs vs cmps seem to be a mixed bag. They are faster on my older Broadwell system (E5-2690V4 / 128GiB RAM) but they are actually consistently slower on my Rome system (AMD EPYC 7B12 / 512GiB RAM). Of course, neither Broadwells nor Romes have AVX512, so likely this is where you're getting the win from.


Fascinating. Thank you for these exchanges, and @414owen for the original posts. This was fun. :-)


I don't understand something. What does n&127 and n>>7 mean here?


127 is 128-1, or 2^7-1, or 1111111b in binary. ANDing with it (n & 127) is a faster way to compute n % D when D is known to be a power of two.

n >> 7 is equal to n / (2^7), and is a faster way to divide by a power of two.
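
A one-minute sanity check of those identities (they hold for unsigned values; signed negative values behave differently):

    #include <assert.h>
    #include <stddef.h>

    int main(void) {
        for (size_t n = 0; n < 100000; n++) {
            assert((n & 127) == n % 128);   /* mask == modulo, for powers of two */
            assert((n >> 7) == n / 128);    /* shift == division, for powers of two */
        }
        return 0;
    }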


The code in question has to process a string of variable length.

But the compiler/CPU can process bytes one at a time or much faster in groups. The code is trying to process as much as possible in groups of 128.

But since the caller can pass in a string which is not a multiple of 128 chars, the first for-loop (& 127) will figure out how much of the string to process such that the remaining string length is a multiple of 128.

The second for-loop divides by 128 (>> 7) to find out how many multiples of 128 there are to process. The inner for-loop processes 128 chars looking for 's' chars.

Now the for-loop within a for-loop doesn't look any faster than the plain single for-loop, but I'd assume that the heuristics of certain compilers can intuit that they can generate code operating on multiple chars at the same time (SIMD instructions), since the results of one operation are independent of the others.

On a compiler that cannot generate SIMD code, the code won't be much faster, if at all, than the naive straightforward manner.


Am I missing something, or does this not really account for alignment? Is the compiler doing smarter loop splitting?


You're correct, it does not account for alignment.

The reason it helps performance is because it allows the compiler to accumulate in byte-sized SIMD variables instead of int-sized SIMD variables. My system has AVX-512, so 64 byte wide SIMD registers. With the non-blocking version, the compiler will load 16 chars into ints in a 64 byte ZMM register, then check if it's an 's', and then increment if so. With the blocked version, with the uint8_t tmp variable, the compiler will load 64 chars into uint8_ts in a 64 byte ZMM register instead. But there's a problem; we're gonna overflow the variables. So the compiler will stop every 128 iterations, and then move the 64 byte uint8_t accumulation variable into four 64 byte int accumulation registers and sum them all up. Then do the next 128 iterations.

I'm pretty sure a similar thing will happen with SSE or AVX2 but I didn't check.
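
For reference, here's roughly what that transform looks like written by hand with AVX-512BW intrinsics (my own approximation of the autovectorizer's strategy, not its actual output): count matches into byte lanes, and widen before any lane can overflow.

    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    int64_t count_s_avx512(const uint8_t *s, size_t n) {
        const __m512i target = _mm512_set1_epi8('s');
        const __m512i ones   = _mm512_set1_epi8(1);
        const __m512i zero   = _mm512_setzero_si512();
        int64_t total = 0;
        size_t blocks = n / 64;
        while (blocks) {
            __m512i acc = zero;
            size_t chunk = blocks < 128 ? blocks : 128;   /* keep byte lanes <= 128 */
            blocks -= chunk;
            for (size_t i = 0; i < chunk; i++, s += 64) {
                __mmask64 m = _mm512_cmpeq_epi8_mask(_mm512_loadu_si512(s), target);
                acc = _mm512_mask_add_epi8(acc, m, acc, ones);  /* +1 in matching lanes */
            }
            /* widen: sum byte lanes into 8 x 64-bit lanes, then reduce to a scalar */
            total += _mm512_reduce_add_epi64(_mm512_sad_epu8(acc, zero));
        }
        for (size_t i = n % 64; i--; s++)   /* scalar tail */
            total += (*s == 's');
        return total;
    }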


I think it's just reading unaligned. That's just a ~2x loss of throughput from L1, but the second the problem is large enough that the work being done doesn't reliably fit into the L1, that doesn't matter a bit anymore.

In general for x86, unaligned writes are worth doing work to avoid, but reads are in most situations not really an issue.


Bummer! Edited the answer. Not sure about the off-by-one though. Say the string is str[] = "spp\0". n = strlen(str) is 3. In the end, res would be 1 and 2 * res - n == -1.


Oh. Found it. It's because I wasn't using strlen and had been passing over the length of the buffer instead of the length of the string. Only my code had the off by 1.


This makes the assumption that the only characters in the string are "s" and "p". There is no basis for this assumption. I think this code solves a different problem rather than being an optimisation of the original code.


The string can only contain 's' or 'p' if you examine how it is constructed in bench.c, and taking that into consideration yields another ~2x speedup.


But this is not the original problem! Only p's should decrease the counter, in your code every non-s does.


The original problem was working on strings that only hold 's' and 'p' characters, as seen in bench.c. The first implementation checked against 's' and 'p' specifically, and all subsequent versions optimized that first version.


Another good reason to write optimization-friendly C (or similar) over assembly code, especially in libraries, is that the compiler will evolve with CPUs, while the assembly code will not.

I've seen plenty of cases where replacing hand-written assembly with C (or similar) led to a substantial performance increase because the assembly code was written for some old CPU and not the best way of doing things on current CPUs.


This seems like the most efficient solution. I have a neighboring comment on this post which suggests using bit arithmetic, but the above solution is more efficient than that. Here’s what the assembly code for the body of the first loop compiles down to (I had to use ChatGPT-4 as godbolt unfortunately doesn’t work on mobile):

    cmp dl, 's'    ; Compare the character with 's'
    sete cl        ; If the character is 's', set cl to 1. Otherwise, set cl to 0.
    add al, cl     ; Add the result to res

    cmp dl, 'p'    ; Compare the character with 'p'
    sete cl        ; If the character is 'p', set cl to 1. Otherwise, set cl to 0.
    sub al, cl     ; Subtract the result from res


>Anyway, the point is, you're pretty far from the point where you ought to give up on C and dive into assembly.

Thank you. I hope people who post random assembly listings on HN written in some extinct ISA will read your posts.


You forgot an important line of the code:

/* DON’T REFACTOR THIS FOR READABILITY IT WILL SLOW DOWN */


Nice! I tried it in Nim and it appears to trigger it with:

    {.overflowChecks:off.}
    proc run_switches*(input: cstring): int {.exportc.} =
      result = 0
      for c in input:
        result.inc int('s' == c)
        result.dec int('p' == c)
That gives a ~5x speedup on an Apple M1. Keeping overflow checks on only gets it up to ~2x the default C version. Always nice to know good ways to trigger SIMD opts.


> But you really really probably shouldn't.

Shouldn't "not" keep track of string length?


Err... yes. You shouldn't not keep track of string/buffer sizes.


I’m probably an optimization expert, and I would solve that problem completely differently.

On my computer, the initial C version runs at 389 MB/second. I haven't tested the assembly versions, but if they deliver the same 6.2x speedup, that would result in 2.4 GB/second here.

Here’s C++ version which for long buffers exceeds 24 GB/second on my computer: https://gist.github.com/Const-me/3ade77faad47f0fbb0538965ae7... That’s 61x speedup compared to the original version, without any assembly, based on AVX2 intrinsics.


Interesting. I think you can vectorize the prologue using movemask + popcnt instead of keeping a counter in the ymm registers (warning: untested code, still need to benchmark it):

    const __m256i zero = _mm256_setzero_si256();
    const __m256i s = _mm256_set1_epi8( 's' );
    const __m256i p = _mm256_set1_epi8( 'p' );

    const size_t a = (size_t)input;
    const size_t rem = a % 32;
    const char* aligned = input - rem;

    const __m256i v = _mm256_load_si256( (const __m256i*)aligned );
    const __m256i z = _mm256_cmpeq_epi8( v, zero );

    size_t m_plus = (uint32_t)_mm256_movemask_epi8( _mm256_cmpeq_epi8( v, s ) );
    size_t m_minus = (uint32_t)_mm256_movemask_epi8( _mm256_cmpeq_epi8( v, p ) );
    size_t m_zero = (uint32_t)_mm256_movemask_epi8( z );
    size_t offset_zero = _mm_tzcnt_64( m_zero >> rem );

    m_plus = _bzhi_u64( m_plus >> rem, offset_zero );
    m_minus = _bzhi_u64( m_minus >> rem, offset_zero );

    // Skip the loop if we already found the end of the string...
    while (m_zero == 0) {
        // ...
    }

    // ...

    return res + (int)_mm_popcnt_u64( m_plus ) - (int)_mm_popcnt_u64( m_minus );


Good idea, and it can be used for both prologue and epilogue pieces. Updated that gist.

However, this is only relevant for very small inputs. For longer inputs the vectorized portion of the function gonna dominate the performance.


Do you know if this is possible using "std::experimental::simd" out of curiosity?

https://en.cppreference.com/w/cpp/experimental/simd


I don’t have any experience with that library.

Still, based on the documentation you have linked, I’m not sure it could possibly generate some code similar to my version. I could be wrong but I don’t see APIs which aggregate or accumulate the `simd_mask` vectors they output for results of vector comparisons.


you might want to rewrite this in a form that is compatible with @414owen's repo.


What’s a good source to learn and practice AVX?


For a starting point, I wrote that article couple years ago: http://const.me/articles/simd/simd.pdf

I don’t recommend assembly. Intrinsics are typically good enough performance wise, and writing correct assembly is hard. For instance, Chromium has non-trivial dependencies with code written in assembly, and they caused tons of fun debugging issues like that https://bugs.chromium.org/p/chromium/issues/detail?id=121838... Using intrinsics would have solved that because modern compilers follow ABI conventions of the target platforms very carefully.

About that Highway library, I don't have any experience with it, but based on the documentation I don't like it too much. They say it's a thin wrapper over intrinsics, but I believe it still breaks things. Specifically, Intel and ARM don't document Highway, but they have decent documentation for their intrinsics. Similarly, Stack Overflow has thousands of questions and answers with tags like SSE and AVX, most of them related to intrinsics, but nothing related to Highway.


You could study Google's Highway library [1].

[1] https://github.com/google/highway


FFmpeg has a lot of assembly language that can be added to:

https://blogs.gnome.org/rbultje/2017/07/14/writing-x86-simd-...


This code screams for SIMD! If you can change the prototype to take an explicit length, you could easily read and process 16 bytes at a time (the compares will give you values you can just add and subtract directly). Heck, even calling strlen() at the function's start to get the explicit length would probably be worth it.


I threw together a quick risc-v vectorized implementation:

    size_t run(char *str) {
        uint8_t *p = (uint8_t*)str;
        long end = 0;
        size_t res = 0, vl;
        while (1) {
            vl = __riscv_vsetvlmax_e8m8();
            vuint8m8_t v = __riscv_vle8ff_v_u8m8(p, &vl, vl);
            end = __riscv_vfirst_m_b1(__riscv_vmseq_vx_u8m8_b1(v, '\0', vl), vl);
            if (end >= 0)
                break;
            res += __riscv_vcpop_m_b1(__riscv_vmseq_vx_u8m8_b1(v, 's', vl), vl);
            res -= __riscv_vcpop_m_b1(__riscv_vmseq_vx_u8m8_b1(v, 'p', vl), vl);
            p += vl;
        }
        vl = __riscv_vsetvl_e8m8(end);
        vuint8m8_t v = __riscv_vle8_v_u8m8(p, vl);
        res += __riscv_vcpop_m_b1(__riscv_vmseq_vx_u8m8_b1(v, 's', vl), vl);
        res -= __riscv_vcpop_m_b1(__riscv_vmseq_vx_u8m8_b1(v, 'p', vl), vl);
        return res;
    }

Here are the results from the above, the switch, and the table C implementations, run on my MangoPi MQ Pro (C906, an in-order rv64gc core with rvv 0.7.1 and a 128-bit vector length):

    switch: 0.19 Bytes/Cycle
    tbl:    0.17 Bytes/Cycle
    rvv:    1.57 Bytes/Cycle (dips down to 1.35 after ~30 KiB)
Edit: you can go up to 2/1.7 Bytes/Cycle if you make sure the pointer is page-aligned (and vl isn't larger than the page size); see the comments below.


To be fully correct, you'd need the load to be a fault-only-first load (which rvv does have), otherwise that could fail if the null byte was just before the end of allocated memory.


I just found your rvv intrinsics-viewer [0], that'll be so helpful.

I tried building one myself, but my miserable web skills didn't allow me to lazily load the instructions, which made it too slow for actual use.

Can I share your project on lemmy?

[0] https://dzaima.github.io/intrinsics-viewer


Go ahead! I'm not much of a web dev either, but I decided to struggle through it mainly just to have better searching. (It was originally for Intel & ARM intrinsics, which are also available if you download it for offline use.)


I'm not sure I fully understand fault-only-first loads, but reading the description of vle8ff.v, I think I only need to swap out the load inside the loop?

How does the normal load deal with faults?

I'll update the parent comment; it dropped the speed from 2/1.7 to 1.57/1.36 Bytes/Cycle.


You'd probably want to have a new __riscv_vsetvlmax_e8m8 at the start of each loop iteration, as otherwise an earlier iteration could cut off the vl (e.g. page unloaded by the OS), and thus the loop continues with the truncated vl.

The normal load should just segfault if any loaded byte is outside of readable memory, same as with a scalar load which is similarly partly outside.


> You'd probably want to have a new __riscv_vsetvlmax_e8m8 at the start of each loop iteration, as otherwise an earlier iteration could cut off the vl (e.g. page unloaded by the OS), and thus the loop continues with the truncated vl.

Oh, yeah, that was a big oversight, unfortunately, this didn't undo the performance regression.

> The normal load should just segfault if any loaded byte is outside of readable memory, same as with a scalar load which is similarly partly outside.

I don't quite understand how that plays out.

The reference memcpy implementation uses `vle8.v` and the reference strlen implementation uses `vle8ff.v`.

I think I understand how it works in strlen, but why does memcpy work without the ff? Does it just skip the instruction, or repeat it? Because in either case, shouldn't `vle8.v` work with strlen as well? There must be another option, but I can't think of any.

Also, does this mean I can get the original performance back, if I make sure to page align my pointers and use `vle8.v`?


The memcpy doesn't use a vlmax, it uses a hand-chosen vl. The load won't fault on any elements not loaded (here, elements past the vl), so the memcpy is fine as it only loads items it'll definitely need, whereas your original code can read elements past the null byte.

And yeah, aligning the pointer manually would work (though then it wouldn't be portable code, as the spec does allow for rvv implementations with VLEN of up to 65536 (8KB per register; 64KB with LMUL=8), which'll be larger than the regular 4KB pages).


Ah, this makes a lot more sense now. I thought the "fault" was about the kernel interrupting when a new page needs to be loaded into physical memory, which would also happen for memcpy.


I think it's a particular quirk of the x86 architecture. Branching is expensive in comparison because not branching is super cheap. https://wordsandbuttons.online/challenge_your_performance_in...

However, on other processors, this might not be the case. https://wordsandbuttons.online/using_logical_operators_for_l...

The good question is: what do we need C for in general? Of course, we can hand-tailor our code to run best on one particular piece of hardware. And we don't need C for that; it would be the wrong tool. We need assembly (and a decent macro system for some sugar).

But the original goal of C was to make translating system-level code from one platform to another easier. And we're expected to lose efficiency on this operation. It's like instead of writing a poem in Hindi and translating it in Urdu, we write one in Esperanto and then translate to whatever language we want automatically. You don't get two brilliant poems, you only get two poor translations, but you get them fast. That's what C is for.


Rearranging branches (and perhaps blocks too?) will definitely be done if you are building using FDO, because without FDO (or PGO) the compiler has no idea how likely each branch is to be taken. Cmov can also be enabled by FDO in some cases.

However, whether or not using cmov is effective compared to a regular test/jump is highly dependent on how predictable the branch is, with cmov typically performing better when the branch is very unpredictable. Since they got a 6x speedup with cmov, I assume that their test input (which isn't described in the post, and is also not in their GitHub repo) consists of random strings made up almost entirely of 's' and 'p' characters. There's nothing wrong with this, but it does make the post seem a little misleading to me, as their clever speedup is mostly about exploiting an unmentioned property of the data that is highly specific to their benchmark.


> I assume that their test input (which isn't described in the post, and is also not in their GitHub repo) consists of random strings consisting almost entirely of s and p characters.

The test code is here: https://github.com/414owen/blog-code/blob/master/02-the-same... It randomly selects between 's' and 'p'. The characters can't be anything other than 's', 'p', or the terminating null. Knowing that particular fact about our input gives us this ...clever... optimization:

    int run_switches(const char* s) {
        int result = 0;
        while (*s)
            result += (1 | *s++) - 'r';
        return result;
    }
which compiles to:

    run_switches:
            movzx   eax, BYTE PTR [rdi]
            xor     edx, edx
            test    al, al
            je      .L1
    .L3:
            or      eax, 1
            inc     rdi
            movsx   eax, al
            lea     edx, [rdx-114+rax]
            movzx   eax, BYTE PTR [rdi]
            test    al, al
            jne     .L3
    .L1:
            mov     eax, edx
            ret
This is too clever by half, of course, but it perfectly illustrates your point about exploiting properties of the data.


You do not even need to subtract 'r'.

    int run_switches(const char* s) {
        int s_count = 0;
        const char *begin = s;
        while(*s) {
            s_count += (1 & *s++);
        }
        int count = s-begin;
        return s_count - (count - s_count);  // == s_count - p_count, matching the original
    }
which compiles to:

    .L49:
        and     edx, 1
        add     rax, 1
        add     ecx, edx
        movzx   edx, BYTE PTR [rax]
        test    dl, dl
        jne     .L49
edit: other variant

    int run_switches2(const char* s) {
        const char *begin = s;
        int sum = 0;
        while(*s) {
            sum += *s++;
        }
        int count = s-begin;
        int s_count = sum - ('s'*count)/('p'-'s');
        int p_count = count - s_count;
        return p_count - s_count;
   }
which compiles to:

   run_switches2(char const*):
        movsx   eax, BYTE PTR [rdi]
        test    al, al
        je      .L56
        mov     rdx, rdi
        xor     ecx, ecx
   .L55:
        add     rdx, 1
        add     ecx, eax
        movsx   eax, BYTE PTR [rdx]
        test    al, al
        jne     .L55
        sub     rdx, rdi
        imul    esi, edx, 115
        movsx   rax, esi
        sar     esi, 31
        imul    rax, rax, 1431655766
        shr     rax, 32
        sub     eax, esi
        add     ecx, eax
        sub     edx, ecx
        mov     eax, edx
        sub     eax, ecx
        ret
   .L56:
        xor     eax, eax
        ret
None of these will beat the clever blocked SIMD someone showed elsethread.


> because without FDO (or PGO) the compiler has no idea how likely each branch is to be taken

So, the maximum number of times you can hit '\0' in the string is once, because then the function returns, but you can hit the other characters many times, which seems to be information a compiler has access to without PGO.

PGO does help, of course, and on my machine gives me 2.80s, which is better than the code at the end of the `Rearranging blocks` section :)

> I assume that their test input (which isn't described in the post, and is also not in their GitHub repo)

It's described under `Benchmarking setup`, and is in the repository here: https://github.com/414owen/blog-code/blob/master/01-six-time...

Side note: There's a part two to this post (linked at the bottom) where I make the C code as fast as I possibly can, and it beats all the assembly in this post.

I never said writing assembly is (necessarily) a good idea, I just find optimizing it, and deciphering compiler output, an interesting challenge, and a good learning opportunity.


Imagine a scenario where most of the strings being processed contain a single null character, with no other characters. In that case checking for the null character first would be optimal.

Does the compiler know that this isn't true? No, it doesn't. The author of the article is making an assumption about the contents of the data that might seem reasonable but isn't necessarily true.


But because in the single-null case the loop body is executed only once, the gains of an arrangement that prefers nulls are pretty slim compared to long-string cases where the loop body is executed many times. For example, if your dataset contains 99 single-null strings and one 100-character string, it might still be optimal on aggregate to use the long-string-optimizing arrangement.

Of course there are still some cases where non-empty strings are extremely rare, and as such optimizing for those makes sense.


I think I managed to improve on both this post, and its sequel, at the cost of specializing the function for the case of a string made only of 's' and 'p'.

The benchmark only tests strings made of 's' and 'p', so I think it is fair.

The idea is as follows. We want to increase `res` by one when the next character is `s`. Naively, we might try something like this:

    res += (c - 'r');  // is `res += 1` when c == 's' 
This doesn't work, as `'p' - 'r' == -2`, and we'd need it to be -1.

But `'p' - 'r'`, when viewed as an unsigned integer, underflows, setting the carry flag. It turns out x64 has an instruction (adc) that adds two registers _plus_ the carry flag.

Therefore we can replace two `cmp, cmov` with one `sub, adc`:

    run_switches:
            xor    eax, eax            # res = 0
    loop:
            movsx  ecx, byte ptr [rdi]
            test   ecx, ecx
            je     ret
            inc    rdi
            sub    ecx, 'r'
            adc    eax, ecx     # Magic happens here
            jmp    loop
    ret:
            ret
            
Benchmarks are as follows (`bench-x64-8` is the asm above):

    Summary
      '01-six-times-faster-than-c/bench-x64-8 1000 1' ran
        1.08 ± 0.00 times faster than '02-the-same-speed-as-c/bench-c-4-clang 1000 1'
        1.66 ± 0.00 times faster than '01-six-times-faster-than-c/bench-x64-7 1000 1'
Of course, one could improve things further using SWAR/SIMD...


Very interesting approach. I should probably have specified that the somewhat naive assembly in `02-the-same-speed-as-c/loop-5.x64.s` is the fastest version I have.

On my machine I'm getting 0.244s for `loop-5.x64.s` and 0.422s for your implementation above.

I'm not sure why exactly we're seeing this discrepancy, and for what it's worth your implementation looks faster to me. I guess this is why you need to always benchmark on the hardware you're going to be running the code on...


I re-ran the benchmark vs loop-5 and loop-7 from the second post. Runtime is basically the same on my machine.

I would have expected yours to be faster given that it needs to execute fewer instructions per loop iteration. Though maybe the CPU can run `adc` on more ports compared to a load from memory?

    Summary
      '01-six-times-faster-than-c/bench-x64-8 1000 1' ran
        1.00 ± 0.00 times faster than '02-the-same-speed-as-c/bench-x64-7 1000 1'
        1.66 ± 0.00 times faster than '01-six-times-faster-than-c/bench-x64-7 1000 1'

    Summary
      '01-six-times-faster-than-c/bench-x64-8 1000 1' ran
        1.01 ± 0.00 times faster than '02-the-same-speed-as-c/bench-x64-5 1000 1'
        1.66 ± 0.00 times faster than '01-six-times-faster-than-c/bench-x64-7 1000 1'


Even simpler: just sum all elements of the array. Then at the end subtract 'p'*len from the sum, then divide by ('s'-'p') to get the s count. The 'p' count is len minus the 's' count.

The initial sum is easily vectorized as well.

If I've not made any mistakes it should work. Only issue is possible overflow on the running sum.

Can't be bothered to benchmark it though:).

edit: missed the decrement when you see 's'. So the final result is p_count - s_count.
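
A quick sketch of that idea in C, assuming the length is known and the input really does contain only 's' and 'p' (the function name and the wide accumulator are my own choices, not from the thread):

    #include <stddef.h>

    int run_switches_sum(const char *input, size_t len) {
        long long sum = 0;  // wide accumulator, sidestepping the overflow concern
        for (size_t i = 0; i < len; ++i)
            sum += (unsigned char)input[i];
        long long s_count = (sum - (long long)'p' * (long long)len) / ('s' - 'p');
        long long p_count = (long long)len - s_count;
        return (int)(s_count - p_count);  // 's' adds, 'p' subtracts, like the original loop
    }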


That's likely the fastest way to do that without vectorization. But you'd need to upcast 's' to a uint64 (or at least a uint32). That means that vectorization would operate on 32/64-bit lanes.

With vectorization, I think the way to go is to have two nested loops: an outer one that advances by 32 * 255 elements at a time, and an inner one that loads 32 bytes, compares each character to 's', and accumulates in 8-bit lanes.

Then in the outer loop you do a horizontal sum of the 8-bit accumulators.
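
A minimal sketch of that nested-loop scheme with AVX2 intrinsics, counting both 's' and 'p' (all names here are mine, and the tail is handled with a plain scalar loop rather than anything clever):

    #include <immintrin.h>
    #include <stddef.h>

    int count_s_minus_p(const char *input, size_t len) {
        const __m256i vs = _mm256_set1_epi8('s');
        const __m256i vp = _mm256_set1_epi8('p');
        const __m256i zero = _mm256_setzero_si256();
        long long res = 0;
        size_t i = 0;
        while (i + 32 <= len) {
            // Inner loop: at most 255 iterations, so the 8-bit lanes can't overflow.
            __m256i acc_s = _mm256_setzero_si256();
            __m256i acc_p = _mm256_setzero_si256();
            size_t block_end = i + 32 * 255;
            if (block_end > len) block_end = len - (len % 32);
            for (; i < block_end; i += 32) {
                __m256i v = _mm256_loadu_si256((const __m256i *)(input + i));
                acc_s = _mm256_sub_epi8(acc_s, _mm256_cmpeq_epi8(v, vs)); // 0xFF == -1 per match
                acc_p = _mm256_sub_epi8(acc_p, _mm256_cmpeq_epi8(v, vp));
            }
            // Outer loop: horizontal sum of the byte counters via sum-of-absolute-differences.
            __m256i sums = _mm256_sub_epi64(_mm256_sad_epu8(acc_s, zero),
                                            _mm256_sad_epu8(acc_p, zero));
            long long tmp[4];
            _mm256_storeu_si256((__m256i *)tmp, sums);
            res += tmp[0] + tmp[1] + tmp[2] + tmp[3];
        }
        for (; i < len; ++i)  // scalar tail
            res += (input[i] == 's') - (input[i] == 'p');
        return (int)res;
    }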


My SWAR version almost does what your vectorization algorithm description does - just that the SWAR-code looks rather gnarly because the compiler isn't auto-generating the vector code for you, it's hand-coded in C by me and I'm limited to 64 bits at a time.


Indeed, the blocked vectorization with 8-bit accumulators shown elsethread is going to be faster, and there reducing the sum to 1 bit per iteration is worth it.


I took the 64-bit SWAR ('S'IMD-'W'ithin-'A'-'R'egister) road and passed in the string length - the calling code has the length "right there"!!!

Using the original run_switches function, app took 3.554s (average of 10 runs).

With the SWAR-version with the string length passed in, app took 0.117s (average of 10 runs).

That's an overall 27.6x speedup.


If I unroll the main while loop to handle 4x as much each time through the loop in the SWAR-version, the runtime drops to 0.0562s (average 10 runs).

That's an overall 57.5x speedup.


If I convert the unrolled-64-bit SWAR function to use 32-bit chunks instead, average runtime almost doubles, approx. 0.1s now.

Need sleep now.


If I unroll the 64-bit SWAR version by 8x instead of 4x, the runtime is reduced by another 10% over the 4x-unrolled SWAR version. Diminishing returns...


How much faster is this:

    int run_switches(const char *buf) {
       size_t len = strlen(buf);
       int res = 0;
       for (size_t i = 0; i < len; ++i) {
         res += (buf[i] == 's') - (buf[i] == 'p');
       }
       return res;
    }
strlen() should be implemented in a pretty fast way, and after the buffer size is known, the compiler can autovectorize the inner loop, which does happen in practice: https://gcc.godbolt.org/z/qYfadPYoq


A while back, I wrote a UTF-8 decoder in Common Lisp, targeting SBCL (it already has one built in, this was an exercise). Pretty much all of the optimization win (after the obvious low-hanging fruit) was structuring the code so that the compiler would generate cmov* instructions rather than branches.


What are some examples of the code changes that you made? And did you just do repeated disassemblies of the functions to see that it was using the correct instructions, or did you do some benchmarking to show your changes were actual improvements?


Gosh, I'd have to see if I can dig it up; this was a few years ago.

I did all of the above, plus profiling (sb-sprof combined with disassemble will show assembly level profiling).


Branches are prone to be faster than conditional moves if they are correctly predicted, because they do not increase the critical path length. And utf-8 decoders are commonly run on all-ascii input. What were you benchmarking on?


I ran separate benchmarks on all-ASCII, BMP-only, and ascii with non-BMP. ASCII was not slower on the low-branch version.


First, before optimizing you should consider correctness and security. input should be const and the return value should be ssize_t (so you don't have numeric overflow on 64-bit).

Second, consider this replacement function:

  ssize_t test(const char *input) {
    ssize_t res = 0;
    size_t l = strlen(input);
    size_t i;
    for (i=0; i<l; ++i) {
      res += (input[i] == 's') - (input[i] == 'p');
    }
    return res;
  }
The timings are (using gcc -O3 -march=native): your function 640 cycles, mine 128 cycles. How can that be? I'm reading the memory twice! I have one call to strlen in there, and memory is slow. Shouldn't this be much slower?

No. strlen is a hack that uses vector instructions even though it may technically read beyond the string length. It makes sure not to cross page boundaries so it will not cause adverse reactions, but valgrind needs a suppression exception to not complain about it.

If you know the length beforehand, the compiler can vectorize and unroll the loop, which it happens to do here. To great effect, if I may say so.

The art of writing fast code is usually to get out of the way of the compiler, which will do a perfectly fine job if you let it.

If you really wanted to, you could get rid of the strlen by hacking your logic into what strlen does. That would make the C code much less readable and not actually help that much. My test string is "abcdefghijklmnopqrstuvxyz", so it's all in the l1 cache.


There's an error in the pseudocode.

      cmp     ecx, 's'            #   if (c == 's')
      jne     loop                #     continue
      add     eax, 1              #   res++
      jmp     loop                #   continue
should be

      cmp     ecx, 's'            #   if (c != 's')
      jne     loop                #     continue
      add     eax, 1              #   res++
      jmp     loop                #   continue


I believe the first `jne` should be `je`, right ?


No, the assembler is correct. Jump (early) back to the beginning of the loop if not equal to s; otherwise, continue executing the next instruction (add eax, 1) and then unconditionally jump back to the beginning of the loop.


well then there's a magical bit somewhere, since both assembly listings are identical


Yes, the assembly listings are identical. I was very clear that the error was in the pseudocode. That is why I said "There's an error in the pseudocode."

There's nothing "magical" about paying attention before condescending to someone.


Oh my bad, I was not condescending, I simply misread your comment and was then very confused after your first answer.

I know I'm the less knowledgeable here, and even then there's nothing to gain in criticizing someone like this online.

Sorry again :)


A clickbait title for an in-depth look at hand-optimizing a very simple loop.


I'm not a compiler expert, but if it's a "very simple loop", why is it still too complex for the compiler to turn into good machine code? Did they use a bad compiler on purpose? Or are computers just not yet fast enough to do a good job with very simple loops in practical compilers?


> are computers just not yet fast enough to do a good job with very simple loops in practical compilers?

The short answer to this question is 'yes', but there are some extenuating factors:

- Although we could do interesting things with unlimited computational resources, the current crop of c compilers is simply not very good, compared with what's possible today.

- Performance is always workload-dependent; the compiler has been somewhat shafted here because it doesn't know what sorts of inputs the function usually receives. The compiler output is better than the 'improved' code for some inputs. (It's possible you could get a better result from the existing compilers and c code just by using profile-guided optimisation.)

- The difference is prone to be more pronounced in simple loops than large ones. This is a contrived use-case. There is not a factor of 6 of performance hiding in optimised c code which could be recovered by doing the sorts of optimisations done by the op. Probably something more like 10-20%.


> the current crop of c compilers is simply not very good, compared with what's possible today.

That's quite dismissive. What exactly "is possible today" and why aren't these top compilers using them?


One prominent example: the use of intermediate representations based on basic blocks introduces redundancies that increase the complexity of the compiler, requiring attendant redundancies in order to optimise the same. You can see the redundancy manifest here https://godbolt.org/z/8o3oe39hh as different code generation from f and g. (They may change the result of this particular test in the future, but it seems unlikely that the disease—rather than the symptom—can be treated without a complete rearchitecture.)

E-graphs ameliorate phase ordering issues and allow for exploring the space of non-monotonic rewrites; recent research makes them computationally viable.

Put simply: it's legacy. Gcc and llvm are millions of lines of code, and they assume a particular architecture. Changing that is not easy.

Another issue, which I did not mention (but which is pertinent) is that c is a poor language for compilation. (Fran allen famously said 'c has destroyed our ability to advance the state of the art'.) In some respects, the optimisations performed automatically by modern high-performance cpus are more sophisticated than those done by c compilers, howbeit with less reach; the only reason they are able to do this is that they have direct control of the execution and hence have a greater ability to abstract over the side effects which are rampant in most c code.


E-graphs are interesting, but one still has to deal with combinatorial explosions. Are you alluding to some powerful search heuristic?

Your example touches on the problems of an inflexible ABI, namely caller-saved registers and the unknowability of the side effects of external functions. Very weird that it can't reorder `r = x+y` despite it having no "observable" side effect until `return r`, since that return post-dominates the assignment, and there's no real relation between (the return, assignment) and (eff()).


I looked at it closer. In C, it is a side effect to assign to a variable. For an extern function not annotated __attribute__((pure)) the compiler has to assume the function call generates side effects. This prevents it from reordering the assignment and function call. Since x86-64 ABI has caller saved registers, in the case where it calls eff() first, it has to save x and y, and after the call, restore them.


The work on e-graphs I refer to is <https://egraphs-good.github.io/>—amortised rebuilds.

My example has nothing whatsoever to do with abi, and everything to do with ir. f and g are exactly semantically equivalent, and this equivalence is trivial to show; that the compilers generate different code for each demonstrates redundancies in their ir.

> it is a side effect to assign to a variable

But that variable is not aliased here.


The problem is the author of the article is making some huge implicit assumptions that the compiler can't possibly know about.

Consider this statement: "However, we know some things about this loop. We know that the only time we break out of it is when we hit the null terminator (’\0’). The code clang generates checks for the null terminator first, but this makes no sense."

This statement contains huge assumptions about the lengths of the input strings and the frequency of the letters 's' and 'p' in the input. And then has the chutzpah to call the compiler's failure to read his mind about this as "making no sense."

Good first effort by the author, but has not sufficiently thought through the problem.


That's the thing: a C compiler has all the information it needs to know that the maximum number of times a '\0' can be processed in the loop is once (because the function returns), but there's no upper bound on the number of times other characters are seen in the loop.

I might be missing a reason that this information is opaque to the compiler though, in which case this section of the article is indeed lacking, but I'm happy to learn :)


It's not just that the C compiler lacks the information... but the reader of this article also lacks this information.

String length tells you the frequency with which nul terminators will be found. Without knowing frequency of occurrence of the nul terminator, 's', and 'p' then you cannot know which one occurs most often.

Consider two benchmark cases: (1) every string tested contains exactly one character (2) every string tested is 1MB long and is composed entirely of 's' and 'p'.

The author's first "optimization" assumes nul is rare. It would make benchmark (1) worse, and (2) better.

The article is a good example of "specification is hard, code is easy." He insufficiently specified the problem to be solved, and his test cases contained information not in the code and not in the text of the article.


I guess the question is whether the compiler should optimize a function containing a loop for a single null terminator, or for more data.

I would suggest the latter is what you want most of the time.

There's also the option of running a quick check for the null terminator before the loop, and then optimizing the loop for the other options.

But in any case, I think the demonstration of the technique of rearranging branches is interesting, and I needed a program to apply it to.


It was still worth reading. Every critic needs something to read and nitpick ;-)

Keep at it! Just as every program is a chance to improve programming, every article written is a chance to improve writing. It was well written.


It's not the upper bound that matters but the frequency. How frequently should the compiler assume an 's' appears in the dataset, or any other character?

We know that E[# of '\0' in a string] == 1.

But what is E[# of 's' in a string]? Is it greater or less than E[# of '\0' in a string], and how should the compiler know this?

You haven't given the compiler any reason to assume that 's' or 'p' will appear more often than '\0'.


Ok, the author should have written "this makes no sense... in this particular case"


This is the right answer:

https://news.ycombinator.com/item?id=36622584

Optimal assembly (forgoing SIMD, at least) for this loop on modern x86 is highly dependent on the entropy of the runtime data.


OK so they were abusing the benchmark, like the compiler's output would be faster on less contrived test data? Do I have to search what are fdo or pgo or cmov to understand the answer?


The compiler will generate different code if it knew the rates at which branches were taken.

If a branch is almost always taken or almost never taken, a compiler will want to emit a jump. The frontend will be able to predict the jump with high probability, and a successfully-predicted jump is "free." The cost of a misprediction is paid for by the near-zero cost of the many successful predictions.

If a branch is hard to predict (and taking versus not taking it would load a different value into a register/memory), the compiler wants to emit a conditional move (cmov). A conditional move is slightly "more expensive" in the backend because the CPU has to wait for the condition to resolve before it can execute instructions dependent on the output. However, it is much cheaper than many mispredicted branches (mispredicts around half of the time).

FDO (feedback-directed optimization) or PGO (profile-guided optimization) means "run the code on some sample input and profile how often branches are taken/not taken." It gives the compiler more information to generate better code.

The problem with the blog post is that the compiler has no idea what the function's input data will look like. It (arbitrarily) chose to generate branches instead of cmovs. However, if the benchmark input is better suited for cmovs, then the benchmark will (wrongly) show that the compiler generates "slow" assembly. But that's not a fair test, because with PGO/FDO the compiler would generate equivalent assembly to the "fast" assembly (actually, probably faster). Finally, the human (OP) is using their knowledge of the benchmark data "unfairly" to write better assembly than the compiler.

The takeaway is: most of the time, one can't optimize code/assembly in a vacuum. You also need to know what the input data and access patterns look like. FDO/PGO gives the compiler more data to understand what the input data/access patterns look like.


Thank you, this is an amazingly comprehensive answer! Now I wonder what the workflow for using these compiler features would be. Like, if I am a normal or bad C programmer, and I write my program, use valgrind to check that it doesn't have obvious problems, and compile it with -march=native or whatever, can I then add some step to the workflow to somehow re-compile it in a way that uses the branching or access patterns of some examples that I let it process for that purpose?


Yes, Clang supports FDO, but it might be hard to set up (I've never set up FDO myself). You could check out https://github.com/google/autofdo and https://clang.llvm.org/docs/UsersManual.html#profile-guided-....

(People within Google say "FDO", basically everyone else says "PGO".)


I wonder what superoptimizers like stoke and souper would do with this code.


Don't get discouraged by the comments and that others made faster variants. I liked both your articles very much and learned a few new things.


It's a cardinal rule that any time someone utters "XYZ is n times faster than C", someone comes along and shows C is actually 2x faster than XYZ.


I had an old compilers professor say something like this once. “If you think you can do something better than the C compiler, I promise you you can’t.”


That is just dumb and overly reductive. There are lots of properties of programs that cannot be expressed to the compiler with portable ISO C but could be exploited for optimization. Of course, reducing portability progressively improves the situation, e.g. by using GCC extensions or machine-specific intrinsics. But even then it's ridiculous to claim that C compilers could, in the general case, generate truly optimal code.


To a point. A modern C compiler generates mind-bogglingly fast assembler. However, some languages make it much easier to write sophisticated algorithms.

For instance, suppose you're writing a program to find the nth Fibonacci number for whatever reason. In Python, the naive version might look like:

  def fib(n):
      if n <= 1:
          return n
      return fib(n - 1) + fib(n - 2)
On my machine, that takes about 12 seconds to find the 40th number. Altering that slightly like:

  from functools import cache
  @cache
  def fib(n): ...
makes the whole program take about 30 milliseconds total. The 400th takes about 32ms and emits an answer that won't fit in a 256-bit int.

Of course you can do the exact same kind of caching in C! I mean, the main Python interpreter's written in C, so by extension any algorithm you can express in Python you can also express in C. It'd probably be a lot faster, too!

But in practice, if I'm writing that in Python, I can use the obvious algorithm, spend 10 seconds slapping a caching decorator on it, verify that the end result is ridiculously fast and efficient, then move on to other problems.

Any reasonable C compiler will emit assembler that's vastly better than anything I could come up with. Conversely, I personally can write far better algorithms in Python than I could in C, because it's easier for me to express cleverness in that language. Those algorithmic improvements tend to have a far better speed payoff than I'd personally get from a more efficient implementation of a crappy method.


How many brains has the Fibonacci example broken...

You'd unroll it to a loop in both C and Python. Fibonacci doesn't need a cache. It just needs the previous two values.


Yeah, I almost said “suppose you were writing a Fibonacci program because who did you annoy to make this your life now...”

Like, obviously you’re not going to be writing `fib(n)` for real. I still claim that other languages — not just Python, either — make it easier to express cleverer algorithms than C does. You can’t write anything in Rust you can’t write in C, but it’ll probably be easier to say it more efficiently, and more correctly, in Rust. And much of the time, using a better design is going to blow compiler improvements out of the water.

(The professor was right if you limit the scope of the statement to “programs written in C or assembler”, of course. Unless you’re a freaking genius, a compiler’s going to write better object code.)


If for some reason you really wanted to compute Fib(n) for ridiculously large numbers of n, you would probably use that [Fib(n), Fib(n-1)] = A [Fib(n-1), Fib(n-2)] for the transition matrix A = [[1, 1], [1, 0]] and thus [Fib(n+1), Fib(n)] = A^n [Fib(1), Fib(0)] and then use exponentiation by squaring to compute A^n directly and thus Fib(n) in log_2(n) steps.
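
For illustration, here is a rough C sketch of that matrix-power approach (the names are mine; with uint64_t it overflows past fib(93), so truly large n would need a bignum type):

    #include <stdint.h>

    static void mat_mul(uint64_t a[2][2], uint64_t b[2][2]) {
        uint64_t r[2][2] = {
            { a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1] },
            { a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1] },
        };
        a[0][0] = r[0][0]; a[0][1] = r[0][1];
        a[1][0] = r[1][0]; a[1][1] = r[1][1];
    }

    uint64_t fib(uint64_t n) {
        uint64_t result[2][2] = { {1, 0}, {0, 1} };  // identity
        uint64_t a[2][2]      = { {1, 1}, {1, 0} };  // the transition matrix A
        while (n) {                                  // exponentiation by squaring
            if (n & 1) mat_mul(result, a);
            mat_mul(a, a);
            n >>= 1;
        }
        return result[0][1];                         // (A^n)[0][1] == Fib(n)
    }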


Adding @cache meaningfully changes the algorithmic complexity from roughly O(1.6^N) (it's exponential either way) to O(N).


Yep. That’s what I mean about cleverer algorithms. And while you could certainly do the exact same thing in C, it wouldn’t be a one-line change you could casually add and move on from.
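
For comparison, a rough sketch of what the memoized version might look like in C (names are mine); unlike Python's arbitrary-precision integers, uint64_t overflows past fib(93), so even this toy version needs a bound:

    #include <stdint.h>

    uint64_t fib(uint64_t n) {
        static uint64_t memo[94];   // fib(93) is the largest Fibonacci number that fits in 64 bits
        if (n <= 1) return n;
        if (n > 93) return 0;       // would overflow uint64_t
        if (memo[n] == 0)
            memo[n] = fib(n - 1) + fib(n - 2);
        return memo[n];
    }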


Someone has to teach the compiler how to be clever.


You can also use math to avoid most of the jumps:

    int run_switches(char *input) {
      int res = 0;
      while (true) {
        char c = *input++;
        if (c == '\0') return res;
        // Here's the trick:
        res += (c == 's') - (c == 'p');
      }
    }
This gives a 3.7x speedup compared to loop-1.c. The lower line count is also nice.


Nice. The way I read the cmov version, it's more or less this, except the trick line goes

    res += (c == 's') ? 1 : (c == 'p') ? -1 : 0
I haven't done C in decades so I don't trust myself to performance test this but I'm curious how it compares. Pretty disappointed that TFA didn't go back and try that in C.


So I actually did try that, but IIRC it didn't produce a CMOV with either gcc or clang. I didn't put it in the repo because it wasn't an improvement (on my machine) and I decided not to write about it.

Maybe you get different results though?


Compare also https://codegolf.stackexchange.com/a/236630/32575 "High throughput Fizz Buzz" where someone uses assembly to generate Fizz Buzz at around 54-56GiB/s.


Fantastic post, I appreciated that the ASM was displayed in tabs as both "standard" and "visual-arrows"-annotated.

Kept me reading into the follow-up article.

Also, I love the UI of this blog.


Kind words, much appreciated!


Any guide on how a person who uses Python or JavaScript can learn such things? I mean knowing which assembly code would be better, which algorithm makes better use of the processor, etc.? :)

Also, how is such optimization carried out in a large scale software? Like, do you tweak the generated assembly code manually? (Sorry I'm a very very very beginner to low-level code)


You do this by first learning C (or similar languages), then compilers, and maybe also operating systems. What you're seeing in this blog is the equivalent of at least one or two years of university-level education, so it's not like there is a single book or tutorial you could use to get up to speed, especially if you have no previous experience in that area. And building a better compiler optimisation in general is a PhD-thesis-level task. But it's also not necessary if you want to design user applications on today's hardware.


You could try this (in-progress) course: https://www.computerenhance.com/p/table-of-contents


This is pretty much `assembly language the game`: https://tomorrowcorporation.com/humanresourcemachine

It's not a useful architecture, but it teaches the thought process really well, and you end up discovering a lot of optimizations naturally.

For this article, I'm measuring every step to see what the performance implications of the changes are, which, along with some educated guesses and some googling/reading other articles, was enough for me to figure out what was going on.

In part two (https://owen.cafe/posts/the-same-speed-as-c/) especially, I didn't know what was going on with the benchmarks for a long time. Eventually I got lucky and made a change, which led to a hypothesis, which led to more tests, which led to a conclusion.


You learn by doing. Compiler Explorer [0] is fantastic for this sort of thing. Typically you would do this sort of optimisation after profiling and then on a per-function level.

[0] godbolt.org


I think it's straightforward to optimize this to the point where it's maybe about 10x faster than the "optimized" version. The answer is of course SIMD vectorization.


I experimented with different optimizations and ended with 128x speedup. The improvement mainly comes from manual SIMD intrinsics, but you can go a long way just by making the code more auto-vectorization friendly as some other comments have mentioned. See:

https://ipthomas.com/blog/2023/07/n-times-faster-than-c-wher...


Back-of-the-envelope approach that should eliminate most branching:

  int table[256] = {0};                                                           
                                                                                
  void init() {                                                                   
    table['s'] = 1;                                                             
    table['p'] = -1;                                                            
  }                                                                               
                                                                                
  int run_switches(char *input, int size) {                                                 
    int res = 0;                                                                
    while (size-- >= 0) res += table[input[size]];
    return res;                                                                 
  }


The array lookup approach is taken in part two:

https://owen.cafe/posts/the-same-speed-as-c/

But taking the length of the string as a parameter is not, because that changes the problem statement (making the solution vectorizable).

Also note that you'll try to read element -1 of the input. You probably want to change the `>=` to a `>`


Would it be possible to write a code profiler and compiler that work together to optimize code based on real-world data? The profiler would output data that would feed back into the compiler, telling it which branches were selected most often, which would recompile optimizing for the profile. Would this even work? Has it already been done?



Cool, thanks.


I see other people have done minor rewrites, but the post does mention reordering branches, so the obvious question is whether there was any attempt to use PGO, which is a natural first step in optimization.


A very instructional post. I wish more people had such a level of mastery of GPU assembly and its effects, and would post such treatments on outsmarting NVIDIA's (or AMD's) optimizers.


Having full-blown predicate support is very nice, but it interferes with compact instruction encoding.

A bloated ISA like x86 might actually be able to accommodate predicate support, but who will try such a radical change?


AVX512?

Also, the original ARM 32-bit instruction set had extensive predication.


This is such a wonderful post! Heavenly.


Really interesting. The recent HN article on branchless binary search also covered cmov: https://news.ycombinator.com/item?id=35737862


Was the C compiled with optimisation enabled?


Yes, I explained in the `Benchmarking setup` section that I used `-march=native`, but I guess I forgot to mention I used -O3.


How fast is forth compared to C these days?


Close to nobody works on Forth compilers nowadays, and the number of Forth compilers that are optimising or even fast is very small.

People say that forth isn't very optimisable for our register machines, but I reckon that you can get pretty good results with some clever stack analysis. It's actually possible to determine arity statically if you don't have multiple-arity words, which are very rare. That allows you to pass arguments by register.

Anyway, I'm not even close to an expert so don't take what I said as facts.


naive q: could one just count one of the letters and subtract it from the total number of letters?


You’d need to count both as other characters are ignored.


naïve q2: does that mean most comparisons are no match 3 times? could one do a bitwise operation and fuzzy test for all 3 in one go?


Is it still ASCII? If so p is 01110000 and s is 01110011(?) but I don't know what \0 is, is it 00000000? Is there anything else know about the data? If the rest of the characters are all numbers, those all start with 0011 but that doesn't seem of much use. 4-9 have either the 5th or 6th bit set.

Only if the AND with 00001100 yields zero are the other 3 tests needed.

Ofc I have no idea what opcodes the language provides.


Here is a perfectly useless idea: AND with 00000010 would give 2 for 's' and 0 for 'p'. (Subtract 1 and you have +1 for 's' and -1 for 'p', as the article describes.) Then you have a number that one could just add instead of jumping to +1 or -1.
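
For what it's worth, a tiny C sketch of that trick (only valid because the benchmark input contains nothing but 's' and 'p'; the function name is mine):

    int run_switches_bit(const char *input) {
        int res = 0;
        char c;
        while ((c = *input++) != '\0')
            res += (c & 2) - 1;  // 's' & 2 == 2 -> +1, 'p' & 2 == 0 -> -1
        return res;
    }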


I made a variant that is (on my Apple M1 machine) 20x faster than the naive C version in the blog by branchlessly processing the string word-by-word:

    int run_switches(const char* input) {
        int res = 0;

        // Align to word boundary.
        while ((uintptr_t) input % sizeof(size_t)) {
            char c = *input++;
            res += c == 's';
            res -= c == 'p';
            if (c == 0) return res;
        }

        // Process word-by-word.
        const size_t ONES = ((size_t) -1) / 255;  // 0x...01010101
        const size_t HIGH_BITS = ONES << 7;       // 0x...80808080
        const size_t SMASK = ONES * (size_t) 's'; // 0x...73737373
        const size_t PMASK = ONES * (size_t) 'p'; // 0x...70707070
        size_t s_accum = 0;
        size_t p_accum = 0;
        int iters = 0;
        while (1) {
            // Load word and check for zero byte.
            // (w - ONES) & ~w has the top bit set in each byte where that byte is zero.
            size_t w;
            memcpy(&w, input, sizeof(size_t));
            if ((w - ONES) & ~w & HIGH_BITS) break;
            input += sizeof(size_t);

            // We reuse the same trick as before, but XORing with SMASK/PMASK first to get
            // exactly the high bits set where a byte is 's' or 'p'.
            size_t s_high_bits = ((w ^ SMASK) - ONES) & ~(w ^ SMASK) & HIGH_BITS;
            size_t p_high_bits = ((w ^ PMASK) - ONES) & ~(w ^ PMASK) & HIGH_BITS;

            // Shift down and accumulate.
            s_accum += s_high_bits >> 7;
            p_accum += p_high_bits >> 7;
            if (++iters >= 255 / sizeof(size_t)) {
                // To prevent overflow in our byte-wise accumulators we must flush
                // them every so often. We use a trick by noting that 2^8 = 1 (mod 255)
                // and thus a + 2^8 b + 2^16 c + ... = a + b + c  (mod 255).
                res += s_accum % 255;
                res -= p_accum % 255;
                iters = s_accum = p_accum = 0;
            }
        }
        res += s_accum % 255;
        res -= p_accum % 255;

        // Process tail.
        while (1) {
            char c = *input++;
            res += c == 's';
            res -= c == 'p';
            if (c == 0) break;
        }

        return res;
    }
Fun fact: the above is still 1.6x slower (on my machine) than the naive two-pass algorithm that gets autovectorized by clang:

    int run_switches(const char* input) {
        size_t len = strlen(input);
        int res = 0;
        for (size_t i = 0; i < len; ++i) {
            char c = input[i];
            res += c == 's';
            res -= c == 'p';
        }
        return res;
    }


I assume the M1's SIMD registers are wider/more numerous than just the couple of size_t registers used for the loading/masking/accumulating inner loop in your run_switches().

You can speed up the code by unrolling your inner loop a few times (try 4x or 8x). It does mean that your overflow-prevention limit is lowered (to a multiple of the unroll factor) and the flush runs a few more times, but the speedup offsets the increased bookkeeping.

A version I played with showed increased speed by saving the in-progress accumulation in an array and then doing the final accumulation after the main loop is done. But that may be due to the CPU arch/compiler I'm using.


If this code only runs on one compiler version/CPU arch, then ASSUMING the compiler will do the RIGHT THING and auto-vectorize the code is okay.

But if your code will be cross-platform/run on different OSes/CPU arch's, then a SWAR version may be more consistently performant - no need to guess if the compiler's optimization heuristics decided to go with the general purpose CPU registers or faster SIMD registers.

Downside is that the devs are exposed to the gnarly optimized code.


Almost the same as my SWAR version - which is what you're doing.

But aren't you reading off the end of the buffer in your memcpy(&w...)? Say with an empty input string whose start address is aligned to sizeof(size_t) bytes?

I just passed in the string length since the caller had that info, otherwise you'd scan the whole string again looking for the zero terminator.


> But aren't you reading off the end of the buffer in your memcpy(&w...)?

If we go by the absolute strictest interpretation of the C standard my above implementation is UB.

But in practice, if p is word-aligned and is at least valid for 1 byte, then you will not pagefault for reading a whole word. In fact, this is how GCC/musl implement strlen itself.

> Say with an empty input string whose start address is aligned to sizeof(size_t) bytes?

Then the start address is valid (it must contain the null byte), and aligned to a word boundary, in which case I assume it is ok to also read a whole word there.


If I read it correctly, your implementation might read beyond the end of the buffer, and if it crosses a page boundary into an unmapped page, it will segfault. That's one of the many evils of null terminated strings.


If we go by the absolute strictest interpretation of the C standard, yes, the above is UB.

But in practice no platform has a page boundary that falls in the middle of an aligned word, and I align to a word boundary before doing the word-by-word loop.


good point of course!



