Hacker News

I am unfamiliar with low-level workings like this, so forgive me if I'm talking out of my ass, but received wisdom from places like KVR Audio suggests that on modern machines, fixed-point math tends to be slower than just using floats, because the integer registers are already clogged with routine operations like loop increments. But some claim that anything that needs a phase accumulator as a wavetable index, like FM, still benefits from being kept in integers, due to the expense of converting a floating-point accumulator to an integer address. FM doesn't have much arithmetic relative to its reads.

I wonder how true this still is. I've been having fun simply assuming it is, and these teardown articles and the other HN discussions around them have been very helpful.




I am very familiar with such low level workings. On a modern, fast machine, the amount of computation required by FM synthesis is so small compared with the machine's throughput that it just doesn't matter.

Yes, on modern chips float math will generally be faster than fixed point. This is not so much because the integer units get clogged, as that there's a huge amount of chip area and optimization that goes into the float units (often SIMD, and a lot of FM synthesis can benefit from this, though feedback creates data dependencies). For example, multiply-and-add is usually one cycle in float, but would always be two separate instructions in integers.

My recollection is that older ARM chips have a special issue with latency of data dependencies originating from the float unit (NEON, which is optimized for SIMD vector operations) to the integer unit. I suspect this is no longer the case, or is less of an issue.


Historically, float to integer conversions in C on x86 were very slow due to the compiler reconfiguring the FPU rounding mode for each conversion.


Historically, yes. It has most definitely not been true since processors gained SSE3 (~2004) and the FISTTP instruction, and I think you can also use packed float-to-integer instructions like CVTTPS2PI as far back as SSE (1999).


Cheers for the explanation. I suppose "modern" should be asterisked, since I was parroting posts that date back to the mid-00s. Things have probably changed considerably from then.

I'm specifically targeting an Intel Atom, which is obviously powerful enough for the task, but may not fit all definitions of "modern" now?


> For example, multiply-and-add is usually one cycle in float, but would always be two separate instructions in integers.

Unless the multiply is by a constant 2, 4, or 8, in which case you can use `lea`.
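A sketch of that case (not guaranteed codegen, but typical on x86-64 compilers):

```c
/* base + i*8 fits the x86 addressing mode [base + index*8], so
 * compilers typically emit a single LEA here instead of a
 * shift-or-multiply followed by an add. */
long scale_index(long i, long base) { return base + i * 8; }
```

`lea` can also produce x3, x5, and x9 by using the same register as both base and index, e.g. `lea rax, [rdi + rdi*2]`.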


That got me curious about the ARM latency: I wonder whether it was tied to particular instructions that added more latency, or to transfers between the register files and memory-subsystem internals. Also, on the off-chance that you remember, did you write the intrinsics by hand or let the compiler auto-optimize?

It would be interesting to test this on an ARM Mac and see whether different dependency chains show significant latency penalties, or whether the reorder buffer hides them.


This is for Cortex A8, which was the chip in the Nexus One. I wrote the original version of sound synthesis directly in ARM assembler[1]. It was very highly optimized, I remember using a cycle counting app that flagged any dependency chain that would cause the processor to stall, and ultimately utilization was in the 90%+ range. Back in those days, processors were simple enough you could do this kind of optimization by hand. By the time of Cortex A15 (Nexus 10 etc), instruction issue was out-of-order and much harder to reason about.

The best current info I could find for the latency advice is [2]. Quoting, "Moving data from NEON to ARM registers in Cortex-A8 is expensive." Looking at [3] partially reveals the reason why: the NEON pipeline is entirely after the integer pipeline, so moves from integer to NEON are cheap, but the reverse direction is potentially a large pipeline stall. This is an unusual design decision that as far as I know is not true for any other CPUs. Edit: I found [4], which is a more authoritative source.

[1]: https://github.com/google/music-synthesizer-for-android/blob...

[2]: https://community.arm.com/support-forums/f/armds-forum/757/n...

[3]: https://www.design-reuse.com/articles/11580/architecture-and...

[4]: https://developer.arm.com/documentation/den0018/a/Optimizing...


Awesome reply, and thank you for the well-put-together answer, for linking to resources, and for sharing your experience.

For the Cortex-A8, from [4] and the others you linked, it makes sense to me now: an instruction passing data back between the register files fills out the pipeline and then stalls.

I'll have a peek at the ARMv8/ARMv9 architectures and see what they did there regarding SVE/SVE2.


Firstly, the DX7 is a hardware implementation: it doesn't use a CPU for arithmetic, so there is no inherent notion of resource contention. Hardware resources can be (and are) allocated and/or shared exactly as needed by the requirements of the target computation. You mention "reads," but once again, in a hardware implementation the memory structure may be quite different from the cache hierarchy of a CPU (the situation that KVRists would be most familiar with).

In a phase accumulator (or any numerical representation of time) it is generally desirable to have uniform precision across the full phase range. Floating point arithmetic does not have this property. On the other hand, floating point arithmetic can sometimes be more convenient than fixed-point if the hardware to hand can execute it fast. But if you're designing hardware from scratch, and especially for FM synthesis, you're probably going to find some other tricks that work even better.
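The uniform-precision point can be demonstrated directly: a 32-bit float's resolution shrinks as its magnitude grows, while an unsigned accumulator steps uniformly over its whole range. A minimal sketch (the particular constants are just illustrative):

```c
#include <stdint.h>

/* Float precision is relative: near 2^24 a 32-bit float can no longer
 * represent an added 0.5, so a float phase loses resolution as it
 * grows.  A fixed-point accumulator has the same step size everywhere. */
static int float_step_lost(void) {
    float big = 16777216.0f;           /* 2^24 */
    float bumped = big + 0.5f;         /* rounds back down to 2^24 */
    return bumped == big;              /* 1: the increment vanished */
}

static int fixed_step_kept(void) {
    uint32_t big = 0xFFFFFF00u;        /* near the top of the range */
    return (big + 1u) != big;          /* 1: every step still counts */
}
```

This is also why a float phase that is allowed to grow without bound drifts: keeping it wrapped into a small range each sample avoids the loss, at the cost of extra work per sample.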


> received wisdom from places like KVR Audio

bwahahaha.

I'm a DAW author. Have been for 24 years or so. My friends write plugins and DSP modules for mixing consoles.

Received wisdom from KVR Audio (and in fact, most online forums) is worth less than the distance that a flea could throw it.

Digital audio software users that know almost nothing about the subject seem highly inclined to spend their time blathering on in these forums, while the people who actually do know about it appear to have better things to do.


> Received wisdom from KVR Audio (and in fact, most online forums) is worth less than the distance that a flea could throw it.

Depends on the forum, I guess. Lately I've been following the developer subforum of KVR, where folks talk about filter and oscillator algorithms using math I'll never understand. Discussions there seem well-reasoned and entirely civil. I haven't really seen them talk about performance optimizations, though. And I can't say anything about the rest of KVR.



