Thanks for this. I've got a DX9 I paid very little for, and the lack of velocity sensitivity has always annoyed me about it. I shall give the firmware a go!
I edited my post to say "more like" a DX7. Unfortunately velocity sensitivity can't be added because the physical keyboard doesn't support it. The firmware does add velocity sensitivity to incoming MIDI data though!
Isn't velocity sensitivity simply two electrical connections ("buttons pushed") that get made at slightly different points in the arc of a keypress, with the time difference between them measured to use speed as a proxy for how hard/loud the key is pressed? (That was both a genuine question and an explanation of the mechanism for people not familiar with it ;)
I wonder if it would be simple to add a sensor to measure that across the keyboard, and then have it do double duty as aftertouch (aftertouch responds to extra pressure you apply to a key after it's already pressed down). Then his DX9 could be better than the DX7 :)
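If it helps to picture the mechanism, a key-scan routine might turn that time difference into a MIDI velocity with something roughly like this sketch (the function name and timing constants are made up for illustration, not taken from any real keybed firmware):

```c
#include <stdint.h>

/* Hypothetical constants: the fastest and slowest expected key travel
   times between the first and second contact closing, in microseconds. */
#define FASTEST_US   500u    /* very hard hit -> velocity 127 */
#define SLOWEST_US 60000u    /* very soft hit -> velocity 1   */

/* Map the measured contact-to-contact interval onto MIDI velocity (1..127).
   Shorter interval = faster key = higher velocity. A real scan loop would
   timestamp the first contact and call this when the second one closes. */
uint8_t velocity_from_interval(uint32_t dt_us)
{
    if (dt_us <= FASTEST_US) return 127;
    if (dt_us >= SLOWEST_US) return 1;

    /* Simple linear map; real keyboards usually apply a response curve here. */
    uint32_t span = SLOWEST_US - FASTEST_US;
    return (uint8_t)(1 + (126u * (SLOWEST_US - dt_us)) / span);
}
```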
Yeah, thanks, I didn't think that would be the case, but I always wondered why it couldn't do it over MIDI, which is what I'm most interested in. Thanks again!
not pointing any fingers at all, except to point out that if you engaged in industrial espionage you would undoubtedly then come up with a plausible cover story and hope nobody probed the logic of it too much
Reminds me of my teenage years, mesmerised by software/hardware synths, DAWs and everything and anything related to audio tech. Hours spent trying to understand different waves, oscillations, LFOs, modulation, AM, FM and so on and so forth. I could go on all day. Great memories.
Anyhow, it would be interesting to go down a rabbit hole reading up on the waveform representation in the DX7. Now that I remember it, I'll go and look up the SH-101.
The DX7 was digital FM synthesis, so while it uses waveforms to modulate other waveforms (digitally, at least), and waveforms come out the other end, it's not based on the typical "analog synth" waveforms the way the SH-101 is.
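For anyone who wants to see how little is going on at the core of it, here's a toy two-operator FM sketch in C (this is not how the DX7 does it internally; the real chip works with lookup tables and log-domain arithmetic):

```c
#include <math.h>
#include <stdio.h>

#define SAMPLE_RATE 48000.0
#define TWO_PI      6.283185307179586

/* Toy two-operator FM: a modulator sine is added to the carrier's phase.
   fc = carrier freq, fm = modulator freq, index = modulation depth.
   The timbre comes from the fm/fc ratio and the index, not from the
   raw waveform shapes themselves. */
int main(void)
{
    double fc = 440.0, fm = 660.0, index = 2.0;
    for (int n = 0; n < 48000; n++) {                     /* one second */
        double t = n / SAMPLE_RATE;
        double mod = sin(TWO_PI * fm * t);                /* modulator */
        double out = sin(TWO_PI * fc * t + index * mod);  /* phase-modulated carrier */
        printf("%f\n", out);  /* raw sample values; redirect/plot as you like */
    }
    return 0;
}
```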
Ken, if you have some spare bandwidth I have some high-res decap photos from the chip in the Casio SK-1!
I worked on trying to reverse engineer the instruction set from that chip, but most of the code is in a mask ROM that probably needs some debug wires to extract.
Correct. Those have been available for ages, but there's a mask ROM on the CPU that is undumpable, at least at this time. I have a huge collection of datasheets and I was unable to even figure out what the instruction set was.
It's likely to be something Japanese, and I know there have been plenty of obscure architectures from them over the years. OKI is known to have made the CPU for Casio, so have you ruled out the OKI nX-8?
Scanning through my old datasheets, I have one for the MSM65512, which probably means I considered that, but it's been a while. I should have kept a devlog.
I found an old email to someone that suggests I thought it was an nX/4.
The demo song from that thing will forever live rent-free in my head. Sometimes I sing it to myself, and then pretend I've switched the lead instrument to a sample. (hi, hi hi hi, hi hi hi, hi hi hi hi hi....)
I am unfamiliar with low-level workings like this, so forgive me if I'm talking out of my ass, but received wisdom from places like KVR Audio suggests that on modern machines, fixed-point math tends to be slower than just using floats, because the integer registers are already clogged with routine operations like incrementing loop counters, etc. But some claim that anything that needs a phase accumulator as a wavetable index, like FM, still benefits from being kept as integers because of the expense of converting a floating-point accumulator to an integer address. FM doesn't have much arithmetic relative to its reads.
I wonder how true this still is. I've been having fun simply assuming it is, and these teardown articles and the other HN discussions around them have been very helpful.
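The version of the trick I've been playing with looks roughly like this (a minimal sketch of my own assumption, not from any shipping synth): the phase lives in an unsigned 32-bit accumulator, overflow gives free wraparound, and the top bits index the table directly, so no float-to-int conversion ever happens on the audio path.

```c
#include <stdint.h>
#include <math.h>

#define TABLE_BITS 10
#define TABLE_SIZE (1u << TABLE_BITS)

static float sine_table[TABLE_SIZE];

void init_table(void)
{
    for (uint32_t i = 0; i < TABLE_SIZE; i++)
        sine_table[i] = (float)sin(6.283185307179586 * i / TABLE_SIZE);
}

/* 32-bit phase accumulator: the full uint32_t range represents one cycle,
   so wraparound is just unsigned overflow. The top TABLE_BITS bits index
   the wavetable with a shift -- no float-to-int conversion needed. */
float osc_next(uint32_t *phase, uint32_t phase_inc)
{
    float s = sine_table[*phase >> (32 - TABLE_BITS)];
    *phase += phase_inc;            /* wraps naturally at 2^32 */
    return s;
}

/* Phase increment for a given frequency: freq/sample_rate cycles per
   sample, scaled to the 32-bit phase range. */
uint32_t phase_inc_for(double freq, double sample_rate)
{
    return (uint32_t)(freq / sample_rate * 4294967296.0);
}
```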
I am very familiar with such low level workings. On a modern, fast machine, the amount of computation required by FM synthesis is so small compared with the machine's throughput that it just doesn't matter.
Yes, on modern chips float math will generally be faster than fixed point. This is not so much because the integer units get clogged as that a huge amount of chip area and optimization goes into the float units (often SIMD, and a lot of FM synthesis can benefit from this, though feedback creates data dependencies). For example, multiply-and-add is usually a single fused instruction in float, but would always be two separate instructions in integer code.
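Concretely, the phase-modulation step of one FM operator maps straight onto a fused multiply-add; a tiny sketch (not any particular engine's code):

```c
#include <math.h>

/* One FM operator's phase-modulation step: carrier phase plus
   index * modulator output. fmaf() computes x*y+z in one fused
   operation where the hardware supports it. */
static inline float fm_operator(float carrier_phase, float mod_out, float index)
{
    return sinf(fmaf(index, mod_out, carrier_phase));
}
```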
My recollection is that older ARM chips have a special issue with latency of data dependencies originating from the float unit (NEON, which is optimized for SIMD vector operations) to the integer unit. I suspect this is no longer the case, or is less of an issue.
Historically, yes. This has most definitely not been true since processors gained SSE3 (~2004) and its FISTTP instruction, and I think you can also use the packed float-to-integer instructions like CVTTPS2PI as far back as SSE (1999).
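So these days a plain truncating cast does the job; e.g. a float phase accumulator can index a wavetable with something like this sketch (hypothetical names; on SSE2+ targets the cast compiles to a single truncating conversion, no x87 round-trip):

```c
/* Float phase in [0, 1); the table index comes from a truncating cast,
   which on SSE2+ targets is a single cvttss2si instruction. */
static inline float osc_next_float(float *phase, float phase_inc,
                                   const float *table, int table_size)
{
    int idx = (int)(*phase * (float)table_size);  /* truncating float->int */
    *phase += phase_inc;
    if (*phase >= 1.0f) *phase -= 1.0f;           /* manual wrap */
    return table[idx];
}
```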
Cheers for the explanation. I suppose "modern" should be asterisked, since I was parroting posts that date back to the mid-00s. Things have probably changed considerably from then.
I'm specifically targeting an Intel Atom, which is obviously powerful enough for the task, but may not fit all definitions of "modern" now?
Got me curious about the ARM latency; I wonder if that was down to particular instructions adding extra latency, or to transfers between the register files/memory subsystem internals. Also, on the off-chance that you remember, did you write the intrinsics inline or let the compiler auto-optimize?
Would be interesting to test out on an ARM Mac and see whether different dependency chains show significant latency penalties, or how they interact with the reorder buffer.
This is for Cortex A8, which was the chip in the Nexus One. I wrote the original version of sound synthesis directly in ARM assembler[1]. It was very highly optimized, I remember using a cycle counting app that flagged any dependency chain that would cause the processor to stall, and ultimately utilization was in the 90%+ range. Back in those days, processors were simple enough you could do this kind of optimization by hand. By the time of Cortex A15 (Nexus 10 etc), instruction issue was out-of-order and much harder to reason about.
The best current info I could find for the latency advice is [2]. Quoting, "Moving data from NEON to ARM registers is Cortex-A8 is expensive." Looking at [3] partially reveals the reason why: the NEON pipeline is entirely after the integer pipeline, so moves from integer to NEON are cheap, but the reverse direction is potentially a large pipeline stall. This is an unusual design decision that as far as I know is not true for any other CPUs. Edit: I found [4], which is a more authoritative source.
Awesome reply, and thank you for the well-put-together answer, the links to resources, and for sharing your experience.
For the Cortex-A8, from [4] and the others you linked, it makes sense to me now: the instruction passing data between register files fills out the pipeline and then stalls.
Will have a peek at the ARMv8/ARMv9 architectures and see what they did there regarding SVE/SVE2.
Firstly, the DX7 is a hardware implementation; it doesn't use a CPU for arithmetic, so there is no inherent notion of resource contention. Hardware resources can be (and are) allocated and/or shared exactly as needed by the requirements of the target computation. You mention "reads," but once again, in a hardware implementation the memory structure may be quite different from the cache hierarchy of a CPU (the situation that KVRists would be most familiar with).
In a phase accumulator (or any numerical representation of time) it is generally desirable to have uniform precision across the full phase range. Floating point arithmetic does not have this property. On the other hand, floating point arithmetic can sometimes be more convenient than fixed-point if the hardware to hand can execute it fast. But if you're designing hardware from scratch, and especially for FM synthesis, you're probably going to find some other tricks that work even better.
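A quick way to see that non-uniformity: in single-precision float the spacing between representable values grows with magnitude, so a phase stored in [0, 1) is much coarser near the end of the cycle than near the start, whereas a 32-bit fixed-point phase has identical resolution everywhere. A tiny illustration (nothing synth-specific, just printing the spacings):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Spacing (ULP) of a single-precision phase near the start and end of a cycle. */
    printf("float spacing near 0.001: %g\n", nextafterf(0.001f, 1.0f) - 0.001f);
    printf("float spacing near 0.999: %g\n", nextafterf(0.999f, 1.0f) - 0.999f);

    /* A 32-bit fixed-point phase (whole uint32_t range = one cycle) has the
       same resolution everywhere: exactly 2^-32 of a cycle per step. */
    printf("fixed-point spacing:      %g\n", pow(2.0, -32.0));
    return 0;
}
```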
I'm a DAW author. Have been for 24 years or so. My friends write plugins and DSP modules for mixing consoles.
Received wisdom from KVR Audio (and in fact, most online forums) is worth less than the distance that a flea could throw it.
Digital audio software users that know almost nothing about the subject seem highly inclined to spend their time blathering on in these forums, while the people who actually do know about it appear to have better things to do.
> Received wisdom from KVR Audio (and in fact, most online forums) is worth less than the distance that a flea could throw it.
Depends on the forum, I guess. Lately I've been following the developer subforum of KVR, where folks talk about filter and oscillator algorithms using math I'll never understand. Discussions there seem well-reasoned and entirely civil. I haven't really seen them talk about performance optimizations, though. And I can't say anything about the rest of KVR.
This is really informative, and the simple FM-synthesis demo confirmed my guess as to what the term meant (since I could never be bothered to look it up). Fun!
Is there a highly-regarded software (or hardware + software) emulator for the DX7?
I would recommend Plogue's OPS7; they have put a considerable amount of time and effort into reverse engineering the original chips. The interface is a little clunky but it sounds fantastic. https://www.plogue.com/products/chipsynth-ops7.html
I'm reminded of the folks working on getting the Motorola M56k DSP emulator working so well that it can boot the firmware for many popular digital-analog synthesizers directly, and pretty much produce word-compatible output from factory patches... astonishing stuff.
I've unwittingly become a bit of a Yamaha FM Synth historian!
Here are some other contributions to reverse-engineering the DX7:
A fully documented disassembly of the DX7 ROM: https://github.com/ajxs/yamaha_dx7_rom_disassembly
A new firmware ROM that makes the DX9 function more like a DX7: https://github.com/ajxs/yamaha_dx97