This article series is mind-blowing to me. The engineering is fascinating. It's like they made another logical analog abstraction layer over the digital electronics. Yamaha doesn't get mentioned much here, but between how they can bring the experience of their instruments, motorcycles, and sound equipment to people everywhere, there is an understated beauty in what that company does. They only seem to make things that are wickedly fun and life's great pleasures. Also, this: https://en.wikipedia.org/wiki/Yamaha_CX5M
It's not mine, so I can't help. But I know iPhone/Safari has pretty terrible support for lots of the MIDI and WebAudio things I've done, and they don't allow Chromium to run. Not sure why people consider that OK, but such is life.
My DX7 is still one of my favorite synths, sonically speaking. I had no idea how intuitive yet novel the circuitry was until I began reading this article.
In particular, I found it surprising that the sine waves were generated by indexing a wave table - you'd think a sine wave would be easy enough to generate without a table. But the way it implements phase distortion ('FM') on those sine waves, by simply offsetting the index into that table, is quite interesting. Sorta like having different mathematical functions for 'i' in a for loop.
In DSP terms, direct calculation of sine waves is incredibly expensive. Even more so with 1980s 8- or 16-bit fixed-point arithmetic and no coprocessor.
Table lookup was inherited from the first computer music software. It was the only way to create any kind of waveform at reasonable speed.
When you have a table, modulating the lookup index is almost an obvious trick. But the Chowning/Yamaha implementation worked so well musically that it took it from a curiosity to a best-selling product.
A different take on it might not have done that.
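To make the index-offset trick concrete, here's a minimal Python sketch. This is not the chip's actual implementation (the real hardware uses fixed-point phase accumulators and log-sine tables); the table size, names, and parameters are all my own assumptions. The point is just that "FM" here is nothing more than adding a modulator's output to the carrier's table index before the lookup:

```python
import math

TABLE_SIZE = 4096  # hypothetical table length, not the chip's actual size

# Precompute one cycle of a sine wave, as early digital synths did
# (the real chip stores log2(sin) values and exponentiates afterwards).
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def pm_sample(phase, mod):
    """Look up a sine sample, with the modulator offsetting the table index.

    phase and mod are in table-index units; the wrap-around via modulo
    is what makes the index-arithmetic trick work.
    """
    return sine_table[int(phase + mod) % TABLE_SIZE]

def render(freq, mod_freq, mod_depth, sample_rate=44100, n=64):
    """Render n samples of a carrier phase-modulated by a sine modulator."""
    out = []
    carrier_phase = 0.0
    mod_phase = 0.0
    for _ in range(n):
        mod = mod_depth * sine_table[int(mod_phase) % TABLE_SIZE]
        out.append(pm_sample(carrier_phase, mod))
        carrier_phase += freq * TABLE_SIZE / sample_rate
        mod_phase += mod_freq * TABLE_SIZE / sample_rate
    return out

samples = render(440.0, 880.0, mod_depth=200.0)
```

With mod_depth at zero this degrades to a plain table-lookup oscillator, which is exactly why the trick was "almost obvious" once the table was there: the modulation path costs one multiply and one add per sample.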
FM owes a lot to sound developer Dave Bristow. He made sure the DX7 had usable sounds. According to him, the original prototypes had dozens of harpsichord-like patches and not much else.
Intriguing to imagine the first time the designers heard the sound their creation could make (presumably there was a TTL or bit-slice prototype before the VLSI implementation). They're sitting around the lab, someone plinking away on the keyboard. One of them says "you know, you could use that sound on a pop record and I bet it would be popular for at least a decade".
The story behind FM synthesis is pretty interesting. A Stanford music professor, John Chowning, came up with the idea in the 1960s and patented it. Stanford didn't think this was what a music professor should be doing and fired him. Meanwhile, Yamaha licensed the patent from Stanford, paying millions of dollars and making it Stanford's most lucrative patent at the time. Stanford changed their mind about Chowning and hired him back, making him a full professor and then department chair.
You may think it sounds crude, but pieces like these took hours of expensive mainframe (PDP-10) time. There wasn't a lot of opportunity for careful sculpting of fine details.
You could easily synthesize something like this in real time now. But not many people do, which I think is a shame.
I've been loosely following along with this DX7 series, doing some (not DX7) FM and phase modulation in my own code. If you're running Linux, you can try something like this in the WIP graph-synth branch[0] of my mb-sound Ruby app/gem (feedback welcome on the DSL syntax):
Those early Yamaha FM synths have a sound all their own.
For sure, though the parent comment's video link was presumably rendered by software on a PDP-10. I've spent the last week of my spare time trying to write code to make a bass sound that reasonably approximates Lately Bass, and I'm close, but since I'm not even trying to emulate any of the quirks of the Yamaha FM synths it doesn't ever sound quite right.
To be clear - I didn't mean it was a shame that people weren't specifically emulating FM, but that people weren't experimenting with weird synthesis techniques to make weird non-pop music with sounds no one has heard before.
It's the last part that's still really difficult, possibly even harder than it used to be. And that's even with the insane processor power we all have now.
I guess movie soundtracks are the closest to that in widely distributed music, or artists like Aphex Twin.
There's an infinitely vast space of potential sounds, and an infinitely long fractal knife edge of interesting sounds, and it kind of makes sense that finding unexplored areas of that fractal edge between boring and incomprehensible gets harder over time.
C15 on the TX81Z: you'd have to model the 12-bit DACs and the steppiness of the wave shapes. Also, the CPUs in the TX81Z were slow and some calculations would lag a bit, which contributed to its sound. Try sending MIDI data to the TX81Z while sequencing it, and your pattern will start swinging!
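A minimal sketch of just the DAC-steppiness part in Python. The 12-bit width comes from the comment above; the function name and rounding behavior are my own assumptions, and the TX81Z's actual signal path (companding, CPU-timing jitter) is not modeled here:

```python
def quantize_12bit(x):
    """Crudely model a 12-bit DAC: snap a [-1.0, 1.0] sample to one of 4096 levels.

    This is a simplification for illustration; a real DAC model would also
    need the converter's companding and nonlinearity.
    """
    levels = 1 << 12               # 4096 steps across full scale
    step = 2.0 / levels
    # round to the nearest step, clamping to the representable range
    q = round(x / step) * step
    return max(-1.0, min(1.0 - step, q))
```

Running every output sample of an otherwise clean oscillator through something like this is one way to approximate the "steppy" character; the timing lag the comment mentions would have to be modeled separately, e.g. by jittering the per-note start times.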
> Stanford changed their mind about Chowning and hired him back
Somehow I feel that if this happened today, no company or university would have the spine to do this; it would instead turn into an expensive legal battle if he insisted on getting his share of the patent pie.
Lovely work - I'm wondering if you now have enough info to build a VHDL model from this? It would be very cool to run an emulation on an FPGA.
My interest in these instruments goes back a long way - I reverse engineered my DX7 to build a VST instrument around 2001 or so. My reverse engineering was 'black box': I put test patches and MIDI through the instrument to build a model. But it was good enough to produce patches which I couldn't tell from the original except at very extreme settings, where the difference in sample rate became apparent (e.g. with aliasing frequencies being very obviously different between the two).
Actually, thinking about it, I also did the same with the TX instrument to produce a simple model written in SOUL a few years back. There's a LatelyBass patch here based on this - https://soul.dev/lab/?id=LatelyBass
I have a suspicion that emulating the specific Yamaha hardware implementation in software would lead not only to bit-perfect reproduction, but also to a faster emulator. I wonder how slow an MCU you could get away with: a 20MHz AVR? A 30MHz STM32?
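A back-of-envelope cycle budget, sketched in Python. The sample rate and the 6-operator/16-voice architecture are the DX7's known figures; the cycles-per-operator number is a pure guess on my part, so the result is only an order-of-magnitude estimate:

```python
# Rough cycle budget for emulating the DX7's operator pipeline in software.
SAMPLE_RATE = 49096      # DX7 output sample rate, Hz
OPERATORS = 6 * 16       # 6 operators per voice x 16 voices = 96 slots per sample
CYCLES_PER_OP = 25       # assumed: table lookup + phase and envelope update

cycles_per_second = SAMPLE_RATE * OPERATORS * CYCLES_PER_OP
print(f"~{cycles_per_second / 1e6:.0f} MHz of integer work")
```

Under these assumptions you land well above 100 MHz of integer work, so a 20-30 MHz MCU would only manage it with far fewer cycles per operator (e.g. hand-tuned assembly with the same kind of shared, time-multiplexed datapath the chip itself uses) or with reduced polyphony.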
Thank you for a good series of articles and some great insight into the inner working of FM synthesis in general.
I have never owned one of the original Yamahas, but I have been using the awesome Dexed VSTi plug-in, which is modeled after the DX7, for several years now.
And I am planning on purchasing a Korg Opsix for its FM capabilities.
But for someone that has only been on the learning-by-experimenting end of FM synthesis, your articles are a great insight into the theory behind it all.
Just want to say that I am incredibly thankful for this series; it's amazing to see how one of my favorite synths works, not just in theory (FM/PM) but also at the chip level. Thanks!
I've never been a huge fan of FM synthesis, as I'm mostly reminded of the bad electric piano sounds that came out back in the 80s. Having owned a TX-7, I knew it was capable of more than that, but then there was the whole issue of programming it, which kinda sucked.
That said seeing the tech and how things were implemented has me loading up the Arturia DX7-V and tinkering around with it.
Have any other synths or effects on your list to look at?
On the subject of favorite FM pieces, this 1992 track from Jill of the Jungle (rendered by one of Yamaha's OPL2 chips on a Sound Blaster) burned its way into my brain as a kid. Still a fan: https://www.youtube.com/watch?v=8pcxdYvp8uQ
It takes a while. I got the chip on Nov 1 and have been working on it since then. (Although it's not the only thing I've been doing.) The process is a combination of taking die photos, tracing out circuitry, understanding the circuits, doing background research, and figuring out how to explain the chip in blog posts.
The previous article in the series used III to denote part 3. IV would be next. I guess part 5 could be titled part Five, or maybe Go, since Yamaha is a Japanese company.
I worked in a music store back then. The DX7 was KING for sure, but we were a Roland shop and the D-50 was pretty F'n awesome. I don't think it's apples-to-apples though. Wasn't the D-50 less of a synth and more of a sampling board?
Yes and no. I guess the best way to describe it is that it was more of a half-step towards wavetable synthesis in the sense that it utilized ultra-short samples of the attack and sustain phases of the envelope to achieve something with greater fidelity than nominally possible with FM synthesis alone, but well short of what you could achieve with, say, a Korg Wavestation. It was an interesting product of its time.
I did a huge amount of D50 programming. The actual digital synthesis was not particularly impressive (limited architecture and weak filters). Multiple voices ('partials') could be layered in interesting ways - quantity over quality! The synth sounded great because of the samples and the built-in effects.
You can listen to each sample if you try programming a patch. Some are short 'transient' samples, others are looped. The 'chipmunk' effect is really evident.
The D50 was the first of the new wave of 'S&S' ('sample and synthesis') synths that had nothing of the depth and mathematical interest of FM synthesis.