Show HN: Koelsynth – a simple FM synthesis library (github.com/charstorm)
58 points by graphitout 10 months ago | 22 comments
This is part of my journey with pybind11. I wrote a tiny FM synthesis library in C++ and a Python wrapper for it using pybind11.

There is a command-line piano app in the examples directory if you want to play with it. Here is the link: https://github.com/charstorm/koelsynth/tree/main/examples/si...

My next target is to attach this to some kind of physics simulation, like a bunch of balls moving around in a box with some internal walls. When a ball hits certain trigger points, it produces a sound.




Nice work!

Have you tried to port it to WASM?

Python can also call the WASM module with Wasmer.

I ported https://github.com/chaosprint/glicol for my Python audio project using the same method

For your physics idea, with WASM, perhaps it could be something like this?

https://jackschaedler.github.io/karplus-stress-tester/


I had some fun with the strings in the karplus-stress-tester. Thank you for that link.

I am thinking of moving the core to WASM, but I may have to drop the Python part and make the calls directly from JavaScript.

Glicol looks impressive. I will try it out.


Omg.. I have that physics simulation and made crappy sound tones in Audacity for it. I want to use this to make better sounds.

I added a music block to the Goober Dash level editor:

https://gooberdash.winterpixel.io/

When your Goober, or a physics crate, hits the music block, it plays a note. I haven't released this branch yet. Can I use the piano notes from this lib in our game? I wouldn't be adding the generator code to the game; I just need the sound files.


Sure. The generated audio files have no license; do whatever you want with them.

Only the code has a license, but I have no intention of enforcing it.

I am playing Goober Dash now.


Fun little game! Nice work.


I really appreciate the positive feedback. I think the game will blossom once community level sharing has the right tools to let user-generated content rise to the top.


What engine and web stack did you use? Curious about how you built it!


We maintain a fork of Godot ~3.5. We have a netcode module that we wrote in C++ that does client-side prediction and rollback. It uses WebSockets under the hood on the web client and ENet on native clients.

We use Agones on Kubernetes to allocate game servers; those servers run Godot itself, which runs the game.

A custom Go matchmaker makes the Agones API call to allocate a server when needed.

We use Contour and nginx to route web traffic and serve the .html file.

We use GitHub Actions to deploy to various environments.

We use Heroic Labs' open-source Nakama as a base for our account/auth backend.

We have a bunch of other Kubernetes stuff in the cluster for various small pieces, but that's about the crux of it!


Nice, I like simple projects like this.

However, Wikipedia has a better diagram and explanation of ADSR:

https://en.wikipedia.org/wiki/Envelope_(music)


You are right.

I needed that image to explain the parameters in the ADSR configuration I used in the project, mainly slevel1 and slevel2, which capture the slow decay during sustain. Most ADSR diagrams show sustain as a constant level, which didn't sound that natural when I tested it for my projects.
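Roughly the shape I mean (an illustrative sketch only, not the library's actual code; slevel1/slevel2 as in the README, durations in samples):

    def adsr_two_level(attack, decay, sustain, release, slevel1, slevel2):
        env = []
        env += [i / attack for i in range(attack)]                      # 0 -> 1
        env += [1 + (slevel1 - 1) * i / decay for i in range(decay)]    # 1 -> slevel1
        env += [slevel1 + (slevel2 - slevel1) * i / sustain
                for i in range(sustain)]                                # slow decay during sustain
        env += [slevel2 * (1 - i / release) for i in range(release)]    # slevel2 -> 0
        return env

    # e.g. adsr_two_level(2400, 4800, 24000, 9600, 0.8, 0.5) at 48 kHz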


As a newbie, I really enjoyed your explanation of the synthesis process and why everybody should Fourier-sum phases rather than amplitudes.


Why is it that FM synthesis was so well suited to relatively simple early digital hardware (e.g. Yamaha's DX/TX line)?

Does that also make for relatively simple software FM synthesis?


The typical way FM synths (and, for example, function generators) generate a sine wave is DDS (direct digital synthesis).

You basically just have an array with all the values of a sine wave inside (or a quarter of a sine if you want to save memory) and then you jump through that array based on a phase accumulator (basically a counter). The amount by which you increment that phase accumulator (the phase step) determines the pitch of the resulting sine wave that this lookup gives you. So you repeatedly add that phase step and read out the sine value at the current phase accumulator/index in a timer interrupt.
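Something like this (a rough sketch; the table size and sample rate here are arbitrary):

    # Minimal DDS sketch: a sine table plus a phase accumulator.
    import math

    SAMPLE_RATE = 48000
    TABLE_SIZE = 4096
    SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

    def sine_dds(freq_hz, num_samples):
        """Generate a sine by stepping a phase accumulator through the table."""
        phase_step = freq_hz * TABLE_SIZE / SAMPLE_RATE  # sets the pitch
        phase_acc = 0.0
        out = []
        for _ in range(num_samples):
            out.append(SINE_TABLE[int(phase_acc)])
            phase_acc = (phase_acc + phase_step) % TABLE_SIZE
        return out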

This operation is quite cheap; I once made an oscillator with 9 of these on an Arduino Nano (with fixed-point numbers, though).

Now the trick is that you can just add some amount of the result of another of those sine lookups to the phase accumulator, which results in the typical FM sound. Technically this is phase modulation, but then again the most famous FM synth technically also uses phase modulation.
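Continuing the sketch above, the phase-modulation step is just one extra lookup and an addition per sample (the modulation-index scaling here is arbitrary):

    # Two-operator "FM" (really phase modulation): the modulator's output
    # nudges the carrier's read position in the same sine table.
    def fm_dds(carrier_hz, mod_hz, mod_index, num_samples):
        c_step = carrier_hz * TABLE_SIZE / SAMPLE_RATE
        m_step = mod_hz * TABLE_SIZE / SAMPLE_RATE
        c_phase = m_phase = 0.0
        out = []
        for _ in range(num_samples):
            mod = SINE_TABLE[int(m_phase)]
            idx = int(c_phase + mod_index * mod * TABLE_SIZE) % TABLE_SIZE
            out.append(SINE_TABLE[idx])
            c_phase = (c_phase + c_step) % TABLE_SIZE
            m_phase = (m_phase + m_step) % TABLE_SIZE
        return out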


FM is designed to be calculated digitally, and with a few hacks (like a lookup table for a function involving sin() and log()) can be calculated very cheaply.

In the DX7 it was done on custom digital circuitry, but modern CPUs are just as happy to do a few additions and a table lookup once in a while.
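Roughly how I understand that sin/log table trick (a simplified sketch; real chips use a second exp lookup table and fixed-point arithmetic, and the table size here is made up):

    import math

    N = 1024  # quarter-sine resolution (assumed)
    # Store -log2(sin(x)) so that amplitude scaling becomes an addition.
    LOGSIN = [-math.log2(math.sin((i + 0.5) / N * math.pi / 2)) for i in range(N)]

    def operator_sample(phase, attenuation):
        """phase in [0, 1); attenuation in log2 units (0 = full level)."""
        quadrant = int(phase * 4) % 4
        idx = int(phase * 4 * N) % N
        if quadrant in (1, 3):                         # mirrored (falling) quarters
            idx = N - 1 - idx
        value = 2.0 ** -(LOGSIN[idx] + attenuation)    # hardware uses an exp table here
        return value if quadrant < 2 else -value       # second half of the cycle is negative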


I think the reason the DX stuff broke out, besides the different sound, was that they could do it on a CPU at the time and have a lot of voices without incurring as much per-voice hardware transistor cost. I'm not an audio programmer, but I've heard that trying to emulate subtractive synthesis can be difficult.

In the GitHub README the author somewhat alludes to this: "Next, there is "subtractive" synthesis (not really a surprise after "additive" synthesis, is it?). It involves taking a waveform with many harmonics (triangular wave, sawtooth wave, etc) and using a time-varying filter to remove the higher harmonics. I tried that too once, but it wasn't great either, at least for the effort involved in coding and tweaking the parameters."


The killer feature of FM synthesis is generation of inharmonic sounds. Classic analog waveforms (sawtooth/square/triangle/sine) are all perfectly harmonic, i.e. the frequencies of the higher harmonics are at integer multiples of the fundamental. Subtractive synthesis shapes the amplitudes of these harmonics but doesn't add any new ones.

Subtractive synthesis is still musically useful, because most musical instruments are harmonic or almost harmonic, but tuned percussion is a notable exception. It's very difficult to get a good bell sound out of subtractive synthesis. FM can produce inharmonic sounds, where the higher harmonics are at non-integer multiples of the fundamental. It's much easier to get good bell/chime sounds out of FM synthesis.

And the "almost harmonic" instruments are very popular (all plucked or hammered string instruments, with the inharmonicity more pronounced with thicker strings, so especially bass and piano). By adding a little inharmonicity, FM synthesis can produce more realistic versions of these sounds than subtractive synthesis.

Additive synthesis can also produce inharmonic sounds, but this requires an oscillator for each harmonic, so back when FM synthesis first hit the market this was unreasonably expensive.
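A quick way to see where the inharmonicity comes from: the FM sidebands land at fc ± n*fm, so with a non-integer carrier:modulator ratio (1:1.4 here, a common bell-ish choice) the resulting partials aren't multiples of the carrier:

    fc, fm = 200.0, 280.0                        # carrier and modulator in Hz
    partials = sorted({abs(fc + n * fm) for n in range(-4, 5)})
    print(partials)
    # [80.0, 200.0, 360.0, 480.0, 640.0, ...] -- not an integer-multiple
    # series over the 200 Hz carrier, hence the clangorous bell character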


Thanks for the info!

I've certainly never been able to make a decent bell sound with a subtractive synth. As for FM synths, I rarely have to because there are so many good presets out there, but I've accidentally made bell patches just messing around.


> I've heard trying to emulate subtractive synthesis can be difficult.

Just filtering frequencies isn't that difficult, but getting a digital filter to sound equivalent to a specific analog audio filter is an art. There's an entire industry trying to recreate the 'character' of certain imperfections found in old synthesizers. Some FM synthesizers also have subtractive filters, and there are even analog synthesizers with FM, so it's not an either/or.


It's funny: with the whole analog vs. digital thing, I personally can't tell the difference that often, EXCEPT for the filter.

I have a digital synth with a digital filter that's great, but I feel like I have to put a lot more work into the filter parameters to make it sound good.

I also have a digital synth with an analog filter, and I find I don't have to play with the filter much at all. It sounds pretty good in pretty much every case.


FM synths used dedicated ASICs to generate audio. It wasn't possible for mainstream CPUs of the era to do it in real time.


Sorry, my mistake. I was trying to say that the individual oscillators/filters/etc. add up in cost (at least that's often what analog synth companies say when people ask about low voice counts).

I'm assuming an ASIC has a similar cost savings to a CPU or FPGA?


Additive synthesis was a pain to configure. I had to model and estimate the behavior of each harmonic. It was too much effort.

Subtractive synthesis didn't work that well for me either. I believe my implementation of the variable-cutoff lowpass filter had something to do with it.
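Just to illustrate the kind of filter I mean, here is a generic one-pole lowpass with a per-sample cutoff (an illustrative sketch, not my exact implementation):

    import math

    def one_pole_lowpass(samples, cutoffs_hz, sample_rate=48000):
        """Apply a one-pole lowpass whose cutoff can change every sample."""
        y = 0.0
        out = []
        for x, fc in zip(samples, cutoffs_hz):
            alpha = 1.0 - math.exp(-2.0 * math.pi * fc / sample_rate)
            y += alpha * (x - y)   # smooth toward the input
            out.append(y)
        return out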

FM synthesis worked reasonably well without much effort. The implementation was rather straightforward.



