Hacker News
Synthesizing a Plucked String Sound with the Karplus-Strong Algorithm (demofox.org)
149 points by ingve on June 16, 2016 | 24 comments



You can play around with a lot of this stuff in the ChucK programming language[1]. ChucK was co-created by one of the early advocates for physical modeling sound synthesis, of which Karplus-Strong is an instance. A few examples are in [2], [3], and [4]. Disclosure: I am one of the current developers working on ChucK.

[1] http://chuck.stanford.edu/

[2] http://chuck.stanford.edu/doc/examples/deep/plu.ck

[3] http://chuck.stanford.edu/doc/examples/deep/plu2.ck

[4] http://chuck.stanford.edu/doc/examples/deep/plu3.ck


Awesome work! I really love ChucK, and at one point I participated in the mailing list to work out how to embed ChucK into another app. Conclusion: it wasn't easy, as the code has a lot of global static state, but I did use ChucK as the audio solution for a game I never finished.


Nice! If you are ever working on something like that in the future, I am working on libchuck[1], which should make embedding easier in spite of the pervasive global state. It's currently iOS/Mac only, but plans are in the works to make it cross-platform.

[1] https://github.com/spencersalazar/libchuck


Try doing this: I created a full piano synth, like Pianoteq, working in real time on Linux. It took me about a year to complete (with breaks). You can listen to it here: https://www.youtube.com/watch?v=U4I9SPCZIk4


Looks like the virtual "string" is initialized with a random shape and then suddenly let go to make that sound. I wonder what would happen if you shaped the original buffer with a specific profile -- for instance, a linear ramp up followed by a linear ramp down at some point, much like how you would actually stretch a string.

Edit: I hacked together a string-plucking demo here: https://www.shadertoy.com/view/ldVSzd

Modify the coefficient inside pluckPoint from 0.5 to 0.01 and hear the difference in timbre.
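Here's one way the idea might be sketched in Python (a stand-alone toy version, not the shadertoy's code; `karplus_strong`, `triangle_pluck`, and the `pluck_point` parameter are my own names): initialize the Karplus-Strong ring buffer with a triangular displacement shape instead of noise, then run the usual average-and-decay loop.

```python
import random

def karplus_strong(init, n_samples, decay=0.996):
    """Classic Karplus-Strong loop: emit the current sample, then
    replace it with the decayed average of itself and its neighbour."""
    buf = list(init)
    n = len(buf)
    out = []
    i = 0
    for _ in range(n_samples):
        out.append(buf[i])
        buf[i] = decay * 0.5 * (buf[i] + buf[(i + 1) % n])
        i = (i + 1) % n
    return out

def triangle_pluck(length, pluck_point=0.5):
    """Shape the initial buffer as a ramp up to pluck_point followed
    by a ramp back down -- like a string displaced at a single spot."""
    peak = max(1, int(length * pluck_point))
    up = [i / peak for i in range(peak)]
    down = [1.0 - i / (length - peak) for i in range(length - peak)]
    return up + down

noise_tone = karplus_strong([random.uniform(-1, 1) for _ in range(100)], 44100)
pluck_tone = karplus_strong(triangle_pluck(100, pluck_point=0.01), 44100)
```

Written to a WAV at 44.1 kHz, the triangle-initialized version should have a noticeably softer, less noisy attack than the noise-initialized one.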


Nice! Another "trick" is to take the impulse response of the body of the instrument you are modeling, e.g. a piano soundboard or violin body, and use it to filter the excitation (noise, ramp, etc.) before putting it in the buffer. You can hear this so-called commuted synthesis[1] being used here[2] -- not exactly a Stradivarius, but it kinda sounds like a screechy violin.

[1] https://ccrma.stanford.edu/~jos/pasp/Commuted_Synthesis.html

[2] https://youtu.be/U8wjFmLQJT4?t=21s


Accidental downvote, sorry. Thanks for sharing those links!


This algorithm is so simple that I once accidentally implemented a workable approximation of it while experimenting with a simulated tape delay in a flexible-architecture DSP system designed mainly for sound reinforcement (Soundweb London). I was floored when I fed white noise into a very short filtered delay (with the noise retriggered by the tape delay's output level) and started hearing plucked-string sounds.


Here's an editable demo written in JS (just hit "preview"): http://tinyrave.com/tracks/67/remix


Sounds way better than the original author's demo!

Original author's demo: https://www.shadertoy.com/view/MdyXRd


The shadertoy was an attempt at doing the algorithm without a delay buffer (making it stateless). The real demo is the wav file and the C++ that generated it (:


This is more or less how the original Macintosh beep was produced, though the ring buffer was initialized with a square wave rather than noise.

http://www.folklore.org/StoryView.py?project=Macintosh&s...


Fascinating. I implemented a small proof of concept in Mathematica and it sounds remarkably good, especially for such an extremely simple idea.

Maybe a small nitpick, but with the filter described in this article (taking the average with the next sample), the frequency isn't exactly equal to sample rate / samples per buffer: the filtering method is asymmetric, and it shifts the signal half a sample to the left every pass. With a symmetric method, like replacing each value with the average of its two surrounding values, that doesn't happen. It may sound pedantic, but the difference is quite clearly audible.
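The symmetric variant described above might look like this (a rough Python sketch of my own, not code from the article): each pass emits the whole buffer, then replaces every sample with the decayed average of its two circular neighbours. That filter is zero-phase, so it smooths without shifting the waveform.

```python
import random

def ks_symmetric(init, n_passes, decay=0.996):
    """Karplus-Strong with a symmetric smoothing step: each sample
    becomes the decayed average of its two circular neighbours.
    Because the filter is zero-phase, the loop period stays exactly
    len(init) samples, i.e. pitch = sample_rate / len(init)."""
    buf = list(init)
    n = len(buf)
    out = []
    for _ in range(n_passes):
        out.extend(buf)  # one full pass through the ring buffer
        buf = [decay * 0.5 * (buf[i - 1] + buf[(i + 1) % n])
               for i in range(n)]
    return out

tone = ks_symmetric([random.uniform(-1, 1) for _ in range(80)], 200)
```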


Mathematica is one of the most useful tools for this kind of experimental programming. A minimal implementation can be a single REPL line (if you prefer FP style). That's one line from blank page to "play audio"; it's hard to overstate how enabling such a shallow barrier to experimentation is. It's like cheating.

    ListPlay@ Flatten@ NestList[
       0.996*#& /@ MovingAverage[# ~Join~ {First[#]}, 2]&,
       RandomReal[1.0, {80}], 100]
https://i.imgur.com/aIufLHn.png


Interesting! I gave that a shot and definitely hear the difference. I believe the difference is due to the fact that averaging 3 samples instead of 2 makes a stronger low-pass filter. In any case, the result is much less harsh on the ears. I'm adding that to the post, thanks!


Oh, changing the filter definitely changes the timbre, but that's not what I meant. Even with a symmetric filter using only two values[0], there will still be an audible difference. The point of using the symmetric filter wasn't to get a more correct timbre, but to avoid changing the fundamental frequency.

Timbre is not the same as fundamental frequency. A guitar, a piano, or a violin playing the same note, say a middle C, will have wildly different timbres, but they should have the same fundamental frequency (261 Hz). That's what allows them to play together in orchestras. However, if you use, say, 50 samples per buffer and a sample rate of 8192 Hz, with your original filter you won't get a frequency of 163.84 Hz, but 8192/49.5 = 165.49 Hz. That might look like a small difference, but to our ears it sounds wildly out of tune. You can use a tone analyzer app to verify it.

Thanks for the shout out :) I know I'm nitpicking :p

0: like for example, replacing with the average of the two surrounding values
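For concreteness, the numbers above check out (this is plain arithmetic, nothing from the article):

```python
import math

sample_rate = 8192.0
buffer_len = 50

intended = sample_rate / buffer_len          # 163.84 Hz
# the asymmetric average shifts the signal half a sample per pass,
# so the effective loop length is buffer_len - 0.5 samples:
shifted = sample_rate / (buffer_len - 0.5)   # ~165.49 Hz

# about 17 cents sharp -- well past what a careful ear tolerates
cents = 1200 * math.log2(shifted / intended)
```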


That's great info, thank you! (:


This article is a good, easy-to-follow explanation of the Karplus-Strong algorithm.

Here's a nice web audio demo: https://chinpen.net/castro/ that uses this js lib: https://github.com/mrahtz/javascript-karplus-strong/


I implemented (with a group of two other people) a slightly fancier version of something like this at school, as an exercise in a DSP implementation course.

The basic principle was really the same, with the circular buffer and a filter, the combination of which can be understood as a long IIR filter. The fancier parts included a tunable harmonic bandpass filter (in place of the OP's two-sample average) and a fractional delay filter for fine-tuning.

By changing the parameters of the bandpass filter, it could be made to sound like anything from a plucked string to a church bell (minus the reverb, since we didn't have that). We were quite surprised by how good it sounded.

Unfortunately I seem to have misplaced all the materials related to that course. The hardware we implemented it for was some exotic audio DSP, but we had a MATLAB model that could produce output samples. If only I could find it.. =/
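Lacking the original materials, here is a much simplified sketch of just the fractional-delay idea, using linear interpolation in place of whatever filter the course implementation actually used (the function name and all details here are mine): the loop's total delay is the integer delay-line length, plus a fractional interpolated read, plus the half sample contributed by the two-point average.

```python
import random
from collections import deque

def ks_tuned(freq, sample_rate, n_samples, decay=0.996):
    """Karplus-Strong fine-tuned with a linear-interpolation
    fractional delay. Total loop delay = n (integer delay line)
    + frac (interpolated read) + 0.5 (two-point average),
    which equals sample_rate / freq by construction."""
    period = sample_rate / freq
    n = int(period - 0.5)        # integer part of the delay line
    frac = period - 0.5 - n      # fractional remainder to interpolate
    buf = deque(random.uniform(-1, 1) for _ in range(n + 1))
    out = []
    prev = 0.0
    for _ in range(n_samples):
        # buf[0] is one sample older than buf[1]; weighting the older
        # sample by frac realizes a read delay of n + frac samples
        s = frac * buf[0] + (1 - frac) * buf[1]
        out.append(s)
        buf.append(decay * 0.5 * (s + prev))  # averaging filter closes the loop
        prev = s
        buf.popleft()
    return out

tone = ks_tuned(440.0, 44100.0, 44100)
```

A serious implementation would more likely use an allpass interpolator, since linear interpolation also slightly low-passes the loop.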


Also, check this out.

http://falstad.com/loadedstring/

A more accurate (?) simulation of a loaded string. Requires Java.

This guy also has a bunch of other really cool stuff. Go to the root of that website.


Sounds like the Yamaha chip that powered the Sega Genesis[0].

0. https://www.youtube.com/watch?v=agmckeYH5NU



Bizarre. And the most bizarre thing is that it works.


In the general case, you take a wideband excitation -- like a buffer of random samples -- and use it as the initial value for a system with a lumpy frequency response. In this case the frequency response is very simple, but more complex IIR filters work too.

As the article says, this is a good analogue for how a lot of musical instruments work: a string is a resonator, and plucking it gives it a wideband excitation.
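As an illustration of that general recipe (my own toy example, with made-up parameters): push a short noise burst through a two-pole IIR resonator, and the "lumpy" (here, single-peaked) frequency response turns the wideband excitation into a ringing, pitched tone.

```python
import math
import random

def resonator(excitation, freq, sample_rate, r=0.999):
    """Two-pole IIR resonator: poles at radius r and angle
    2*pi*freq/sample_rate, giving a narrow peak around freq.
    A wideband input rings at roughly freq and decays like r**n."""
    theta = 2 * math.pi * freq / sample_rate
    a1 = 2 * r * math.cos(theta)
    a2 = -r * r
    y1 = y2 = 0.0
    out = []
    for x in excitation:
        y = x + a1 * y1 + a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# a 200-sample white-noise burst, then silence so the resonator can ring
burst = [random.uniform(-1, 1) for _ in range(200)] + [0.0] * 8000
ring = resonator(burst, freq=440.0, sample_rate=44100.0)
```

Moving the pole radius r toward 1 lengthens the ring-out; adding more pole pairs at harmonically related angles gets you closer to a string-like spectrum.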





