Growing Music: musical interpretations of L-Systems (2005) (york.ac.uk)
78 points by bjourne on Jan 20, 2020 | 10 comments



The drawings are more sophisticated than the generated music.

For example-- there's a musical progression in the 2nd movement of Mozart's clarinet concerto where he has a fairly well-trod major key sequence constructed of a rising fourth in the bass that gets sequenced up in steps until hitting a very obvious cadence. Totally humdrum stuff. In fact you can hear many composers during Mozart's time and after using this progression.

However, Mozart adds a trick-- at the end of each iteration he inserts a descending fifth bass pattern for a minor key that prepares the next step of the sequence. These two sequences progress in lock-step to the cadence, as if there were two completely independent progressions that are interleaved. Think the melody of "Baby, it's cold outside" but with harmony.

The drawing for such a musical game is much more basic than what's shown in the examples. But the musical upshot-- i.e., what you hear as a listener-- is in a completely different universe of sophistication from the musical examples given.

Yet the history of music is absolutely brimming with examples like the one from Mozart that I gave, from composers of all stripes. I think the process outlined here is too low-level to generate any kind of musical pattern of interest.
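
To make concrete how low-level that process is, here is a minimal sketch, in Python, of the kind of L-system-to-notes mapping being discussed. The rewrite rules and the symbol-to-pitch table are made up for illustration, not taken from the article:

    # Classic Fibonacci L-system: A -> AB, B -> A.
    RULES = {"A": "AB", "B": "A"}
    # Hypothetical mapping from symbols to MIDI pitches (C4 and E4).
    PITCHES = {"A": 60, "B": 64}

    def expand(axiom, rules, generations):
        """Apply the rewrite rules to every symbol, `generations` times."""
        s = axiom
        for _ in range(generations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    notes = [PITCHES[ch] for ch in expand("A", RULES, 5)]
    print(notes)  # self-similar, but harmonically inert on its own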


What stops a composer from doing a Mozart and taking the output from this system and modifying it to make it more interesting? My point is that systems like this can build starting points for composers to take further.


systems like this can build starting points for composers

They are. See for example http://aiva.ai


Are you saying that good music is necessarily oversimplified? I am not sure that I follow.


Here is an example of a Max/MSP patch that uses L-systems: https://www.youtube.com/watch?v=Z3hoAuS3qzg

If you are curious about algorithmic composition, you may want to check out the British duo Autechre. The title of their 2013 EP, L-Event, is probably a play on both L-systems and the eleventh (the interval): https://www.youtube.com/watch?v=sKtrcF_Y16Y


This is a fascinating article, but I can't help feeling that the cheesy synth sound, with its vibrato and echo, used to render the music detracts from the presentation and interpretation a fair amount. A fairly plain piano sound would have worked better, in my opinion. A subjective matter, of course.


How did you render the music? I want to experiment with algorithmic music generation too.


Dunno what they used, but the simplest version is to have your code synthesize the audio curve itself: just a time series of floating-point numbers (or integers), analogous to the wobble of the microphone membrane or your eardrum. Two fundamental parameters govern this raw audio: bit depth and sample rate. Bit depth determines how finely the curve gets digitized; typically you use two bytes (16 bits) to store each point on the curve. Sample rate is simply how many of these samples you store per second (CD quality is 44,100 samples per second). Raw audio at this level is called PCM. To render it most easily, output a 44-byte header outlining the audio spec, followed by the payload, the PCM audio itself, with each sample written across two consecutive bytes of the output file. Then you have your own WAV file, which can be played with command-line tools like ffplay, aplay, vlc, or whatever.
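
For instance, here's a minimal sketch of that whole pipeline in Python. The header layout is the standard 44-byte RIFF/WAVE format for mono 16-bit PCM; the tone frequency, duration, and file name are just illustrative:

    import math, struct

    SAMPLE_RATE = 44100          # samples per second (CD quality)
    BITS = 16                    # bit depth: two bytes per sample

    def sine_pcm(freq, seconds, amplitude=0.5):
        """Return a list of 16-bit integer samples for one sine tone."""
        n = int(SAMPLE_RATE * seconds)
        peak = amplitude * (2 ** (BITS - 1) - 1)
        return [int(peak * math.sin(2 * math.pi * freq * t / SAMPLE_RATE))
                for t in range(n)]

    samples = sine_pcm(440.0, 1.0)                 # one second of A4
    payload = struct.pack("<%dh" % len(samples), *samples)

    # The 44-byte RIFF/WAVE header for mono 16-bit PCM described above.
    header = struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 36 + len(payload), b"WAVE",
        b"fmt ", 16, 1, 1,                         # PCM format, 1 channel
        SAMPLE_RATE, SAMPLE_RATE * BITS // 8,      # sample rate, byte rate
        BITS // 8, BITS,                           # block align, bits/sample
        b"data", len(payload))

    with open("tone.wav", "wb") as f:
        f.write(header + payload)
    # play with: ffplay tone.wav  (or aplay / vlc)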


Thanks. But what I am looking for is to render an array of notes to PCM (or whatever). I've tried defining the notes as frequencies and using a waveform rendering library (in Python), but it's very slow and clicky-- a wave of a particular tone gets cut off abruptly right before the next one starts, and that produces a click. I need something more intelligent that sort of crossfades subsequent tones into each other. Perhaps I should just use a filter...
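
One simple fix for the clicks is a short amplitude envelope per note, so every tone starts and ends at zero amplitude and there is no discontinuity at the boundary. A rough sketch along those lines, where pure sine tones, a linear fade, and the 10 ms fade length are all illustrative choices you'd tune:

    import math

    SAMPLE_RATE = 44100

    def tone(freq, seconds, fade=0.01, amplitude=0.5):
        """Sine tone with a short linear fade-in/out to avoid clicks."""
        n = int(SAMPLE_RATE * seconds)
        nf = int(SAMPLE_RATE * fade)     # samples spent fading at each end
        out = []
        for t in range(n):
            env = min(1.0, t / nf, (n - 1 - t) / nf)  # attack / release ramp
            out.append(amplitude * env *
                       math.sin(2 * math.pi * freq * t / SAMPLE_RATE))
        return out

    # Concatenated notes now start and end at (near) zero amplitude,
    # so there is no discontinuity between them.
    melody = tone(261.63, 0.5) + tone(329.63, 0.5) + tone(392.00, 0.5)

These float samples would then be scaled to 16-bit integers and written out exactly as in the WAV sketch above.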


Isn't this a good case for MIDI? For example using something like https://github.com/nwhitehead/pyfluidsynth
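
If you go that route, a rough sketch of it; the calls below are pyfluidsynth's basic documented API, and the SoundFont path is a placeholder you'd have to supply:

    import time
    import fluidsynth

    fs = fluidsynth.Synth()
    fs.start()                               # default audio driver
    sfid = fs.sfload("/path/to/piano.sf2")   # any General MIDI SoundFont
    fs.program_select(0, sfid, 0, 0)         # channel 0, bank 0, preset 0

    for pitch in [60, 64, 67, 72]:           # C major arpeggio
        fs.noteon(0, pitch, 100)             # velocity 100
        time.sleep(0.4)
        fs.noteoff(0, pitch)

    fs.delete()

The synth's own envelopes handle the note transitions, so the clicking problem goes away for free.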



