Prior to my career as a software engineer, I attended RubyConf (2015, San Antonio) and watched a presentation about making music with Sonic Pi. That night, I made this rendition of "Corridors of Time" from Chrono Trigger: https://gist.github.com/thisismitch/be9287c80903cad151fe
It took me way longer than I'd like to admit! Enjoy!
Not exactly: I actually got a CS degree back in 2005 but got into sysadmin work. I started at DigitalOcean as a technical writer (tutorials) in 2014; then, after 18 months of writing, I brushed up on my skills by working on side projects and pairing weekly with one of DO’s engineering managers.
Sonic Pi is pretty cool. I really liked Overtone because I like Lisps, and I've chosen Extempore for my own electronic composing [1].
I first started with CM (Common Music), which also has a one-click download for Linux, Mac and Windows. It also has something of an IDE called Grace: "Grace (Graphical Realtime Algorithmic Composition Environment) is a drag-and-drop, cross-platform app implemented in JUCE (C++) and S7 Scheme." [2] The birds example is amazing! They build up a very real-sounding bird call from scratch.
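That build-it-from-scratch approach ultimately boils down to summing sine partials. Here's a minimal additive-synthesis sketch in plain Ruby — my own illustration of the idea, not Common Music's actual API:

```ruby
# Minimal additive synthesis: sum a few sine partials into one sample
# buffer. Real patches (like the bird call) layer many more partials
# with time-varying frequencies and amplitudes.
SAMPLE_RATE = 44_100

# Render `duration` seconds of a tone from [frequency_hz, amplitude] pairs.
def render_partials(partials, duration)
  (0...(SAMPLE_RATE * duration).to_i).map do |i|
    t = i.to_f / SAMPLE_RATE
    partials.sum { |freq, amp| amp * Math.sin(2 * Math::PI * freq * t) }
  end
end

# A fundamental plus two quieter overtones.
samples = render_partials([[440.0, 0.5], [880.0, 0.25], [1320.0, 0.125]], 0.1)
```

The result is just an array of floats in the range the amplitudes allow; writing it out as a WAV or feeding it to an audio API is a separate step.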
At first glance this looks very cool. There is a lot of good documentation, and there seems to be a powerful set of libraries around it for making something very neat.
BUT... to put my PM hat on, I'm not sure who the target audience is here:
A - musicians who want to compose music with code.
B - people who want to learn programming by having fun with music creation.
C - programmers who want to learn music theory.
If the intention is a fun vehicle to teach programming, then I think there are better ways. I looked through the samples and I think as a kid I would get tired very quickly. There are so many function calls to learn. Right away I have to understand the concept of randomness and what it does.
I think if the idea is to empower musicians, this seems like a lot of work to create music.
In the end, you have to know some music principles and already understand a lot of programming concepts to actually do anything.
This combination makes learning to program dependent on knowing music theory, and creating music dependent on knowing a lot of programming: you have to explore all the different functions and libraries before you can apply your music theory.
To me it would be frustrating. And for musicians, who I guess can potentially create some sounds that might be difficult with existing tools, it's just too hard to do.
It would have been far better to use a simple 2D game board (tanks, robots, etc.; I know it's been done) as a programming teaching tool than the complex world of music theory.
Or at least make the libraries not dependent on music theory, and provide some higher-level presets with built-in abstractions. Example: loop song(type: techno_beat, length: 300) (do some stuff here) end_loop
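A preset layer like that could be sketched in plain Ruby. Everything here — the song helper, the preset names — is hypothetical, not an existing library:

```ruby
# Hypothetical high-level preset API: a `song` helper hides the music
# theory behind named presets and yields beat numbers to a block.
PRESETS = {
  techno_beat: { bpm: 130, pattern: [:kick, :hat, :kick, :hat] },
  slow_groove: { bpm: 80,  pattern: [:kick, :snare] }
}

# Yield [beat_index, drum_symbol] pairs for `length` beats of a preset.
def song(type:, length:)
  preset = PRESETS.fetch(type)
  length.times do |beat|
    yield beat, preset[:pattern][beat % preset[:pattern].size]
  end
end

hits = []
song(type: :techno_beat, length: 8) { |_beat, drum| hits << drum }
# hits is now [:kick, :hat, :kick, :hat, :kick, :hat, :kick, :hat]
```

The point of the design is that a beginner only ever names a preset; the theory (tempo, pattern structure) stays behind the abstraction until they want to look.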
It might be useful to divide the world into neat Venn diagrams of "target audiences" when you are running a business, but it is not good to start taking this stuff too seriously. There's more to life than that.
We are human beings and we like to create. The digital computer is a meta-tool, in the sense that it is possible to invent entire new mediums of expression within it. This appears to be a particularly interesting one. I think you underestimate kids. I know I would be very excited to have access to this tool back in the day, and would be delighted to be able to use randomness to manipulate sound. Being called a "superhuman musician" for pressing a button would make me want to puke.
Take your PM hat off sometimes and smell the roses.
The creator of Sonic Pi gave a keynote talk at my company's dev conference last month. He was pretty clear that it was originally created to help teach children programming. It does have other audiences now, though.
He mentioned that he tried starting with simple games like you suggest but that kids have unrealistic expectations of what they'll be able to do and it leads to disappointment. They can imagine a commercial quality game quite easily but have no hope of creating one. But with music they're excited to be able to make any sound at all and don't expect to be able to create a film score.
Very interesting. I guess that could also make sense.
@pjmlp thanks for clarifying. I had no idea there was such a thing.
@darpa_escapee - thanks for guiding me into that rabbit hole. Csound looked the most promising to me. Pure Data needs more screenshots and samples; the site was very hard for me to dive through. And Overtone (Clojure) is cool. I wanted to learn that at one point, but I'm sticking with learning Haskell first.
Max/MSP is a popular one, basically a fleshed-out commercial version of Pure Data. As somebody with a background in procedural and OO programming, I personally find that graphical approach immensely frustrating, but I’ve known lots of people to whom it speaks.
For a more code-like experience, SuperCollider seems pretty fun, though I haven’t gotten deep into it. ChucK (http://chuck.cs.princeton.edu) is another neat option, if you really want to feel like you’re controlling every sample that goes by.
It's a species of programmable synthesiser. If it looks like a lot of work, you should see someone playing an analog modular synth. In fact, you should see someone playing classical piano; that's years of practice and a whole chunk of music theory.
I inserted 'classical' because otherwise someone would counter with an example of someone who's a self-taught piano genius with no musical theory training.
One of the problems that the Ruby community seems to have focused on is bridging the gap between code and ideas. I say this as a non-Ruby-programmer.
At the end of the day, computers simply read language and convert it into some basic operations. But still, the average non-programmer feels like software is this black box of mystery which takes years to comprehend. And that's not true!
I know plenty of people programming synths in Ableton, mostly using their mouse to point and drag. And I suspect a lot of them suffer from "click and drag fatigue". Most of them would be able to understand all of the parameters being exposed here.
Helping bring practical programming to non-programmers is a noble goal. And I hope they succeed... because what programmer wants to keep rewriting the same boilerplate for the next 20 years?
> One of the problems that the Ruby community seems to have focused on is bridging the gap between code and ideas. I say this as a non-Ruby-programmer.
As a Rubyist, I say this is pretty well on-the-mark. People ask me all the time why I love Ruby, why I think Ruby is the best language ever made, why I think in a thousand years, Ruby will have eaten all the other languages.
It's because, at the end of the day, Ruby is a far more pleasing mental interface to software systems than anything else. If it's not Ruby we're all programming with in a thousand years, then whatever language we are programming in will look a lot like Ruby.
> Ruby is a far more pleasing mental interface to software systems than anything else.
I've heard this argument for basically every niche language, usually things like Lisp and (color) Forth. I've concluded that different people have very different internal mental models of programming.
I've used Lisp, and fell in love with it, before finding Ruby. Ruby is better than Lisp. It took me some time to realize this. Lisp isn't bad, but it's not Ruby.
I wouldn't call Ruby a niche language. You can use it for anything, it's very much a general purpose programming language. Most people only build websites with it, but you do see lots of other kinds of things built with it too, including, you know, a digital audio workstation.
I'd call PHP a niche language before I'd pin that on Ruby.
Software is eating the world. You can simulate almost all the instruments with software, create patterns easily with basic programming skills and create the music that you want in a cheaper and cleaner way compared to the traditional methods. Since this is not a commercial project, there might not be a well-defined target audience but I guess it works both ways. It can attract the software developers to learn some basic music theory and also attract the musicians to learn some programming so that they don't need to do it manually using DAW software.
As a casual user of SonicPi, I would really hate to see Sam give up on this project in favor of another crappy game that's supposed to teach kids to program. I already produce music on instruments and hardware devices. There are so few software tools out there where you can do what SonicPi does at an entry level like I am. It can be really easy to DJ a set w/ this, but you can also get really deep into musical concepts. That's something I really appreciate.
I think you're really underestimating the number of people who are programmers and musicians. As one, I've found Sonic Pi to be fun to mess around with.
Also, how do you make something like this without music theory? At the end of the day, you need an abstraction. Why make a new abstraction for creating music when everybody already agreed on one?
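That agreed-upon abstraction is concrete enough to compute with. For instance, the standard equal-temperament mapping from MIDI note numbers to frequencies fits in one line of Ruby:

```ruby
# Equal temperament: each semitone step multiplies frequency by 2**(1/12),
# anchored at A4 (MIDI note 69) = 440 Hz.
def midi_to_freq(note)
  440.0 * 2.0**((note - 69) / 12.0)
end

midi_to_freq(69)  # A4 = 440.0 Hz
midi_to_freq(81)  # A5, one octave up = 880.0 Hz
midi_to_freq(60)  # middle C, ~261.63 Hz
```

Note names, scales, and chords are all thin layers on top of this one formula, which is part of why reusing the existing abstraction beats inventing a new one.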
Well... I program and play music for fun. I enjoy making digital music on Linux with JACK and the way it basically turns your computer into a modular recording studio. I've always been interested in ways to combine programming and music. I've checked out a few different music libraries before, but they've always been fairly low level. As in: program your own oscillators for every synthesizer or sound, program your song, compile, then play back. Which, honestly, isn't very fun. Or it's a closed, all-inclusive package with a built-in editor and massive library compatible with nothing else.
Something with a simple API that can be programmed on the fly is pretty cool. This is something I'm personally interested in checking out and playing with.
I played with sonicpi for a while and it was fun until I hit the limitations of my severely lacking music theory knowledge.
I feel like all the best tutorials are written for people who are good with music but need to learn to code. Anyone out there have any good resources for using sonicpi to learn more music theory as a coder?
I am very excited by the idea of creating music by means of programming. But it seems that there isn't even a single project with an active community. SuperCollider seems to be the most popular with regard to GitHub stats. Sonic Pi seems to be the most recent endeavor in this area, but it doesn't offer any deb packages, and compiling for Debian/Ubuntu seems to be undocumented.
My impression is that this comes up every couple of years, but so far nobody has succeeded at producing a system that gains meaningful popularity. Not to mention how difficult it was to compile and set up the software for the various projects I have tried.
Another problem is that very few YouTube tutorials showcase rhythms and melodies going beyond something resembling a ping-pong match on speed.
I think the relative lack of a strong community around programmable music generation probably originates from a lack of a particularly good use case. To me, as both a programmer and an electronic music producer, applications like Sonic Pi and Supercollider are not all that appealing, and actually come off as downright tedious.
First of all, music creation is too chaotic a process to allow for simply getting things right on the first try. Single notes in arpeggios are changed, entire progressions are taken up and down steps, parameters are continuously played with until you find the right levels, and all of these and more are much better suited to graphical abstraction purely for ease of use. I'd much rather spin a virtual knob to find appropriate levels than type and re-type a variable quantity, especially if I have to wait for that quantity to update every time.
Second, music is all about edge cases. Using control flows to automatically change a piece is nice, but not as nice as quickly rearranging tracks in a visual playlist. Deciding that a particular loop should end in a different way is simple in a visual editor: cut off the tail and put something else in, or make one instance of the loop separate from the others and edit in place. These are processes that take less than a second for me, but would involve careful crafting of conditionals to achieve in Sonic Pi or the like.
All of that said, I think this approach probably has its merits. I've been wishing for scripting in DAWs for a long time, and having a synthesizer that supports writing code to modify waveforms or change how parameters link would be awesome (if this exists, someone please tell me). Projects like Sonic Pi, though, seem to take this past the point of usability.
>> "I've been wishing for scripting in DAWs for a long time, and having a synthesizer that supports writing code to modify waveforms or change how parameters link would be awesome (if this exists, someone please tell me). Projects like Sonic Pi, though, seem to take this past the point of usability."
Reaper has a scripting language. It even comes with a few synths written in it, complete with source.
I want a DAW with the flexibility of Reaper and the UX of Ableton.
If you spring for the full Max4Live Ableton package, you can automate quite a lot via the maxmsp JavaScript object, which gives you scripting access to the Live environment vis the LiveAPI object. It’s kind of an awkward API to use but still much nicer than using the traditional graphical max objects.
While I completely agree with the sentiment, I still wonder what's the difference to graphics. Is drawing easier, for lack of a better word, than gfx-programming? I would argue this comparison is apt and yours falls flat. B flat.
For starters, you could have a skeleton of a script with accessible parameters, given knobs. That would look like a DAW, except with text instead of pseudo design with screws and LCDs that mimic real objects (skeuomorphism). Yes, you want buttons; visual programming still sucks. Demo coders like Farbrausch program their own demo tools, e.g. Werkkzeug 3, for exactly that reason, don't they? Considering gfx programming as the comparison, of course textures, models and so on are modeled in an analogue fashion. Nobody programs a human.md3 to evolve from an embryo for fun, but in principle, someday it could be done. Music is a lot like vector graphic art: you can do a whole lot with simple shapes and gradients. And you can program complicated sound effects perhaps more easily than as a 5-second loop rendered to wav and pitched by the DAW, if you know what I mean.
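That "script skeleton with knob-like parameters" idea can be sketched in plain Ruby; the class and its knob names are hypothetical, not any existing DAW's API:

```ruby
# A "patch" is a script whose tweakable values live in one knobs hash,
# so a host UI could render each entry as a dial instead of raw text.
class Patch
  attr_reader :knobs

  def initialize
    @knobs = { cutoff: 0.5, resonance: 0.2, volume: 0.8 }
  end

  # Turning a knob just rewrites one named parameter, clamped to 0..1.
  def turn(name, value)
    raise ArgumentError, "unknown knob #{name}" unless @knobs.key?(name)
    @knobs[name] = value.clamp(0.0, 1.0)
  end

  # The render step reads whatever the knobs currently say.
  def render(sample)
    sample * @knobs[:volume]
  end
end

patch = Patch.new
patch.turn(:volume, 0.5)
```

The code stays readable as text, while the host could still expose every entry in the hash as a spinnable control — the best of both interfaces.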
Note composition, as you remark, is especially beside the point. The drone noise perspective might be an extremely misleading example, but music programming should be able to paint outside the classical frame. It should allow you to define sweet spots of resonance, instead of chasing harmony by ear. This does require deep understanding, so instead I'm happy with finger painting ... because it's so close to the metal, err, paper.
It's very sad because I have no idea of the potential. Composition to me is choosing an instrument and elaborating simple known melodies into more complex ones until it sounds harmonious thanks to obeying the circle of fifths, but that's mostly it, and mostly rather superficial, which doesn't matter as long as the instrument sounds nice; and if it doesn't, I'll split the melody by octaves, e.g., and choose two different instruments, altering the octaves to get a high contrast (shout out to my man). Because of the loop nature of pattern-based composition, I am mostly not interested in arrangement. This again compares to shader programming. And even big studios basically just stitch together single scenes. ... yadda yadda yadda.
You might also compare the violin to the voice. Far more people can or think they could sing. Making the violin sing is just much more complicated, but not exactly boring.
> I've been wishing for scripting in DAWs for a long time, and having a synthesizer that supports writing code to modify waveforms or change how parameters link would be awesome (if this exists, someone please tell me).
I'm working on a DAW that you can live-code with JS and math expressions if you're interested: https://ossia.io
C++ just-in-time compilation of sound effects is coming in the next few months (JS just does not cut it for real-time audio with per-sample access).
Music coding is conceptually challenging, which immediately limits the community of interest to people who like coding for its own sake, and not usually very rewarding musically, which limits the community further to academics, students who are forced to experiment with code (usually briefly), and the odd hobby experimentalist.
Given all the other tools available, from DAWS to trackers to VSTs to hardware synths, why would a musician - as opposed to a coder - want to climb the incredibly steep learning curve?
Music production with Ableton and the like is programming. All the synths and effects are functions operating on a signal, and the various knobs and sliders control parameters.
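That parallel is fairly literal. A sketch in plain Ruby, treating a signal as an array of samples and each effect as a function whose "knob" settings are just parameters:

```ruby
# Each effect is a lambda from signal to signal; its knob setting is a
# parameter baked in when the effect is instantiated.
gain      = ->(amount) { ->(signal) { signal.map { |s| s * amount } } }
hard_clip = ->(limit)  { ->(signal) { signal.map { |s| s.clamp(-limit, limit) } } }

# Chain effects like patching modules: the output of one feeds the next.
chain = [gain.call(2.0), hard_clip.call(1.0)]
dry   = [0.1, 0.4, 0.7, -0.8]
wet   = chain.reduce(dry) { |sig, fx| fx.call(sig) }
# wet == [0.2, 0.8, 1.0, -1.0]
```

A DAW's effect rack is essentially this reduce over a much longer chain, with the knob values editable while the audio runs.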
There should be a special name for this fallacy, because it occurs so often on HN.
Just because a domain looks a bit like a trivial mathematical operation to mathematically inclined outsiders doesn't mean that the math really is trivial, or even that the real core of the domain is best summarised as a trivial mapping.
Interesting that you describe music resulting from this approach as not very rewarding. But why is that so? I'm not a musician, so maybe I'm naive, but shouldn't music programming be able to produce anything and beyond?
I have a much longer comment above that addresses your point, I think. The gist is this: programming music, as compared to using a graphical DAW, is highly tedious. Unless you know exactly what you want, down to the note, writing music in code would take far longer to produce results.
I've been thinking about the differences a lot, and the basic problem is a misunderstanding of what music is.
To a coder, music looks like a sequence of instructions that make sounds, so of course it's natural to assume that it's just like code. Music is a series of events, so let's write code that makes a series of events. How hard can it be?
To a musician, music is tactile, improvisatory, and sculpted. It's nothing like code. At all.
Even if you're using a DAW with a mouse, you're still shifting elements around in time and sculpting fine nuances of the sound with controller curves.
So code is a terrible UI for music, and live code is even worse. You have to spend so much time on irrelevant distractions - creating buffers, managing objects, iterating through arrays - that there's almost no connection left between the sounds that are being made and your expressive intent.
So live coding only works if your expressive intent is trite and lacking nuance and depth. The only people who do it are hobby coders and a small community of academics who are trying to sell it as a valid revolutionary activity.
Interestingly trackers, which are by far the most successful coding environment for music, also have the lowest conceptual overhead.
Yeah, that's a problem for all of the most popular livecoding libs (Tidal and Sonic Pi depend on SuperCollider). One alternative is to use JS (Gibber.js) and do the livecoding in the browser. I've been playing with the idea of creating this lib that uses Ruby and compiles to JS under the hood: https://github.com/merongivian/negasonic. One drawback is performance, though.
Having tried both Gibber.js and Tone.js, I would give the edge to Tone.js - fewer crackles and pops and generally a more stable bridge to the Web Audio API. Give it a look if you like.
> Sonic pi seems to be the most recent endeavor in this area. but it doesn't offer any deb-packages. compiling for Debian/Ubuntu seems to be not documented.
What I'd like to see (or find, perhaps it's out there) is something that's based more on triggering loops and samples (with optional processing) as opposed to synthesis.
EarSketch sort of looks like what I'm looking for, but it's web-only and I can't quite get it to run.
Sonic Pi is awesome, my son has a great time going through the great tutorial and riffing on the examples a bit. But when we were done with the tutorial I was (and still am) a bit lost as to where to go next. I don’t have any understanding of music theory and Sonic Pi seems like a great way to learn the basics of it, but I couldn’t find anything that occupied that next step after the tutorial.
Sonic pi is great, I used it live as part of a noise / ambient music project I do. It really feels like just using a DAW but having more control and much faster. Unfortunately I fell off it, because I struggled with Linux audio so much and just gave up at some point. But it's good fun and more than a toy for sure.
It is pretty neat, and the main coder is doing pretty cool demos. It is not easy to master; it reminds me of the cool time of modules on the Amiga and their magic "woaaa" effect.
Sonic Pi is based on SuperCollider for sound synthesis, with a Ruby server that controls it, all wrapped in a Qt GUI. Either SuperCollider or the Ruby server should be plenty easy to embed.
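The control channel between the Ruby side and the SuperCollider server is OSC (Open Sound Control) over UDP, and the wire format is simple enough to build by hand. A plain-Ruby sketch of encoding an OSC message with integer arguments (/n_free, which frees a synth node, is one of scsynth's server commands; treat the rest as an illustration, not a client library):

```ruby
# Build a raw OSC message by hand: NUL-padded strings aligned to 4-byte
# boundaries, then big-endian 32-bit integer arguments.
def osc_pad(str)
  str + "\x00" * (4 - str.bytesize % 4)  # always at least one NUL terminator
end

def osc_message(address, *int_args)
  osc_pad(address) +
    osc_pad("," + "i" * int_args.size) +  # type tag string, e.g. ",i"
    int_args.pack("N*")                   # 32-bit big-endian integers
end

# /n_free frees a synth node on the server; node ID 1000 is arbitrary.
# Something like UDPSocket.new.send(msg, 0, "localhost", 57110) would
# deliver it to a running scsynth.
msg = osc_message("/n_free", 1000)
```

Real clients (and Sonic Pi itself) use a proper OSC library with string and float argument types, bundles, and timestamps, but the packet above is the whole idea in miniature.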
Edit: Here's an MP3 export of it https://soundcloud.com/mitchell-anicas-project/corridors-of-... for easier listening.