What Is the Future of the DAW? (djmag.com)
196 points by sowbug 9 months ago | 216 comments



To be honest, I really like existing tools and workflows and I don't really want that to change.

> Open a DAW from the year 2000 and it’s highly likely you’ll recognise the vast majority of the features — both functionally and visually — from any DAW you might use today.

This assertion that "old is bad" really, really needs to die. Change for the sake of change is bad. I don't see why all these new innovations can't be baked into existing workflows in a natural and intuitive fashion.

But then, in some cases, do I really want them? Like generative AI for music. I'm not sure, but I'm leaning towards no.


More or less, this.

I've worked in a lot of studios with 2" tape or, like, a Radar or an HD24. In all of those, the engineers had really great chops. Functionally, for most "traditional" work (I do a lot of folk, jazz, and country), I just do what they did, and Logic is just a glorified tape machine + mixer + processing.

I am still mostly doing stuff in single takes until I like what I have, and then maybe punching in a note or two here and there. It's a really fast way of working. And the old stuff I have is quite good: API/Neve front end, AKG/Schoeps mics, Genelecs... these have all been around a long time, and I am not seeing anything newer offering benefits there; they are fast and easy to use well without a lot of tweaking.

Like, you could walk into any studio post-1980 and more or less find that things have a 1:1 correspondence to a modern DAW.

There is nothing wrong with those tools, and so I assume folks will keep using them.

However, I can see a couple of areas where I think that smarter tools might help. A big chunk of what I am doing already involves programmed drums, and a smarter "Drummer" in logic would be an improvement.

Also I (and a lot of other folks) aren't super stoked about harmonies sung by one person overdubbing a bunch of lines. I'd be interested in some workflow where I could, like, sing a harmony and it would change my voice in a credible sounding way.

But as far as workflow goes, I really don't need compositional tools- I could do most of my writing tasks with a pencil and notebook.


>Also I (and a lot of other folks) aren't super stoked about harmonies sung by one person overdubbing a bunch of lines. I'd be interested in some workflow where I could, like, sing a harmony and it would change my voice in a credible sounding way.

This would be a godsend


Really??? Uh, it's probably my fault, but as a hobbyist composer (day job: software engineer) I can personally say that the tools I use absolutely and utterly suck. They're immensely buggy. My workflow includes:

- LilyPond for writing the final notation

- MuseScore 4 for playing around, MuseScore 3 for playing MIDI

- REAPER as DAW, SFZ plugin for MIDI soundfonts

- Audacity for processing WAV files

- music21 Python lib if I need to programmatically process MIDI (which I commonly do); my scripts are built on this lib (see the sketch just after this list)

- Kdenlive for video processing

- Most importantly: my piano, a Roland FP-30X
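For a taste of that music21 scripting, here's a minimal sketch; the file names and the transposition amount are made-up placeholders:

```python
# Minimal sketch: parse a MIDI file with music21, transpose it, write it back.
# "song.mid" and the interval are hypothetical placeholders.
from music21 import converter

score = converter.parse("song.mid")           # MIDI -> music21 Stream
up_a_third = score.transpose(4)               # up 4 semitones (a major third)
up_a_third.write("midi", fp="song_up3.mid")   # Stream -> MIDI on disk
```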

Now among these, the only one I don't hate is the FP-30X. That piano is fucking amazing; it sounds and feels great.

Kdenlive is also not terrible but I don't do a lot of video processing. Just basic notation videos for YouTube.

Everything else is incredibly, incredibly buggy. MuseScore 4 is trash; almost every interaction I have with it exposes some bug. REAPER is usually fine, but there are pretty annoying bugs when it comes to MIDI import/export involving time signature changes.

My workflow is so dogshit that last month I decided I'll either write my own minimalist tiny notation/DAW tool or reconsider this hobby. So far I haven't been able to do anything major, but I'm confident I'll invest some time into developing my own tools in late 2023 or early 2024.

I'm glad people like their workflow. Unfortunately, my own experience with Linux audio processing has been nothing but encountering one bug after another.

EDIT: Whoops, don't know why I said Ardour. I never used it, I actually meant REAPER. Fixed now.


> Unfortunately, my own experience with Linux audio processing has been nothing but encountering one bug after another.

You kind of buried the lede there...


"To the man who has only a hammer, everything looks like a nail." Imagine, expecting Linux to provide a productive, frictionless music creation workflow, using programs that aren't exactly torture-tested for usability on the platform. I think you're supposed to be grateful/amazed that they work at all. (Not that the UX would be dramatically better or trouble-free on Windows or a Mac, using the same suite of apps.)


> Linux audio processing

Just use Windows and all of these problems go away. When I make music my stack is basically Ableton Live, NI Komplete and Spitfire Audio and I have exactly 0 of these issues.


This doesn't work for me because I do a lot of algorithmic composition and a huge part of my music making involves programming and it's very hard to write code anywhere that's not linux.

Also I truly hate using windows or osx.


Python, vim, emacs, vs code, etc all exist to write code on Windows too. Hell, Windows even has Linux in it too now (WSL).


> it's very hard to write code anywhere that's not linux.

Oh nonsense -- in fact, bullshit. No, it is not. This is a personal problem; seek help.

>Also I truly hate using windows or osx.

And we truly hate hearing about it, and how things don't work well on your platform of non-choice. So suffer it in silence please.


Okay, this is a matter of opinion, and yes, I do agree that "it's hard to write code anywhere that's not Linux" is hyperbolic. But Windows is _truly_ a terrible OS for DX (developer experience). I dread every single time I have to do any kind of development on it.


I don't have this problem. Reaper, microphone + sax, occasional other woodwinds, and an occasional keyboard. If I write code for music it'll probably be in Pure Data or Perl (day job). Pure Data for (generative?) sequencing and audio effects. LilyPond for notation, which I'll consume with Reaper or TiMidity. I tried Windows; it confused the crap out of me. I don't mind OSX, but I do all my real work on a Debian workstation.

I kind of just use Reaper as a dumb recording reel - I don't use many of its timing features.

At the moment I'm mostly just writing up exercises to fit my current goal of learning everything in any key.


My music involves a lot of time sig changes. When I export a REAPER project to MIDI and consume it in Python, there are bugs such as note durations being wrong exactly where there is a time sig change.
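If you want to see where things land, here's a quick music21 sketch for inspecting an exported file (the file name is a placeholder):

```python
# Sketch: list time-signature events and note durations, to spot glitches
# around the time-signature changes. The file name is a placeholder.
from music21 import converter, meter

flat = converter.parse("reaper_export.mid").flatten()

for ts in flat.getElementsByClass(meter.TimeSignature):
    print("time signature", ts.ratioString, "at quarter-note offset", ts.offset)

for n in flat.notes:
    print(n, "offset:", n.offset, "quarterLength:", n.quarterLength)
```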


If you have the money, I'd recommend a monome Norns or Teletype. You could also look into 100r's Orca and Midinous, both of which run fine on Windows and abstract away the pain of developing on Windows.


Look at his nickname…


I really like Logic, especially on my M1 Mac mini. The latency is really low, external MIDI works great, and the interface is very intuitive. In years past I've fought with audio interface and latency issues. Now it all just works, so I can focus on my marginal songwriting and instrument playing.


I'm pretty content with FLStudio (with various VST plugins) and Sound Forge, and Audacity when I need to work with something in a weird format. Sometimes the plugins can get crashy but I just bounce to WAV and unload the offending plugin.

In fact, FL just added a Python integration that I haven't even begun to explore yet, but you can use it to generate piano roll scores (among other things?).

foobar2000 handles tagging, and DaVinci Resolve/Blender/ZGameEditor handle video.

Every time I've tried to do anything in Linux for music it's been a headache. Granted, it's been a while now. But I'm a little too invested in the Windows audio ecosystem to switch.


There is Bitwig, which is much more comparable to Live in terms of features.


“Now with timelines!”

Heh, been using it for years before that feature (which is admittedly now years ago).

Synth graphs are sweet. But I kept going back to Logic to compose compositions ;)

Habit has stuck, unless I have something bitwig-y in mind.


Give any mature, non-Linux DAW a try. It's cool to try to keep it all open source, but with music software you get what you pay for. I recommend Logic Pro.


> you get what you pay for

you can pay whatever you want for ready-to-run versions of Ardour (US$1 and up). Need it to be better? Pay more! :)


What are some Lily bugs you've encountered?


Nothing too major, except I'm about 99% sure \after is buggy. I love LilyPond enough that I'll soon file a bug report. It's possible I'm missing something, but I have ample evidence that \after messes things up in certain contexts.


"Open a Word processor from the year 2000...", "Open a spreadsheet editor from the year 2000...", "Open an image editor from the year 2000...". This is completely silly; most desktop software still exists in a form that is a recognizable evolution from Y2K versions; DAW is no exception, big deal indeed!


The ribbon ruined Office. I bet I could still open Office 97 and get everything I want done, word-processing-wise, though.


We'll see. Companies like Ableton aren't going to throw out their whole codebase and start from scratch just to accommodate advancements in technology. But if I can just talk to an LLM and say "hey, gimme a drum beat, now add in a bass line, no, not like that, like <hums into mic>", do I really need the overhead of the traditional view of separate tracks and MIDI and plugins? When making music is as easy as describing something to Stable Diffusion, it's not that old is automatically bad, but we don't operate cars like the Flintstones. When I'm walking, I use my feet, one at a time, over and over again. When I'm in my car, I don't use that same interface to operate it; I don't take footsteps to make my car move. There's just a button press, with my foot, to make many, many footsteps happen. If AI is able to similarly accelerate music production, the old interface may simply not make sense anymore.

Is the old in this case the wheel, which is fundamentally the same except for thousands of years of technological improvements surrounding it, or is it the computer, where modern machines bear only a passing resemblance to the original thing?


That sort of thing is only going to accelerate the amount of crap, derivative music that is already flooding the market

The difference between music and driving a car is that music is art. Many of those who labor in the space enjoy creating from scratch. Technological progress does not automatically make things better


Crap music that other people made will flood the market, but music that's made just for personal enjoyment will explode. Writing music, creating/recording it, and arranging the tracks in Ableton is a very time- (and money-) consuming hobby if not done professionally. Learning to play an instrument in the first place can be far more rewarding because it's "yours". Prompt engineering on an LLM will be a total shortcut to creating music, but it will still be yours.

Commuting to work/school/etc. in a car might not be art, but driving for sport, at the top of the profession, is very much an art form simply because of the technical skill, creativity, and emotional depth and investment. Just as a musician understands the nuances of their instrument to produce a captivating melody, a driver must intimately know their car's attributes to master each track. The racetrack is their canvas, with split-second decisions, adaptability, and an emotional connection to the track, the car, and the other drivers. Like a well-composed symphony, high-level driving's graceful turns and passes resonate with both the drivers and the viewers. It's not just about who crosses the finish line first (despite what the Fast and the Furious movies told you); to educated viewers, the elegance and finesse required to drive at that level befit a musical virtuoso.

Very much agree on the last point: technological progress often isn't. Electric cars are the future, but the diagonal torque curve of the motor lacks the charm of ICEs.


I’d say the current tools are mostly refined and properly mature. I would rather see innovation with plugins, with DAWs staying pretty much as they are.


Yup. Acid Music and Cool Edit Pro 2.0 are great examples to note… both poached, of course…

I’ve dropped more than $1000 with Ableton and always recommend Reaper first.


I like the existing workflows. But I'd really love to GPT-define reverb settings, song forms, channel inputs... the tedious copy-paste tasks.

A local LLM running on x86 or M2 would be great. But API callouts while waiting for the silicon/CUDA migrations are fine too.


Nothing will homogenize pop like LLM-inspired filters: the new presets of the DX7.


But that bass is so classic and tight!


Don't you worry; Daniel E. and the streaming cartels will happily plug something like that into your sound hole.


Having started music production only in the late 2010s, I was honestly shocked to see just how similar DAWs from even the 90s were to the current workflows. It feels like you could teleport a producer from a few decades ago into today's music software and they would figure things out pretty quickly.

Of course things are far, far more convenient and faster nowadays, but the fundamental paradigms are mostly the same.

https://www.youtube.com/watch?v=6OaBkvwx7Hw


TFA starts down an important path - identifying different categories of users - but doesn't really get this quite right IMO.

There are (at least) four categories of DAW users:

1. Professionals who are being paid to make music, and for whom time is essentially money. Tools that speed up the production of that music are both financially valuable to them, and also make their overall lives easier (if done right).

2. Musicians for whom making music is a creative act of self-expression. They are not being paid by the hour (of music, or of effort), they are not under deadlines, but they do want tools that fit their own workflow and understanding of what the process should look like.

3. People who just want to have fun making music. Their level of performance virtuosity is likely low, and the originality of what they produce is likely to be judged by most music fans to be low. They want results that can be quickly obtained and are recognizably musical in whatever style they are aiming at, and they don't want to feel bogged down by the technology and process.

4. Audio engineers in a variety of fields who have little to no interest in or need for music composition, but are faced with the task of taking a variety of audio data and transforming it, radically or subtly, to create the finished version, whether that's a collection of musical compositions, a podcast, or a movie soundtrack.

The same individual may, at different times, be a member of more than one of these groups (or other groups that I've omitted).

The needs of each of these groups overlap to a degree, but specifically the extent to which the current conception of AI in music&audio can help them, and how it may do so, are really quite different.

We can already see this in the current DAW world, where the set of users of DAWs like Live, Bitwig and FL Studio tends to be somewhat disjoint from the users of ProTools, Logic and Studio One.

TFA acknowledges this to some degree, but I don't think it does enough to recognize the different needs of these groups/workflows. Nevertheless, not a bad overview of the challenges/possibilities that we're facing.


5. People who record, but have nothing to do with music whatsoever. For example, I have literally no musical skills, but have at least 15k hours in my DAW. I'm a voice actor who worked hard to customize my DAW to keep as much music-making stuff as possible from cluttering my interface (Reaper FTW).

It's a Digital AUDIO workstation, not a Digital MUSIC workstation


I was trying to cover that with #4, but I put too much "music-y" stuff in there. You're precisely one of the examples I was thinking about.


I can dig that, and thanks for clarifying.


If you're only recording lines, why isn't Audacity a lot more convenient?


Audacity is not more convenient. Not by a mile. Voice actors have tons of workflows that only a DAW can help with.


"Only recording lines"? There may be more to it than you think.


I figured, I was curious exactly what


I use Ardour when I do sound design. A lot of the time you want to be able to automate volume changes, or add a reverb or an EQ (with real-time controls instead of the annoying batch "change it, then test it" workflow), etc.


Can you share your workflow? I've tried using Resolve and Audacity, yet it all feels awkward and painfully slow.


I'd be happy to; it's mostly templates: project templates, individual FX presets, FX chain templates for corrections, sweetening and mastering, export/render presets, filename templates, and lots of keystroke macros that help me speed up my work. Things like specific keystrokes for punch-and-roll, quick edits, and ripple delete. It's an amalgamation of lots of small optimizations. Reaper is also really good at UI customization, so you can hide grids, measures, snapping, and really optimize things. ChatGPT is also reasonably good at writing Lua scripts for Reaper (called ReaScript) that can leverage the API for automations more complex than what you can do with the SWS additions to the immense actions list. If you have something specific, feel free to reach out directly. I love this stuff.
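ReaScript can be written in Python as well as Lua; as a rough illustration of the flavor (the RPR_* functions are injected by REAPER, so this only runs from inside REAPER's action list):

```python
# ReaScript (Python) sketch: log the position and length of every selected
# media item. REAPER provides the RPR_* functions; this won't run standalone.
count = RPR_CountSelectedMediaItems(0)                 # 0 = current project
for i in range(count):
    item = RPR_GetSelectedMediaItem(0, i)
    pos = RPR_GetMediaItemInfo_Value(item, "D_POSITION")
    length = RPR_GetMediaItemInfo_Value(item, "D_LENGTH")
    RPR_ShowConsoleMsg("item %d: %.3fs, length %.3fs\n" % (i, pos, length))
```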


Any pointers on where to start with all that? I can grok Audacity (which is a handy but terribly basic program). When it came to Reaper, I found it very confusing, and never managed to do much with it. My needs are basically voice only.


Yeah, of course! But this will be a thing that I hope doesn't get flagged. I have a totally free course for configuring Reaper for VO at academy.boothjunkie.com. It goes through all my initial optimizations for Reaper. If you're comfortable with your VST setup, this will go a long way toward getting you started, and you can incorporate your VO chain.

If you want to take this out of HN and talk more specifically, my website is in my profile and I'd be happy to help however I can. If you want to dig deeper into anything specific, I'd be happy to share any experience I have.


NB: occasional self-promotion, particularly in context and with an appropriate resource is more than fine on HN. The fact that the course is free should count for something as well:

It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.

<https://news.ycombinator.com/newsguidelines.html>


I think that's cool. I asked you for something, and you kindly responded with helpful information.


I like your taxonomy. I fall into #2.

Logic Pro X is the DAW I'm most familiar with, and while not "AI", its "Drummer" plug-in is uncannily good. So good it's indistinguishable from AI. I want more of that. Give me "Bass Player" and "Keyboardist" and "Guitarist", etc., with all the options that "Drummer" currently has to select style/genre, kit sound, etc.

Another wish-list item: let me point the DAW to a 4/8/16-bar section of multitrack original music I've created and have it suggest n directions to take it, spitting out each of the individual instruments on their own tracks, so I can mix/match/edit. My imagination is limited; that's where I'd like AI to help.


There's another category of DAW users that you're ignoring: people creating audio content that isn't music.


See my #4 ... but also feel free to expand the taxonomy!


I would say that there's a pair of taxonomies, one being the type of audio content (social audio, radio, film, multimedia art, music, etc.) and one being the level of user (hobby/beginner, student, pro, academic, etc.), and the problem with DAWs is that they generally gravitate towards where the money is, which is pro users in music and film.


I think that another axis to take into consideration is the extent to which audio will originate outside the computer. The needs & desires of people recording actual performances on some kind of instrument (even an electronic one) are going to differ significantly from people working, as they say, entirely in the box.


>> the problem with DAWs is that they generally gravitate towards where the money is, which is pro users in music and film

Is that the case though?

Ableton and such don't charge per revenue, as far as I'm aware. They charge the same whether you are scoring a $500mil movie or fooling around after a hard day of coding.

And I feel that in sheer numbers, the latter outweighs the former by several orders of magnitude. All the forums I've been to are filled with, at best, "enthusiasts". Thousands upon tens of thousands of us with some disposable income we give to synths and software to tinker with :-).


I think that's a solid observation. I'd just like to add a third dimension to the taxonomy, which is the relationship of the user to the finished work (is it for money? is it for anyone else? is it for fun? is it meaningful?), because I think this impacts the user's relationship with the tools.


Any recommendations for #3? Particularly with low barrier of entry for kids? Something you can use a midi keyboard and a mic with?


Take a look at Reaper. It's a professional quality tool, but easy enough to get started with for kids too, its license is very friendly and the trial version isn't limited in any way. The Windows version always worked for me under Linux using WINE with very low latency, but they made a Linux port which is great. If you use the Linux port, you may want to use Yabridge to load Windows VSTs in a transparent way.

http://reaper.fm/

https://github.com/robbert-vdh/yabridge


I'm a heavy Reaper user since I love the experience of editing with it, but when I'm fooling around writing songs, I use GarageBand, specifically because it has so many great instruments and sounds, and also because I find I interact with it much differently than with Reaper or Pro Tools: the simplified interface keeps me from getting sucked into fiddling with the details of what I'm making.

Before GarageBand, I used Tracktion (I think it's now called Tracktion Waveform Free) in the same manner. It's been ages since I used it, but if you're a Windows or Ubuntu user I think it'd be worth checking out.


Ableton has like a 3-month trial, and honestly the stuff it comes with out of the box is way more than enough to determine whether you want to continue with such a hobby or whether it's not something you'd be interested in long term. The tutorials are plenty and easy to follow as well.

I use Reaper as well, but it takes a while to get it that "useable" for modern(ish) music production. The benefit is that there are plenty of free virtual instruments/VSTs to download. All of them have downsides, though, as does Reaper itself. In Ableton I can make an EDM track relatively fast given the out-of-the-box presets (especially synth drums), but in Reaper, using a free VST like HELM makes it kind of a pain. YMMV.

No matter what you choose, I do HIGHLY recommend downloading Spitfire LABS though - the free instrument packages are massive and highly customizable. It's truly amazing.

Here's some good VSTs for Reaper:

https://plugins4free.com/instruments/ (when the site works)

https://web.archive.org/web/20181203014924/http://sonic.supe...

https://guitarclan.com/best-free-vst-plugins/

EDIT: Oh, also: trying to master a track in Reaper with free plugins is frankly pretty bad for a beginner vs. Ableton's preset limiters and other utilities. The Cuckoos plugins are messy to deal with, in my opinion.


Ableton and Reaper are way too complicated for kids. I'd say they are 14+ software.


Don't undersell kids. If they get interested in something they can learn it scary quick.

But having said that, sometimes the thing that grabs the interest is recording your voice in the Windows built-in recorder and then playing it back backwards. Try Audacity?


Some cool (albeit pricey) devices to toy with are the AIRA compact series by Roland[0].

In the same vein, the Novation Grooveboxes[1] offer some expanded capabilities that don't require a computer. Second-hand pricing is quite reasonable for both.

[0]: https://www.roland.com/au/categories/aira/aira_compact/

[1]: https://novationmusic.com/categories/samplers-grooveboxes


I always find GarageBand to be an easy-to-pick-up app that can be used to generate fun sonic blurbs in a short amount of time. It has its own limitations, but the lack of complexity contributes to its ease of use.


When I was trying to figure out something on GarageBand I ended up on YouTube looking for tutorials and was astounded to see what people are doing with GarageBand on their phone. They play the DAW itself like an instrument and build songs in realtime. It was very humbling to see.


Absolutely, REAPER. While it may be a little obtuse to learn, it is absolutely as powerful as the big guys in music production such as ProTools. Plus it works on both Windows and Mac, unlike Logic.

There is a big community around REAPER too, and tons of YouTube videos about it. Plus, you can download it and work with it for free; after a while it will ask you to pay for it, which is only $60, but you can keep using it if you don't (though I would encourage you to pay for it if you like it).



Reaper works on Linux, but a warning to anyone thinking of trying that: DAWs are all about adding and using plugins, and you'll need to confirm your favorite plugins run on Linux as well.


Check out Korg Gadget![0] It's got an Ableton-Live-style clip launcher that's easy to compose in, and a variety of "gadget" instruments that produce different kinds of sounds. If you balk at $30 for an app, it goes on sale for half price a good 3-4 times a year, usually around holidays. You can hook up any MIDI devices (BLE MIDI, or via the USB camera kit adapter if not on a USB-C iPad). If you've got an iPad with a headphone jack, you can pick up an iRig clone for <$10 that gets you line/mic/guitar input, too.

If you've got an iPad, that's probably the best place to start (it'll run on any iPad 2 or above). It will run on iPhones, but it's a bit harder to play. There's also a Nintendo Switch version (it's more limited, e.g. no audio recording or export) and a Mac version (but it's pricey). Annoyingly, the Mac and iOS versions are separate purchases, but at least the iPhone and iPad versions come together as one.

[0]: https://www.korg.com/us/products/software/korg_gadget/


GarageBand - it's super quick to throw something together, has a wealth of virtual instruments and the user interface is in the same vein as professional DAWs, so there's a growth path if this is something kids enjoy and want to pursue further.


Back in the 90s kids used a variety of sample trackers on their Amiga home computers:

https://en.wikipedia.org/wiki/Music_tracker#Selected_list_of...

Oh, and samples? Kids used hardware samplers to rip or record their own:

https://news.ycombinator.com/item?id=37376675

And how'd that work out? The end result was a piece of music called a module ("mod"). Strangely enough, I can't find exact (or even approximate) numbers. A snapshot of the MOD archive from 2007 had 120k mods:

https://en.wikipedia.org/wiki/Mod_Archive

So, yeah, a very low barrier of entry... ;-)


I commented elsewhere, but my goal is to hide the DAW as completely as I can and give them a MIDI keyboard, a Korg nanoKONTROL2, and some background config or automation that lets them choose synth sounds and record themselves playing. As someone who played piano to a decent level up to my teens, I've always felt that throwing up a DAW just gets in the way of making music, especially when I'm getting back into it after a long break. Complicated interfaces just sap my time and energy, and I eventually find that I've wasted a lot of time without striking a key on the piano.


I believe BandLab is being used in schools to teach music making now. It's web based. I've not used it much myself, so I can't comment on how good it is, but it may be an easy starting point.


Sadly, it's the group I pay the least attention to. I suspect that today or next week something browser-based might be the best choice, but I can't tell you what. Apologies.


Latency matters a lot. I can’t see a browser-based DAW ever being very compelling.


Depends. For mixing it barely matters. For performance, if you're playing a MIDI controller live, then perhaps not (though browser latency is not that bad these days). But if you're working with clips & samples, it isn't likely to matter much.


Related question: any suggestions on what to try next after LMMS?

I was planning to try Reaper or FL Studio.

My biggest complaints with LMMS are: doesn’t support VST3, can’t see notes for multiple tracks at the same time (although I saw a “ghost notes” patch someone was working on for this scenario).


Honestly, try the trial for both. Give Ableton and Bitwig a go if you have time as well. Or at least check out all 4 on youtube and see if a particular workflow strikes you.

Unfortunately the open source DAWs don't hold a candle to any of the paid ones. But once you're paying they're all pretty solid. It's like asking if you should move to vim or emacs or jetbrains after starting with Notepad++. They're all good and everyone will have their own favorite. Many people also use multiple DAWs the same way people use multiple text editors. Personally I use Ableton and Reaper


Thanks! Yeah, it's subjective, but I thought the chances of someone making the same progression (LMMS to ???) on HN were pretty high. Of course, I left out tons of context that might have helped (e.g. some people want to make music live--I do not).

Really, it might make the most sense to just find music similar to what I've made (or want to make), and then ask/research what they're using. Edit: I think that's how I originally found FL Studio and Reaper, come to think of it!


On Linux, try Ardour (free) and Bitwig (between $99 and $399-ish, but they have a 30-day trial).


Anything "free", Cakewalk, Garageband, Waveform, MPC Beats, or whatever license that comes with your midi keyboard. All these are already crazy powerful for a hobbyist.


Garageband - my kids love it.


DAW version control is one of my dreams. If I could have a really tight git equivalent for Reaper projects, it'd be so cool. Plugins are, as usual, the biggest barrier there. It doesn't seem like there will ever be a good way to deal with different people having completely different plugin collections. Unless someone makes "Netflix but for plugins" or something.

The other thing I saw the other day that I thought was cool was a reverb plugin that uses your GPU. Seems like the next step for modeling could easily be in that direction. Especially since the bar there is low, pretty much just the positively ancient UAD hardware acceleration cards, although UAD themselves seem to be going the opposite way and pushing native stuff now.


Splice actually started out with version control for collabs - it was called Splice Studio and was killed off recently. Don’t think it ever found PMF (doesn’t mean there won’t be a product for this eventually) - https://cdm.link/2021/04/splice-studio-is-free-backup-versio...

Turns out their pivot to being Netflix for samples/plugins is more in demand :)


Yeah, +1 on version control being a dream. If real, functional version control existed for any DAW in the manner it sounds like you have in mind, I would start making music with DAWs again!


Sounds like you're talking about both version control and project portability, which are different concerns. But with respect to version control, many systems like Cubase use XML project files. As long as you don't physically delete any audio from your disk, basic version control on your machine using git should be possible.


It’s true that you can dump a project into a git repo, but the real problem is diffing. It would be pretty miserable trying to reconcile any significant changes or conflicts. What I think would be cool is a tool for auditioning and merging specific changes that’s backed by git somehow. I’m not really sure what it would look like though.
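For what it's worth, the naive baseline is easy to sketch in Python: this just textual-diffs two snapshots of an XML project file (file names are made up), and it's exactly this kind of output that becomes miserable to reconcile:

```python
# Toy sketch: a plain unified diff of two XML project-file snapshots.
# A DAW-aware audition/merge tool would have to do much better than this.
import difflib

with open("project_v1.xml") as a, open("project_v2.xml") as b:
    old, new = a.readlines(), b.readlines()

for line in difflib.unified_diff(old, new,
                                 fromfile="project_v1.xml",
                                 tofile="project_v2.xml"):
    print(line, end="")
```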


In the mid-to-late 2000s, Ardour (and a couple of other DAWs) had support for branching undo/redo histories.

We (Ardour) abandoned it, because the universal experience of non-programmers was that they had no idea how to even begin to use this sort of feature. The majority of DAW users don't come ready to deal with the complexities of a branching workflow, or even with a desire to learn one.

There is at least one band out of Madison, WI that uses/used git with Ardour during the height of the pandemic to facilitate remote collaboration on new pieces. They gave a talk (and played) at the Ubuntu Summit in Prague last year.


That's wild; super cool that you guys were trying to make that work so long ago. I can see how people unfamiliar with that way of thinking would be completely lost, especially in the context of a DAW, which is basically a wall of buttons and switches.

As a very entrenched Reaper user, I haven’t tried Ardour, but I’m glad it exists and continues to exist. Thank you for your work :)


There are plenty of apps, like Figma and Google Docs, that have a kind of collaboration and version control that non-programmers are able to understand.


I was specifically referring to branching workflows, not version control in general.


I'm right there with you. I once tried managing Ableton projects via git. It was a dream from a simplicity standpoint but not effective in the long run. I don't remember why it didn't end up working well, but I abandoned it shortly after trying it the first time. Something like this in modern DAWs would be incredible.


If files are getting managed as blobs, git doesn't really scale well with that, since it stores diffs.

`bup` notionally does this a lot better, or git-lfs.

https://github.com/bup/bup

https://raw.githubusercontent.com/bup/bup/main/DESIGN

https://git-lfs.com/

git really needs a textual representation for any kind of meaningful commit, and binaries totally break that.


> git doesn't really scale well with that since it stores diffs.

This is precisely what git does NOT do.


Re: point 1, I feel you! Any chance you have checked out Splice or other rent-to-own plugin providers? It's not quite Netflixy, but less steep compared to upfront VST purchases.

[0]: https://splice.com/plugins/rent-to-own


FL Studio lets you save a new incremental version with Ctrl+N. I use that as a sort of version control so I can roll back to an older version of an idea if needed. It doesn't provide any equivalent of merging with HEAD or branching or anything, but it gets the job done for my needs.


In the open source world, a DAW could really just include everything. You'd still need plugins occasionally, but I don't see why there couldn't be a curated collection that covers all the basics.


For a lot of NN-based analog modeling you don't even need the GPU. I trained a model on the API 550A EQ, and SIMD is fast enough for real-time inference.
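Not that commenter's code, but a back-of-the-envelope sketch of the shape of the problem: a tiny fully-connected model applied across a 512-sample block with NumPy (which dispatches to SIMD-friendly BLAS). The layer sizes and weights are invented placeholders; a real EQ model would have memory (e.g. a recurrent or convolutional structure) and trained weights:

```python
# Rough sketch: per-block inference cost of a tiny NN audio model.
# Weights are random stand-ins; a real model would load trained values.
import numpy as np

rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((16, 1)) * 0.1, np.zeros((16, 1))
w2, b2 = rng.standard_normal((1, 16)) * 0.1, np.zeros((1, 1))

def process_block(x):
    """x: (1, n_samples) audio block -> (1, n_samples) processed block."""
    h = np.tanh(w1 @ x + b1)   # hidden layer, vectorized across the block
    return w2 @ h + b2         # linear output layer

block = rng.standard_normal((1, 512))   # one 512-sample block
out = process_block(block)
print(out.shape)                        # (1, 512)
```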


What would a DAW diff look (sound?!) like, though? Still, any sort of XML- (or JSON-, etc.) based format would surely be fine for use with git etc.


I've been working on a next-gen DAW for a few years, and this article misses all of my priorities. AI will encourage mindless replication; instead, modern tooling should be analytical, precise, deliberate, flexible, and layered in complexity.

My priorities are more in line with turning music production into an integrated and interactive development environment using modern design principles. Sub-modular capability, A/B testing, git integration, non-local collaboration, scientific visualization, notebook-style experimentation, integrated synth building/playing, an optional web-embedded interface, social sharing & tutorials, a polyglot open source interface (primarily Rust), programmable behavior/macros, higher-order signal dependency optimization, algorithmic mastering, targeted oversampling, creative process reusability, etc. You can solve the plugin issue by just synchronizing the output audio from the user who has the plugin installed.

Quality music production is an opaque art, and everything is way more daunting than it needs to be. Most producers just mess around until it sounds good, and that gets people stuck in a local maximum of clarity. If it takes too long to experiment, then you won't get through the effort of trial-for-understanding. I have spent 15 years building tools as a research quant dev and also as a DJ (Extrn). There is a huge unaddressed gap in the audio space, and huge barriers to entry in accessibility and cognitive burden.


Spot on. It's telling that most DAW UIs are based on archaic ideas (like mixers/tracks and MIDI/step-sequencer grids). Even BuzzTracker (2009) outshines many of the current DAWs with DAG instrument/effects-chaining and arrow-key/cursor navigation.

My post-graduate research concerned signal-rather-than-event-based generation/transformation of compositional data, integrated with textural/timbral synthesis.

My current focus is building a DSP framework for this purpose in C++20 [1].

In any case I'm interested in following your progress, and happy to contribute code/ideas if you feel like collaborating (links in profile).

[1] https://github.com/synthetic-methods/xtal


Fascinating! Is there anywhere where we can follow development of this?


If anyone else is as frustrated as I was with the article mentioning "the DAW" 73 times without once defining what the acronym stands for: it's "Digital Audio Workstation".


In the same way I don’t expect a biologist writing for biologists to explain “DNA” stands for “deoxyribonucleic acid”, it’s probably not necessary for a music producer writing for producers and engineers to define “DAW”.

Users here probably feel the same way about HTML, FIFO, DAG, etc


No. The main audience for this article already know what a DAW is


The fact that the article was in DJMag might have been a clue?


Defining DAW is like defining SQL. If you need the definition, you are definitely not the target audience.


DJ Mag ain't what it used to be. The Top 100 is pretty much a joke to most people who care about music.


Yeah, the Top 100 is super weird; it's all these commercial EDM DJs, but the weird thing is the magazine doesn't otherwise really seem to target that audience. I don't read it, but I have come across some good long-form pieces like this from them online, so I actually think they are trying to do some good stuff.


I'd imagine that it's because the DJs ask people to vote for them. A lot of the DJs I follow do that every year. If the big commercial DJs with the biggest following do that, then they would naturally land at the top.


Yeah I guess what I mean is that it seems to go against the rest of their brand, the magazine usually covers slightly more underground dance music it seems - not super underground, still big names, but not stadium EDM stuff.

Maybe their philosophy is that the Top 100 should be an open thing and they shouldn't restrict who can enter based on music style... to me, it makes DJ Mag way less credible, but I guess they probably make money out of the Top 100 being so big.


+1. Feels like they don't care who their readership is. Felt like they told me: "If you're not in the industry, Google it."

DAW: Digital Audio Workstation https://www.masterclass.com/articles/what-is-a-daw


> +1. Feels like they don't care who their readership is. Felt like they told me: "If you're not in the industry, Google it."

Caring about their readership is exactly what they're doing; it's just that you happen not to be what they think of when they imagine the typical reader. The typical reader is already into music production and, with 99% certainty, knows what a DAW is.

I wouldn't expect every tutorial on "Google's Official Android Developer Blog" to explain that "JVM" means Java Virtual Machine; some resources really are for people who already know a bit about the subject area.


In the time it took you to Ctrl+F DAW you could have googled it.


100 percent


D'OH!!!


I wasn't in this instance, but I am in general. Industry folks probably don't even realize it's not a word; I'm surprised it hasn't been lowercased to "daw" by now /s.


What I really miss in DAWs is a drum arranging tool that goes beyond scores or xoxo grids. Those are good for editing patterns in atomic ways, but I'd love something to quickly record new patterns played on drum pads on the fly, with arbitrary duration, and to create a new pattern that inherits parts of the one I just recorded (to make variations and fills) just by hitting one key and/or sending a MIDI note from a controller. Then, after building up a good number of patterns, I'd like something that shows them as nodes on screen so they can be connected dynamically according to a predefined flow, with possible variations triggered by MIDI notes (pedal switches) during live performances. The flow could be altered by prolonging/shortening a series of patterns or by jumping here and there based on conditions, but the visual node representation would be vital for quick feedback on what is going to happen, say, 5 measures from now, for example by highlighting the nodes and flow that will be followed under the present conditions.
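A toy sketch of the kind of pattern-flow structure being described (all names and the trigger scheme are invented for illustration):

```python
# Toy sketch of a pattern-flow graph: drum patterns as nodes, with
# MIDI-triggerable choices about which node plays next.
from dataclasses import dataclass, field

@dataclass
class Pattern:
    name: str
    bars: int                        # arbitrary duration, in bars
    next_default: str | None = None  # where the flow goes if nothing triggers
    variations: dict[int, str] = field(default_factory=dict)  # MIDI note -> node

flow = {
    "verse":  Pattern("verse", 8, next_default="chorus", variations={36: "fill"}),
    "fill":   Pattern("fill", 1, next_default="chorus"),
    "chorus": Pattern("chorus", 8, next_default="verse"),
}

def upcoming(start: str, measures: int) -> list[str]:
    """Highlightable preview: which nodes play over the next N measures."""
    out, node, left = [], flow[start], measures
    while left > 0:
        out.append(node.name)
        left -= node.bars
        if node.next_default is None:
            break
        node = flow[node.next_default]
    return out

print(upcoming("verse", 20))   # ['verse', 'chorus', 'verse']
```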


Most of what you’re talking about is pretty easy to do in Ableton.


Have you ever looked at Non? It has a recording mode that is a lot like what you describe, if I understand you right. Development is kind of moribund, but that can be an advantage in some ways.


Unfortunately it requires JACK, which for me has always been a source of frustration. I can get very low latencies and excellent stability using ALSA alone, so I don't feel the need to change. As much as JACK is incredibly powerful, its UI for organizing and connecting stuff is an example of how terrible usability choices can ruin otherwise great software.


It works fine on PipeWire's JACK emulation, which is much less of a headache.


Maybe https://github.com/ahlstromcj/seq66? Spiritual successor of seq24.


The Ableton Note phone app might be in the ballpark: https://www.youtube.com/watch?v=smJZcWwJsOw&pp=ygUMYWJsZXRvb...


I could see this opening up a new way to generate ideas from which you can start building, or to figure out what to add/remove when you're stumped. Maybe a way to create coherent arrangements from a curated list of one-shot samples and short loops. And I wouldn't be surprised at all if the automated mixing/mastering services (of which there have been a bunch for years) take a giant leap forward because of better technology.

But in general I'd imagine written language to be a pretty infuriating tool for describing what you want musically, when the most interesting parts of music are just about always the ones that you can't really capture with language. You can kind of outline things with written language and traditional music theory, but it's usually just a blurry version of why a specific piece of music resonates.

I think that AI tools for music will most likely just stay as plugins within the more traditional DAW structure. There's only so many ways to represent an audio file, and a fader that controls the volume of a track or some other parameter.

As mentioned in the article, most of these additions take quite a bit away from the amount of control the artist has over the music, and lowering the amount of 'input resolution' in this sense is a block that's almost impossible to overcome.


Agree that text input is annoying and possibly more of an impediment. Personally, I would prefer to speak out the sound I want (think beatboxing noises) and then have the generative AI give me an array of similar sounds, and then I can just drag and drop it where I want.

Writing into a prompt feels opposite to the creative process, but as a first pass it's a cool tool. Kudos.


You can kind of do something like this with the Vochlea microphone and their corresponding software: https://vochlea.com

I don't think it's combined with AI yet, but I have to imagine it's on the horizon. At the moment it basically just MIDI-fies your voice. You can raise or lower your voice pitch to turn a parameter knob, beat box to lay down percussion notes, etc.


The umbrella topic here is one of my favorite topics in all of software, and one where, strangely, we as an industry don't seem to have internalized its lessons: the right approach to building a software program (like a DAW, NLE, or bitmap/vector editor) emerges early.

This is why these applications have lifespans measured in decades, and it's extremely rare for a new player to be able to offer anything new, different, and valuable, because the design space has already been solved for the problems these applications are solving.

I wrote a piece on this subject, e.g., why, how, when software transitions do happen for these kinds of apps: https://blog.robenkleene.com/2023/06/19/software-transitions...


I don't think this is true at all. For several different reasons:

First, "the right approach" to building a software program is wildly unspecified: it could refer to the UI/UX aspects and/or the internal design, and these both have dramatic impacts on long term evolution.

Second: the "right approach" for "making music" in the early days covered things as distinct as MIDI sequencers, trackers and early ProTools. It was far from obvious whether all 3 would continue to exist or some hybrid would become dominant (that's actually what happened - early ProTools did not do MIDI; the eventually archetype for DAWs turned out to be a blend of ProTools and MIDI sequencers, and trackers were discarded).

Third: As I alluded to in my comment here about user groups, the right approach is going to differ for different workflows and use cases. FL Studio is not used by many audio mastering engineers; ProTools is not the choice of beat producers.

Fourth: the goalposts keep moving with increasing compute power. The current idea of infinitely elastic audio that has become common among the most popular DAWs would have been unachievable in the early 2000s. Network bandwidth may have a similar impact.

Fifth: the right approach (especially visible today) for some people who are generally "in DAW space" isn't a DAW at all, but hardware designs that bypass most of the functionality associated with traditional DAW design. The Elektron and similar h/w sequencers of the last 5 years are in some senses closer to plugins than they are to DAWs.

Sixth: plugins - the ones associated with compositional elements (you could say sequencers but it goes beyond that) - have long been where the innovation has been taking place. These have evolved quite differently and more diversely than the DAWs that host them. For many users, plugins are the real workhorses and the DAWs are just the scaffolding around that. It would be hard to take a look at compositional plugins and conclude that the "right approach" emerged early.


Not sure which point you think I'd disagree with here. I guess the core thing I didn't add is that, yes, the design space changes over time as computers get more powerful. The original paradigms have proved to be remarkably durable, though; hence the note in the piece about how many pieces of software that were the first ever in their category continue to be the market leader:

> I started thinking about this question, of whether software transitions ever really happen, when I noticed just how common it was for the most popular application in a category to still be the very first application that was ever released in that category, or, they became the market leader so long ago that they might as well have been. The Adobe Creative Cloud is a hotbed of the former: After Effects (1993, Mac), Illustrator (1987, Mac), Photoshop (1990, Mac), Premiere (1991, Mac), and Lightroom (2007, Mac/Windows) are all market leaders that were also first in their category. Microsoft Excel (1987, Mac) and Word (1983, Windows) are examples of the latter, applications that weren’t first but became market leaders so long ago they might as well be (PowerPoint [1987, Mac] is another example of the former).


In the DAW space, ProTools' continuing-but-diminishing semi-dominance (at least at a professional level) is rooted in hardware rather than software. When they started, you could not do realtime audio on the CPU, so you got a DSP box with the software. That sort of hardware requirement was invaluable to Digidesign in establishing and locking in their early users, and it really didn't go away until sometime in the mid-2000s, when everybody started noticing that you really could do a remarkably large amount of processing on the CPU itself.

So in this world at least, the longevity of the first mover has more to do with actual and imagined barriers to entry rather than anything especially good about the software itself (and indeed, many of its users used to complain endlessly about the software).


I highly encourage folks to check out a new DAW called Blockhead. It upends a lot of typical ideas of how a DAW should work. There are no MIDI events and no global tempo.

Everything is represented as blocks of samples on rows of timelines, which are then affected by transforms placed on the rows above the rows containing samples. Edits and transform adjustments all happen in real time, and everything is continuously rendered to a scratch buffer that can also be dragged into the project as a new block of samples.

It is truly a very creative approach, and when you see it you will wonder why nobody tried it before. The developer, Colugo, has a new video on YouTube showing how its main features work.


This sounds like a real-time synthesizer for performing rather than an editor for instruments piped in via VSTs or MIDI? Pretty cool, like a DAW-inspired tracker for samples. On Linux, I would still pipe it into Ardour to create a mix, from what I can tell. So much money is invested in pre-existing plugins that they really need to implement plugin support if they want musicians to adopt it. The send/receive bus will be like JACK on other OSes, but it's more like a programming environment than a plugin; that seems cool too.


> Colugo, has a new video on YouTube

A video about Blockhead (experimental digital audio workstation)

https://www.youtube.com/watch?v=P5fWPBOdrY8


DAWs haven't converged on a "best" approach. Most use a traditional linear workflow that's essentially a digital multitrack tape, but Ableton and Bitwig use a clip-based workflow that tends to be better suited to improvisation and live performance.

I think the factor you're missing is path dependence. The approach that becomes dominant isn't necessarily "best", just a stable equilibrium where switching costs are greater than potential benefits for most users. I'm typing this comment on a QWERTY keyboard, but I don't believe for one second that it's the optimal layout - I just can't be bothered to learn Dvorak or Colemak or whatever.


The clip-based workflow was facilitated by technological progress (e.g., Moore's law), which unlocks new design space (a similar pattern to why Lightroom was "invented" so long after Photoshop). I.e., doing things in real time or non-destructively as computers get faster is really common; that happens, and then the design space gets exhausted again. The key point is that this happens quickly once the new approaches are possible.

The piece I linked to makes all these points, as well as addressing your others.


DAWs should be rebuilt from the ground up, component by component, feature by feature, in 3D for XR (VR/MR/AR), where ultimately you can see the waveform as it is, a 3D object, and interact with it like a sculpture or a Theremin. 2D screens, keyboards, and mice are not the best fit.


That would be really fun for sound sculpting, but that's a pretty small part of composition. Also, one thing to remember is that anytime you want to optimize a workflow, you want to reduce the number and amplitude of physical movements. That is a big limitation of XR tools for now: they are nowhere near as precise, fast, and usable for long stretches of time as keyboard/mouse/screen/controllers. I really see hybrid approaches winning here instead of a full-blown replacement of current tools.


I think the audio space hasn't evolved because there isn't a lot of money in it and it's a bad environment for learning proper technique. The cross-section of expertise required is quite rare: music theory, instrumental skill, signal processing, software engineering, performance optimization, architecture design, research tooling, musical perspective, user interface, sound design, etc. The fact that things haven't effectively changed in decades means the cost of changing is so large that something has to be radically better to convince people to alter their process. People generally are not capable of seeing what is possible outside their scope of familiarity.


How can you write this article interviewing only a bunch of startups? Tim Exile is a very interesting and visionary dude, but without serious contributions from Ableton or Apple on the DAW side, this article feels a bit listless; none of these others are real players in the space.


You might want to consider what that sounds like if you rewind it to, say, 2002. Ableton is a small startup in Berlin. The dominant DAWs include several that don't even do MIDI. Apple hasn't yet acquired Logic by buying Emagic.

Why would you expect the next steps to come from "real players in the space", when the previous next steps did not?

Look at how quickly Presonus managed to build Studio One to a crazy-level of credibility (it helps that they hired someone who had already done it twice).


Because it's not 2002? As you say, music production capable workstations didn't really exist. Counting out extremely capable incumbents in favour of reporting the opinions of BandLab, who bought Cakewalk in a fire sale and haven't really made much of a dent from what I can tell, seems strange.

No solid quotes from presonus either.


> To count out extremely capable incumbents

The article mentions those "extremely capable incumbents" are now stuck with legacy codebases.

> music production capable workstations didn’t really exist

Logic and Cubase were highly capable in 2002. They haven't progressed much in 20 years. The MIDI drum track editor in Cubase today looks the same as the 90s version.


> As you say, music production capable workstations didn’t really exist.

Didn't say that. They absolutely did!

It took a small startup in Berlin to introduce the idea that maybe everything should be tied to the groove, always. That was not 100% revolutionary (Acid could do some of that), but it upended computer-based music production entirely.

The incumbents are very, very unlikely to have access to people with a deep/serious interest in "AI & music" technology (maybe Apple?). Startups in this space are started by those people.


Feels like "what is the future of the word processor" but with some weird expectation that you're expecting to see leaps and bounds.

Look, FWIW, my favorite music making "stack" is

- FruityLoops (as in, sure, I'll use FL Studio, but it was pretty much solid for me back when it was called that)

- Sony ACID. Still unbeatable to me for quickly layering/previewing multiple tracks of loops

- Cool Edit Pro/Adobe Audition. Still much nicer than Audacity. I don't know why Audacity remains so popular, it's way clunkier than the above

Ableton/Bitwig are also fun to play with, and I could see them being indispensable for some, but what more do people need that you couldn't theoretically get "incrementally" or "with plugins"?


> I don't know why Audacity remains so popular, it's way clunkier than the above

Presumably because it’s free, but I’m with you on the clunkiness. The UI looks very dated.

I can recommend OcenAudio, which is free though not open source, and for me offers just the right level of functionality in a clean user interface. It’s very similar to how I remember the old versions of Cool Edit pre-Pro, which I thought were amazing bits of software.


> The UI looks very dated.

If the UI is clunky to use, that's something that needs to be fixed. If the UI works okay, but only looks dated, that's ... fine and there's every good reason to leave it as it is.

Why? Because altering the look of the UI is strongly coupled with altering the usability of the UI. This leads to regressions in usability, making the product "look better" but "work worse".

Leave things alone. I don't care if the UI looks like Motif or TCL or a TUI, but if it works in a smooth, intuitive and pleasing way, there's no reason to change it.

The VIM interface is the very definition of "extremely dated", and that's fine.


Yeah, I didn't just mean in looks though; e.g. Audition and Ocenaudio behave more like, e.g. a normal text box or other "editing space in a computer." Cutting/Pasting/Trimming etc are easier and more intuitive, with fewer clicks and less thinking about tracks and such.


Yup, was going to mention it, though it was crashy for me last time I used it.


I remember spending days at a time in Audacity without a single crash (talking over a decade ago). Now every time I use it I find a new way to crash it, and it hasn't evolved that much otherwise (compared to things like Ardour, which has come really far in the last decade, including in stability).


Ah interesting. Don’t think I’ve had it crash ever but I don’t use it all that often relative to e.g. Ableton


In my opinion advanced DAWs like Ardour, Bitwig, Reaper or Live bring automation to the table, more complex routing, and a single tool for a big part of the production. They are also extremely useful for playing live (and the recent addition of clips to Ardour makes it closer to the commercial competition now).

It is a bit (just a poor analogy) like saying you could program anything in assembly or C. Sure but sometimes it is just much more efficient time-wise to just use Python, Kotlin, C# or any higher level language and not have to reinvent the wheel or deal with repetitive complexity.

In the end, what I always recommend is to try things and see what sticks, test new tools, regularly retest the ones you didn't select, and when your workflow evolves just switch to the more suitable tool. I also find it really helps me improve in my current favorite tool, as I can often bring back new tricks that were more obvious in other tools.


Of course, that's the hook they used to convince artists they need generative AI in their creative process.


I wish someone would have put a bullet in Pro Tools two decades ago…


Despite (or maybe because of) following this space for more than 20 years, I would love a (good) book on the recent inner history of the music software industry (DAWs/plugins). I fail to find even one book on the subject. Anyone?

There is this quote: "To know your future, you must know your past". I agree the cloud/AI combo is revolutionary, but we also need to dig a little deeper into the past to understand what is coming.


Very much agreed! If anyone knows of a good one I would love to read it as well.


I'm still getting used to the "new" scene-based model (which is I guess actually pretty old at this point) but it seems mostly unavoidable now that Ardour has switched to it in v7. Though I also think there's a lot of space on the low end of simple sequencers that the tracker renaissance is starting to fill.


We haven't "switched to it" - it's an addition, not a replacement.

ProTools just announced their version too.

And yes, this model is roughly 20 years old at this point (Ableton Live started at about the same time as Ardour).


Sorry, "switch" was the wrong word there, and I absolutely love the clip launcher. It's just kind of like the difference between editing text in a dumb editor vs an IDE or something like slime; lowering the "cost" of trying stuff absolutely changes how you think about your music and what you do with it.


As long as your music is "in the box", yes. If you're a classical oud player, not so much.


Ooh, this is an interesting set of comments. I'm currently trying to turn a really old Midiman Keystation 61 into a toy synth for my 3x 10 year old nephews (triplets). I'm using a spare Raspberry Pi 4 so it's all gonna be Linux based. But I actually want to make it completely headless because DAWs are too complex and, IMHO, get between the person and the noises they're making. My goal is to hook up a Korg nanoKONTROL2 and use it as the only extra "surface" between the kid and the music. I'm at the very early stages of it right now but I can choose one of 16 MIDI channels, hit a key and make a noise. I haven't actually done anything with the nano.


I use an RPi4 as the sound generator for the fully weighted 88 key controller my wife plays at home. It runs just one synth - the proprietary physical modelling synth Pianoteq, available for Linux, Windows, macOS, x86_64 and ARM. It does have a 10" touch screen because the synth has parameters worth controlling (e.g. what piano model). No DAW complexity, but I don't see why you'd ever get a DAW involved with "a toy synth for my 3x 10 year old nephews". It was a little tricky to get the sdcard set up initially, but since I did, it's just power-on-and-play-it.


I think that kids might be interested in more than just a piano, so I wanted to use something like Zyn with 16 hand-picked instruments. The nanoKONTROL2 has buttons to play/record/stop, and 8 separate control sections, each with a slider, rotary knob and 3 buttons. I could use them to record into separate tracks and play back solo or together. I just don't know if it's possible to make that usable without a display. That's what I'll be experimenting with. I know that the synth can operate as a VST inside a DAW (e.g. Ardour) and that many DAWs can run headless (again, Ardour), so it may be possible to do this without a mouse, keyboard and screen. The nanoKONTROL2 has LEDs that can be used to indicate status etc.
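For what it's worth, here's a very minimal sketch of the headless routing I have in mind, assuming the Python mido library and made-up port names (the real ones come from mido.get_input_names()):

    import mido

    # Port names are placeholders - substitute your own.
    keys = mido.open_input("Keystation 61 MIDI 1")
    nano = mido.open_input("nanoKONTROL2 MIDI 1")
    synth = mido.open_output("ZynAddSubFX MIDI 1")

    channel = 0
    for port, msg in mido.ports.multi_receive([keys, nano], yield_ports=True):
        if port is nano and msg.type == "control_change" and msg.value > 0:
            # Map the nano's controller numbers onto the 16 MIDI channels.
            channel = msg.control % 16
        elif port is keys and msg.type in ("note_on", "note_off"):
            synth.send(msg.copy(channel=channel))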


I've been using Reaper as a digital mixer for live streams and recording podcasts. Great way to set up virtual channel strips for everyone and do proper mix-minus so we don't have to rely on software echo cancelation. Works a treat with netjack2 on Linux.


DAWs will need to refocus on managing recordings of live performances -- young kids are starting guitar bands again, and in a few years I think we will enter yet another period where electronic sounds are out of fashion.

I'd also like to see more music creation nuts & bolts built into DAWs, like integrated lyrics, a quick way to shuffle pieces of a song like "bridge" and "verse", visualizations of key center shifts, chord builders and so on. A lot of that stuff is out there but it's definitely an afterthought.


> a quick way to shuffle pieces of a song like "bridge" and "verse"

Present in the about-to-be-released Ardour 8 (using a design very similar to Studio One).


> DAWs will need to refocus on managing recordings of live performances

They never stopped


PreSonus Studio One has what you're describing. I'm switching from Logic Pro to a new DAW, and so far Studio One checks the most of my boxes.


What does it do that you prefer?


It runs on Windows, is not a subscription service, has a lot of functions that I'm used to, and has a lot of automated workflow possibilities. There's a lot more it can do than I'm able to mention, but here are some of the cool traits that stood out for me. It can group a chord progression and create variations for multiple tracks without getting into the MIDI editor. Additionally, I can globally save those chords and use them down the line in any other project easily. There's a lyrics function too that can run parallel to a vocal track so the vocalist can pick up exactly where they need to and see what's coming. I've done a few demo projects and I really like it so far.


I have wondered when the backlash, or maybe a sidelash, would start taking place. I've been into many forms of electronic music and production since the 90s, and right now, if I were in my youth, I might reach again for my guitar. Even though there is a lot of amazing stuff being produced that is not commercial/festival bound, there is a lot of money in EDM, probably too much, and that would seem to push many people in the opposite direction, away from the corporate-driven machine.


A DAW is an all-in-one tool for composing, recording, arranging, mixing, and more. There are several factors that make this coupling necessary - the high performance requirements of DSP, the lack of better standards, and subject matter expertise.

To innovate in the space, we need an ecosystem of interoperable tools which can be used seamlessly to perform the same tasks reliably.

I think of the paradigm shift between apache http server and nodejs. Does the server run the code or does the code run the server?

If DAWs switched to the nodejs model, what would be the entry point?


The only thing I want out of some of these tools is either teaching or giving me out-of-the-box configurations that help me get a reallllly good mix. Why shouldn't every record today sound pro? Talent aside, I think they should be implementing some of the typical tricks sound engineers/producers use automatically.

I mean, we're really talking about waveform analysis, then cutting some frequencies and boosting others. This can be automated based on a desired "genre/sound".
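As a rough sketch of the idea (numpy/scipy assumed; the "genre" targets below are completely made up for illustration), you could measure a mix's average spectrum and compare it to a target curve to get suggested band gains:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import welch

    rate, audio = wavfile.read("mix.wav")
    mono = audio.mean(axis=1) if audio.ndim == 2 else audio.astype(float)

    # Average spectrum of the whole mix.
    freqs, power = welch(mono, fs=rate, nperseg=4096)
    db = 10 * np.log10(power + 1e-12)

    # Hypothetical target: (low edge Hz, high edge Hz, desired level in dB).
    bands = [(20, 120, -6.0), (120, 2000, -12.0), (2000, 16000, -18.0)]
    for lo, hi, target in bands:
        mask = (freqs >= lo) & (freqs < hi)
        print(f"{lo}-{hi} Hz: suggest {target - db[mask].mean():+.1f} dB")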


There is quite a bit more to it: distributing sound in time, frequency and space; sidechain effects that apply changes to some tracks depending on the sound of other tracks (one super common example is ducking instruments slightly to leave room, in volume and frequency, for a kick or drums in general); frequency splitting; parallel FX chains. It is a bit tricky to automate and is really based on a feeling, as each person doing the mixing will have a different vision and experience, and every song will have a different frequency/space/time complexity, even in the same genre. But most advanced DAWs with an integrated collection of effects, like Live or Bitwig, have presets that you can start to build on. (I have not tried Reaper for that; it's on my list.)
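For the sidechain example, the core of it is just an envelope follower on one track controlling the gain of another. A toy version in plain numpy (instant attack, one-pole release, all parameters invented, signals assumed to be floats in [-1, 1]):

    import numpy as np

    def envelope(signal, rate, release_ms=80.0):
        # Instant attack, slow one-pole release.
        coeff = np.exp(-1.0 / (rate * release_ms / 1000.0))
        env = np.empty(len(signal))
        level = 0.0
        for i, s in enumerate(np.abs(signal)):
            level = max(s, level * coeff)
            env[i] = level
        return env

    def duck(pad, kick, rate, depth=0.8):
        # Pull the pad down by up to `depth` wherever the kick is loud.
        return pad * (1.0 - depth * np.clip(envelope(kick, rate), 0.0, 1.0))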


I strongly prefer a DAW that I can run locally without an internet connection.

For me the most natural way to incorporate AI would be as "ai people" that have particular skills.

"Master producer Eric" "Loop maker Ryan" "Guitarist Wendy"

You can interact with them in much the same way as a human session musician, producer, synth programmer, etc.


DAWs just don't handle the jamming part very well... rhythm generation, patterns, manual real-time tweaking of sound and rhythm...


I have been doing live coding for a while, but DAW is still important to me.

I recently bought an Arturia keyboard in order to make some trap and pop songs, and I immediately realised how important a seamless experience between the DAW software and the hardware is. That kind of experience can't be replaced by AI.

I am sure if Arturia makes a DAW, it would be a great one.


> DAWs are built on legacy code that can’t easily be pivoted as user expectation adjusts,

As opposed to "non-legacy" code which is obviously malleable and easily adjustable to whatever new user expectation may arise?


In some ways, this is like saying "What Is the Future of the Piano?"


True - but there’s been a ton of stylistic and technological advancement of the basic 88 key device over the years - ranging from the development of key synths to (neo)classical reimaginings of how to use the hammers and strings to do something novel, not to mention AFX going nuts with little servos and player pianos, plus the development of jazz, ragtime, etc.


I want more standalone DAWs integrated into pianos like the key 61; I resent needing a computer to select my instrument or arrange my sequencer.


What is the future of the rotary phone? The typewriter? The floppy disc?

None of these have dissolved completely; the rotary dial, the keyboard layout, the swappable storage medium still exist, especially in music production hardware.

It just takes a company like Bitwig to come along and show how else these pre-existing interfaces can be modularised and combined, and some iterations of the Push controller[0] to upend the traditional fingering required to play them.

[0]: https://www.ableton.com/en/push/


I am a musician that mainly plays guitar and synths. I have a large eurorack, have tried nearly every MPE compatible synth & controller, and have a massive pedal board. I am also an embedded software engineer and have made audio plugins for VCV Rack. I have written blog posts [0] about MIDI to try to explain it to a technical-ish audience. I have written many, many more posts about why the way we interact with computers is ... sub-optimal [1] and why new controllers are making it better [2]. I've also written a post reflecting on how AI Art tools like Stable Diffusion have/will affect artists [3] [4]. On my site, I have a list of DAWs and a simple Pro/Con view of each [5]. I have reviewed Emergent Drums, an AI drum-sample tool (and gave it a poor rating) [6]

All of that credential waving is to say, I think about this a lot. The software workflow is so bad that I have resorted to buying an insane amount of hardware. So, what makes it so bad?

* Latency correction hell - only an issue for musicians not working entirely digitally

* Limited I/O (albeit this wouldn't be an issue if I could get away with using less hardware) - you can't use more than 1 audio interface at a time in Windows

* Interface navigation hell - VSTs all load as individual windows, and there are often still issues with DPI scaling

* DRM hell on those virtual instruments

* Software stability hell - not all DAWs sandbox plugins

* Complex routings are often complicated - in audio contexts, we often want to pipe signal around in branching - diverging & merging - paths, often with signals from unrelated chains changing things (ex: sidechain compression from a bass drum)

But I can almost forgive all of those issues. The real issue is, and will continue to be, MIDI:

* 127 values for knobs? Really? That's quantization you can hear.

* No good support for microtonal - current hacks often break down realllllly bad at larger scales. If you're in 31EDO, for example, you only get 4 octaves of range.

* Clocking is a disaster, making swing, polyrhythms, etc. more of a mess than they need be.

Those are issues that legitimately limit the expressiveness of all music creation. You can't fine-tune a value in MIDI, you can't play between notes (barring MPE, which itself is a hack and not supported in all DAWs), and - combined with the latency problems above - it means *both* your traditional instrument (i.e. guitar) and digital interfaces (MIDI) will have weird latency issues.

AI is not the solution and there's more than one problem. There are a few things which could help:

* MIDI 2.0 / OSC / literally anything with more resolution

* Designing with more than just western music theory in mind (microtonal, swing/complex grooves, etc.)

* A standardized framework for UI

I think VCV Rack [7], albeit it absolutely isn't a general purpose DAW, does a good job of addressing many of these issues, but it can't handle the more basic use cases either. Sequencing a long track in it is like pulling teeth, and performance is quite bad due to processing everything as floats sample-by-sample, not as a buffer.

I do have some hope that change is coming. The Blockhead DAW [8] rightly has many musicians following this space excited for its novel approach of treating audio in the timeline as a much more fluid source for quick modification rather than baked content you have to modify elsewhere.

There is, ultimately, an incredible amount of complexity that music software has to account for. I could muse(score) about why this is the case: it could be audio having a temporal element, unlike a text editor, or maybe just the inherent complexities that arise from trying to create with a medium where you have to consider such an extreme amount of information, one that carries with it cultural context and an ease of offense to the senses that no other medium suffers (bad audio is much less tolerable than bad video).

No matter the case, it means that a DAW, and the ways of interacting with it, need to be very tightly integrated and intentionally designed with good standards while also being highly flexible. We nailed the flexibility, but totally dropped the ball on intentional design and standards that work for us. I don't think that can be un-done now, not without throwing away support for legitimately amazing tools.

While I'm risking going pure tangent, I also want to mention that the lack of any good method for hardware acceleration is a limiting factor in the audio world. Along with better standards for UI and data transport, we're in desperate need of an open hardware acceleration standard that's easy to use and not crazy expensive. I should be able to throw in a DSP card just like I can throw in a big GPU and have enough oomph in my system to not be constantly worrying about freezing tracks or having audio underruns. This is an extra reason for having as much hardware as I do: a lot of DSP distortions eat CPU and sound really bad.

[0] https://opguides.info/music/midi/ [1] https://opguides.info/other/hci2/intro/ [2] https://opguides.info/music/instruments/#expressiveness-and-... [3] https://opguides.info/posts/aiartpanic/ [4] https://opguides.info/posts/ai2/ [5] https://opguides.info/music/software/daw/ [6] https://www.youtube.com/watch?v=Zpq7g9CcGv4 [7] https://vcvrack.com [8] https://twitter.com/ColugoMusic


> On my site, I have a list of DAWs and a simple Pro/Con view of each

You reviewed three DAWs. Glad you put the link to Admiral Bumblebee's page on yours, because that's actually "a list of DAWs".

Re: MIDI - you can already "play between the notes" using either MTS or even just simple pitch bend - the issue with the latter is building a controller that makes this better than the wheel on most keyboards, the issue with the former is finding synths that understand MTS and use it.

> no good method for hardware acceleration is a limiting factor in the audio world

DSP processors have been available for this since before ProTools, which was fundamentally based on hardware acceleration. In the present era, UAD and others continue (for now) to carry that torch, but the principal problem is that during the period where Moore's Law applied to processor speed, actual DSP could not keep up with generic CPUs (every DSP system was only ever the speed of generic CPUs 1 or 2 generations ahead). Current generic processors are now so fast that for time domain processing, there's really no need for DSP hardware - you just need a bigger multicore processor if you're running into limits (mostly - there are some exceptions, but DSP hardware wouldn't fix most of them either).


> You reviewed three DAWs.

There's definitely more than 3 reviewed there? Sure, a lot of them still say [TODO] on getting a review, but then I don't want to review something I don't have a significant amount of 1-on-1 time with. Maybe you didn't realize there are clickable tabs?

Also, the way you phrased this came off as quite rude. There is someone on the other side of the screen, and you'd do well to consider that in the future.

> Re: MIDI - you can already "play between the notes" using either MTS or even just simple pitch bend [...]

Pitchbend is global to the track: if you play a chord and bend, all of the notes bend equally. With MPE it is possible to bend a single note, but then MPE isn't supported in every DAW (FL Studio doesn't have it), and my gripes with MTS are explained already: you're still working with only 127 possible notes, so you limit your octave range. Worse, with all of the microtonal solutions, the UI will still typically look like a 12-tone piano. This is sort of okay for 24TET, but it's immensely confusing for anything else.
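To make the MPE point concrete, here's a rough sketch (assuming the Python mido library; the port name is a placeholder) of the trick MPE uses - each note gets its own MIDI channel, so pitch bend can apply per note:

    import mido

    out = mido.open_output("Some MPE Synth")  # placeholder port name

    # Two notes of a chord, each on its own "member" channel.
    out.send(mido.Message("note_on", channel=1, note=60, velocity=100))
    out.send(mido.Message("note_on", channel=2, note=64, velocity=100))

    # Bend only the second note - impossible with ordinary per-track pitchbend.
    out.send(mido.Message("pitchwheel", channel=2, pitch=4096))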

> DSP processors [...]

That really hasn't been true for audio for a while now. Moore's law isn't dead, but definitionally "Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years", which doesn't mean all workloads see an improvement. Audio is a pretty latency-sensitive, mostly strictly sequential workload. Making audio code that uses multiple cores often isn't even possible. Audio needs clock speed and IPC gains, which we have gotten, but that's not always enough. I have a 5900X and still hit limits. What would you recommend I do, get a Threadripper or Xeon so that I can have even more cores sit idle when making music? If anything, the extra cores have been a hindrance lately - on the 3900X I had before, at high loads I had to pin the processes to one chiplet or I'd be more likely to get buffer underruns. It's not as if anyone is arguing that CPUs getting faster so quickly means that we don't need graphics cards.

UAD exists, but then you're limited to their plugins, and their accelerators are quite expensive for not being really all that powerful. I'm also not convinced that kind of accelerator is even the right approach. Field Programmable Analog Arrays (FPAAs), for example, could be set up with a DAC and ADC on either end. Or we could make DAWs/OSs capable of handling digitally connected analog "plugins" better - think effects like the Big Muff Pi Hardware Plugin [1] or the Digitakt with Overbridge [2] (these are the only two examples I know of!). Using the word "acceleration" was wrong; what I really meant is offloading. We need a way to offload the grunt work to something we can easily add more of, or that better fits the task. I think this is particularly true of distortions, as slamming the CPU with 4 or 8x oversampling to get a distortion to not sound awful hurts.

[1] https://www.ehx.com/products/big-muff-pi-hardware-plugin/ [2] https://www.elektron.se/us/overbridge


Sorry, my mistake. Four DAWs: Live, Bitwig, FL Studio, Reaper. We'll have to agree to disagree on whether or not VCV (which I use very regularly) or trackers are DAWs.

I agree with you about pitchbend, but you're narrowing what "play between the notes" means: you seem to mean "polyphonic note expression", which is a feature that quite a few physical instruments (not just piano) lack.

MPE doesn't need to be supported by the DAW, only by the synthesizer. It's just regular MIDI 1.0, with different semantics. It's more awkward to edit MPE in a DAW that doesn't support it, but not impossible. Recording and playback of MPE requires nothing of the DAW at all.

> the UI will still typically look like a 12-tone piano

We just revised the track header piano roll in Ardour 8 as step one of a likely 3-4 step process of supporting non-12TET. Specifically, at the next step, it will not (necessarily) look like a 12-tone piano.

> Audio is a pretty latency sensitive, mostly strictly sequential workload

It's sequential per voice/track, not typically sequential across an entire composition.

IPC gains are not required unless you insist on process-level separation, which has its own costs (and gains, though mostly as a band-aid over crappy code).

If you're already doing so much processing in a single track that one of your 5900X cores can't keep up, then I sympathize, but you're in a small minority at this point.

Faster CPUs don't help graphics when the graphics layers have been written for years to use non-CPU hardware. Also, as you sort of implicitly note, there's a more inherent parallelism and also decomposability of graphics operations to GPU-style primitives than there is for audio (at least, we haven't found it yet).

Offloading to external DSP hardware keeps popping up in various forms every year (or two). In cases where the device is connected directly to your audio interface (e.g. via ADAT or S/PDIF), using such things in a DAW designed for it is really pretty easy (in Ardour you just add an Insert processor and connect the I/O of the Insert to the appropriate channels of your interface). However, things like the Big Muff make the terrible mistake of being just another USB audio device, and since these things can't share a sample clock, that means you need software to do clock correction (essentially, resampling). You can do that already with technology I've been involved with, but there's not much to recommend about it. Overbridge doesn't have precisely the same problem in all cases, but it can have it.


I can't agree more with this post! As a Middle Eastern producer trying to incorporate maqam, the workflow is custom... and frustrating... and funnily still better on a workstation. And CV control is not viable as a control for acoustic sampled VSTs.

Your opguide site looks like an incredible resource, thank you for creating it!


I'm the lead author of Ardour [0], and I'd very much like to hear more about your frustrations, since over the next 1-2 years, paying attention to non-European musical culture is one of the things I hope to focus on during development. You can reach me via the email address in my profile, or maybe use our forums at discourse.ardour.org. Thanks.

[0] https://ardour.org/ <= a cross-platform open source DAW that has been around for more than 23 years


Hey Paul,

I'd love to chat! I'll make my way onto the forums. Hopefully there's a relevant spot to post about this :)

PS. Not sure if you're aware your email is not currently in your profile.


There's no existing topic for this, but do feel free to create one.

paul@linuxaudiosystems.com for email


I've been going down the rabbit hole of using all physically modeled synths instead of sample-based VSTs so that I can fully take advantage of MPE controls, instead of trying to get samples to be expressive through arcane incantations of key twiddling in Kontakt. I think it's probably the best option for doing things in a DAW right now, unfortunately.


Would love to find out more about what you're doing here. Not all physical modelling options allow dynamic retuning. Also, which software will allow a double reed like the mijwiz? We really need Modartt to crack the remaining instrument classes :) If you're open to a chat, do you mind dropping an email to the address in my profile?


DAW innovation will only come with inventing new algorithms.

There needs to be significant number theory and computer science algorithmic work related to sound and how we represent sound with data.

GPUs currently cannot work with sound data in the processing chain, and multicore is basically just used to scale horizontally (i.e. to have more plugins or instruments).

New algorithms are needed to scale out audio processing, as well as make use of new hardware types (for example, using the gpu)


You seem to think DAW makers don't already specialize a ton in DSP, algorithms, and concurrency - I can assure you that they definitely do, and that innovation and optimization happen at a very healthy pace. There is significant market pressure to run tracks and plugins in a highly optimized way. Several DAWs have a visible CPU-usage meter, and some allow users to directly configure a process isolation model for plugins.

However, audio has a very different set of constraints from other types of workloads - the hallmarks being one worker doing LOTS of number crunching on a SINGLE stream of floating-point numbers (well, two streams, for stereo), that processing necessarily happening in SERIAL, and getting the results back INSTANTLY. Why serial? Because for most nontrivial audio processing algorithms, the results depend on not just the previous sample, or even chunk of samples, but are often a rolling algorithm that depends on a very long history of prior samples. Why instantly? Because plugins need to be able to run in realtime for production and auditioning, so every processing block has a very tight budget of tens of milliseconds to do all its work, and some of them make use of a lot of that budget. Also, all of these constraints apply across an entire track as well - every plugin on a track has to apply in serial, one at a time, and they need to share memory and act on the same block of audio.
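To make the "serial" point concrete, here is about the smallest possible sketch - a one-pole lowpass in Python-as-pseudocode. Every output sample depends on the previous output, so the inner loop cannot simply be split across cores within a single track:

    def one_pole_lowpass(samples, coeff=0.99):
        # y[n] depends on y[n-1], forcing strictly serial evaluation.
        out = []
        y = 0.0
        for x in samples:
            y = coeff * y + (1.0 - coeff) * x
            out.append(y)
        return out

Most real filters, compressors, and reverbs carry far more state than this, but the dependency structure is the same.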

One thing you might notice is these constraints are pretty bad conditions for GPU work. You're not the first to think of trying that - it's just not a great fit for most kinds of audio processing. There are some algorithms that can run massively parallel and independent, but they're outliers. Horizontally scaling different tracks across CPUs, however, works splendidly.


To be fair, some of the things currently piquing people's interest are suitable for offline "massively parallel" processing ala GPUs. Source separation and timbral transfer would be the first two that come to mind.


Yeah, these are all well understood. However, you don't get to nuclear reactors without inventing some new math supporting subatomic physics.

Saying that audio has a "different set of constraints from other types of workloads" and giving up on fundamental algorithm research is just defeatist, throwing in the towel, and frankly really insulting to human advancement.

Come on, we need some new algorithms and just saying "whelp it can't be done" is kind of ... not the hacker spirit.

It could be that we need quantum algorithms for parallel processing what previously was thought to be serial. Just from reading your well reasoned paragraph, I can see we desperately need fundamental algorithm research in sound processing.

An imaginary scenario might be to invent an algorithm to convert/transform sound/pressure wave information into another domain, one that is not dependent on serial time, then do the operations, and then re-convert it back to the time domain that we usually associate with sound processing - where, within this alternative domain, parallel processing is possible.

We do stuff like this all the time in other disciplines. Even stuff like the FFT was an attempt to transform certain "unsolvable" problems and make them solvable in another form.

That's the kind of math research that I'm referring to.


> An imaginary scenario might to to invent an algorithm to convert/transform sound/pressure wave information into another domain, one that is not dependent on serial time, and then do the operations, and then re-convert it back to the time domain that we usually associate with sound processing. Where, within this alternative domain, parallel processing is possible.

We already do this. It's called FFT, which transforms the data from the time domain to the frequency domain. You can, if you want/need to, parallelize frequency domain processing. There's oodles of interesting audio software that does this.

But again, parallel processing is only interesting for speed. And we mostly have plenty of speed these days.
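As a minimal sketch of that (numpy assumed, and the gain curve here is an arbitrary spectral tilt): split the signal into blocks, process each block independently in the frequency domain, and the per-block work becomes trivially parallelizable. A real effect would use overlapping windows, but the structure is the point:

    import numpy as np

    def process_block(block, gain_curve):
        spectrum = np.fft.rfft(block)
        return np.fft.irfft(spectrum * gain_curve, n=len(block))

    def process(signal, block_size=1024):
        gain = np.linspace(1.0, 0.2, block_size // 2 + 1)
        blocks = [signal[i:i + block_size]
                  for i in range(0, len(signal) - block_size + 1, block_size)]
        # Each call is independent - e.g. fan the list out to a worker pool.
        return np.concatenate([process_block(b, gain) for b in blocks])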


Parallel processing isn't "interesting" except as a way to do things more quickly.

But for 90% or more of the things people do in DAWs (and currently want to do in DAWs), current processors can already do it fast enough.

So the sort of innovation you're dreaming of/imagining isn't going to come from new algorithms - it's going to come from people wanting to do new things.

This is already happening to some extent with things like timbral transfer, but even there, the most important part of it is well within current processing capabilities.

> Come on, we need some new algorithms

If you don't have a "why", that doesn't make much sense. Start with "Come on, we need to be able to do <this>" and then (maybe) the algorithms will follow.

Necessity is the mother of invention, but so is desire. What do you desire?


The desire would be to work with sound artists and acoustic designers and product designers and engineers to make certain types of audio spaces.

A lot of the cutting edge of acoustic research is from people wanting to make materials and spaces do certain things with sound.

For example, there is a system that lets sound through only one way ("an acoustic circulator"), described in https://www.scientificamerican.com/article/a-one-way-street-... - the need will come from things like this.

And for musicians, artists, and instrument makers to make use of materials, devices, and spaces like this. That would be my answer to your question of where the need/why will come from.

Imagine needing to write a DAW module to deal with "one way sound" that results from using an acoustic circulator in a musical production or a song.


> GPU's currently cannot work with sound data in the processing chain

https://www.nvidia.com/en-us/on-demand/session/gtcspring22-s...

> New algorithms are needed to scale out audio processing, as well as make use of new hardware types (for example, using the gpu)

What kind of "scale out" are you referring to here if not "to have more plugins or instruments"?


It’s still pretty new, but people are taking advantage of GPUs for audio these days: https://www.gpu.audio


The GPU separates vocals and percussion from the instruments on my DJ laptop (also why has nobody made spleeter an LV2 yet?)


I can see VSTs being replaced with AI that will create instrument sounds or effects based on description text


That's just another VST with a different UI.


No mention of Bitwig, which just keeps adding new features and modularity.


> No mention of Bitwig, which just keeps adding new features and modularity.

Bitwig is mentioned three times in the article.

That said, I don't think the article author would consider Bitwig to be substantially novel compared to the other mainstream DAWs that inspired the sentiment behind the article.


I want to love Bitwig, but the UI needs serious work as does compatibility with external controllers like the Novation Launchpad (which is what I sequence all of my music through). I’m hoping the next release addresses both of these.


The Logic screenshot with "Untitled" is triggering me.


The Future of the DAW is DAW-less.

A Synthstrom Deluge, a humble Eurorack (morphagene!) and a microFreak ..

A Zynthian! Holy molies, live Zynthian compositions are binoculars!

1010music Bluebox+Blackbox.

SonicWARE SmplTek.

Monome. Oh dear, monome is a hell of a platform for next-generation DAW tomfoolery!

Notice a pattern? It's the hardware. You can no longer call it a workstation when, after all, it behaves like an instrument.


Let us know about the utility of those (excellent) tools for:

* a four part acapella group

* a traditional western european orchestra

* someone who wants a recording of the call to prayer

* anyone who needs to make a recording of anything at all?

The original purpose of DAWs like ProTools was recording, editing and mixing. Composition and instrumental performance crept in later. Even if that aspect were to be completely replaced by hardware (possible, but unlikely), it would leave the original DAW functionality just as in demand.


>* a four part acapella group

Yes, that'd be a microphone setup per singer, probably, or at least two channels of audio.

>* a traditional western european orchestra

Yes, tons more channels ..

>* someone who wants a recording of the call to prayer

Only one channel needed really, but could be multiple if warranted.

>* anyone who needs to make a recording of anything at all?

As a professional designer of recording devices, I can tell where you are going - not that you might have missed the notion that there is literally no reason a DAW-like instrument can't be multi-channel capable - but also that portability to any environment is key.

Physicality is the new frontier for digital capture.

Recording/editing/mixing no longer belong in the workstation.

Portable instrument-like DAWs, with extraordinary new and interesting interfaces to accomplish the goals you've set (although they are all basically the same thing), are already on the market.

For example, the 1010Music Bluebox, alone, can be applied to any of those scenarios. Just add microphones (and USB powerbank...)

Incidentally, PaulDavis: acknowledging your position as the originator/BDFL of the Ardour workstation software, I mean no disrespect to your stature and point of view -- just that I believe the era of the DAW-less approach is upon us, and there is a rather large opportunity for device-makers, such as me, and software-makers, such as you, to align ourselves...

The workstation is dead. The kids want portability, reliability, and power. This can all be done on non-standard operating systems, in a bespoke case, for fun and profit.


I think it is awesome that these "DAW-less" tools are showing up and getting good. There's a lot to be said for them in many contexts.

However, I don't think that they are going to eliminate the current concept of "a fairly big piece of software running on a relatively general purpose computer that is used for recording, editing and mixing".

Why not both, eh?


DAW = Digital Audio Workstation.

I expect the future of the DAW is a Xeon-powered machine from Dell or HP with Adobe Creative Cloud (CC).


I too have been wondering what the future is for the Detroit Auto Workers....


It would be great to see more innovation like AI in DAW tools, but there are some challenges. The main constraint is that it needs to process in real time, allowing just a few milliseconds to process each buffer of samples. Very few neural methods can work within that constraint, and without it they can't fit into the standard DAW workflow, where you string together many plugins, each processing the signal in real time.
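The budget is easy to put numbers on (standard arithmetic, nothing specific to any particular DAW):

    # Time available to process one buffer at common low-latency settings.
    sample_rate = 48_000
    for buffer_size in (64, 128, 256, 512):
        budget_ms = 1000 * buffer_size / sample_rate
        print(f"{buffer_size} samples -> {budget_ms:.2f} ms per callback")

At 48 kHz and a 128-sample buffer, every plugin in the chain shares roughly 2.7 ms.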

There are some AI tools that work outside the main workflow, like for mastering after you're done with the DAW. But it's quite difficult to improve and bring new ideas beyond the typical signal processing modules without completely revamping the current workflow.



