Sorry, my mistake. Four DAWs: Live, Bitwig, FL Studio, Reaper. We'll have to agree to disagree on whether VCV (which I use very regularly) or trackers count as DAWs.

I agree with you about pitchbend, but you're narrowing what "play between the notes" means: you seem to mean "polyphonic note expression", which is a feature that quite a few physical instruments (not just the piano) lack.

MPE doesn't need to be supported by the DAW, only by the synthesizer. It's just regular MIDI 1.0 with different semantics. Editing MPE in a DAW that doesn't support it is more awkward, but not impossible. Recording and playback of MPE require nothing of the DAW at all.
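To make that concrete, here's a rough sketch of what an MPE-style bent note looks like on the wire: nothing but ordinary MIDI 1.0 channel messages, with the convention that each sounding note gets its own member channel so channel-wide messages (pitch bend, pressure, CC74) become per-note. send_midi() is a made-up stand-in for whatever writes raw bytes to your port, not any real library's API.

    // Sketch only: send_midi() is hypothetical, it just writes raw
    // MIDI 1.0 bytes to some output port.
    #include <cstdint>
    #include <vector>

    void send_midi(const std::vector<uint8_t>& bytes); // hypothetical output

    void play_bent_note()
    {
        const uint8_t ch = 1; // member channel 2 (0-based); channel 1 is the zone master
        send_midi({uint8_t(0x90 | ch), 60, 100});   // note on, middle C
        // 14-bit pitch bend: +1 semitone, assuming the default +/-48
        // semitone MPE bend range on member channels
        const uint16_t bend = 8192 + 8191 / 48;
        send_midi({uint8_t(0xE0 | ch), uint8_t(bend & 0x7F), uint8_t(bend >> 7)});
        send_midi({uint8_t(0xD0 | ch), 64});        // channel pressure (per-note here)
        send_midi({uint8_t(0x80 | ch), 60, 0});     // note off
    }

A DAW that records and plays back raw MIDI events will pass all of this through untouched, whether or not it knows what MPE is.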

> the UI will still typically look like a 12-tone piano

We just revised the track header piano roll in Ardour 8 as step one of a likely 3-4 step process of supporting non-12TET. Specifically, at the next step, it will not (necessarily) look like a 12-tone piano.

> Audio is a pretty latency sensitive, mostly strictly sequential workload

It's sequential per voice/track, not typically sequential across an entire composition.

IPC gains are not required unless you insist on process-level separation, which has its own costs (and gains, though mostly as a band-aid over crappy code).

If you're already doing so much processing in a single track that one of your 5900X cores can't keep up, then I sympathize, but you're in a small minority at this point.
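Roughly what I mean, as a toy sketch (not Ardour's engine; Track and Processor are illustrative types): within one process, each track runs its chain strictly in order, while independent tracks can be farmed out to different cores with plain threads, so no IPC is involved.

    // Toy sketch of per-track parallelism inside a single process.
    // Real engines use a persistent worker pool and a dependency graph
    // (sends/busses), not per-cycle thread creation.
    #include <thread>
    #include <vector>

    struct Processor {
        virtual void run(float* buf, int nframes) = 0;
        virtual ~Processor() = default;
    };

    struct Track {
        std::vector<Processor*> chain;        // plugins, in order
        void process(float* buf, int nframes) {
            for (auto* p : chain)             // strictly sequential per track
                p->run(buf, nframes);
        }
    };

    void process_cycle(std::vector<Track>& tracks,
                       std::vector<float*>& bufs, int nframes)
    {
        std::vector<std::thread> workers;
        for (size_t i = 0; i < tracks.size(); ++i)   // parallel across tracks
            workers.emplace_back([&, i] { tracks[i].process(bufs[i], nframes); });
        for (auto& w : workers)
            w.join();
        // ...then mix the per-track buffers down on the calling thread
    }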

Faster CPUs don't help graphics when the graphics layers have been written for years to use non-CPU hardware. Also, as you sort of implicitly note, graphics operations have more inherent parallelism, and decompose more readily into GPU-style primitives, than audio does (or at least, we haven't found the equivalent for audio yet).

Offloading to external DSP hardware keeps popping up in various forms every year (or two). In cases where the device is connected directly to your audio interface (e.g. via ADAT or S/PDIF), using such things in a DAW designed for it is really pretty easy (in Ardour you just add an Insert processor and connect the I/O of the Insert to the appropriate channels of your interface).

However, things like the BigMuff make the terrible mistake of being just another USB audio device, and since the two devices can't share a sample clock, you need software to do clock correction (essentially, resampling). You can do that already with technology I've been involved with, but there's not much to recommend about it. The Overbridge doesn't have precisely the same problem in all cases, but it can in some.
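For the curious, the clock correction boils down to something like this deliberately naive sketch: estimate the ratio between the device's effective sample rate and yours, then resample one stream by that ratio. A real implementation filters the ratio estimate continuously and uses proper band-limited interpolation rather than linear.

    #include <cstddef>
    #include <vector>

    // Ratio of the device clock to ours, estimated from frame counts
    // observed over the same wall-clock interval.
    double estimate_ratio(std::size_t device_frames, std::size_t our_frames)
    {
        return static_cast<double>(device_frames) /
               static_cast<double>(our_frames);
    }

    // Linear-interpolation resampler (mono, illustration only).
    std::vector<float> resample(const std::vector<float>& in, double ratio)
    {
        std::vector<float> out;
        for (double pos = 0.0; pos + 1.0 < in.size(); pos += ratio) {
            const std::size_t i = static_cast<std::size_t>(pos);
            const double frac = pos - i;
            out.push_back(static_cast<float>((1.0 - frac) * in[i] + frac * in[i + 1]));
        }
        return out;
    }

That constant resampling (and the latency/quality tradeoffs it brings) is exactly why a direct ADAT or S/PDIF connection to a single clocked interface is so much nicer.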



