MIDI controllers and a computer definitely have their own advantages and can nicely complement stand-alone hardware.
But stand-alone hardware has some advantages of its own over a MIDI setup.
Well-designed stand-alone hardware modules turn on instantly; you don't have to keep their software up to date, do any performance tuning, or muck around with latency; they don't have to compete with other software running on your computer the way soft synths do; and barring hardware failure, I can be fairly sure they'll still work exactly the same way 5 or 10 years down the line.
In sum, stand-alone hardware offers me a lot of reliability, consistency, convenience, and performance that's often hard to match with a computer and MIDI controllers.
Of course, a computer and MIDI controllers have their own advantages and conveniences, like being able to save settings, having a virtually unlimited amount of functionality by using various software packages or writing your own, allowing the user to change or choose their preferred UI, etc.
So they're really not the same. It's best to have both!
One thing you forgot to mention, but were kind of indirectly saying (I think), is that the "reliability, consistency, convenience, and performance" lead directly to creativity in a way that in-the-box stuff can't replicate. Personally, I need to get to the sound I'm designing quickly for my creative flow to continue; if I'm fighting the computer's UI (and let's face it, the vast majority of DAWs and plugins have terrible UIs), that kills it for me. I end up self-censoring (not even bothering to try), and the music I make ends up more 'rigid'.
I think if you're the kind of person who has infinite patience, then sure, you could probably achieve similar results, but for many people that's not going to work.
Another aspect is the true analogue modules, which software still struggles to emulate well. This matters for some modules/synths more than others, but there is a quality difference. Again, with time, one could make a digital synth/module sound good, but for me it's the same problem of killing the creative flow.
The line is blurring a bit. Back in the 1990s, hardware modules booted directly from ROM. These days, you’ve basically got a whole computer inside which boots like other modern computers, i.e., slower than it should.
There are also some good examples of software stability. Having used Logic and Reason since the early 2000s, I can see all the care they've taken in never removing anything and only adding new things.
That's the extra insurance that open source buys you: instead of relying on the proprietary software maker to do the right thing, you ensure that you can always (1) actually run the version you had way back on Feb 22 2020.
(1) yeah, I know things are not quite that simple.
My experience is that the Linux audio APIs have been so volatile that I often can’t run older audio software, even when I have access to the source code.
I have actually composed and recorded music with open-source software, but the cost of doing so was very high, so I switched to proprietary software. Even worse, the open-source software for various reasons has had interoperability problems—you can’t export in standard formats, or that kind of thing. With proprietary software, I was less tied down, since I could export data from one program to another and continue working.
So I’m slowly working on converting the old songs I wrote with open-source tools into standard formats.
The ALSA API (the fundamental device driver layer) hasn't changed in any significant way in more than 20 years. There was one trivial change that might require a 1 line change in a program that happened to use that particular call, but not all Linux audio apps did so.
I don't know what you mean by "cannot export in standard formats", unless you're referring to something like AAF.
AAF is a closed-source proprietary standard for "session files" that fundamentally relies on utterly closed Microsoft APIs. There are some incomplete open-source implementations that use a GNU effort to reimplement those APIs, but AAF is also the prime example of design-by-committee. I will never put time into trying to support AAF. Note also that AAF support is missing in major DAWs, and users need to rely on 3rd-party translators such as Chicken Systems AATranslator (RIP).
If you're referring to standard formats for audio files (WAV, AIFF, CAF and more), then I really don't know what you're thinking of. I cannot think of any open-source audio project that has problems with exporting to "standard formats".
ALSA may not have changed, but programs variously have used ALSA, ESD, JACK, OSS, PulseAudio, and aRts. Depending on your hardware, some of these options have been variously unusable at points--too many developers assume that you have a hardware mixer, for example, which is true for standard PC audio output but not true for any pro interface I have ever used (even the dirt cheap ones). At various points, I ended up with audio that only worked with exclusive access. I tolerated it for a while but now I consider it a dealbreaker.
I'm 100% willing to pay Apple to make this problem go away for me. Core Audio is an amazing API.
Re: "cannot export in standard formats"--I'm talking about MusicXML. Until a couple of years ago I also had problems listening to music on my Linux system; I think it was because of some patent licensing issue.
All of the above APIs except ALSA run on top of ALSA, except OSS, which was deprecated more than 20 years ago. The CoreAudio API has changed just as much in that time period; there are huge chunks of the CoreAudio API that existed back in 1999 that no longer work or even exist.
Also, note you can't use the audio APIs from macOS "classic" on macOS/OS X, and never could: Apple banned it from the start, so every single audio app that existed for Macs pre-OS X had to have its audio I/O completely rewritten. Apple didn't pay for this; 3rd-party developers did.
I write pro-audio/music creation software for a living, and I can assure you that from this perspective, CoreAudio is definitely not an amazing API. That's without even taking into account the differences between CoreAudio on iOS and macOS.
The only Linux distributions that had problems playing patent-protected formats over the last 20 years were the ones that stuck to a strictly libre software policy. If you picked one of them (e.g. Debian), it's hardly Linux's fault that you chose a distribution one of whose raisons d'être was to exclude non-free formats of any kind. I used Red Hat starting in 1997 and it never had a problem playing mp3 files.
Agreed, although a MIDI controller, by its generic nature, has to adapt to very different hardware, so the placement of knobs, switches, etc. doesn't functionally reflect the hardware it is connected to--which would sometimes help immensely to ease the operator's work.
I wish there were a controller made of pluggable micro-modules: say, 2x2 cm receptacles that could host a potentiometer, a push button, a few LEDs, a sensor pad, etc. No matter where a module is placed, it would link to the backplane processor, which would read its identification number and function; the user could then move modules around to replicate the functional arrangement of an instrument panel, making programming much easier.
It's more or less the same idea I'd like to develop for computer keyboards, where pluggable keys (pots, haptic transducers and sensors, etc.) could be moved around while keeping their programmed function. With today's technology it wouldn't be so hard to develop; very likely the most expensive part would be the plugs/receptacles hardware rather than their digital hardware or firmware.
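A minimal sketch of how the discovery side of such a backplane could work (everything here is hypothetical: the names, the `Module` fields, and the handshake are all invented for illustration). The key idea is that the user binds functions to a module's stable ID, not to its socket, so moving a module keeps its assignment:

```python
# Hypothetical backplane discovery: each receptacle reports what's plugged
# into it, and the host maps a module's stable ID to a control function,
# regardless of where it physically sits.
from dataclasses import dataclass

@dataclass
class Module:
    socket: int   # physical receptacle index on the backplane
    kind: str     # "pot", "button", "led", "pad", ...
    uid: int      # factory-programmed ID, stable across re-plugging

def build_map(modules, assignments):
    """assignments: uid -> synth parameter name, set once by the user.
    Moving a module to another socket keeps its function."""
    return {m.socket: assignments[m.uid] for m in modules if m.uid in assignments}

layout = [Module(0, "pot", 0xA1), Module(5, "pot", 0xB2)]
mapping = build_map(layout, {0xA1: "filter_cutoff", 0xB2: "resonance"})
```

Re-running `build_map` after a rescan is all it takes when the user rearranges the panel.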
There's a Waterloo start-up that spun out of a Fourth Year Design Project, Palette [0]. Their main sell was integrations into various creative tools (ex. Adobe Creative Suite, Ableton). They recently rebranded as Monogram [1][2] after announcing new hardware.
https://www.kickstarter.com/projects/electro-smith/daisy-an-... is worth a look as well. They have dev platforms in Eurorack, desktop synth, and guitar pedal form factors that could be used for something like this. They have enough DSP to actually run the audio processing on the module, but it seems like it could also be used for a hybrid setup with VCV Rack, given some software work.
Yeah, but a MIDI controller has fixed button, knob and display positions.
We need a new form of MIDI controller with movable button, knob, and LED placement. Then one could make a physical MIDI controller for any software instrument by placing the controls and LEDs on a backplane in the right places.
If it's not already done, I bet it will be in the next 5 to 10 years.
There are no inherent jitter issues with using MIDI in a DAW: jitter is a problem of device drivers, hardware, and badly implemented MIDI handling inside the DAW. Done right, it works remarkably well.
AFAIK it's only capable of 14 bit resolutions for things like modwheel. CC can only have 128 values, right?
Also, isn't it fair to say that USB MIDI interfaces are, by their nature, not well suited for tight MIDI timing? As in: since USB drivers buffer messages, they cause jitter. Is that correct? Are there USB MIDI interfaces/drivers that don't have this issue?
Added to that: MIDI is mostly used for interfacing with external hardware, and as such you're dependent on the external hardware's MIDI implementation, which isn't always superb to say the least unfortunately.
No, there are many parameters with 14-bit resolution. Precisely how many depends on exactly how you interpret the MIDI specification. On the order of 64, plus or minus, depending.
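For reference, MIDI 1.0 gets 14-bit controllers by pairing a coarse CC (0-31, the MSB) with a fine CC at n+32 (the LSB). A minimal sketch of encoding one value as that pair (function name is mine, the byte layout is from the spec):

```python
def cc14(channel, controller, value):
    """Encode a 14-bit value (0-16383) as a coarse/fine Control Change pair.
    Per MIDI 1.0, CCs 0-31 have matching LSB controllers at controller+32."""
    assert 0 <= value <= 16383 and 0 <= controller <= 31
    status = 0xB0 | (channel & 0x0F)   # Control Change status byte
    msb = (value >> 7) & 0x7F          # upper 7 bits -> coarse CC
    lsb = value & 0x7F                 # lower 7 bits -> fine CC
    return [status, controller, msb, status, controller + 32, lsb]
```

Pitch bend is a separate 14-bit message of its own, so it doesn't use a CC pair at all.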
Sure, USB interfaces might be problematic but it's very dependent on the precise hardware. My MOTU Ultralite AVB is basically unusable because of jitter; my MidiSport 2x2 (also USB) is extremely usable and has extremely low jitter. So you cannot make blanket statements about it being caused by using USB for MIDI.
I've been fantasizing a little about a new modular format, or at least a new category of synth controllers, taking advantage of the higher resolution of MIDI 2.0 for some modulation signals. I could be wrong, but I thought MIDI 2.0 was supposed to potentially be faster/more timing accurate too.
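On the resolution point: MIDI 2.0 channel-voice messages carry 32-bit controller data (and 16-bit velocity), and the spec defines a bit-repeat scheme for widening old 7-bit values so that zero, center, and full scale all map exactly. This is my from-memory sketch of that upscaling idea, not a verbatim copy of the spec's algorithm:

```python
# Sketch of MIDI 2.0-style bit-repeat upscaling (treat as an illustration).
def scale_up(src, src_bits, dst_bits):
    """Widen e.g. a 7-bit MIDI 1.0 CC value to a 32-bit MIDI 2.0 value,
    preserving 0, center, and full scale exactly."""
    scale_bits = dst_bits - src_bits
    value = src << scale_bits
    src_center = 1 << (src_bits - 1)
    if src <= src_center:
        return value
    # Above center: repeat the low bits into the new LSBs so that full
    # scale (127) maps to all-ones (0xFFFFFFFF), not 0xFE000000.
    repeat_bits = src_bits - 1
    repeat = src & ((1 << repeat_bits) - 1)
    if scale_bits > repeat_bits:
        repeat <<= scale_bits - repeat_bits
    else:
        repeat >>= repeat_bits - scale_bits
    while repeat:
        value |= repeat
        repeat >>= repeat_bits
    return value
```

On timing, MIDI 2.0's packet format also defines Jitter Reduction Timestamps, which is presumably what the "more timing accurate" claim refers to.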
I'm not overly familiar with MIDI 2.0, but from what I've heard it does fix most of the issues of the old MIDI. However, if you transport your messages over DIN ports, all bets are off.
You can also look at Open Sound Control, as it was once intended as a replacement for MIDI.
OSC has never, ever been even remotely close to being a replacement for MIDI.
Every MIDI message (well, almost every MIDI message) has a defined semantic associated with it. The receiver of the message might choose to reinterpret that, but the default semantic is still defined by the standard.
There is not a single standard OSC message. The only thing that is standardized in OSC is the format of a message. Both the receiver and sender have to agree to a specific set of messages, with a specific set of semantics.
Trivial example: you want a MIDI-driven synth to start playing middle C. You send it a MIDI NoteOn event with the note number 60. There is no equivalent for this with OSC, not even close. You must know precisely what messages the synth will accept and which ones will mean "start playing middle C".
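To make the contrast concrete, here's a minimal sketch. The three MIDI bytes are fully specified by the standard; for OSC, only the wire encoding below is standardized, while the address `/synth/noteon` and its argument meanings are invented here and would have to be agreed on out of band:

```python
import struct

# MIDI: semantics defined by the standard.
# NoteOn, channel 1, middle C (note 60), velocity 100.
note_on = bytes([0x90, 60, 100])

# OSC: only the *format* is standardized. The address below is made up;
# a receiver has no standard-defined reason to treat it as a note-on.
def osc_message(address, *ints):
    def pad(b):
        # OSC strings are NUL-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)
    typetags = "," + "i" * len(ints)
    msg = pad(address.encode()) + pad(typetags.encode())
    for i in ints:
        msg += struct.pack(">i", i)   # int32, big-endian
    return msg

msg = osc_message("/synth/noteon", 60, 100)
```

Both end up as a handful of bytes on the wire, but only the MIDI message means "start playing middle C" to any conforming receiver.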
OSC is cool but it isn't and almost certainly will never be a replacement for MIDI.