MIDI 2.0, first major overhaul to music interface standard from 1983 (reverb.com)
264 points by davio on Jan 29, 2020 | 105 comments



What will Midi 2.0 mean for musicians?

I suspect: frustrations with shit not working with other shit, like it used to with MIDI, and a big decrement in DIY hackability.

USB is in the mix! Pretty much 'nuff said, but I will say it anyway. USB is a complex beast with documentation that is a good fraction of a foot high, if printed out in all its versions on standard letter-sized laser paper. If you bring that into any standard, that is now part of your standard.

USB connectors do not have the optical isolation of MIDI current loops; USB interconnecting will bring in noisy ground loops that will have musicians and recording engineers pulling out their hair.

The clever thing in MIDI is that a device which sends messages over MIDI to another device drives current pulses (not voltage). These current pulses activate an opto-coupling device in the receiver such as a phototransistor. There is no galvanic connection between the devices; they don't share any ground or anything.

All sorts of talented musicians have done incredible things with MIDI. The resolution of MIDI has been just fine for people with real chops. MIDI 2.0 isn't going to solve the real problem: talent vacuum.


> All sorts of talented musicians have done incredible things with MIDI. The resolution of MIDI has been just fine for people with real chops. MIDI 2.0 isn't going to solve the real problem: talent vacuum.

I find it difficult to believe that someone with even a passing knowledge of what MIDI does would have this opinion. Most of the variables are only 7 bits of resolution, which produces jarring jumps when you try to adjust parameters in real time.

I remember taking a college class 20 years ago where we talked about the deficiencies of MIDI and what MIDI 2.0 should look like. It's been 20 years since that conversation and it's mind boggling to me that MIDI is only getting updated now.


Note that more bits don't eliminate jumps on their own. You need to also send changes at a higher rate to take advantage of those bits, which in turn translates to the need for higher speed encoders, more processing time spent dealing with the data, etc.

A different way to eliminate jumps is simply to low-pass filter the values on the receiver, and read out values from the filter at whatever rate your synthesizer engine can handle. The precision of most controls does not matter that much; you just want to eliminate zipper noise, and this does that.
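
A minimal sketch of that receiver-side approach, assuming a Python-like engine with a per-block tick; the class name and the alpha value are illustrative, not from any spec:

    class CCSmoother:
        def __init__(self, alpha=0.01):
            self.alpha = alpha    # smoothing per control tick; tune to taste
            self.current = 0.0    # filter state read by the synth engine
            self.target = 0.0     # last raw value received over MIDI

        def on_cc(self, value_7bit):
            # Called whenever a CC message arrives; map 0..127 to 0.0..1.0.
            self.target = value_7bit / 127.0

        def tick(self):
            # Called once per control block: slide toward the target so the
            # coarse 7-bit steps become a smooth ramp (no zipper noise).
            self.current += self.alpha * (self.target - self.current)
            return self.current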

(Of course there are some controls which need the extra resolution. Filter cutoff comes to mind… even 10 bits I've found limiting. Strangely, even though MIDI 1.0 specifies some 14-bit CCs, filter cutoff is not one of them.)


> Note that more bits don't eliminate jumps on their own. You need to also send changes at a higher rate to take advantage of those bits, which in turn translates to the need for higher speed encoders, more processing time spent dealing with the data, etc.

That presumes a continuous information stream being sampled. But the sample-depth problem affects discrete notes, too—it's pretty easy to notice how coarse-grained the quiet end of variation is on a MIDI keyboard or drum controller's attack pulse.


Yeah, the 7-bit amplitudes really wreck things. I switched from MIDI to OSC basically because of the better resolution, though I eventually gave up due to the lack of support for the protocol.


7 bits can encode 0 == "off", plus a 127 dB amplitude range in 1 dB increments.

In a musical mix, 20 dB down, plus "off", is all you need; anything turned down more than about 20 dB relative to everything else disappears.

+/- 20 dB of cut and boost spread over 127 steps is ridiculously good resolution.
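
One way to read that scheme as code (purely illustrative; the 1-dB-per-step mapping is the parent's convention, not anything in the MIDI spec):

    def velocity_to_gain(velocity):
        if velocity == 0:
            return 0.0                 # 0 means "off"
        db = -(127 - velocity)         # 1 dB per step below full scale
        return 10 ** (db / 20)         # dB to linear amplitude

    print(velocity_to_gain(127))  # 1.0 (full scale)
    print(velocity_to_gain(107))  # 0.1 (-20 dB, roughly the edge of the mix)
    print(velocity_to_gain(1))    # ~5e-7 (-126 dB, inaudible)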


So I get what you're saying, but this isn't my experience.

There's no room to use small changes in volume for expressiveness at the low end of the volume scale, since the attack/sustain/release shape is more quantized. So if your piece has fff and ppp in it (which is probably a full 40 dB range) the ppp part will sound super flat while the fff part might sound great.


Also, MIDI has the nice, round speed of 31,250 bps. Since it uses start and stop bits, that's 3,125 bytes per second. A "note on" message to start playing a note is three bytes long: a 4 bit "this is a note on" field, followed by a 4 bit MIDI channel number, then a data byte with a 7 bit note number, then a data byte with a 7 bit velocity ("how hard I hit the key") number. "Note off" messages, sent when you want to stop playing a note, are identical except for the first 4 bit status field. So, if everything's perfect, playing and releasing one single note will take 6 bytes out of the 3,125 available each second, or 1.92 ms. That's why a lot of so-called "black MIDI" songs are probably literally unplayable through an actual hardware MIDI interface.
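
A quick sketch of that arithmetic (it assumes the classic DIN transport with 10 bits per byte on the wire and no running status):

    BAUD = 31250
    BYTES_PER_SEC = BAUD / 10              # start bit + 8 data bits + stop bit

    def wire_time_ms(num_messages, bytes_per_message=3):
        """Serialization delay for back-to-back 3-byte channel messages."""
        return num_messages * bytes_per_message / BYTES_PER_SEC * 1000

    print(wire_time_ms(1))    # ~0.96 ms for one note-on
    print(wire_time_ms(2))    # ~1.92 ms for a note-on plus note-off
    print(wire_time_ms(10))   # ~9.6 ms smear for a dense 10-message burst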

But forget about playing and releasing notes. Say you're triggering a MIDI drum machine and a synth. Sounds like violins have a slow "attack" - that is, you don't go instantly from "no sound" to "full sound", but ramp up over a short interval. Imagine a violinist that has to start moving their bow, or a trumpeter that has to start blowing. It doesn't matter if you send a synthesizer a set of "note on" messages saying "play a middle C major chord" for violin sounds and they don't all get there simultaneously, because it was going to take them all a little bit to start playing anyway. Drums are a different story. If you expect a kick and hi-hat to play at exactly the same time, you don't have that many milliseconds between their starts before a normal human listener can start to really notice it.

So, the tricky scenario is a piece of sequenced music that plays two drums, a piano chord, a bass line, and a violin chord at the same time. This is where sound engineers start getting hyper nitpicky about stringing the equipment together so that:

- The two drums fire off in adjacent time slices so that they sound as simultaneous as possible.

- The piano notes come next, from lowest (because if it's a sampled sound, low notes will be played back more slowly and therefore have a slower attack) to highest.

- The bass sound comes next because those don't usually have a super aggressive attack.

- Violins come last, and it doesn't really matter because they're lazy and they'll take a few hundred milliseconds to really kick in anyway.

The worst case scenario is:

- One drum fires off.

- The rest of the instruments fire off in reverse order of their attacks, like high piano, bass, high violin; medium piano, medium violin; low piano, low violin.

- The other drum fires off.

Because MIDI is so glacially slow compared to every other protocol commonly used, it's going to sound absolutely terrible.

MIDI is amazing in so many ways, but it has some very severe technical limitations by modern standards. I can't believe it's taken this long for a replacement to come along.


> playing and releasing one single note will take 6 bytes out of the available 3,125 available each second, or 1.92ms.

Rounding this off to 0.002 s and taking speed of sound to be 340 m/s, we can work out that sound travels 68 cm in that time.

So if you're positioned next to a drum kit such that the snare drum is somehow 70 cm farther from your face, and the drummer hits both at exactly the same time (down to a small fraction of a millisecond), you will hear the snare drum 2 ms later than the hi-hat.

You're assuming that all of the MIDI events in the entire show are multiplexed onto a single serial data link. That means all the controllers and instruments are daisy-chained, in which case your latencies may be actually worse than you imagine because any piece of gear that has store-and-forward pass through (receives and re-transmits MIDI messages) adds latency.

The obvious way to avoid all that is to have a star topology: have the events flowing over separate cables from the controllers to the capturing MIDI host, or from the host sequencer out to instruments. Have little or no daisy chaining going on.

Now if you have lots of MIDI streams concentrating in some host and it wants to send all of them to another piece of gear (like a synth, to play them), then maybe yes, the regular MIDI serial link might not be the best. I'm sure we can solve that problem without redesigning MIDI.

> I can't believe it's taken this long for a replacement to come along.

Almost forty years tells you that this is a solution in search of a problem. Industries don't stick with forty-year-old solutions, unless they really are more than good enough.

True, some of it is conservatism coming from the musicians: lots of people have gear from the 1980s that speaks MIDI, using it on a daily basis.


We used to deal with the serial problem using a hack: bump events back/forward by one or two quanta of time to ensure that they go out over the wire in the order that you want. It's laborious and I am looking forward to the next generation never having to worry about it. (That _will_ be fixed, right?)


If you really had to send the data from multiple sources into a single MIDI destination over a single cable, then if a small global delay were acceptable, a smart scheduling algorithm with a 10-20 millisecond jitter buffer would probably take pretty good care of it so that the upstream data wouldn't have to be tweaked.

(Note that if you stand with your guitar 5 meters from your 4x12 stack, you're hearing a 15 ms delay due to the speed of sound.)


Unfortunately, because of the differences in instrument attack, which a MIDI controller would have almost no knowledge of, I think a jitter buffer would not fix the issue.


An interrupt controller has no knowledge of device semantics; it can just prioritize them based on a simple priority value. The scheduler could do the same thing. It could even be configuration-free by using some convention, like lower instrument numbers have higher priority.
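
A sketch of that convention, assuming events are buffered for a short window and flushed lowest priority number first; the class and the policy are made up for illustration, not part of any spec:

    import heapq

    class PriorityMerger:
        def __init__(self):
            self.heap = []    # (priority, arrival_order, message_bytes)
            self.seq = 0

        def submit(self, priority, message_bytes):
            # Lower priority number (e.g. a drum channel) goes out first.
            heapq.heappush(self.heap, (priority, self.seq, message_bytes))
            self.seq += 1

        def flush(self, send):
            # Called once per short scheduling window: drain in priority order.
            while self.heap:
                _, _, msg = heapq.heappop(self.heap)
                send(msg)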

Also, the physical layer of MIDI could simply be extended into higher baud rates while all else stays the same.

I can't remember a serial line to an embedded system in the last 15 years that wasn't pegged to 115 kbps. Bit rate is a relatively trivial parameter in serial communication; it doesn't give rise to a full blown different protocol.

115 kbps is almost four times faster than MIDI's 31250. Plain serial communication can go even faster. The current-loop style signaling in MIDI is robust against noise and good for distance. 400 kbps MIDI seems quite realistic.

This would just be used for multiplexed traffic, like sequencer to synth; no need for it from an individual instrument or controller.


> USB connectors do not have the optical isolation of MIDI current loops; USB interconnecting will bring in noisy ground loops that will have musicians and recording engineers pulling out their hair.

Isn't MIDI... digital? What do ground loops matter as long as the signal decodes?

Or do you mean that they'll put current into the analogue signal chain?

IMHO, the correct response to that is to do all hybrid analogue+digital signal processing in the digital domain with opto-isolated pre-amp ADCs, no?


It matters a bunch. All kinds of noise can get picked up over cables and bleed into your audio path on a digital rig. It happens all the time just with power cables which is why every musician carries a handful of "ground lifts" even though they're technically illegal in a lot of places. That said, MIDI over USB is kind of necessary in this day and age. Hopefully instrument manufacturers will be rigorous about isolating any interference it could pick up.


MIDI connects all sorts of gear, a lot of which contains analog signal paths. That's why its design is the way it is.

For instance, a rack-mounted synthesizer with analog outputs going to a PA.


Ground loops can end up with a surprising amount of current. They're also very good at emitting hum into other nearby devices.


From the article:

> When you connect devices together, the Capability Inquiry will immediately determine what each instrument is able to do: Your new MIDI 2.0 controller will automatically know which pieces of your rig are equipped with MIDI 1.0, which are capable of 2.0, and tailor its messages accordingly.

So hopefully backward compatibility Just Works™.


USB MIDI is already a thing though. And it's a real pain to use if you want to connect 2 devices together where one of them isn't your computer.


MIDI 2.0 is not isolated? Crap. Literally any of my guitar pedals when connected by USB instantly injects noise into my electric guitar signal chain, and isolated USB hubs are practically non-existent.


So buy a cheap USB isolator a la https://www.aliexpress.com/item/32965730354.html ?

Or do you need High Speed (480Mbps)?


The isolator most likely won't help. Odds are the pickups on the guitar are picking up the extra noise from the pedal, thus it will always pass the noise along the entirety of the chain, isolated hub or not.


I have no idea why we don't have ADCs at each instrument (if required, otherwise just send the digital output) and DACs at the speakers/amps only with a fully digital mixing and distribution chain/network... it seems silly to be stuck on analogue audio distribution and mixing networks where these things are still problems.


I'm sorry people are downvoting you without explanation... I used to think the same thing myself until I actually got into music and the engineering behind why things work.

The reason why we don't do that is acoustic and electrical coupling. Sound is AC, and because it is alternating we have to deal with impedance matching. Air-to-physical-object coupling actually has a large impedance mismatch because of the difference in density. Electrically, with pickups, when one system has a poor impedance match to another system some really interesting effects can occur. When you overload a downstream device you can sometimes produce interesting interference that just happens to produce harmonics that are musically pleasing to the ear (3rds, 4ths, 5ths). Electric guitar amps are a great example of this; you can actually design a tube amplifier to produce even or odd order harmonics by the electrical structure of the amp.

It's the "less than perfect" analog devices and their complex interactions that make what musicians to refer to as "tone".

Fortunately there is actually an alternative: it's called balanced transmission. The standard for audio is unbalanced, unfortunately. But essentially you get the best of both worlds: noise rejection from third-party sources yet analog transmission and coupling. Ironically, most digital transmissions eventually travel over an analog balanced transmission.
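
A toy numeric sketch of why a balanced (differential) line rejects that interference; the numbers are arbitrary:

    signal = [0.0, 0.5, 1.0, 0.5, 0.0]
    noise  = [0.2, -0.1, 0.3, 0.1, -0.2]        # picked up along the cable run

    hot  = [ s + n for s, n in zip(signal, noise)]   # signal plus noise
    cold = [-s + n for s, n in zip(signal, noise)]   # inverted signal plus the same noise

    recovered = [(h - c) / 2 for h, c in zip(hot, cold)]
    print(recovered)   # equals the original signal; common-mode noise cancels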


MIDI is just the protocol and then there are transports.

One of these is Ethernet (RTP MIDI), which should not have these problems. Or Bluetooth, or WiFi (very bad latency).

Or USB or DIN 5.


> The clever thing in MIDI is that a device which sends messages over MIDI to another device drives current pulses (not voltage). These current pulses activate an opto-coupling device in the receiver such as a phototransistor. There is no galvanic connection between the devices; they don't share any ground or anything.

I don’t understand why this matters at all. The engineers who made 1.0 had different sets of constraints that we no longer have.

Nowadays we shove gigabits a second over cheap twisted pair wire. MIDI could do a lot more on modern or even decade-old hardware...


> Nowadays we shove gigabits a second over cheap twisted pair wire.

Yes, and when you listen to your PC's "line out", you can hear the unwanted effects of all that sort of thing.


Can't confirm, my newest motherboard's "line out" has actually been one of the best audio devices I've used in a while. And it's just a budget board.


Ethernet is isolated (transformer-coupled).

USB in its most common form has shared ground.


> USB interconnecting will bring in noisy ground loops that will have musicians and recording engineers pulling out their hair.

I just switched to a bus-powered USB-C (Arrow) audio interface and have picked up a nasty ground loop/dirty power noise problem in the process. The current setup is a MacBook with the Arrow directly plugged in, and the MacBook powered by a second Thunderbolt 3 connection to an OWC Tb3 dock. I am assuming that if I power the Mac with its dedicated power supply the noise issue will go away, but if it doesn't... well, I don't know what else I could do to fix it.

In the past I’ve solved all similar issues by using a powered USB hub between the problem devices and laptop.


> All sorts of talented musicians have done incredible things with MIDI. The resolution of MIDI has been just fine for people with real chops. MIDI 2.0 isn't going to solve the real problem: talent vacuum.

Wouldn't basic economics suggest that a talent vacuum would lead to the most talented musicians making good money? -- And I'd argue the opposite is the case: there's a lot more musical talent than the world "needs" (and thus is willing to pay for). Therefore most musicians are poor (and many with additional talents stop being musicians). -- Or did you mean the "talent vacuum" in a different way?


"Getting paid in the music industry" and "having your music respected by other musicians" are two thinly-related different things.


Do you think we should just stick with limited protocols from the 80s?

There's no reason you couldn't make an optically isolated USB hub. With USB 2 it would be trivial. USB 3 is harder but I doubt you need that for MIDI.


> All sorts of talented musicians have done incredible things with MIDI.

I mean Beethoven did some incredible shit without even MIDI 1.0, I’m not sure that’s a sensible line of reasoning.


For me MIDI 1.0 served my needs. I may look into 2.0 if the need arises. It is however great to know that it will be backwards compatible:

> MIDI 2.0 will be backwards compatible, meaning all new MIDI 2.0 devices will be able to use and send MIDI 1.0 data. And if, in time, you create a hybrid setup of old 1.0- and new 2.0-equipped machines, your rig's MIDI 2.0 models will interact together with the fresh capabilities of the new spec, while the MIDI 1.0 models will continue to work as they always have.


This is the most important feature. MIDI lives in a context where the computing world's conception of obsolescence would be even more hostile to owners than it already is: decades-old hardware is still used and loved, and keeping it part of the protocol is key.


This. I have switched to hardware and gone DAWless purely because of the software culture of upgrading for upgrading's sake.


I remember people discussing the MIDI 2.0 standard back in 2005 and the arguments since then haven't changed. No one needed it back then and the idea that it would become a breakthrough 14 years later is beyond me because no one has needed it since.

Wikipedia's note that the protocol has been "researched" since then gave me a good chuckle.

For reference, in 2005 the iPhone didn't exist (it was still two years away) and people were wearing layered polo shirts.

https://en.wikipedia.org/wiki/MIDI#MIDI_2.0


For me, backwards compatibility would be the main priority for MIDI 2.0; anything lessening it should be sacrificed.


This quote from the section on the Capability Inquiry:

"The type of instant-matching that is, as of now, still based on proprietary messages between Ableton hardware and software (or similar systems from other companies) will instead be universally available through MIDI 2.0"

is misleading. The "matching" between (say) Live and a Push 2 is not based on proprietary messages sent between them, but merely on both ends knowing which messages to send. That's why an open source DAW like Ardour can also interact with a Push 2, in the same "instant-matching" way that Live can.

Since MIDI is an open protocol, it is always possible to determine what messages are being sent. The capability inquiry is a good idea, but it doesn't replace the sort of carefully-built match between the hardware controller and the software that already exists.


Midi works fine for anything keyboard based. As soon as you deviate from that it becomes an exercise in frustration.

Much better article:

https://www.midi.org/articles-old/details-about-midi-2-0-mid...


Not that I’m familiar with the subject, but this article[1] suggests that Roli’s Seaboard would need MIDI >1.0 to transmit per-note pressure and pitch bend info, and the Seaboard is definitely keyboard based.

[1] https://reverb.com/news/roland-unveils-first-midi-2-ready-ke...


I can't say specifically what the situation with the Seaboard is, but it is possible to apply polyphonic expression with traditional MIDI, albeit with a major caveat. For example, the QuNexus keyboard that I have here can do it.

"MIDI Polyphonic Expression" (MPE) essentially involves only playing one note per channel, which means that things like pitch-bend (which apply to every note on a given channel) can therefore be applied per note. The major downside of doing this with traditional MIDI is you're still limited to 16 channels, but with MPE it then also means you're limited to 16 notes.


Pitch bend has been on MIDI controllers since forever, more or less. I'm super excited for this synth though, with per-note pitch-bend and multiple instruments reacting to key pressure: https://www.youtube.com/watch?v=UjZ6SuWxBFg


Coming from strings, that makes so much sense. Learning some keyed instruments I always found myself attempting to bend/vibrato in that way without thinking.

Also, that man looks a lot like Garth Hudson... https://pbs.twimg.com/media/DjtfCgOUwAA6K7o.jpg:small


Wow, he actually does. That man is Cuckoo, he is a quite well-known synths YouTuber, I quite like his thorough reviews of some synthesisers :)


Huh ... it's a cheaper way to get into the Haken Eaganmatrix too, vs $3500 for a half-size Continuum. That's a really trippy synth and given what the Continuum needs, a great choice for the Osmose


I'm not sure everyone replying is familiar with the Roli Seaboard. It's keyboard shaped, but it's a bit like trying to play a piano made out of jello and encased inside a wetsuit. The movement & pressure of the gel gets interpreted as pitch, vibrato, velocity etc. It's trying to do more than your generic MIDI keyboard.

https://roli.com/products/seaboard/


MIDI 1.0 supports polyphonic pressure and aftertouch messages - just not a lot of keyboard manufacturers supported it, since it's pretty processor intensive - not to send, but to sense...


We have an organ in our office warehouse with 8 ranks of real pipes, and several dozen virtual ranks and it is MIDI controllable. MIDI does a poor job of mapping organs to MIDI messages. There's no good way to control stops (i.e., what stops are selected on which manual/pedal) and couplers without "overloading" a lot of the meta commands. And there's no way of defining which stops (which can be on any manual) are under expression. There's no way to represent a "tutti" stop, etc.

And even for conventional "piano" instruments, which you'd think MIDI would work well for, it's lacking. It doesn't have pedal velocity or position (often the damper pedal is held at some in between position) and it doesn't have key position. Advanced "player" systems, like the Bosendorfer SE or Disklavier Pro overload other MIDI messages to account for this. Anyone who plays a real piano will see MIDI's problems.

At the very least, every keyboard instrument like Organs and Pianos should be 100% controllable from MIDI.


> It doesn't have pedal velocity or position (often the damper pedal is held at some in between position)

It does have pedal position. Pedal data is transmitted as a control change message (e.g. CC#64 for damper) with a 7-bit data value. Many recent digital pianos support half-pedalling including transmitting and receiving the pedal position via MIDI.
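
For reference, a sketch of what that looks like on the wire (the mapping from pedal travel to 0-127 is an assumption; only the CC#64 message format comes from the spec):

    def damper_cc(position, channel=0):
        """position 0.0 (up) .. 1.0 (fully down) -> 3-byte Control Change."""
        value = max(0, min(127, round(position * 127)))
        return bytes([0xB0 | channel, 64, value])

    print(damper_cc(0.0).hex(" "))   # b0 40 00  pedal up
    print(damper_cc(0.5).hex(" "))   # b0 40 40  half pedal
    print(damper_cc(1.0).hex(" "))   # b0 40 7f  fully down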


Indeed, this is true.

I've often been super frustrated by things that receive MIDI: they don't expose certain controls to CC, or they don't respond to multiple channels, or they don't pass certain information to the Thru port.

But often what is frustrating is that it's do-able in MIDI, it just wasn't implemented well on the device.


That's super cool that you have access to a pipe organ. I spent a lot of time in my youth building a pipe organ with my dad and the church we went to. They are amazing instruments.

I'm having a hard time understanding how you'd have a hard time implementing an organ control setup over MIDI. Like I get that there is no specific tutti setting, but why couldn't you just have all the ranks be CC channels and then have the various settings be program numbers, so tutti is just calling up the program with all the stops open?

I do play a real piano, and I haven't been seeing problems with MIDI in the time I've been using it. Like, I've seen a lot of issues with piano synthesizer/sampler implementations... but the basic interface of the piano seems (at least to me) fairly easy to represent in MIDI...

Can you help me out by describing what kinds of problems you're seeing?

Like I say, could just be my own lack of experience, but you've made me curious what I'm possibly missing.


I'm not the OP and I've had the bad habit of living in small urban apartments with no room for a MIDI-enabled acoustic piano, but one problem is resolution. 127 velocity levels can't really cover the detailed expression of playing an acoustic piano, and something like a glissando or a ten-fingered rolled chord might not have enough time resolution to sound right.

I believe there are messages that add another 7 bits to velocity, and aftertouch can represent key position, but I haven't used a MIDI-enabled acoustic myself to know if those are adequate.


Key position is important if you want to keep some individual keys depressed (or partially depressed) in order to keep the damper up. Or if you want to release a key/keys slowly for damper effects. This is different from velocity.

Also, support for Sostenuto pedal (i.e., the "middle" pedal) is spotty and frequently doesn't get captured or preserved on MIDI piano controllers and sequencers even if a MIDI'd piano records it.


For sure. I made sure that the digital piano I bought had continuous pedal and realistic key release. Still really want a player acoustic some day though.


Kind of strange. This seems to be intended for the case where you connect your devices with something like USB, not with physical MIDI cables, and use MIDI as just an event/messaging protocol on top of the generic data connection. And it's true that MIDI has some limitations in that setup. But for that use-case, Open Sound Control (OSC) already exists, and is supported by almost everything on the software side, plus a growing number of things on the hardware side. Why not just use that?


MIDI is not "just an event/messaging protocol", although it could be used like that. MIDI describes an application-level schema for "channels" which allow for the dynamic control of "notes" including pitch, poly after-touch, etc. It also provides for channel-level control parameters, various special application-level events. Global clock transport, bulk data transfer, etc. It's true that you could implement all of these with OSC, (indeed there is a standard embedding of the short MIDI messages into OSC), but OSC is in general silent on application-level semantics. This makes OSC a much better choice if you want to implement some other semantic, but you still need an application level schema for commercial music devices, and that doesn't exist.


Whenever I talk to friends who are interested in such things, they tell me that Open Sound Control ( https://en.wikipedia.org/wiki/Open_Sound_Control ) is the new MIDI.

It seems to support a lot more than traditional MIDI, especially in terms of control and synchronisation.

Does anyone have any idea how well MIDI 2.0 and OSC will compete with or complement each other?


OSC is a message transport protocol. It describes how to packetise messages, but not what they mean -- it doesn't define an application-level semantics. This is both an advantage (making it flexible, and malleable to requirements of ad-hoc projects) and a disadvantage (places a limit on seamless interoperability between COTS hardware). In general, OSC needs either (a) the sending and receiving endpoints to a priori agree on an application-level protocol (message schema), or (b) some kind of glue/mapping layer in either the sender or receiver that can translate and map schemas. Such a mapping layer is easy to construct if you're using a programmable environment. OSC has support in pretty much every programming language and many music environments and, like MIDI 1.0, is a viable DIY protocol.

By contrast, MIDI (both MIDI 1 and 2) is a flexible application-level protocol. For example, among other things, MIDI 1.0 describes schemas for musical notes, parameters, transport control, and time synchronization. Built-in application schemas allow devices that fit the application model to communicate in a relatively seamless way. I believe that MIDI 2.0 provides a more extensive schema, that includes (for example) device discovery and capability queries, and removes some limitations of the old schema. I'm not familiar enough with the details of the final MIDI 2.0 spec to say much more than that.

As I recall, some of the features of MIDI 2.0 (e.g. capability queries, discussed elsewhere on this page) were proposed for an "OSC 2.0", however the fine people at CNMAT who produced the OSC 1.0 spec didn't have the resources to sponsor 2.0 development, and no one else stepped up. In contrast, the MMA (MIDI Manufacturers Association), who sponsored the 2.0 spec, have all of the major music corporations as members (e.g Roland, Korg, Yamaha). That said, as I understand, the MIDI 2.0 process was open to anyone, and I know of at least one independent developer who was involved.

Will they compete? I suspect that the situation will continue much unchanged: commercial hardware will support MIDI (1 and/or 2), and as is currently the case, few commercial music devices will support OSC. OSC will likely continue to be the protocol of choice for custom projects using custom hardware, software and application schemas. Perhaps with time, as the tools improve and we get API support for MIDI 2.0 in operating systems and embedded libraries, it might become easy enough to develop MIDI 2.0 software to choose between OSC and MIDI 2.0.

Edit: clarity.


I am working on VJ software, and we are missing the capacity to associate preview icons with music events. I hope an OSC 2.0 would allow that via an application-level protocol, for example... We tried to convince the music community 10 years ago but could not make it happen.


I wish they would have removed SysEx messages. They cause way more trouble than they're worth, now that we have property exchange/profile configuration built into the spec.


What's the problem that SysEx messages cause? I've only limited experience with them (having used them only to send config data to a device, in a way that didn't need the device to keep functioning while being updated). From a programming point of view, I found them quite convenient and simple, except perhaps for the fact that you only get 7 bits per byte, so you may have to pack the data.


[For the benefit of HN: SysEx is a special manufacturer-specific MIDI message which is undefined, so a synth manufacturer can use it for whatever he wishes]

I build a lot of open source patch editors for older synthesizers. My beef with SysEx is that every manufacturer uses completely different approaches to defining their own proprietary messages with it. For example, nearly every synth in the universe has a sysex message for dumping a patch (a full set of parameter settings) from a synth to another or to a computer; but they define their messages in radically different ways, so I must construct an entirely different set of parsing and emitting tools for every single synthesizer, even within a given manufacturer. It's a nightmare.

So continuing this example, if the MIDI association had gotten together early on and said that MIDI dumps should have a header that looks like THIS and then all the parameters in order, two bytes per parameter with no bit packing whatsoever, no two's complement, and end with a specific checksum, then I'd have written 10x more patch editors so far. I wouldn't have to write custom parsers and dumpers: I'd just provide a list of parameters and their bounds.
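
As a sketch of the kind of uniform dump being wished for here (the header layout, the two 7-bit bytes per parameter, and the additive checksum are all invented for illustration, not any real manufacturer's format):

    SYSEX_START, SYSEX_END = 0xF0, 0xF7

    def emit_dump(manufacturer_id, params):
        body = [manufacturer_id, len(params) & 0x7F]
        for value in params:                         # MSB then LSB, 7 bits each, no packing
            body += [(value >> 7) & 0x7F, value & 0x7F]
        checksum = (128 - sum(body) % 128) % 128     # simple 7-bit additive checksum
        return bytes([SYSEX_START] + body + [checksum, SYSEX_END])

    def parse_dump(data):
        assert data[0] == SYSEX_START and data[-1] == SYSEX_END
        body, checksum = data[1:-2], data[-2]
        assert (sum(body) + checksum) % 128 == 0, "bad checksum"
        count = body[1]
        raw = body[2:2 + 2 * count]
        return [(raw[i] << 7) | raw[i + 1] for i in range(0, len(raw), 2)]

    print(parse_dump(emit_dump(0x43, [300, 5])))     # [300, 5]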


(Disclaimer: been working in synth industry for decades now..)

Most SYSEX dumps are just dumps of the plain ol' structs that the synth engines are using to drive their output. A lot of synths don't have the processing power to do more than just dump the struct.

So, it wouldn't really make much sense to have them all use the same struct - this can't be enforced too well. Forcing synth mfr's to all use the same struct means that, even if they have their own internal plain-old-structs, they'd need code to dump the SYSEX according to the standard.


> Most SYSEX dumps are just dumps of the plain ol' structs that the synth engines are using to drive their output. A lot of synths don't have the processing power to do more than just dump the struct.

I don't think it's processing power: it's stingy RAM utilization. Many bad actors (Kawai, Casio, later Yamaha) did crazy bit-packing of parameters rather than just keep them in a simple array, while the more sane (Oberheim, E-mu, Waldorf, early Yamaha) at least tried to pack in a consistent way. Other bad actors (ahem Korg, as late as 2000) decided to use, shall we say, creative parameter encodings, going even so far as embedding textified versions of parameter numbers into sysex byte streams. And many used all sorts of crazy schemes for banks and patch numbering, most of which are incompatible with one another.

And it's not just encoding: basic synth patch dump features are missing from different models. There are five basic tasks that most synth editors require:

- Change patch

- Request a dump from working memory

- Dump to working memory

- Dump to patch RAM and save

- Send a single parameter change (for any parameter)

Manufacturers couldn't even agree to make machines which supported all five of these. Some machines (Yamaha) have no way to write to RAM. Some machines couldn't do single parameter changes. Some machines can't properly change patches in a consistent manner. Some machines have no patch request mechanism. Many machines can't dump to current working memory: only to patch RAM!

The situation is only getting worse. Whereas in the past manufacturers at least attempted a complement of sysex messages, now many manufacturers can't even be bothered to allow access to their machines (Korg, Roland). Others treat their sysex messages as proprietary secrets (Arturia, Digitech, Alesis).

There is only one truly good, shining actor in the open MIDI spec space, and that is Sequential. Which shouldn't be a surprise given who runs it.


"Send a single parameter change (for any parameter)"

This makes no sense. That would also imply a way to discover (and name, and probably provide semantics for) all parameters. That's a huge ask if MIDI (even MIDI 2.0) is the only communication protocol available.

Yes, the first 4 of your list are common. The first one is covered by the core MIDI spec. The 2nd and 3rd have no standard msg, but your complaint seems to be about the contents of the message, which is no business of the requestor. The 4th assumes "patch RAM", which cannot be assumed, as you note, and that seems correct to me.


> This makes no sense.

Why? It's highly standard. About 90% of the synthesizers I've written patch editors for provide exactly this facility. In fact some (PreenFM2, Korg Microsampler, Futursonus Parva) provide only this facility.

> The first one is covered by the core MIDI spec.

Actually it's not. Program Change only works for 128 patches. If a synth has more than 128 (and many do), it must also rely on Bank Select, but definitions of "banks" vary because a bank is not a formally defined concept. Some rationally treat banks as divisions of the patches. Others treat banks as media choices: cards versus RAM versus ROM. Some require that Bank Select be immediately before Program Change with nothing in-between; others do not. Some ignore banks entirely and instead define a "Program Change Table" of 128 slots pointing to arbitrary patches in memory, and then Program Change indicates which patch slot to use.

And there are several major synthesizers (Yamaha TX81Z and DX11 are famous examples) where Program Change is in fact broken and requires unusual workaround hacks. Further, most synths require a program change prior to a patch load: but others (notably the Oberheim Matrix 6 and 1000) require a program change after a patch load. It's a mess.
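
For readers following along, the intended sequence looks roughly like this (a sketch of the common Bank Select MSB/LSB plus Program Change ordering; as the comment notes, plenty of synths deviate from it):

    def select_patch(bank, program, channel=0):
        msb, lsb = (bank >> 7) & 0x7F, bank & 0x7F
        return bytes([
            0xB0 | channel, 0x00, msb,        # CC#0  Bank Select MSB
            0xB0 | channel, 0x20, lsb,        # CC#32 Bank Select LSB
            0xC0 | channel, program & 0x7F,   # Program Change (2-byte message)
        ])

    print(select_patch(bank=1, program=5).hex(" "))   # b0 00 00 b0 20 01 c0 05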


I don't think it's fair to say that "banks are not a defined concept". What has happened is that many MIDI device manufacturers have stretched and ignored the clear intent of the MIDI 1.0 specification. Reading that, it is quite obvious what the relationship of banks and program changes is. But that hasn't stopped various companies from playing games with the two (as you note) for their own purposes. Nothing will ever do that, fully.


It's not standard at all. There is absolutely no MIDI standard for the contents of a patch. I really don't know what you're thinking of.

Back in the 90's, when things like "MIDI Librarians" were common (and widely used), each new device needed to be added to the MIDI Librarian's code to deal with the specifics.


> It's not standard at all. There is absolutely no MIDI standard for the contents of a patch. I really don't know what you're thinking of.

I think you may have misread what I had said. I didn't say that patches had to be the same format or content -- that would be insane.


>I don't think it's processing power

What I mean is that, if they have to reformat their internal struct to conform to a standard, they don't have the processing power to do this munging. Not that they'd care, as you have noted elsewhere.


The point of Sysex messages is moving complexity, such as disentangling bitfields, from the lean hardware of a synth, entitled to do whatever is more convenient, to a deluxe computer that can afford the plasticity of software.


I can't know how big a SysEx message is until run time, which makes buffering them somewhat complicated when you don't know what's in the SysEx message.

This isn't uncommon in serial protocols, but MIDI has been lifted several layers above the UART it was designed for, and one of its strengths for everything but SysEx is that you know exactly how big messages are going to be, so preallocating space for them is trivial.


> I can't know how big a SysEx message is until run time, which makes buffering them somewhat complicated when you don't know what's in the SysEx message.

That's the reason why you want to break compatibility with the MIDI 1 spec? Really? Just like with an HTTP stream, you don't need to know how big the stream is to process it, just when it ends; there is no new problem to solve here.
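
A minimal sketch of that "read until it ends" approach (it ignores interleaved real-time bytes, which a real parser would have to handle):

    SYSEX_START, SYSEX_END = 0xF0, 0xF7

    def sysex_messages(byte_stream):
        buf = None
        for b in byte_stream:
            if b == SYSEX_START:
                buf = bytearray([b])            # start accumulating
            elif buf is not None:
                buf.append(b)
                if b == SYSEX_END:
                    yield bytes(buf)            # one complete SysEx message
                    buf = None

    stream = [0xF0, 0x43, 0x01, 0x02, 0xF7, 0x90, 0x3C, 0x40]
    print([m.hex(" ") for m in sysex_messages(stream)])   # ['f0 43 01 02 f7']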


No, I want there to be only one way to do something in the protocol and to remove arbitrary binary exchange, this is one pain point. It turns your "standard" into a Frankenstein's Monster of data exchange, and you'd think we'd have learned over the last few decades that ambiguity in specification of a communication protocol is just going to lead to headaches and incompatible/buggy hardware.

It's also weird to say SysEx is needed for backwards compat with MIDI 1.0. You can just require a translation layer between MIDI 1.0 sysex and MIDI 2.0 configuration/property exchange for backwards compat. Opaque binary as a part of the protocol does not encourage compatibility among hardware.


The MIDI spec was successful for 30+ years for a reason and manufacturers managed with SysEx without any issue, there is no justification for any "compatibility layer" whatsoever here. The problem wasn't the spec obviously.


As a musician and software developer I'd disagree. Sure, manufacturers are getting on fine because they can do whatever they want, but it makes the setup for anyone who isn't the manufacturer much more bespoke and challenging, as others have said in thread.

Leaded gasoline was successful for decades too, but there comes a time to take things to the next level.


> As a musician and software developer I'd disagree. Sure, manufacturers are getting on fine because they can do whatever they want, but it makes the setup for anyone who isn't the manufacturer much more bespoke and challenging, as others have said in thread.

MIDI 1.0 was successful for 30+ years because of its simplicity, so I'm not sure what you are disagreeing with. Building a MIDI payload is extremely simple and serious manufacturers will document their MIDI implementation in public documents.

What part of the MIDI spec did you have a hard time with when developing software using the MIDI 1.0 protocol?

Your comment about gasoline has absolutely nothing to do with MIDI.


There are some bits that are problematic.

Allocating memory (by asking the OS) in a realtime context is a no-no. This means guessing the maximum size of a sysex ahead of time or breaking RT programming rules.

The other tricky issue is the way MIDI 1.0 defines the transmission of the LSB and MSB for 14 bit messages. They got it backwards, requiring the receiver to use a timer. Might have been sensible for baked-in-h/w MIDI gear, not so great for general purpose computing.


It's been kind of a headache in Firefox's WebMIDI support plans (since SysEx is sometimes used for firmware updates and it's hard to explain "this page may hack your keyboard" in a permission request dialog): https://github.com/mozilla/standards-positions/issues/58#iss...


Lots of 1.0 instruments have only SysEx to load/save patches. A single patch may have hundreds of parameters. Whole libraries of patches exist; they can be loaded/saved at once with Sysex. So, no way.


I don't really see why that matters for a new protocol that falls back to MIDI 1.0; it's not a superset of MIDI 1.0. If your device only takes MIDI 1.0 SysEx for patches, anything that would send those patches via MIDI is going to do it on top of the fallback.

What I'm saying is a new MIDI 2.0 device probably shouldn't be able to use SysEx. Either opt out of MIDI 1.0 and use the paradigms established, or fall back to MIDI 1.0 and its limitations. Otherwise we're just going to get a mess of different implementations, and MIDI 2.0 will fail to be a successful standard.


I'm so glad this happened, this may put an end to the many, many bespoke midi implementations I've come across.

I participated in a piano competition over a decade ago (I believe it was sponsored by YAMAHA) which recorded all participants through an extended MIDI format that increased the resolution and bumped almost everything up to 1024 max from 127 max. With MIDI 2.0 this wouldn't even be required, all the functionality is included.


> I'm so glad this happened, this may put an end to the many, many bespoke midi implementations I've come across.

Maybe it will, but I'm not terribly optimistic that we'll avoid the scenario where the various manufacturers implement the parts of MIDI 2.0 that they care about, and we'll have another mess of partial implementations that aren't entirely compatible with each other.

It might help if someone puts out an open-source highly portable reference implementation that everyone can use rather than every manufacturer writing everything from scratch.


I really hope the backward compatibility is idiot-proof. Because it's really been great. My Roland EP-9 from the mid-90s is easily the oldest device I have that I can connect to my iPad and have it just work with modern software. (Granted, a dongle or two is involved...)

And it's worked with every computer I've cared to connect it to in years prior.

That strikes me as a sign of a standard well-done.


I am hopeful, but skeptical.

All you musicians who can't feel the MIDI 1.0 delay need to play some acoustic instruments and get what you have been missing. That couple of ms between each note makes every chord a rapid arpeggio and every drum hit a flam, and it gets worse as control data (much less sysex!) is added.

While I was hoping for a timing-agnostic standard, what we seem to be getting is not terrible from what I can see. Does anyone have a link to an actual spec sheet or prototype implementation?


Back of envelope: With running status, a note on message is two bytes on a 31250 baud connection. That means the latency from transport is on the order of half a millisecond. If you’re feeling latency, it’s in the gear, not the protocol. MIDI implementation quality has long been wildly inconsistent, and I don’t see a reason to believe that will change with MIDI 2.0.


> That means the latency from transport is on the order of half a millisecond. If you’re feeling latency, it’s in the gear, not the protocol.

As long as you don't transmit anything else. When you have 16 channels with many note and controller messages, latency and asynchronicity between channels can become noticeable.


I remember using 8x8 MIDI interfaces to get around some of that - not just for the additional MIDI channels, but also to thin out the traffic for each connection, i.e. each MIDI channel would get its own cable/connection.

However that approach didn't work when I wanted to send 16 channels to a single (non USB) multi instrument sound device like my trusty JV-1080. It became an excuse to buy more synths! :-)


I wish my timing was that good. I've been using MIDI for decades and not ever noticed it when stuff was setup correctly.

I mean, I play guitars and pedal steel, banjo, dobro, accordion, etc. all acoustically (or, more likely, "in the analog domain"). Maybe I just need to practice more or listen harder.

Do you have any suggestions on how I can improve my timing and/or hearing to experience this?


Ya. As someone who plays acoustic piano, guitar, ukulele, and ocarina...using MIDI 1.0 to connect my electronic gear hasn't created any latency that's noticeable to me.

Sure if you're sending things to and back through a DAW or whatnot it can get noticeable, but that's processing/software time, not MIDI 1.0.

I can't tell the difference between a 320kbps mp3 and FLAC either, but I'm not worried about it.


It's not latency that GP was bemoaning, it's the sequential nature: two note-on messages cannot arrive at exactly the same time. If it were just constant latency, GP probably wouldn't mind (or would criticize that, but it would be a completely different argument). Latency alone can actually be worse with real instruments because electrical signals easily outrun sound. (Nerve conduction velocity however is the worst of all; it's a wonder that we can function at all with that shitty data transmission.)


No serial protocol with "real time" semantics can ever deliver two events at the same time. To do that requires scheduling of events before they are due to occur. MIDI doesn't work that way, and probably never should.

Even for highly time-aware percussionists, the timing delays that MIDI causes (e.g. the note smearing in a chord) are below the threshold of what they can perceive.


With a defined artificial latency headroom and a message parameter to (partially) override that latency, it would be possible. With a reasonably low maximum concurrency and a very high bandwidth/message size ratio, the required extra latency could stay well within the range of centimeters at the speed of sound (a very high bandwidth/message size ratio would certainly also lessen the cost of not taking that feature).

The next roadblock on the way to truly concurrent chords would probably be controller readout. I know nothing about how those are typically implemented, but I strongly suspect sequential readout.


The existing spread for a typical chord (triad) is already not audible (despite the claims of some people who refuse to double-blind test it).

Changing MIDI to have "defined latency" is a fundamental re-engineering of the entire protocol.


For me, one of the highlights is the higher resolution of values, for example for control messages, from 7 bits (0~127) up to 32 bits.


Because MIDI 2.0 is bi-directional, will it allow us to exchange preview icons for each individual MIDI note? I am working on VJ software and we want to display the video loop associated with each MIDI key...


I got to see it in ADC19.

For me the "2.0" part is pure marketing. However, since the industry has moved on (e.g. MPE) it's nice to standardize on something.


> Is this important to me?

maybe.

> Is this something that's worth potentially getting rid of something I love for new capabilities that may or may not be compelling?

never


There's a standard? I googled a few weeks ago and obviously my google-fu was terrible because I didn't find a standards document for it



But nothing seemingly available for 2.0?


https://www.midi.org/articles-old/details-about-midi-2-0-mid...

The actual spec sheet is yet to be released, but this is a really great article about it.


"...the new specification hasn't been fully ratified by the members of the MMA and the AMEI, many details and implications of the new spec are still unknown."



