Or Speex narrowband, or others. I think the tendency to pick Opus is just because it has a newer date on it -- its design goals were not necessarily to optimize for low bitrate; Opus just happened to still sound OK when the knob was turned down that far.
One other point I intended to make that is not reflected in many of the listening/comparison tests offered by these presentations -- in the typical applications of low bitrate codecs, they absolutely must be able to degrade gracefully. We see MLow performing at 6kbps here; how does it perform with 5% bit errors? Can it be tuned for lower bitrates like 3kbps? A codec with a 6kbps floor that garbles into nonsense with a single bit flip would be dead on arrival for most real world applications. If you have to double the bitrate with FEC to make it reliable, have you really designed a low bitrate codec? The only example we heard of MLow under loss was 30% loss on a 14kbps stream = 9.8kbps effective. Getting 6kbps through such a channel is a trivial exercise.
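To make that concrete, the kind of degradation test I have in mind is roughly the sketch below: corrupt an encoded frame with a 5% random bit-error rate and listen to what the decoder produces. The encode/decode calls are hypothetical placeholders for whatever codec is under test; only the bit-flipping channel is spelled out.

```python
import random

def flip_bits(frame: bytes, ber: float = 0.05, seed: int = 0) -> bytes:
    """Return a copy of `frame` with each bit independently flipped with probability `ber`."""
    rng = random.Random(seed)
    out = bytearray(frame)
    for i in range(len(out)):
        for bit in range(8):
            if rng.random() < ber:
                out[i] ^= 1 << bit
    return bytes(out)

# Hypothetical codec hooks -- not a real API:
#   encoded   = codec.encode(pcm_frame)        # e.g. one 6kbps frame
#   corrupted = flip_bits(encoded, ber=0.05)   # 5% bit errors, no FEC
#   degraded  = codec.decode(corrupted)        # graceful degradation, or garbage?
```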
My understanding was that Opus was specifically developed with the idea of replacing both Speex and Vorbis. "Better quality than Speex" is literally one of its selling points, so I'd be interested to hear more details.
> how often is significant numbers of bit errors a problem, or when does that come up?
It depends on the transport. If you are going over something like TCP, you will either have a perfect bitstream or you will have nothing, so your codec doesn't have to tolerate bit errors or loss. If you are pushing raw bits over the air with GMSK modulation and no error correction, you'll have to tolerate a lot of errors.
In real world applications you almost always have to weigh the tradeoffs between what you want to leave to the codec and what you want to leave to the transport layer.
At very low bitrates, the overhead required to create reliability and tolerance for errors or omissions becomes significant enough that whole-system performance matters a great deal. That is to say, the codec and transport have to be designed to be complementary to one another to achieve the best final result.
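As a rough illustration of that whole-system budgeting (the numbers here are made up purely to show the bookkeeping, not taken from the presentation):

```python
def codec_budget_bps(channel_bps: float, fec_code_rate: float, framing_overhead: float) -> float:
    """Bits per second left for the voice payload after FEC and framing.

    fec_code_rate    -- e.g. 0.5 for a rate-1/2 code (half the transmitted bits are redundancy)
    framing_overhead -- fraction of the remaining rate spent on headers/sync
    """
    return channel_bps * fec_code_rate * (1.0 - framing_overhead)

# Example: a 9.8kbps channel with rate-1/2 FEC and 10% framing overhead
# leaves roughly 4.4kbps for the codec itself.
print(codec_budget_bps(9800, 0.5, 0.10))  # -> 4410.0
```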
In the presentation they show us MLow at 6kbps and then again at 14kbps with 30% packet loss (effective data rate 9.8kbps). They do not say whether the loss is random bit errors or entire packets, but let's not worry about that. Let's just assume the result in both cases is final audio of about the same quality. This means that MLow has some mechanism to deal with errors on its own, but is using an obscenely high overhead rate (133%) to accomplish it. They also don't let us hear how it actually degrades when exposed to other types of transport errors. These numbers and the apparent performance just aren't very good compared to other codecs/systems in this space.
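Spelling out where those numbers come from (pure rate bookkeeping with the figures quoted above):

```python
base_rate_kbps = 6.0    # MLow's low-bitrate mode
loss_mode_kbps = 14.0   # bitrate used in the 30%-loss demo
packet_loss    = 0.30

effective_kbps = loss_mode_kbps * (1.0 - packet_loss)                      # 9.8kbps actually delivered
overhead_pct   = (loss_mode_kbps - base_rate_kbps) / base_rate_kbps * 100  # ~133% over the 6kbps mode

print(effective_kbps, overhead_pct)  # 9.8 133.33...
```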