> Due to budget constraints, many shows end up using Mac minis. Historically speaking, the Mac mini’s computing power has been a bottleneck for electronic music designers on Broadway. In a perfect world, we’d all like to use the best-sounding sample libraries for our work, but that was never feasible with the Mac mini. Thus, the compromise was always to reduce sound quality to fit within the Mac mini’s compute constraints.
What is the complexity with sample libraries? Until now I thought they were just big collections of categorised MP3s, and surely Mac Minis can handle those. I guess I'm missing something.
They use samples as a base, but apply heavy amounts of DSP to get the final product; these days that can extend to simple ML models.
Think being able to create new virtual backup singers who you can play like a piano, pretty much on a whim, who sound convincing to the kinds of people who work production on broadway.
To be fair, most sample-based instruments are not so data-heavy, but the very best-sounding ones are. They usually include some degree of software processing that dynamically alters the samples to sound as realistic or organic (or whatever) as possible. That's before any post-processing effects are layered on, per instrument.
Sample libraries are usually WAVs or other (non-MP3) losslessly encoded audio files. Mac Minis simply don't have the processing power to handle the DAWs, which are generally CPU-intensive and can be memory-intensive depending on the VSTs or plugins used.
Sample libraries are pretty efficient compared to digital synthesis VSTs. They really don't hit the processor nearly as hard.
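To illustrate the difference (a toy sketch, not how any real VST is implemented): per output sample, playback is basically a table lookup plus interpolation, while even naive additive synthesis pays a trig call per partial.

```python
import math

def play_sample(table, phase):
    """One output sample of sample/wavetable playback: an index lookup
    plus linear interpolation -- a handful of operations, regardless of
    how rich the recorded sound is."""
    i = int(phase)
    frac = phase - i
    return table[i] * (1 - frac) + table[i + 1] * frac

def additive_synth(partials_hz, t):
    """One output sample of naive additive synthesis: a sin() evaluation
    per partial, so CPU cost grows with the complexity of the sound."""
    return sum(math.sin(2 * math.pi * f * t) for f in partials_hz)
```

A rich piano note costs the sampler the same few operations as a sine wave; the synth has to compute every partial on every sample.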
But M1 designs that max out at 16GB don't have the memory to handle plenty of sample libraries, so I don't understand how a Mac Mini is supposed to be up to the job.
It's not just about raw cycles but about cached access to the samples. The biggest libraries can run up to 1TB and you'll probably have more than one. Obviously you don't keep everything in RAM at the same time, but even so - 16GB is a serious limitation for this kind of work.
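The "cached access" point is the crux: with libraries far bigger than RAM, the sampler effectively runs a cache of recently used sample data. A toy LRU sketch (real samplers use far more sophisticated schemes, this just shows the shape of the problem):

```python
from collections import OrderedDict

class SampleChunkCache:
    """Toy LRU cache for fixed-size sample chunks. Illustrative only:
    keys are (file path, offset) pairs, values are raw audio bytes."""

    def __init__(self, max_chunks):
        self.max_chunks = max_chunks
        self._chunks = OrderedDict()

    def get(self, path, offset, loader):
        key = (path, offset)
        if key in self._chunks:
            self._chunks.move_to_end(key)      # mark most-recently used
            return self._chunks[key]
        data = loader(path, offset)            # cache miss: hit the disk
        self._chunks[key] = data
        if len(self._chunks) > self.max_chunks:
            self._chunks.popitem(last=False)   # evict least-recently used
        return data
```

With only 16GB to work with, `max_chunks` is small relative to a 1TB library, so the miss rate (and thus disk pressure) goes up.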
And if you're using a computer instead of a synth rig you cannot afford to have problems, because any stuttering or glitching is painfully obvious and distracting in a live setting.
It also makes no business sense for a Broadway show that may be grossing $25m a year with a multi-year run to cut costs to the bone on its musical hardware. Considering the cost saving involved in replacing real players (for better or worse...) it makes far more sense to spend twice as much initially for a no-risk professional setup than to pinch pennies and risk glitches.
Actually RAM is not the issue. Most samplers only load the attack portion of samples into RAM anyway. SSDs are so fast now, we can stream the rest without issue. The problem has always been CPU-related as we have to set a low buffer size to minimize latency. The previous Mac minis we've worked with struggled with some of the more high-CPU plugins (VSTs as well as FX), so we had to compromise in many cases. I did some testing with an M1 MacBook Air, and it blew the old Mac minis away in terms of performance and stability. Very much looking forward to seeing how the M1 will be used in these live production situations.
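For anyone wondering why a low buffer size hammers the CPU: the buffer size sets the deadline for each audio callback, and all plugin processing for that block has to finish inside it. The arithmetic (assuming 48 kHz; the numbers are illustrative, not from any specific DAW):

```python
# Deadline per audio callback = buffer size / sample rate.
# Halving the buffer halves your latency but also halves the time
# the CPU has to render each block.
SAMPLE_RATE = 48_000  # Hz, assumed

for buffer_size in (64, 128, 256, 512, 1024):
    deadline_ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:>5} samples -> {deadline_ms:.2f} ms per callback")
```

At a live-performance-friendly 64-sample buffer the whole plugin chain has about 1.3 ms per block; miss the deadline once and you glitch audibly.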
You beat me to mentioning the sheer size of those libraries. If they reside on multi-TB SSDs connected via Thunderbolt and taken along for the tour, I can easily imagine that a Mac mini would be a good solution. Apple embraced Thunderbolt early; it was even exclusive to them IIRC. If I were responsible for playing those sounds, I would want one stable platform too, instead of several diverging implementations on standard PC hardware and the fun (/s) that I'd have (drivers, Windows updates, etc.).
The first "gigabyte" multisampled libraries appeared in the 2000s, when memory was even tighter and spinning disks were the norm, so you're underestimating the technique here - it's always been streaming-intensive, and the software is doing a lot to mask I/O latency. A faster disk goes a long way in this respect, letting you run more instances with smaller buffers.
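The classic latency-masking trick, mentioned upthread, is to keep each sample's attack resident in RAM so playback starts instantly, then stream the tail from disk before the preload runs out. A bare-bones sketch of the idea (synchronous here for clarity; real samplers do the disk read asynchronously):

```python
import io

class StreamedVoice:
    """Sketch of disk-streaming playback: the preloaded attack serves
    reads immediately, and once it's exhausted we fall through to the
    file stream for the rest of the sample."""

    def __init__(self, preload, stream):
        self.preload = preload   # attack portion, already in RAM
        self.stream = stream     # file-like object positioned after preload
        self.position = 0

    def read(self, n):
        out = b""
        if self.position < len(self.preload):
            take = min(n, len(self.preload) - self.position)
            out = self.preload[self.position:self.position + take]
            self.position += take
            n -= take
        if n > 0:
            out += self.stream.read(n)  # real samplers prefetch this async
            self.position += n
        return out
```

The preload size is the budget the disk has to deliver the first streamed block, which is why faster disks let you shrink buffers and run more voices.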
Memory does pose a bottleneck for huge arrangements in the studio, but in the live setting you literally don't have enough performers at the keys for the same constraint to apply. The stuff they might trigger can be bounced out into multisamples, so the remaining bottleneck is with effects processing.
I get that, all else being equal, it's better to put everything in RAM, but at what point is it just poor software design? 1TB of RAM is nothing to sneeze at. Is there really such a great need for sub-100µs read latencies that commodity NVMe SSDs are insufficient?
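Back-of-envelope on the bandwidth side of that question (the numbers are illustrative assumptions, not measurements): even a very busy patch streams far less than an NVMe drive can sustain.

```python
# Sustained disk bandwidth needed to stream many simultaneous voices.
# Assumed format: 48 kHz, 24-bit, stereo -> 6 bytes per frame.
BYTES_PER_FRAME = 2 * 3   # stereo x 3 bytes (24-bit)
SAMPLE_RATE = 48_000      # Hz
VOICES = 512              # a busy orchestral patch, illustrative

mb_per_sec = VOICES * SAMPLE_RATE * BYTES_PER_FRAME / 1e6
print(f"{mb_per_sec:.0f} MB/s")  # -> 147 MB/s
```

That's well within what commodity NVMe delivers; the harder part is latency consistency - one slow read at the wrong moment is a dropout, which is why samplers hide it behind RAM preloads rather than trusting the drive's average-case numbers.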
It's not that there's a hard OSX requirement, but MainStage (for live mixing) dominates this field. It's incredibly low-latency, and there's really nothing as good for this specific use case. Combine MainStage with on-demand sequencers like Doggiebox, and Apple really owns this market.
Yeah, this is the main reason. We use MainStage for all shows. Also, Mac minis are quickly replaceable. If we’re in a city and a computer breaks, we can grab a new computer from the Apple Store right away.
Besides software and familiarity, I'm sure the Mini's form factor has a lot to do with it.
Also, if you need to replace or duplicate a unit with an identical one, you know you can always run out and easily find replacement Mac Minis. If you used some random ultra-SFF PC, can you be sure you can get another identical one easily if you need to? What about 3 years from now? Changing out hardware always introduces a possibility that something might go wrong - exactly what you don't want on a Broadway show, a few hours before a performance!
The 2018 Mac mini is, from all reports, a pretty decent machine for DAW usage. Many replaced their 2010 Mac Pro rigs with 2018 Mac minis + Thunderbolt chassis.
I'm a MacOSX plugin developer, and in touch with others also facing this situation.
I develop on a VERY old machine in order to support backward compatibility way way farther back than Apple will allow: my current plugins will run on PPC machines because those can be used as music DAWs. As such, the machine I'm compiling on is not producing 64-bit AUs that will work, directly, on M1 Macs. They work on literally everything up to that point, but Apple finally shanked me, at least w.r.t. that compile target. Until then I was able to support PPC to present day with one three-target fat binary :)
Another dev, Sean Costello, told me that older builds of his stuff (pre-2017?) weren't running on M1, but everything built past a certain point (a new version of Xcode, which had long abandoned things like PPC and possibly 32-bit support) was automatically working on M1 through the Rosetta layer.
So, depending on the build environment, Apple arranged that the audio plugins don't even have to be updated. Depending on the libraries the plugins rely on (a vulnerability for some of the big names that use bespoke but OLD libraries to do things), some of the plugins might need only a recompile to be native to M1 architecture. And some might be really intractable.