The impact of Apple Silicon Macs on Broadway (brianli.com)
242 points by da02 on Dec 25, 2020 | 280 comments



They're going to have to wait until the synth industry gets their act together and ports to Apple Silicon though. The DAW/music ecosystem is notoriously bad at these kinds of transitions. Due to how VSTs work, if any of your plug-ins are not ARM compatible, you have to run the whole plug-in host and all plug-ins under Rosetta, with the performance drop and stability compromises that that entails. And a lot of plug-in makers are going to want you to pay for an upgrade to the latest version that is released with ARM support.

That said, 4-5 years down the line when this whole transition is over on macOS, it's going to be great. And for anything not relying on live synth/processing, you don't need many/any plug-ins at all, so that'll work well today.

Personally, I'm really looking forward to doing real-time / live audio set-ups on my M1 Macs under Linux as soon as next year. The design of the M1, beyond just performance, is almost certainly much better than any x86 box for real-time processing, and likely capable of much lower latencies (due to x86 junk like SMBIOS and their power management approach killing worst case latencies; x86 can't match ARM embedded systems for real-time stuff, but M1 is from the embedded world). And since it's Linux, all the open source stuff is already ported to ARM, and the wine hack I use to run a few Windows VSTs ought to be compatible with shoving them under qemu-user without disturbing the rest of the set-up.


Saying that synth makers need to 'get their act together' is a bit unfair. Nobody asked Apple to invalidate all binaries, they just went ahead and did it.


I work for an audio company. Indeed, that's unfair. Unlike the PPC to Intel transition, this time it's a little easier, since most modern toolsets require far fewer code changes.

Having said that, some considerations to keep in mind:

- Audio code needs to be optimized for real-time thread constraints. Optimizations are usually done by vectorizing rather than threading, because threading would lead to locks and synchronization that aren't always possible in real-time processing. So not all SIMD code can be ported just by changing a compiler flag.

- Machine-specific code. While rare, some companies still have such code for various reasons, and it needs a more complex transition.

- Not all companies were able to obtain a DTK. We, for example, got our first M1 machine 3 weeks ago.

- Backward support. While we'd like to have universal builds, musicians use their systems for years. We still support 10.7. With Big Sur, Apple seems to have broken SHA-1 signatures, making builds from Big Sur work reliably only on 10.11 or newer (the first release to support SHA-256 code signatures).

- Some companies already have universal builds. REAPER, FabFilter and Adobe Audition are a few I can remember.

Keep in mind Electron, Docker, Homebrew and other dev tools are still not fully Apple Silicon ready.

So indeed the above statement is unfair for a small industry (vs finance or other software markets)


We've already released a "test" build of Ardour for M1. Because we already support Linux on ARM, it was relatively trivial - the main work was getting our build stack to compile first. Other than that, it just works. The same codebase supports Linux, Windows and macOS/OSX back to 10.8 and PPC.


Bravo, love Ardour.


Check out sse2neon as a way to swap your x86 SIMD intrinsics for ARM NEON. Perf was good enough to ship in one of the past projects I worked on.
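If anyone wants to see what that looks like in practice, it's roughly this (a minimal sketch, assuming sse2neon.h from the project is on your include path; untested):

    // Same SSE code path, built for ARM by routing the intrinsics through NEON.
    #if defined(__aarch64__)
    #include "sse2neon.h"   // implements _mm_* intrinsics on top of NEON
    #else
    #include <xmmintrin.h>  // real SSE on x86
    #endif

    // Multiply a buffer of samples by a gain, four floats at a time.
    void apply_gain(float *buf, int n, float gain) {
        __m128 g = _mm_set1_ps(gain);
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 v = _mm_loadu_ps(buf + i);
            _mm_storeu_ps(buf + i, _mm_mul_ps(v, g));
        }
        for (; i < n; ++i)   // scalar tail
            buf[i] *= gain;
    }

As you note, the per-intrinsic mapping isn't always a perfect fit, so it's worth profiling the hot paths rather than assuming parity.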


Thanks! Interesting. I'll look it up. One thing I know one of our devs couldn't find an equivalent for is ippsPowx_32fc_A11.


> With Big Sur, Apple seems to have broken SHA-1 signatures, making builds from Big Sur work reliably only on 10.11 or newer (the first release to support SHA-256 code signatures).

Affinity Photo is an Arm/Intel universal binary that’s also compatible with OS X 10.9. Maybe look into what they’re doing?


I guess they're building on Catalina, as we expect to do until Apple, we, or some other devs find a way to use productsign/codesign with backward compatibility on Big Sur.


I'm not saying I expect it to be done now, I'm saying I expect a good subset of audio companies to still be dragging their feet a year from now, judging by history. Yours may not be one of those :-)


You can produce two signed binaries, one for <10.11 that is signed with SHA-1 and one for newer systems that is signed with SHA-256. Your update framework would have to know which one to pull, but that seems surmountable.


I believe Xcode 12 can only target back to 10.9. (Xcode throws an error when you try to go older, I think the command-line tools can still go back farther though)


1. Also, 10.9 needs SHA-1.

2. We've been able to target down to 10.7 and didn't see any issues. The issue is productsign/codesign linking Security.framework, which I assume changed.


> if any of your plug-ins are not ARM compatible, you have to run the whole plug-in host and all plug-ins under Rosetta, with the performance drop and stability compromises that that entails

Is this your personal experience or are you speculating? I haven’t noticed performance drops or stability issues under Rosetta so far. A truly impressive feat of software engineering.


M1 emulating x86_64 is typically faster than any Intel Mac below a Mac Pro, so in the context of a Mac Mini it’s a safe bet on emulation from a performance standpoint.


I believe they mean performance drop versus native.

Let's say you have a DAW with ARM and x86 binaries but a VST/AU/RTAS that's x86 only. You need to run the DAW as x86 under Rosetta, which will result in reduced performance versus a native binary, assuming the native processor was capable of equal performance.

Given how zippy the M1 is, that performance penalty may well still put you ahead of where your x86 perf was, but still slower than native.


That’s what I’m saying. There is no performance drop compared to x86 macs, the M1 is _that_ fast.


Well yes, x86 macs in those classes were using underpowered Intel CPUs :-). That said, I've yet to see any serious benchmarks of this kind of workload. We know the M1 is fast for single threaded UI code (fast as heck atomics sure help with ObjC reference counting), but that says nothing about SIMD and floating-point heavy audio processing. Rosetta adds overhead to that, potentially a lot vs native ARM code, depending on the code. There is no way to make a blanket "the M1 is so fast it overtakes Intel even under emulation" claim.

I'd personally be more concerned about stability and drop-out issues. Apple sells Rosetta as an "ahead-of-time" translator, but that is necessarily best effort (because complete static translation is not a solvable problem, e.g. self-modifying code). Thus, there is always the possibility that the JIT gets invoked in the middle of the audio processing thread, and that won't end well for real-time guarantees. There are also programs that just don't work under Rosetta properly (for unclear reasons).


See https://lemire.me/blog/2020/12/13/arm-macbook-vs-intel-macbo... for SIMD performance. M1 is fairly competitive even under Rosetta. Also, if your audio code is self-modifying, I really have no idea what you could possibly be doing.


Self-modifying code is just one example. The point is there is no way to positively identify all basic block entry points on x86 (especially due to its variable-length instructions). That means that there is always the possibility that the AoT didn't catch everything, and then your plug-in takes a novel uncached codepath at some point, the JIT fires, and you get a real-time constraint violation and a drop out.

Also, audio companies sometimes like to use "fun" DRM/copy protection systems.

Besides, audio apps themselves doing JIT would not be unheard of. For example, that'd be a very efficient way of implementing a modular synth. Those apps could JIT in a realtime-safe way; Rosetta can't (it doesn't have enough info).


These kinds of questions are why the post you're replying to asked if it was personal experience or speculation.

You responded with some technically-detailed speculation, which, fair enough. I guess we'll have to wait and see.


To clarify, when I say native, I mean both arm and x86.

Switching to x86 emulation for compatibility will be a performance hit versus running native ARM.


Well if it’s just as fast, what is there to be excited about?


Insert "not sure if trolling" GIF here, but you get that what the OP is saying is that when M1 Macs run x86 code in emulation they're often running it as fast as comparable x86 processors run it natively, right?


Right, but if the conversation is about how this is going to change Broadway or anything, running just as fast means nothing changes (the case if you have to emulate)


The conversation started by the linked article is clearly suggesting it can change Broadway when software is recompiled to run natively with Apple Silicon, though, and this is a perfectly reasonable take. An M1-based Mac mini that's actually less expensive than the Intel one it directly replaced can be a lot faster.

The arguments in the thread about "but a lot of software right now runs in emulation" and "who knows when the software companies will get around to transitioning" aren't wrong, per se, but they're also arguments that long-time Apple users have heard variants of in the PowerPC to x86 transition. And in the (original) Mac OS to OS X transition. And in the 68K to PowerPC transition. It turns out that when transitioning your software to the new platform doubles your performance and/or is necessary to keep selling your product, you have a fairly strong motivation to do the work.


20 hours of battery life...and no fan noise.


It would be perfectly possible to run the VST/AudioUnit in a separate x86 process and communicate with the host using shared memory. Thankfully the endianness is the same so data structures are compatible.

Doesn't Logic Pro already support this?

Admittedly it does mean the plugin runs in a separate process&thread, requiring IPC to render each chunk of audio through that plugin.
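At its simplest the shared block could look something like the sketch below (names and layout are made up for illustration; it assumes POSIX shared memory and ignores the hard parts such as cross-process signalling, parameter changes and UI embedding):

    // Hypothetical block that both the ARM host and the x86 bridge mmap.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <cstdint>

    constexpr int kMaxFrames = 1024;

    struct PlugShm {
        uint32_t frames;              // frames to process this cycle
        uint32_t sample_rate;
        float    in [2][kMaxFrames];  // stereo input written by the host
        float    out[2][kMaxFrames];  // stereo output written by the bridge
        // a real bridge also needs a semaphore/futex pair here to signal
        // "buffer ready" / "processing done" between the two processes
    };

    // Both sides map the same named segment; identical endianness and float
    // layout on x86_64 and arm64 are what make this sharing workable.
    inline PlugShm *map_plug_shm(const char *name, bool create) {
        int fd = shm_open(name, create ? (O_CREAT | O_RDWR) : O_RDWR, 0600);
        if (fd < 0) return nullptr;
        if (create && ftruncate(fd, sizeof(PlugShm)) != 0) return nullptr;
        void *p = mmap(nullptr, sizeof(PlugShm),
                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        return p == MAP_FAILED ? nullptr : static_cast<PlugShm *>(p);
    }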


> Doesn't Logic Pro already support this?

It does. Apple provides all AU hosts with the ability to run Rosetta 2 AUs inside native DAWs.


That's because AUv3 is an out of process plug-in architecture, with the performance overhead that that entails.

See here for why that doesn't scale to serious productions with high track count:

https://ardour.org/plugins-in-process.html


> Doesn't Logic Pro already support this?

It didn't support it during the 32/64 bit transition, don't know about Intel to ARM. Of course even if Logic does it, there's the other DAWs.


The endianness being the same does not mean data structures are compatible. There are many, many more details about data structure layout that vary between architectures (and even compilers of the same architecture, especially once you delve outside of C).

VST2 is probably doable - what I use to run Windows VST2s on Linux is something similar with an out of process wrapper - but I can see VST3 with its C++ API being a major, major pain in the ass. And even then there are downsides. Running plug-ins out of process has significant context switching overhead. It's fine for one or two or half a dozen, but compare running 50 plug-ins in process and out of process and you'll notice a massive performance difference. You can fight that with larger buffer sizes, but that adds latency.
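For a sense of what "larger buffer sizes" costs, the arithmetic is just frames divided by sample rate (illustrative numbers, not measurements):

    // latency contributed by one buffer = frames / sample_rate
    //    64 frames @ 48 kHz  ->   64 / 48000 s  ~=  1.3 ms
    //   256 frames @ 48 kHz  ->  256 / 48000 s  ~=  5.3 ms
    //  1024 frames @ 48 kHz  -> 1024 / 48000 s  ~= 21.3 ms
    constexpr double buffer_latency_ms(int frames, int sample_rate) {
        return 1000.0 * frames / sample_rate;
    }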


Have you looked at the continuous testing, e.g. [0] for AMD and [1] for Intel? They consistently deliver latencies below 300 us, and for the most part, even lower. Intel largely below 50 us, AMD more around 170 us.

Remember, DAW deployments are not actual hard realtime. If you drop one chunk of samples every night, no one cares.

[0]: https://www.osadl.org/Latency-plot-of-system-in-rack-c-slot....

[1]: https://www.osadl.org/Latency-plot-of-system-in-rack-0-slot....


Yeah, it'll be a while before VSTs get caught up. I did some testing with MainStage concerts using Apple's built-in stuff and my wife's M1 MacBook Air performed just as well if not better than my $4000 MacBook Pro. This is why I'm so excited for Apple Silicon computers for this sort of work.


> ...you have to run […] under Rosetta, with the performance drop and stability compromises that that entails.

Have there been stability issues with Rosetta? I haven’t seen reports but I’m not switching right away so may not just be looking closely enough.


Couldn't you run a VST host in the Rosetta environment for your x86 VSTs and run everything else native in ARM in a separate host? Or does the VST model require that all the VSTs loaded run synchronously within a single process?


It’s within one process.

There are ways to bridge it, but basically you have a native plugin shim that talks to a process running the target arch and essentially passes protocol messages back and forth. This was common for a while on Windows, when many plugins were 32-bit only and wouldn't run in a 64-bit host.


It's possible, but it would have to be a built-in feature of the DAW to be convenient, and could seriously compromise performance even if optimized.

Let's assume an ideal implementation where there is only one process per architecture. Take a 100 track project with two plug-ins per track (say, compressor and EQ). If the compressor is x86 and the EQ is ARM, that's 200 context switches per audio period. That makes it impossible to run at small buffer sizes. A very smart flowgraph implementation could attempt to batch everything into the minimum number of context switches, but that has other problems with threading. It's very hard to get this perfect.
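To put rough numbers on it (the per-switch cost is an assumed order of magnitude, not a measurement): at 48 kHz with a 64-frame buffer, one audio period is 64 / 48000 s, about 1.33 ms, and 200 context switches at ~5 us each already burn about 1 ms of that before any DSP has run. That is why small buffer sizes become impossible in this scheme.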

This solution works fine for when you just have a few straggler plug-ins of the wrong architecture, but it would still require quite complex engineering by the DAW to make it work transparently to the user. Consider, for example, that the plug-in UI needs to embed in the DAW across a process boundary.


It doesn't require everything to be in the same process; it's just how DAWs are implemented today.

There are alternatives, like the bridges mentioned by the sibling post, but it's clunky and manual to set up and slows down your workflow.


It's a bit more than "just how DAWs are implemented today"

https://ardour.org/plugins-in-process.html


So I've been out of the DAW game for a number of years now and would like to get back into it. Don't own Pro Tools or any plugins anymore. I was about to buy the M1 Pro. Should I not do that?


I own an M1, and the Arm version of Logic Pro X can load most x86 AU plugins without problems. A recompile is a bit more energy-friendly but not strictly necessary.


Unfortunately it's unlikely Linux will ever run on the new M1 Macs: "Linus Torvalds doubts Linux will get ported to Apple M1 hardware" https://news.ycombinator.com/item?id=25238175


> They're going to have to wait until the synth industry gets their act together and ports to Apple Silicon though.

Why is it their responsibility? Apple has literally billions in the bank, they should be running an incentive scheme if they want more native ports.

This reminds me of people complaining that Docker doesn't run on Apple ARM yet. Why should it? Apple are the ones that changed the architecture.


It's naive to think that Apple needs Docker more than Docker needs Apple.

If Docker didn't support Apple devices going forward, they would lose a significant amount of users.

If Apple doesn't support docker going forward, they lose a tiny percentage point.

That being said, it's mutually beneficial, so Apple should be putting their best effort into making the transition smooth (which I think, at least from an outsider's perspective, they are).


Apple should care more about this - web developers are a huge part of their brand


... And yet Apple refused Flash support on iOS. It didn't hurt Apple, and only strengthened web development by forcing new solutions to appear. Flash, once a big chunk of the web, is now little more than a footnote.


Apple killed Flash. Whether they will be able to kill x86 remains to be seen. I don't think web users won out because Flash is dead, though. Web games never recovered. Video making switched from animation to people filming themselves talking to the camera. What will we lose with x86, I wonder?


> Apple killed flash.

Apple didn’t kill Flash. Adobe did.

By failing to deliver a performant and secure version of Flash (under any architecture, but especially mobile), Adobe ensured Flash would not be viable for the web as it evolved.


Security was indeed an issue, but I feel flash’s performance reputation was largely undeserved. Yes, it was cpu-hungry, too intense to run on the mobile hardware of the day, but that’s because it did things html5 couldn’t. You could do smooth animations and complex games in flash easily that were hard to impossible to do with regular web tech (pre-webgl). When flash died, web animation and web gaming mostly died along with it. Some of that is the engine itself, and some is the flash authoring tools, which don’t have an open web equivalent. Maybe that sort of power never belonged in a browser, but I still feel like we lost something that didn’t fully get replaced.


I still largely blame Adobe here -- they were the ones who should have been responsible for fixing Flash's performance. The complaints about it being a CPU hog weren't limited to mobile; even in the early 2000s, Mac users, at least, generally hated being forced to deal with Flash-based UIs. Adobe never seemed real concerned with optimization, I suspect precisely because of the advantages you listed, e.g., things that were hard-to-impossible to do without Flash.

At this point, at least, I'm pretty sure WebGL can do most or all of what you could do in Flash. It's possible that the tools have never caught up, but it also seems possible to me that the space that used to be occupied by Flash games has largely been filled, ironically, by mobile gaming.


They were poor stewards of the platform, to be sure. The security situation was very bad, and they did little to discourage people from using Flash for all of the wrong reasons. But you could just as easily turn around and blame Apple for the depressing state of mobile gaming.


Flash has an HTML5 equivalent. Alongside the Animate rebrand (years ago) they released HTML5 tooling that lets you export animations for the web.


> Video making switched from animation to people filming themselves talking to the camera.

Youtube was completely built around Flash; the FLV format hit a sweet spot between "not looking like trash" and "not needing more bandwidth than the average user had" that nothing else had at the time. I'm pretty sure we would be seeing a shit-ton of people filming themselves talking to the camera even if Flash was still a going concern - animation is a ton of work.


> Youtube was completely built around Flash

Youtube exists because of Flash. They were at the right time and place to have a Flash object sitting on a webpage that could play video in a reasonably cross-platform way.


> Video making switched from animation to people

Not sure this is entirely due to the flash web player being phased out. Kids just watched their friends play from the couch, or watched more generic broadcast television. Now they still watch and play together but remotely.

Edit: 'due' not 'sure', 'play' not 'pay'


> web developers are a huge part of their brand

Are they? Maybe in HN's niche community—and it is niche—but I suspect the vast majority of Apple's customers don't remotely associate web developers with the brand.


I think it would be accurate to say web developers are a part of their market more than a part of their brand. I don't know if I'd say "huge," but I suspect a financially significant chunk of MacBook Pro sales, specifically, go to design and engineering groups in tech companies.


Using Apple computers is a part of some web developers' brands. Web developers are at best a tiny niche aspect of Apple's brand.


You don’t need docker to develop web apps.


There might not be an absolute need but the advantages of Docker are enough that, when removed, the pain is real.


> Apple should care more about this - web developers are a huge part of their brand

They demoed Docker on stage at WWDC. They've donated equipment to Docker. Apple built a significantly better virtualization framework for Big Sur/ Apple Silicon.

What makes you think they don't care about it?

This transition is in baby steps right now. It takes time to migrate software.


I dream that at some point, everyone will get their heads out of their asses and abandon this abusive relationship.


The abusive relationship trickles down no matter which platform you choose. Even on Linux you don't have full control of all the driver binaries, or of the silicon of everything running on your rig. Even if you were to go RISC-V, chances are you are going to have to rely on some binary blob for things like your WiFi card.

And even if we overcome all those hurdles, we still have to convince the average Joe to adopt them too, otherwise it will just live in a small, costly niche that the wider community doesn't bother supporting.

All we can do is try and pick the least abusive relationship we can afford and is practical to run.

My point is, I don't hold it against people for using platforms others may find abusive. Sure, those of us on HN are more likely to be in a position where we can switch platforms for fun just to see how they work out. But most people simply don't have the time or patience to swap and learn a new platform (let alone the cash).

Maybe this is a bad example because of what exactly happened with one manufacturer, but it's why airlines like plane manufacturers not to go too far with design changes: it requires them to retrain their pilots on the new aircraft.

If every time you purchased a new car you had to take into consideration that the control system was different between manufacturers, which might even require you to retrain in order to drive it, you might very well consider sticking with what you already know.

So I don't hold it against anyone who simply wants to keep with what they know. They just want to fire up the platform and get to using it.

Anyways sorry for the rant, just my thoughts on the matter.


Agreed. We should all standardize on the most open systems that permit the most users to use the machines as we see fit. For example, on an open system, you can make special systems for Mac users. Or for Windows users. Or LSARS users.

That way we don't duplicate effort for an ever-closing ecosystem dictated by one greedy body.


How about the malware aspect? The most open system could be the most vulnerable as well.


It's a tool not a religion.


One is a toy, the other is a tool.


Docker does run on Apple silicon already by the way :)


This whole line of argument is just weird.

> Why is it their responsibility? Apple has literally billions in the bank, they should be running an incentive scheme if they want more native ports.

Because they want to sell software and/ or hardware.

> This reminds me of people complaining that Docker doesn't run on Apple ARM yet. Why should it? Apple are the ones that changed the architecture.

Docker is getting migrated for the same reason it was originally ported to MacOS. Because the developers either run Macs or work for companies that support people who run Macs. It's the same reason any software is getting ported.


I'm puzzled why there is such a strong dependency on Mac? I mean, the M1 looks pretty nice; but if the author is budget constrained AND the hardware is too slow, wouldn't "normal" computers with Windows/Linux make much more sense anyway?

I fondly recall how a good friend of mine was impressed by the compute performance of the Apple based workstations at her university. I convinced her to get a PC for the same cost as the Mac, and then she complained how slow the university Macs suddenly felt. (That was ca 2014).


Imagine Windows deciding to do a forced auto-update (or hard disk check) ten minutes before a performance. Or breaking down because the system doesn't work well after the update.

I set up a VR performance in a gallery once. It was supposed to run the same piece of a basic program for 3 months.

Since it was Windows we had to do additional steps to make sure that it wouldn’t try to update* and possibly break drivers. Because there wasn’t IT staff around to fix things if it broke down.

I never had such issues with MacOS.

* - additional steps like making sure the built-in wireless is disabled and it doesn’t remember any wifi passwords, so it doesn’t try to fetch any sort of update. Because even if you disable Windows updates, you have a bunch of other drivers that may ignore those settings. And then, if we needed to put anything new on the computer, we had to either use usb-sticks or break the airgap and redo all the testing - so it had to be timed so that we would have at least half a day to fix it if broken.


I'm watching MacOS gradually become this, and I'm not pleased about it. I'm pretty sure I understand why it's happening: Apple is managing its full ecosystem for its own benefit.

My concern is that as a fallible lil' human, and moreover one with autism, I find it probably more upsetting than most to have stuff randomly break for no reason. I depend on continuity and repetitiveness to be well. As such I try to operate on computer systems that don't change out from under me at someone else's whim, because I can be thrown into the inability to function, by something at an unexpected level of abstraction blowing up.

This most recently happened under OSX Mojave when my (non-Apple) apps could no longer check with the authentication server and would not launch. I lost a day to trying to repair my system: Apple had never told me 'by the way, everything you run now has to talk to a server of ours or it'll refuse to launch'. I disabled the functionality, but I can't have things like that going on. It makes Apple the cyber-terrorist they are trying to 'protect' us from.

Again, I understand their motivation as they are a titanic collective entity trying to administrate and tend another titanic collective entity, their userbase, which they feel is a part of themselves.

However, as a lil' human type, I am too deeply committed to maintaining usefulness for lots of older computers owned by other lil' human types, many of whom can't consistently throw thousands of dollars at Apple to stay in the ecosystem as Apple understands it. I find Apple's actions morally reprehensible (granted, among the least reprehensible actors in the world of computers and internet, but still).


> I set up a VR performance in a gallery once. It was supposed to run the same piece of a basic program for 3 months.

Windows 10 LTSC is a good fit for this type of scenario.

https://techcommunity.microsoft.com/t5/windows-it-pro-blog/l...


You ever work with Joanna Klass? Also... the Posterous link in your profile is of course dead


Nah, the VR work was with Norbert Delman: http://norbertdelman.com/2017/1951

Posterous - oh no! ;) Perhaps I will change the link to the webarchive page of it :)


Turning off Windows Updates is 10 minutes of work. And I am no Windows expert; I just got to know this via a YouTube video. I am not sure how you couldn't disable updates.

I am not a professional, so I might not know audio stuff. But manual Windows Updates are easy.


NVidia has its own auto-update system, and so do some other programs.


If you don't install the "GeForce Experience", which given your use case, you obviously would not, the graphics driver does not auto update and has no way to do so.


Software we use is only available for macOS. Also, Mac mini availability across the world is important. Lastly Mac mini form factor is perfect for the use case.


I suspected so. Excuse my rant, and it's not directed at you, but at the software companies only supporting macOS: I never understood that crap. Only offering macOS forces their customers (e.g. you) into the Apple lock-in, preventing them from choosing the best hardware for the task.

Directed at you: I would try to avoid such software, and look for alternatives. (I expect this to be futile?).

x86 is available across the world as well. Others already pointed out comparable form factors.

Also, computer hardware reliability is pretty good these days. Maybe you could set up a scheme to reuse your tech stack?

On the extreme end, server hardware can run for years with 0 hardware-related downtime (and offers nice things like redundant power supplies, 19" rack cases [but much deeper than audio stuff?]). And brutal performance: Even my 450 Euro used&modified (new nvme disk, faster CPUs), 5y old, 1u(!) Intel dual socket system can mop the floor with most desktops below a Ryzen 3900 (at least on my compile workloads, and on anything that swaps on less than 128GB RAM in general).


If you're interested, Intel NUC mini PCs are available all around the world in the same form factor.

They don't run macOS, so won't run your software though.


How does their performance compare to a m1 Mac mini?


Appallingly


Citation needed. There are Ryzen 5 mini PCs with active cooling [1]. You don't need to use an Intel chip manufactured on an aging 14 nanometer node.

[1] https://www.youtube.com/watch?v=fLATODi7KlU


> There are Ryzen 5 mini PCs

So not an intel nuc then, which is what I was specifically referring to.


MacOS has all sorts of serious MIDI capability built into the default install that you need a bunch of different plugins on Windows to get.

Also, while a lot of people use Pro Tools for recording/mastering and Ableton (both cross-platform) for live stuff, lots of shows use MainStage, which is Mac only. Logic is really good for recording and sequencing too, and is also Mac only.


I bet macOS audio APIs are way better than the ones on Windows. I don't have experience with them myself, but I have been burned by the Windows ones.

I have a small story to tell about this. Windows WASAPI has a way to tell you when, supposedly, the endpoint device started recording a buffer. This is important if you want sync with sub-millisecond precision.

Well, it doesn't quite work like that in reality. I noticed the hard way that if you get a 2 ms long packet, query the high-performance timestamp, then query the packet's recording time, then query the timestamp again, the packet was supposedly recorded after the first timestamp but before the second. And since the packet was 2 ms long and the first timestamp was way, way less than 2 ms before the second query, it was impossible for the capture to have started at that point.

So instead of returning the start of the recording as the docs state, WASAPI just returned the current high-precision timestamp. It's basically like asking someone, "Hey, this 10-hour video you gave me, when did you actually start recording it?" "Now. I started the recording right at this moment."
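For the curious, the sanity check described above looks roughly like this against the public WASAPI capture API (a sketch from memory, untested, error handling omitted; note the reported QPC position is in 100 ns units, so the raw counter reads have to be converted before comparing):

    // Sandwich the reported capture timestamp between two QPC reads.
    #include <audioclient.h>
    #include <windows.h>

    void check_capture_timestamp(IAudioCaptureClient *cap) {
        LARGE_INTEGER freq, before, after;
        QueryPerformanceFrequency(&freq);

        BYTE *data; UINT32 frames; DWORD flags;
        UINT64 devPos, qpcPos;   // qpcPos: reported capture time, 100 ns units

        QueryPerformanceCounter(&before);
        cap->GetBuffer(&data, &frames, &flags, &devPos, &qpcPos);
        QueryPerformanceCounter(&after);
        cap->ReleaseBuffer(frames);

        // Convert raw QPC ticks to 100 ns units for comparison.
        auto to100ns = [&](LARGE_INTEGER t) {
            return (UINT64)(t.QuadPart * 10000000.0 / freq.QuadPart);
        };

        // For a ~2 ms packet the reported start should lie well before
        // `before`. Seeing before <= qpcPos <= after means the API handed
        // back "now" rather than when the capture actually started.
        bool bogus = qpcPos >= to100ns(before) && qpcPos <= to100ns(after);
        (void)bogus;
    }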

This story has a happy ending though. I grabbed PortAudio, hacked its Windows low-level implementation to return the actual timestamp based on the sound card register that points to the FIFO buffer, and managed to get sub-ms precision.

So if macOS audio APIs are as good compared to Windows (which is a dumpster fire unless one uses custom manufacturer-dependent stuff) as iOS APIs are compared to Android (where only Pixel phones are not dumpster fires), I can easily see why Macs are the devices of choice.


Nothing about Windows audio APIs actually stops them from being good enough to do a production on.

Obviously it's not Broadway but I ran the sound for school productions for years, and we were able to quite happily do (increasingly DSP based as the years went on) live audio I/O on literally the worst windows machines you could imagine.

The whole idea that the OS really makes any difference - especially for being "creative" - is just a placebo.


On a conceptual level they seem all right. The downside is that they don't actually work as documented.

If Mac APIs actually work as advertised, then the difference is that Windows programs either work weirdly (bad sync etc.) or have had to spend tons of time on custom code to work around the badness of their "professional" audio API, costing way more.

And another example is Android. It's definitely not placebo that it tends to have horrendous latency - originally due to design, and nowadays due to bad vendor implementations, with only Pixel phones reaching iPhone levels of latency. We're talking about latency of only a few ms on iPhones versus tens of ms or even more on the average Android phone. That does actually matter in music production.


Mac APIs have indeed worked as advertised for many years.

My own concern is watching Apple pursue a path of driving upgrade purchases through (A) increased performance and (B) breaking older systems or (C) disqualifying them from use of current software.

C is easy and practiced intensely by Apple, which abandons support for older stuff VERY REGULARLY in Xcode. This may or may not be better than allowing it to rot and become deeply broken through lack of maintenance, but it's a choice, and Apple repeatedly chooses to throw away even the possibility of supporting older machines.

B happens through rot: things get complicated, and if they don't care what happens to you AND they are changing your machine out from under you, they can just randomly brick it one day and not be a bit sad about it. It was your fault for not buying newer things, regularly.

A is also something Apple's been capable of. You buy into that and if you stay on the bleeding edge, Apple's become pretty good at keeping you riding that wave of the best computers can do, at any given moment. This also (to some extent) helps the older stuff become more affordable as it's left behind: that's positive in its way. 'Bleeding edge' is not the only kind of functionality to have. I've noticed that for music production testing of Apple Silicon, all the test cases are completely unrealistic: 4000 tracks each of which has 10 Space Designers, etc etc. That means the use case is a solved problem: you don't need the new Mac to do it, at any reasonable level. 8K feature film video on the desktop, yeah you can still need bleeding edge for that. Music, no, not at all.

This is also why it's important that Apple not break its own APIs or let them rot. On the whole, the functionality just works, every time, no matter what. This is a serious thing to risk by allowing the platform to become less reliable due to needing to 'churn' it and sell new generations of machines.


How much does it cost to build a Intel/AMD computer with performance comparable to a m1 Mac mini?


The author talks about his past experience, so did I. Back in 2014, the 1500 Euro Mac was beaten in image-manipulation workloads (Adobe stuff) by a PC in the same price range.

Today, for a proper answer I'd have to check specific benchmarks and compare prices. If noise is no problem (e.g. a separate tech room backstage, where the power amps live), used server hardware could be a reliable and performant option on the cheap.


Windows is a complete mess these days from a usability perspective and even Microsoft doesn't seem to care much about it. As for Linux, rock stable once you get it running (at least the OS itself) but lack of software support and other quirks makes it less desirable on the desktop.


Rock stable?


Solid as a rock?


If performance is really a problem, there were faster Intel Macs than a mini, as well.


True, but form factor plays into this a lot. You can fit two minis on a single 19” rack shelf, whereas trying to use an iMac or a laptop isn’t as convenient.

If they made a smaller and much cheaper entry-level Mac Pro that had a rack mount kit to fit in 2U or something, that would be amazing for production.


The current Mac Pro is available in a 4U rack form factor for US$500 more.

Click the “Buy” link: https://www.apple.com/mac-pro/


Just two of those blow through the $10k budget for two complete redundant sound systems given as an example in the article.


That is true. A Mac Mini is much more cost effective (for processing power per volume). But, if you’re getting a Mac Pro, it’s because you either (1) need a Mac Pro, or (2) have enough money to blow that much on the fancy new stuff.


None in the form factor/price range we need. Mac mini is unique.


Entertainers are not Engineers.

They fall for marketing significantly easier and use Veblen goods to signal wealth and power.

Then they get used to a system and it's all they know. It's more effort to change and they get locked in.


Out of the FAANG companies I can’t speak for Facebook and Netflix, but I can assure you that the engineers at Apple, Amazon, and Google have a broad preference for macOS and Apple laptops. Since the pandemic even the engineers using desktop Linux have mostly transitioned to Apple laptop + remote Linux server.


Software engineers are not Engineers if we use the strict definition of Applied scientists.

Software engineers are forced to use tradition and Authority due to abstraction.

I say this as a programmer.


Applied scientist is not and has never been the definition of an engineer. Dating back to the term’s earliest usage, predating both modern English and the scientific method, it has meant someone who designs and maintains complex machines.


Don't be ridiculous. The designers at the highest level of entertainment are extremely technical.


Programming has been my "day job" for a long time, but writing music has been my real passion.

It's absolutely standard for electronic musicians to bounce tracks with computationally expensive plugins to static audio tracks while working. Otherwise, there is usually not enough CPU available to perform all the DSP for a complex song. (Additionally, bouncing tracks to audio gives more control over fades, cross-fades, tails, etc.)

This blog post confuses me: is he playing the VSTs live? Why not trigger bounced audio instead?


Author here. Most Broadway shows have live keyboard players. There are a variety of things that can go wrong during a live show, and a live band is extremely useful in those situations. Some shows do have tracks for certain sections of the show, but most of it is played live with VSTs.


Broadway performances aren’t always trigger-able.

They may not always be performed exactly the same way ... with the exact same timing or at the same tempo or the same pauses, etc.


Yeah, when I started out with the first Intel Core Mac Mini, I bounced down most of my effects-heavy tracks, though I did this pretty begrudgingly. Fortunately Logic has a great feature where you can "Freeze" a track to a "bounced"/rendered audio file -- if you want to make edits to the effects, you can un-Freeze, make your changes and re-Freeze. Only downside is, this takes up precious time when you're trying to be creative/productive. :)

These days everyone expects to be able to just layer on 8 effects on every track but.. that's a pretty recent luxury. Back when computers weren't really powerful enough out of the box to do stuff like that you could get dedicated DSP cards like Universal Audio's "UAD-1" PCI card, which let you use really high quality effects on your tracks without overloading your CPU. Now they have newer Thunderbolt/USB-based outboard stuff to do the same thing.

Regardless, most people in audio know you never buy the brand new hardware (especially when it's on a new processor architecture) and expect to be able to do everything as effectively before. It takes a while for all the third-party software vendors to update their stuff and make it run smoothly, assuming the software team is still in business or still working on that product. There's a ton of great stuff we'll never get new updates of. This is why you'll sometimes find pretty old computers in musicians' homes... There's still one cool granular synthesis app I can think of for Mac OS 8/9 (th0nk) which was never updated for anything newer, for example.


I remember seeing ProTools and other audio workstations around my city still running on old 68K Macs for years after the PPC transition because they worked, had crazy-expensive licenses, and were treated more like racked equipment than traditional computers.


Absolutely. That is more the rule than the exception for professional recording studios. Still happening, too. If you bought into many thousands of dollars of gear for Thunderbolt, for instance, or FireWire, the computer is only one small piece of that system, and becomes part of the racked equipment.


Yeah, audio/musician guys I know were using PowerMac G4 towers years into Apple's "Intel" transition. And yeah, "upgrading" means basically tossing away a perfectly good $1k piece of software and paying nearly as much for an upgrade license.


> There's still one cool granular synthesis app I can think of for Mac OS 8/9 (th0nk) which was never updated for anything newer, for example.

If they no longer plan to sell or update the software, I wonder what's to stop them from open sourcing it if asked.


Oh, they actually allow you to download it for free now, IIRC. Maybe they just figure it's so obsolete, no one would bother to port it to a newer system? To be fair you can get the same effect with newer granular synthesis software manually, th0nk was just cool in how you never knew what you're going to get out of it, leaving the result to chance.

(direct download link http://www.audioease.com/download/thOnk_0+2.sit.hqx )


> Regardless, most people in audio know you never buy the brand new hardware (especially when it's on a new processor architecture) and expect to be able to do everything as effectively before.

Worse, you also can't know in advance IF your existing hardware will be supported on a new system. That has bitten me once with an E-MU 0404 USB audio interface. The fine folks at Creative never bothered to release a production driver for Vista, let alone Windows 7 or later. Needless to say, I don't buy Creative products after that.

On the Mac side, I could only imagine them getting advance news of the M1 and going ¯\_(ツ)_/¯.


Musicians[0] (electronic and otherwise) do absolutely perform live with real-time AU/VST synths and effects.

There are some constraints on how much complexity a single machine can handle, but MIDI sync, Ableton Link etc. allow you to combine multiple computers and hardware synths in one setup.

Anecdotally, I don’t perform live but I write music as a hobby and I rarely if ever bounce to audio to free up RAM or CPU. I do bounce for artistic effect though (such as reversing a drum loop or shifting it up/down an octave).

[0] Like Caribou: https://m.youtube.com/watch?v=8s7Z3vMUGrs


It's a little weird, especially since the musicians aren't visible to the audience - why would you play live?


I played in pit orchestras in high school and college, and it's surprising how much can change from show to show. Lots of songs have vamps (sets of four or eight bars you repeat over and over until something happens and it's time to go to the next bit), or holds, or other various other things where the conductor is paying attention to what's happening on stage to cue the musicians. You could probably rig something up in QLab or whatnot to maybe emulate that effect digitally, but it's never going to be "just hit play on the canned track."


People go to Broadway shows to see live performances, and music is a big part of it (in some shows, the biggest).

You may say the audience won't know the difference. But word will get out. Audiences may not be willing to pay premium prices for prerecorded content.


Actually, what is the reason for that? If we had computer-linked humans act a scene, then record all of their interactions, and then have them perform that ten nights in a row as live marionettes replaying their original actions from the computer, is that worse or better?

Does the potential for failure and variation attract the audience? Or just the fact that the medium is much higher fidelity: stage actors are human, three dimensional, and no other medium can record all the subtleties of motion and expression just as well.


It's not the fidelity, it's the human factor.

A concert, a play, or any other live performance is much more than a single-night, 2-hour event. It's countless hours of practice, errors, and emotional ups & downs along the journey.

The performance one watches is the tip of the iceberg, and the underlying depth is what amazes us as humans: the unmistakable presence of real-time, invisible improvisation during the show.

I used to play in a symphony orchestra. Even the slowest of classical pieces is a storm for the orchestra, if not for the spectators.


Because then the people on stage can't react to the audience reacting. That's part of the live experience. Every live show is a bit different because every audience is different.


The reason for that is that most humans like other humans.


Variation is absolutely a huge part of the draw. Broadway fans absolutely see the same show multiple times and compare the different performances/actors.


I agree. I've only seen a couple of broadway shows, and simply seeing there were actual people playing music in the pit enhanced the whole experience (occasionally music was played on stage too in the shows I've seen).

On a side note, these broadway performers and musicians were awesome and incredibly professional and talented. I didn't know what to expect (as a kid, I found these musicals so boring) but it was great.


If it's for theatre performance, I'm guessing subtle timing is important.


Yes, this is a major reason.


Because it's a slippery slope. Before you know it, you'll be listening to material that can't be produced live.


Same reason that people hire a mediocre DJ at a wedding instead of just having a cousin operate an iPod playlist?


And what's that?

Ease and quality of improvisation? Does the musician here improvise? Slight variation in each performance?


Compare Jimi Hendrix's studio recordings, with that of his live performances.

The main song in Phantom of The Opera, in the West End, is/was done to a click track - and the performance is/was noticeably flat because of it, in my opinion.


I'd say it's the human aspect. At a wedding or on Broadway, you go to see people live.

Otherwise, why not just fire up YouTube or Spotify?


Control.


All of the other answers have good points but are missing the one that matters: the musician’s union requires it. Every Broadway theatre has a minimum number of musicians (depending on the space) if a musical is being performed.


This is a very badly written article and I'm not sure why it has so many comments and points.

There is no real content. No real impact is mentioned.

It says they used Mac Minis which they had to reduce the sound quality to use. OK.

"Apple Silicon changes everything for Broadway electronic music designers. The new M1 Mac mini is capable of running high-end sample libraries and virtual instruments in a stable manner".

That's an output. What is the impact then?


It's not an article. It's a personal blog post. The author is conveying personal experience and thinking. It is perfectly well written for that.

I found it interesting and informative about a part of the world I don't have experience in. If you don't like it, that's fine. Not everything has to be for you.


Thank you, I'm not sure why some people find the need to be hostile. It was basically some personal thoughts that I decided to share online. Didn't feel like writing 3,000 words on it and definitely never expected to see it shared on HN. Anyway, the post is completely accurate (whether people believe it or not) as I work in the industry.


Agreed. I outlined my thoughts in a more detailed post, but I don't see how the M1 changes the game for anyone using orchestral libraries. SSDs, disk streaming of large sample sets, and multi-hundred-note polyphony have been a thing for 20 years.


Designing music systems for Broadway isn't only about playing back orchestral libraries. We have to factor in acceptable latency for the player, as well as balance DSP effects as needed for each show. I promise you have no idea what goes into it unless you happen to be one of the five or so people that do this sort of stuff in NYC.


Totally get that, and I don't claim to know. However, the article didn't help me learn anything about it either, which was part of my point. The title was the impact of Apple Silicon on Broadway and I'm still speculating...

The only mention was big sample libraries, which are not generally CPU-bound. If they had mentioned 300-note polyphony, or analog or physically modeled synths, or even some of the software used, I'd be happy to learn. The point is that there is no detail here. I have run DAWs and pro audio for over 20 years and I want to know details.


Social media sites with a relevant customer base are sometimes targeted by PR teams.


Wow so it turns out that Broadway (!) had really bad digital sound until Apple came out with its revolutionary new design. When you go to a show next year, you'll be able to hear the quality of the M1. And it will only get better with the M2, the M3, and the M4.


In fact, it'll eventually get so good we'll be the bottleneck. We're gonna need new ears, folks.


That just gave me a nice idea. There's probably a market for "audiophile" eardrops, to lubricate the outer ear, such that the sound waves effortlessly glide through the fleshy waveguide, to hit the tympanic membrane at optimum velocity - or something like that.


There's far crazier out there. https://englishelectric.uk


I think this is for music “streaming” applications described here:

https://www.sweetwater.com/insync/audio-networking-explained...

https://en.m.wikipedia.org/wiki/Dante_(networking)

versus the streaming associated with Spotify et al

IOW a legitimate (non-audiophile) piece of gear.


Very light on the details. Is there a prosumer version? Will this increase the quality of my Discord audio?


This is awesome news! Now I only need to find where I can buy some gold-plated Cat-48kHz network cables.



This put a smile on my face, happy holidays


Couldn't you use a PC box built for less than $600? Sound does not require nearly the bandwidth of video, and I am sure you could get a decent rig set up for that price. I have been doing low-latency audio on Windows for over a decade. I have used Ableton on Windows with my Novation X-Station 25 from 2005, and I play with Extempore[1] for livecoding, and Windows has been more than adequate. I used the X-Station as my sound card at the time, because it was better than my cheap PC. I also work for the Entertainment division of an engineering company, and we do (did, before COVID) lots of mechanical/structural work for Broadway.

[1] https://extemporelang.github.io/


The software is the issue. If the entire team has been trained on and built their back catalog of work in a specific software package, switching to a different platform may not be an option.


Yes, I'm kind of baffled why they insist on a Mac mini with such a tight budget. There are Linux options too.


I think there are several dimensions to this. These are mission critical systems with extremely high uptime expectations. Therefore using highly standardised easily available identical replacement parts with solid vendor support is a must. There are probably several Apple Stores within easy reach of Broadway and Apple Care will get you rapid support and repair if needed.

Another aspect is that the audio stack on MacOS is truly outstanding, with a whole host of professional audio applications and utilities on the platform. Hardware drivers are up to date and also well supported. Connectivity is also well supported by audio hardware vendors.

Finally, Mac Minis have had a stable hardware form factor for well over a decade. If you go for a small-form-factor generic PC system, there's no guarantee anything like it will even exist a year or even 6 months later. Hardware specs and specific component choices change with the breeze as availability and relative pricing fluctuate continuously.


Those seem like good points; however, you can build a PC with name-brand parts and keep them consistent and going for a decade. AMD has made sure you don't need a new motherboard every time they move to a new generation of CPU, compared with Intel, for example. Yes, Apple Stores are all over, but rapid support and repair are not inexpensive, which was one of the arguments for why they chose Mac Minis. FTR, I am interested in a Mac Mini for my children's main home computer and also for streaming on our TV. It's pretty neat with the M1 processor! I have a 2011 iMac 27 with an AMD HD6970M with 1 GB (possibly 2 GB?), and I still do a lot of stuff on it. I program graphics on it, and play with audio.


Building a PC with name-brand parts does absolutely nothing to address the critical factor mentioned by the GP:

> Another aspect is that the audio stack on MacOS is truly outstanding, with a whole host of professional audio applications and utilities on the platform. Hardware drivers are up to date and also well supported. Connectivity is also well supported by audio hardware vendors.


Drivers. Good lord. I primarily use a Mac for both school and work and had forgotten about the nightmare of drivers. I recently purchased an HP laptop for software that only works on Windows and have already had to reinstall Windows since the display driver was completely broken.

I absolutely appreciate how Macs don't really need you to play with the drivers... I've been using them for years and haven't once needed to do so.


It doesn't have to be Windows. Linux has had the superior audio stack for the past two decades, and it's considered the standard for most A/V work. Just as there are "Mac exclusive" apps, many Linux/*nix audio tools are never ported to MacOS (for example, the industry staple CALF tools are not actively maintained for MacOS).

The larger point here is that the M1 is not as revolutionary as it's being made out to be. The "flagship killing" multi-core performance is embarrassed by the Ryzen 4800u, which can be found in NUCs that cost significantly less than the Mac Mini. If price were a limiting factor in the industry, we'd probably see more people reaching for those Ryzen machines: but we don't. That's why this article is ultimately self-defeating.


In all my time spent in studios and live music in the past 20 years, I've never encountered an A/V professional using Linux. I don't work in video production but I know several people who do, and I've watched some documentaries on this -- in that case, the proportion of Apple machines seems to be even higher than the already quite high proportion found in audio. Never seen a professional or semi-pro using a Linux box. I'm sure they exist, but this doesn't track at all with what I know about this industry.

Sometimes a DJ or bedroom producer is using Windows. So it's quite surprising to hear you say that Linux is the standard.

Can you point me to any producers, artists, sound engineers, studios, or production houses that use Linux? I would love to know...

I have never seen anyone using CALF tools in this profession. Looking at what that does - the functionality is available in a number of rock-solid applications native to Apple (MainStage, Ableton, Logic, etc). So there's no reason why it would need to be ported to MacOS.

How are you measuring audio stack superiority? And in terms of "stack", are you talking about something separate from the suite of professional audio / MIDI tools that come standard on every Apple computer that don't seem to have any comparison on Linux or Windows?

Are you taking into account the availability of hardware drivers on these different platforms? What is your experience in setting up a Linux-based audio production system with a lot of outboard gear? This is one area that seems to be a major source of headaches and motivation to not use Linux for producers and engineers.

Even looking through the comments on this page - there's people talking about the difficulty of setting up a functional audio production environment in Linux compared to Apple. I've noticed this sentiment is repeated over and over again on HN (going back to previous articles on audio production, Ableton, etc).

This topic stands out to me as I've sincerely wanted Linux to be much less of a headache with audio so I can get off the Apple ecosystem entirely.


There's an element here of being led to the proprietary stack only because of paper-cut UX considerations. They aren't going micro-budget here (which they could do, by simply eliminating all the equipment and doing the show a cappella) so much as they are looking for impact for money, which means something that works reliably for the current talent pool in live scenarios.

Audio on Linux can definitely work well, but other comments here mention Mainstage as the live performance software driving this decision, and offhand I can't think of a comparable app on Linux; you'd either live with different/worse UX, or have to code up something, which is not really in the scope of the quoted budgets.


I don't understand how it's self-defeating. I literally work in the industry and shared my thoughts about how Apple Silicon is going to benefit the work I do. You can keep talking about Linux, but at the end of the day Linux doesn't have anything we need for the job...


How can the article be self defeating, it’s not really an opinion piece when it comes to Mac Minis dominating Broadway, it’s a statement of fact by an industry insider. How can that be ‘defeated’, self or not? What does that even mean?


I don’t think you can run something like Native Instruments acoustic instrument sample suite on Linux.

I’m guessing they want orchestra sounds, and not just analog/subtractive, FM or other digital synth sounds.


It’s the software.

With consumer and professional software, it’s never as simple as switching to Windows or Linux to get the job done when your entire back catalog of work and the training for the entire team is invested in a specific software package.


Linux is not really an option. There are very few DAWs with support, next to no drivers for common hardware, and next to no support for popular VSTs. Linux is not a good platform for audio work.


No drivers are needed for "common hardware", because iOS forced most "common hardware" to be driver-free, i.e. actually compliant with the USB audio standard.

It is true that if you're (still) using PCI devices, drivers can be an issue, but there are several high end companies, including RME, with PCI device support on Linux.

Ardour, Mixbus, Bitwig and Reaper all run natively on Linux. Yes, that's a tiny subset of those available on proprietary platforms.

"Popular VSTs" feels like a wierd thing to say. I would doubt there is enough overlap in people's plugin use to every really create particularly popular ones. There are thousands of plugins available on Linux, both libre, gratis and proprietary. And even if I do not recommend the approach, many (not all) Windows VST plugins can be used on Linux. Yes, you cannot use plugins from companies that choose not support Linux without some hurdle hopping, and yes, that means that "well known" and not-easily replaceable plugins like those from Izotope and Native Instruments are generally out of reach.


A lot of hardware still ships "drivers" (really just hardware-specific companion software) that only support Mac and Windows, though.

The bigger point isn't that audio production isn't possible on Linux, it's that you have to make significant compromises to make it work. I count switching DAW as a significant compromise.

The cost difference between a Mac mini and Linux hardware of similar performance isn't worth those compromises to many creatives outside of CG.


>I count switching DAW as a significant compromise.

That's why Ardour runs on more platforms than any other DAW, so you don't have to make that compromise :)


There's zero upside to messing around with Linux. Software is one of the biggest issues.


My understanding of this has been that OSX has always had much lower audio latency than Windows or Linux.

Apple have put quite a bit of effort into making their audio sub-systems both performant and reliable. Something that isn’t true on Windows and Linux at the moment.

I also assume that part of it is driven by audio professionals using Macs elsewhere, and you don't want to be using an unfamiliar system when running a live performance. That's the one place where "just let me Google that" doesn't fly.


> My understanding of this has been that OSX has always had much lower audio latency than Windows or Linux.

Your understanding is wrong, at least with respect to Linux (which technically has lower latency than macOS/OSX has ever had).

One can debate the ease of use (and to be fair, macOS will likely win), but from a technical perspective, if you want the absolute lowest latency, Linux is the system to use.

At least ... that was true for PCI audio devices. At ardour.org we've been playing around with an M1 mini, and the ease with which it gets to 16 sample buffer size with a stock USB audio device is astounding and envy-inducing.
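For a rough sense of what a 16-sample buffer means in time terms (assuming a 48 kHz sample rate, which is typical but not stated above), the per-buffer latency is just buffer size divided by sample rate; a minimal sketch:

    # Per-buffer latency for common buffer sizes, assuming 48 kHz (illustrative).
    # Real round-trip latency adds converter and driver/USB overhead on top of this.
    SAMPLE_RATE = 48_000  # Hz, assumed

    for frames in (16, 64, 128, 256, 512):
        print(f"{frames:4d} frames -> {frames / SAMPLE_RATE * 1000:5.2f} ms per buffer")

    # 16 frames works out to about 0.33 ms per buffer; 256 frames to about 5.33 ms.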


Does Mac have a built-in low latency system? The Windows ones are terrible but I don't think anyone would use them anyway when there is ASIO.


It does. CoreAudio gets the job done.


> Due to budget constraints, many shows end up using Mac minis. Historically speaking, the Mac mini’s computing power has been a bottleneck for electronic music designers on Broadway. In a perfect world, we’d all like to use the best-sounding sample libraries for our work, but that was never feasible with the Mac mini. Thus, the compromise was always to reduce sound quality to fit within the Mac mini’s compute constraints.

What is the complexity with sample libraries? Until now I thought they were just big collections of categorised MP3s, and surely Mac Minis can handle those. I guess I'm missing something.


They use samples as a base, but do heavy amounts of DSP to get the final product, these days up to simple ML models.

Think being able to create new virtual backup singers who you can play like a piano, pretty much on a whim, who sound convincing to the kinds of people who work production on broadway.


Try 77GB of high-resolution audio with realtime DSP happening, I guess? That's just for a piano instrument. https://synthogy.com/index.php/products/software-products/iv...

To be fair most sample-based instruments are not so data-heavy, but the very best-sounding high-quality ones are. They usually include some degree of software processing to dynamically alter the sound to make it sound as realistic or organic (or whatever) as possible. That's before any post-processing effects are layered on, per-instrument.


Sample libraries are usually WAVs or (non-MPC) encoded audio files. Mac Minis simply don't have the processing power to handle the DAWs, which are generally CPU-intensive and can be memory-intensive depending on the VSTs or plugins used.


Sample libraries are pretty efficient compared to digital synthesis VSTs. They really don't hit the processor nearly as hard.

But M1 designs that max out at 16GB don't have the memory to handle plenty of sample libraries, so I don't understand how a Mac Mini is supposed to be up to the job.

It's not just about raw cycles but about cached access to the samples. The biggest libraries can run up to 1TB and you'll probably have more than one. Obviously you don't keep everything in RAM at the same time, but even so - 16GB is a serious limitation for this kind of work.

And if you're using a computer instead of a synth rig you cannot afford to have problems, because any stuttering or glitching is painfully obvious and distracting in a live setting.

It also makes no business sense for a Broadway show that may be grossing $25m a year with a multi-year run to cut costs to the bone on its musical hardware. Considering the cost saving involved in replacing real players (for better or worse...) it makes far more sense to spend twice as much initially for a no-risk professional setup than to pinch pennies and risk glitches.


Actually RAM is not the issue. Most samplers only load the attack portion of samples into RAM anyway. SSDs are so fast now, we can stream the rest without issue. The problem has always been CPU-related, as we have to set a low buffer size to minimize latency. The previous Mac minis we've worked with struggled with some of the more high-CPU plugins (VSTs as well as FX), so we had to compromise in many cases. I did some testing with an M1 MacBook Air, and it blew the old Mac minis away in terms of performance and stability. Very much looking forward to seeing how the M1 will be used in these live production situations.
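A minimal sketch of that preload-plus-streaming idea, for anyone unfamiliar with how disk-streaming samplers work; the class, file name, and preload size below are illustrative assumptions, not any particular sampler's implementation:

    import wave

    PRELOAD_FRAMES = 32_768  # illustrative: the "attack" portion kept resident in RAM

    class StreamingVoice:
        """Toy disk-streaming voice: attack served from RAM, tail streamed from disk."""

        def __init__(self, path):
            self.path = path
            with wave.open(path, "rb") as w:
                self.frame_size = w.getsampwidth() * w.getnchannels()
                self.preload = w.readframes(PRELOAD_FRAMES)  # resident attack portion

        def render(self, start_frame, num_frames):
            preloaded = len(self.preload) // self.frame_size
            if start_frame + num_frames <= preloaded:
                # Served entirely from RAM: no disk I/O on the audio thread.
                lo = start_frame * self.frame_size
                return self.preload[lo:lo + num_frames * self.frame_size]
            # Tail is streamed; a real sampler prefetches this on a worker thread
            # ahead of the playhead so the audio callback never blocks on the SSD.
            with wave.open(self.path, "rb") as w:
                w.setpos(start_frame)
                return w.readframes(num_frames)

    # voice = StreamingVoice("piano_C4.wav")  # hypothetical sample file
    # block = voice.render(start_frame=0, num_frames=256)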


You beat me to mentioning the sheer size of those libraries. If they reside on multi-TB SSDs connected via Thunderbolt and taken along for the tour, I can easily imagine that a Mac mini would be a good solution. Apple embraced Thunderbolt early; it was even exclusive to them, IIRC. If I were responsible for playing those sounds, I would want one stable platform too, instead of several diverging implementations on standard PC hardware and the fun (/s) that I'd have (drivers, Windows updates, etc.).


The first "gigabyte" multisampled libraries appeared in the 2000's, when memory was even tighter and spinning disks were the norm, so you're underestimating the technique here - it's always been streaming-intensive, and the software is doing a lot to mask I/O latency. A faster disk goes a long way in this respect, letting you run more instances with smaller buffers.

Memory does pose a bottleneck for huge arrangements in the studio, but in the live setting you literally don't have enough performers at the keys for the same constraint to apply. The stuff they might trigger can be bounced out into multisamples, so the remaining bottleneck is with effects processing.
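Some back-of-the-envelope bandwidth numbers make it clear why streaming from disk scales so well; the sample format and polyphony figures below are assumptions for illustration:

    # Streaming bandwidth per voice, assuming 48 kHz stereo 24-bit samples
    # and a generous live polyphony; all figures illustrative.
    SAMPLE_RATE = 48_000   # Hz
    CHANNELS = 2
    BYTES_PER_SAMPLE = 3   # 24-bit

    per_voice = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE   # bytes per second
    voices = 500

    print(f"{per_voice / 1e3:.0f} kB/s per voice")
    print(f"{per_voice * voices / 1e6:.0f} MB/s for {voices} voices")
    # ~288 kB/s per voice, ~144 MB/s total: a small fraction of what a modern SSD sustains.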


I get that, all else being equal, it's better to put everything in RAM, but at what point is it just poor software design? 1TB of RAM is nothing to sneeze at. Is there really such a great need for sub-100µs read latencies that commodity NVMe SSDs are insufficient?


Sounds like a macmini is a terrible choice for that job. Is there a hard osx requirement? It's not like they are really cheap.


It's not that there's a hard OSX requirement, but Mainstage (for live mixing) dominates this field. It's incredibly low latency, and there's really nothing as good for this specific use case. Combine Mainstage with on-demand sequencers like Doggiebox, and Apple really owns this market.


Yeah, this is the main reason. We use MainStage for all shows. Also, Mac minis are quickly replaceable. If we’re in a city and a computer breaks, we can grab a new computer from the Apple Store right away.


Besides software and familiarity, I'm sure the Mini's form factor has a lot to do with it.

Also, if you need to replace or duplicate a unit with an identical one, you know you can always run out and easily find replacement Mac Minis. If you used some random ultra-SFF PC, can you be sure you can get another identical one easily if you need to? What about 3 years from now? Changing out hardware always introduces a possibility that something might go wrong - exactly what you don't want on a Broadway show, a few hours before a performance!


Yup. Form factor plays a huge role. All our machines are racked and space is precious. Being able to fit two Mac minis in a 1U rack space is great.


The 2018 Mac mini is, from all reports, a pretty decent machine for DAW usage. Many replaced their 2010 Mac Pro rigs with 2018 Mac minis + Thunderbolt chassis.


Yup, I am also not understanding the need for it to be a mac mini.

The new Apple Silicon chips might be a lot faster. But I am pretty sure that audio plugins take a long time to update.


I'm a MacOSX plugin developer, and in touch with others also facing this situation.

I develop on a VERY old machine in order to support backward compatibility way way farther back than Apple will allow: my current plugins will run on PPC machines because those can be used as music DAWs. As such, the machine I'm compiling on is not producing 64-bit AUs that will work, directly, on M1 Macs. They work on literally everything up to that point, but Apple finally shanked me, at least w.r.t. that compile target. Until then I was able to support PPC to present day with one three-target fat binary :)

Another dev, Sean Costello, told me that older builds of his stuff (pre-2017?) weren't running on M1, but everything built past a certain point (a new version of XCode, that had long abandoned things like PPC and possibly 32-bit support) was automatically working on M1 through the Rosetta layer.

So, depending on the build environment, Apple arranged that the audio plugins don't even have to be updated. Depending on the libraries the plugins rely on (a vulnerability for some of the big names that use bespoke but OLD libraries to do things), some of the plugins might need only a recompile to be native to M1 architecture. And some might be really intractable.


20 years ago, the audio software industry could have decided to support Linux. Some notable players in the industry tried to convince others to do this (e.g. Waves). It didn't work.

Plugin makers, DAW makers all refused to go down that path, despite the possibility of liberating their highly complex, deeply technical products from the whims of Redmond and Cupertino.

At that time, Linux already had better latency than OS X or Windows. It would have provided access to faster, bigger systems than anything you could run OS X or Windows on, and access to ARM "early" too. The industry could have actually convinced people that they need specialized computers, not off-the-shelf laptops and desktops to do this stuff (still largely true). But more or less nobody wanted to play.

And now, in 2020/2021, just as when the PPC->Intel shift happened under Jobs, because Apple tells them all to dance, they will.

It's sort of pathetic, even if all "understandable" from various points of view.


Linux audio support is not good. Macs have much better audio support.


I've been at the heart of Linux audio for more than 20 years.

The situation on Linux is not "not good". Most people who comment on it simply don't know what they are talking about.

It is fair to say that Macs are easier to get good results with.


My experience with Linux audio as a casual user (hobbyist composer and arranger) was awful. I remember trying to install JACK, completely screwing up my audio configuration, and then spending days mucking around trying to get PulseAudio working again at all. I never could get my sound card working, and had to nuke my Fedora installation and reinstall. It was a nightmare. (This was around three years ago, on Fedora 25.)

While I don't doubt that Linux can be great for audio, if the configuration befuddled someone with a CS degree so badly, I think most ordinary musicians don't stand a chance.

N.B. Compare to something like Soundflower on Mac at that time, and it's no contest -- almost foolproof to set up.


CS degrees are generally not useful with system configuration, and they demonstrably do not cover the concepts associated with audio on computers.

I know dozens of people who've had experiences isomorphic to yours on OS X/macOS, so the truthfulness of this anecdote isn't particularly useful in establishing anything.

But yes, as a casual user who doesn't understand or want to understand the design decisions that led to the current state of audio on a typical Linux machine, macOS will provide a much smoother experience.

I wrote JACK. I know the guys who wrote SoundFlower. I asked them why they wrote SoundFlower when JACK already existed. They said it was because 90% of their user base never wanted 90% of what JACK made possible, so they cooked up a really simple version. "But it barely does anything!" I insisted, grumpily. "Precisely", they said.

If you don't understand the engineering mindset that says that you probably shouldn't do this, then certainly, macOS will look like a much better idea (along with SoundFlower).

That will likely remain true until you run into a situation involving one of the many things that JACK makes possible (note however that I generally advise most new/casual users against using JACK these days, not because it is broken but because as your comment demonstrates, it doesn't make sense to the mindset/workflow that they bring to the table).


Pulse Audio, the virulent gonorrhea of user-facing audio software!

Seriously, that drove me away from Linux last time. When basic stuff like that just won't work, there's a real problem.


I don't doubt that you can do almost anything on Linux if you want, but "easier to get good results with" is a super important consideration that I feel my more dedicated Linux-using friends and acquaintances sometimes undervalue. They'd be horrified at the idea of paying for Rogue Amoeba's Loopback (let alone for a Mac to run it on, of course), but the flip side is that it's a lot easier to do pretty sophisticated audio routing with something like that.


It's an interesting point though: even if Linux can deliver the best results, if it requires 'too much' work from the average audio engineer it's going to flop, I assume. You need balance to survive.


I’ve found audio in Linux much easier to work with than MacOS in recent years - the Pulseaudio team has done a great job.


As a plugin vendor supporting Linux: I think the problem might not be technical but is instead all about the UX. Regardless, it's been great supporting Linux, as it appears 5% to 20% of our userbase are now Linux users (macOS 30%).

On Windows or macOS a dictatorship constrains the audio API to be X and the graphics API to be Y. On Linux, as a user I can choose ALSA, JACK, or PulseAudio, and while I'm sure one of them is the better option, why is the user asked to make this choice in the first place? Just choose "the best" for the user.

So Linux software usually doesn't hide stuff from users (liberty?), and this quickly translates to a worse beginner/intermediate experience; having too many choices in audio software is usually something to combat.

And Ardour is the perfect example of this: upon opening, it asks the user several questions in a pop-up, while virtually all other DAWs will reopen the last session. There is a significant divide in terms of UX, and I see a bit of this everywhere I go when booting under Linux.


Thanks for the interesting response. Glad to hear you're supporting Linux with your plugins!

There is no way for a DAW to choose "what is best" for the user. On Windows the same range of choices exists whether or not any particular DAW offers it to them. ASIO? WASAPI? WaveRT? MME? There's a case to be made for each, depending on circumstances. Only macOS really gets this right. The user can also select "auto-start" for the audio/MIDI I/O backend, which will cause them to no longer be asked which to use each time. This is a reasonable choice if they always use the same computer with the same audio interface. It's not so great if those things change quite a bit.

You're absolutely right that the GUI situation is a mess though. There is no standard graphics API on Linux beside X Window, which is totally unsuitable for direct use in any modern development effort. The desktop toolkits (Qt and GTK etc.) are unsuitable because they cannot be easily statically linked into your plugin, which can then lead to version clashes with whatever the host might use (e.g. your plugin uses QtN, the host uses QtM). There is no good solution to this: we always advise plugin authors to avoid desktop toolkits, and if possible use small standalone statically-linkable GUI toolkits designed for the purpose (PUGL, RobTk and a few others). Alas, even JUCE by default tries to link in some version of Qt (it can be turned off, and should be).

On restarting Ardour, it gives the user the choice of a new session or selecting from a list of recent sessions. I've never heard of anyone suggesting that the correct behavior is "open the last session", and this would differ from the behavior of numerous other creative applications too. If you start up Inkscape or GIMP (or its derivatives), they will not open the most recent file/project automatically.

> having too many choices in audio software is usually something to combat.

Be sure you let the Reaper devs know this :))


> which will cause them to no longer be asked which to use each time.

This sort of pattern is conceptualized well in the book "About Face". I think it's a helpful book that explains a lot of the variation in DAW popularity.


> Some shows I’ve worked at set aside a $10,000-$12,000 budget for two keyboard rigs. That sounds like a lot of money at first, but it’s not.

It's a bit surprising that each show in the same theatre buys their own audio equipment.


This sort of "wasteful spending" tends to happen for big budget events for reliability reasons. You might be able to save $10k by reusing equipment from the last event or using a venue's equipment, but if the equipment cuts out mid-show, it will cost a lot more than $10k in missed opportunity.

I spent a week hanging out at Circuit of the Americas helping to run a solar car race, and asked about all the loose CAT6 bursting out of every wire conduit. The broadcasters run new cable for their equipment every Formula One race, hardwire it, then cut it loose and pack up. It's apparently cheaper to do that than to debug connection problems with existing cables, or risk losing a camera feed unexpectedly due to intermittent connections from failing connectors.


I help out at Spain's largest LAN party every year, and run systems (not networking, but I work with those folks). One year we tried laying the fiber for the satellite switches in the troughs and leaving it there for next year. We laid extras for redundancy. Come next year, something like 30% of the fibers were dead and we even had places where both paths were dead and we had to patch up above ground.

Nope. Venues and especially the people who rent them can't be trusted with cabling. We now run our own every year, in an efficient combination of above ground and through the wiring troughs. We pull it out when we're done though. Only a couple of things (the main internet feed, which is more robust, and the line that crosses halls) stay there to be reused (or fixed if necessary, groan).


I remember when I was doing live sound that my mentor would switch out EVERY battery in the microphones EACH show for fresh ones. Those batteries were probably still 75% charged after 2-3 hours, but the risk of accidentally having a battery you lost track of dying during a show was just too high -- easier to replace them all. Ultimately not that expensive in the scheme of things.


Production companies are ephemeral and tour multiple locations. Depending on the production they may require different sorts of equipment to accommodate different performance skills. It's totally unfeasible for a venue to stock a wide enough menu to satisfy a variety of productions, and likewise totally unfeasible for a production that shows up and has mere hours before showtime to adapt to someone else's instrument rigs beyond the basic necessities of PA. You can rent but it's often better to buy as cheap as you can to eliminate complexity and risk.


Probably no different from each professional cook in a kitchen bringing his/her own knives to work. Are you going to rely on the crappy "house" equipment that everyone shares and you have no idea what quality it is? I suppose also that a theater has no interest or incentive to keep the equipment top notch, etc. or well-maintained (or is even qualified to do it).


Would you also find it surprising that each band that plays in a venue buys their own audio equipment?


Mixer and speakers? Yes.


The speakers that sound good for country music don't necessarily work well for hard-core gangster rap. Bigger acts especially prefer to bring all their own gear in order to ensure a consistent sound in every venue.


This is commonplace in the US, less so elsewhere.


A Designer at this level (Sound, Lighting, Projection, whatever) specifies the best possible rig for her vision for this show within her budget. The production rents the gear, the performance space, and the stagehand hours to put it together.

No one likes being stuck with the landlord's choices. A designer would have worked on shows like that (work with what the venue has) earlier in his career, but not after graduation to the big leagues.


Is all the software necessary to pull this off already compiled for arm64?

I've read claims that Rosetta 2 is fast. I haven't seen results comparing M1 running x86_64 through Rosetta against Intel, though.


I ran a multithreaded poker solver (a Windows executable) on top of Wine on Rosetta on base MacBook 13 M1 and it beats the performance of base MacBook Pro 16 inch (Intel 6-core) with Windows running on bare metal by a small margin.

(The M1 fan actually turned on, which is a rare occurrence; at the same time, the MacBook Pro 16 Intel would be frying my lap. Ballpark estimates on the internet seem to place it ahead of the MBP16 6-core i7 and trailing the MBP16 8-core i9 on pure CPU performance benchmarks, at obviously much lower power; the GPU is far better than Intel UHD 630 and close to, but not quite as good as, the Radeon 5300M.)

For most applications that aren't especially pathological for Rosetta (e.g. V8 x64), it seems not more than 10-30% slower than arm64.

(Don't be too surprised. If your binary is static code, Rosetta is pretty much a static compiler from x86 to ARM [albeit an incredibly well-executed one], so it's not magic: in fact, if you disable SIP on your system, you can peek at the translated executables in `/var` and inspect them via `objdump`/`otool`.)

--

Another benchmark that is a bit worse for Rosetta:

Compiling gRPC from a clean source tree (not quite an apples-to-apples comparison, because under Rosetta it runs LLVM x86 codegen versus LLVM arm64 codegen outside Rosetta, and I did not want to mess with the flags). I also feel like process creation under Rosetta can be more expensive, but I'm not sure.

- MacBook 16 (i7-9750H, 16GB): 67s

- MacBook 13 (M1, 8GB): 48s

- MacBook 13 (M1, 8GB, Rosetta): 85s

Now I have my Intel Macs to list on Craigslist...
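For what it's worth, turning those build times into ratios (same numbers as above, nothing new):

    # Ratios derived from the gRPC build times quoted above.
    intel, native, rosetta = 67, 48, 85  # seconds

    print(f"M1 native vs i7-9750H: {intel / native:.2f}x faster")      # ~1.40x
    print(f"M1 Rosetta vs native:  {rosetta / native:.2f}x the time")  # ~1.77x
    print(f"M1 Rosetta vs i7:      {rosetta / intel:.2f}x the time")   # ~1.27x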


What solver are you running? What kind of performance do you get with these setups? I'm curious because I only hear about more core-heavy builds for piosolver. Thanks


I tried the PioSolver free version with identical settings on all machines, targeted a certain exploitation level, and compared the time it took to finish the task. Performance on M1 is close to a desktop 3600X (admittedly, a weak cooler was installed on that machine).


There are various benchmarks on YouTube showing the performance is reasonable. Native code is obviously faster but with rosetta it seems like the M1 is a mid tier Core i5 or Ryzen 3 series in performance. Not bad at all but not top tier.


I mean it’s totally crazy, emulation of Arm on x86 has always felt extremely slow, to me anyway.


Apple put some special sauce in the M1 to help make rosetta fast. Support for the x86 TSO memory model, for one thing.


Also ARM now has a bunch of instructions that are blatantly there for efficient x86 emulation, though ARM won't tell you that in the docs.


IBM POWER9 supports some memory ordering instructions that (as I understand) would in theory be useful for x86 emulation, but a) I'm unsure if anyone actually uses them and b) They are removed from POWER10


I don't think Apple generates those, though.


Can that secret sauce just be to slow down Intel based macs in the OS layer?


I can't be the only one who misread the title as "Broadwell" and scratched my head extra hard at the reverse timeline (Broadwell was an Intel design produced from ~2014-2018).


I also thought Broadway was some Intel CPU or some console GPU.. oh it's actually the name for an ATI GPU


The M1 is also going to cure world hunger and solve global warming


Great article, too short. I thought music union rules forbade the use of synthesizers that replaced humans, e.g. horn and string sections.


Thanks. I didn't originally write it as something to be shared, but I guess I should from now on. I definitely have a lot more thoughts about it, so might do a follow up in the future. Regarding union rules, synthesizers are fine, but there are some "house minimum" requirements, so usually there are 2-3 keyboardists and then other acoustic musicians as well.


> and I’m very excited to see, or hear, what happens.

The cynic in me says the sound budget will get cut further.


What high end sample libraries do they use? Also to be fair the Korg D1 sounds perfect for this and only around $600 each...


A Korg D1 is just a keyboard; it can't run MainStage, store gigs of samples, or run a DAW...


I know; the point I'm making is (if you read the article) that you don't have to spend $3000+++ on keyboards. The D1 has an amazing action but is still reasonably priced. Used as a MIDI keyboard with all the things you mentioned, it's fine; it's designed for exactly this.


The M1 Mac Mini will also run much cooler and be more stable.


This is a great point. Heat is always an issue since our Mac minis are always racked up. Depending on the theatre location, it can get quite hot in the pit. We’ve had some computers die in the past from suspected heat issues, so the relative coolness of the M1 Macs is a welcome improvement.


Very curious to know more about this: what kind of code doesn't run on Mac minis and Windows computers? What about Linux? Where is the complexity coming from?

Is it because the synthesizer software makers write very bad code, etc.?

I would have thought that a modern iPhone 12 Pro would be sufficient to run synthesizer libraries.


The situation is similar to gaming: why work to support a platform that most of your customers don't use?


As written the piece seems to imply that there's new hardware for every show. If true - why on earth?


> Budget constraints

> Mac Minis


Shill


> Some shows I’ve worked at set aside a $10,000-$12,000 budget for two keyboard rigs. That sounds like a lot of money at first, but it’s not.

> Apple Silicon changes everything for Broadway electronic music designers. The new M1 Mac mini is capable of running high-end sample libraries and virtual instruments in a stable manner, and it’s only going to get better with M2, M3, and M4-series chips in the future. The performance per dollar characteristics of Apple Silicon machines are going to have a huge impact on Broadway’s sound, and I’m very excited to see, or hear, what happens.

What will happen is probably that your budget will get reduced, since you don't need as much to deliver the same quality as today.


I'm sure that that market segment tends towards macs, but given these constraints I'm surprised that they're not using much cheaper PCs to run the same audio suites, given the budget constraints.

This is to suggest that Broadway shows could already have achieved the same increase-in-sound-quality-per-dollar by switching to cheaper hardware running Windows, and that the M1 introduction isn't really the big sea change the author makes it out to be.


It's the OS.

CoreAudio is native to macOS/iOS and very stable. The latency is low and most of the time you don't need to install anything: just plug in a USB interface. It even provides APIs for running the plugins, if the DAW wants to use AU. Hell, even the built-in interfaces have good latency on Macs. Also, if you have multiple, differently branded interfaces, Apple provides a tool to "merge" them so your DAW treats them as one.

On Windows you need an alternative third-party driver solution, ASIO, just to get proper latency. Microsoft tried DirectAudio, Kernel Audio, among other tech, but it never worked and/or never caught on. Even ASIO is not as stable as CoreAudio and requires installing (sometimes unstable) third-party driver software and control panels. IME, sometimes those drivers conflict with each other, so you can't use multiple soundcards at the same time, or swap them. Most people eventually get there, but it still feels like a house of cards.

There's a lot of small studios that choose Hackintoshes, so it's not really the hardware, although M1 might change that, who knows.


I don't know about the present state, but certainly 10 years ago you would have had to be out of your mind to use anything but a Mac in an Audio setup that is used for important live performances - the Audio and MIDI stacks on Windows were a mess compared to those in OSX.


As a former sound engineer I can state for a fact that this is not true. We stopped using Macs while I was still doing it and it was back in like ~98 when we moved for real to digital studios. Depending on what we were doing the only thing the Macs gave us were bigger purchase and support bills. What Apple had going for them was history so people didn't know as much about how to support the setup if it wasn't Macs. The non-Apple software was just fine and the hardware, at the same price point, way better.


I remember working with some Windows 7 setups and having to deal with stuff like WASAPI Drivers and ASIO4All, having to daisy chain MIDI through the out on synths (which introduced jitter) because there is only one system MIDI in and out, etc. Perhaps there was a way to get these things working smoothly, but on the Mac all this stuff just worked out of the box. My situations were probably edge cases in terms of how much gear was connected to a single computer, but still.

I think you're right that legacy, mindshare/knowledge etc played a big part in it, and I'm sure Windows is much better with this stuff now. Although the handful of studios I've been to in the last few years were still running Macs, but again maybe that's just mindshare.

A funny thing I realized recently while trying to set up OBS for video conferencing on my Mac at home (I work as a teacher occasionally): there is no way out of the box to capture system audio; you either need to do it through external hardware or use a hacky solution like Loopback. On Windows, this "just works".


1998 was before OS X was released and before Apple purchased Logic. This is when things changed for the better, while in Windows things kind of remained the same audio-wise.


This era was kind of a low point for the Mac. The dying days of both PowerPC and MacOS 9, overpriced underperforming hardware, and the slow painful transition to OS X. Things started to improve after 2006 when Apple switched to Intel.


That's 22 years ago, a whole other era. Steve Jobs had just returned, so Macs weren't yet Intel and they weren't yet running OS X (which is built on the NeXT architecture).

In other words, you and OP may both be right. Three absolutely massive transitions at Apple between your era and theirs: hardware, software, and management quality.


If you stopped using Macs before CoreAudio was invented, I don't really even know what to say, other than I'd have run away from running digital audio on an Apple II, too :)

Your bad luck was that you bailed out of the MacOS audio subsystems just when they started to get good.


"Audio and MIDI stacks on Windows were a mess compared to those in OSX" What do you mean? That sounds like its harder to write drivers for soundcards for windows.. But the user is not going to be writing their own drivers.

Any user who needs to perform audio with any computer will need a soundcard in order to have any sort of decent quality, for #1 Decent quality inputs and outputs #2 The correct input and output types like XLR (or fibre optic or whatever) #3 Acceptable latency for live performance.

Soundcards are available for mac or windows, the makers provide drivers which deal with the "Audio and MIDI stacks". From a users perspective theyre both fine. After that, you care about the machines performance, CPU, how much RAM, how fast is the RAM, how fast is the SSD.


It means that Windows/Microsoft doesn't provide anything to driver writers or DAW authors.

MS tried a bunch of tech in the past (WinMM, MCIWnd, DirectSound, WaveOut, WASAPI, XAudio2, etc), but none of them ever worked for professional Audio.

The proper solution was always to use ASIO, which is third-party technology by Steinberg. It works but it's not integrated with Windows: it goes directly to the audio interface, bypassing stuff in the Kernel.

This bypass causes some limitations, such as not being able to use the system mixer (which prevents from using media players or multiple audio apps), or sometimes having different audio-interfaces not work well with each other.

There are workarounds to those issues, but they have to be handled by the interface manufacturer when writing the drivers. Also, you can't have low latency with a built-in soundcard unless you use something like ASIO4ALL, which is not super stable IME. This sucks when you want to work on the go with headphones.

Of course, it can be very stable when you use the right combination of DAW, drivers and sound interfaces, but when you don't you have problems.

On macOS and iOS? CoreAudio is native and it has super low latency by default. It's mostly plug and play and all apps use it. It even provides APIs to use audio plugins or reroute audio, so DAW writers don't even have to write that themselves (unless they want to). Do you have multiple audio/MIDI interfaces? In macOS there's a built-in app to "link" them together.


> "Broadway shows could already have achieved the same increase-in-sound-quality-per-dollar by switching to cheaper hardware running Windows"

Mac Minis are relatively cheap. Not much different in cost to comparable, branded, ultra-small-form-factor PC hardware. (And now, with the M1, I don't think you'll find competitive PC hardware in the same price range and form factor at all)


Not only are they cheap, but Mac minis are ubiquitous. Almost every major city has them in stock at an Apple Store or other retailer, so one can be replaced quickly almost anywhere, unlike with any PC vendor.


I am a huge M1 fan, but that statement is almost true, not quite there yet. I have experimented with a number of prebuilt machines with AMD Ryzen 3600X/3700X, Intel i7-10700, and Apple M1. Those (especially the 3700X) are often slightly ahead of the M1 (especially in Rosetta), usually at slightly cheaper prices (often discounted too) compared to the base model Mac mini, admittedly at much higher power consumption, noise, and physical size. The delta increases quickly if you have to pay $200 more for just an extra 8GB of RAM (16GB being the max, and sometimes the real bottleneck) and for the overpriced SSD. If you need Rosetta, you are still probably better off with a mid-range Ryzen PC, assuming you're not married to macOS.

I agree that it is at least super competitive on pricing with comparable compact HP/Dell/Lenovo 8-core desktops. This was surely not true before M1 (I'd say on the order of 2x improvement in mini price-performance).

I suspect this will remain true for at least a little while longer, since the Ryzen 5xxx chips are beasts too, and once supply and demand get more balanced, machine prices will likely come down to at or slightly below the M1 Mac mini.

On the laptop side, however, M1 is glorious and way cheaper than performance-matching alternatives.


As you say, you're comparing a far higher TDP chip here (Ryzen 3600X/3700X). I don't think those can realistically fit into the Mac Mini's form factor, right? Surely we should be comparing the M1 to the "U" series Ryzen chips?


I was comparing M1 to higher TDP chips, yes. If you take TDP into account M1 wins hands down, of course (unless your workload requires more than 16GB RAM). The context of this post is specifically the price-sensitive customer, not a TDP/space-focused one (compare to this[1] for example, once you upgrade RAM and SSD, and install a decent cooler). You also get better I/O in the PC world.

[1] https://www.officedepot.com/a/products/7814504/HP-Pavilion-T...


Audio folks tend to be fairly sensitive to all of size/noise/heat - this sort of setup is going to end up slotted into a rack of audio equipment, after all.


macOS/CoreAudio alone probably justifies Mac mini before performance comes into play at all. To say Mac mini is cheaper than alternatives matching in CPU performance is however still incorrect, though dangerously close. That’s all.


Form factor is most important. After that, price. Then third is probably the cooling situation, though we can install some extra fans if needed. Better I/O doesn't really matter here as we only need 2-3 USB ports, Ethernet, and display.


Many professional audio software suites have historically only run on macOS. I think that’s started to change over the past 5 years. But moving from macOS to Windows is a big, big effort for most orgs.


I don't know how it was historically. But I've got this typical thing where I think more software and gear will make me a better producer, so I buy almost every big DAW, software synth, and plugin I come across, and I have yet to find the first thing that doesn't run on Windows (apart from Logic Pro, obviously).


A lot of small-shop semi-DIY VSTs used to be Windows only. That's not so true these days. It's been a while since I found a VST/AU that wasn't at least dual platform.

But having run music on Windows for a long time, I would never ever go back. The telemetry, random updates, and general awfulness of the user experience are not something I want in my life.

MacOS has issues, not least the breaking changes in Catalina and Big Sur. But when each OS iteration settles down it's generally super-stable and - most importantly for professional use - it doesn't get in the way.


MainStage doesn't run on Windows, and it's the software all the shows use. Even shows that use Ableton Live for playback sometimes use MainStage as a frontend software controller hooked up via IAC.


Most audio software suites were historically dual-OS, or Windows only, for example, Cakewalk, AVID ProTools, and Ableton. Even Logic started as a Windows-first DAW.

Indeed, the whole point of Apple buying Logic (and discontinuing the Windows releases) was that they needed a DAW on MacOS to get audio professionals to consider the OS. People don't remember this any more, but the Apple versions of Logic were very much inferior to the Windows versions.


> Even Logic started as a Windows-first DAW.

I hate to be that guy, but your entire comment is full of wrong.

Firstly, Emagic "Logic" started on the Atari and Mac OS platforms. I'm not sure when it appeared on Windows, but it certainly wasn't a Windows first DAW. The Apple versions of Logic were never "very much inferior" to the Windows versions. In fact, it was the other way around.

> Indeed, the whole point of Apple buying Logic (and discontinuing the Windows releases) was that they needed a DAW on MacOS to get audio professionals to consider the OS.

The fact of the matter is, practically all "audio professionals" of the time period to which you refer used Apple computers, either for MIDI sequencing or for Digital Audio Workstations. Windows computers weren't even a serious consideration. Those who didn't, still used Ataris, or hardware sequencers/recorders.

Pro Tools, originally by Digidesign, was Mac only for years also.


Well, if you want to be technical about the history of hardware and software platforms in Hollywood, it was Atari and Silicon Graphics until the mid-1990s, when most studios switched to a combination of Windows/nix for their needs. Apple was briefly in consideration for DAWs/audio work in the late 80s/early 90s, until Windows DAWs started hitting the market in force.

Today, most Hollywood composers use Cubase, which was definitely Windows-first. TV productions favor Studio One, which again, was Windows-first (and from the same developers as Cubase). Pro Tools is industry standard for Hollywood movies...but it didn't become the standard until version 6, running on Windows. Ableton Live, which is the most popular tool for recording live music, was written first on Windows (but originally commercially released simultaneously for Windows and Apple).

And Logic on Mac was very much inferior to Logic on Windows, which is why Windows was the preferred platform for running Logic. The Mac version didn't become better than the Windows version until version 6, for which there was no Windows version. While all accounts say that Logic is a great DAW these days, because it's Mac-only, the only production companies that run Logic are ones that are exclusively Mac-based.


I'm sorry, but this is completely wrong. Furthermore, I'm not talking about the history of hardware/software in Hollywood - your original statement was about the use of software by audio professionals. I was an "audio professional". I trained as a Sound Engineer (City & Guilds 1820 Sound Engineering, and BTEC ND Music Technology), and was there, right around the time period in question. I also worked in London's West End for years, which is the equivalent of New Yorks's Broadway.

I want to be charitable and hope you're just getting confused about the "Apple" versions, in that you're conflating when Apple (the company) bought Emagic's Logic with when Logic (the software) was available on Apple's Mac OS?! Either way, that doesn't make your earlier statements any less incorrect.


> Cubase, which was definitely Windows-first

The Atari ST Cubase 1.0 would like to dispute this. I remember sulking because it wasn't available for my Amiga.


Nothing to dispute. Cubase was also available for the Atari because Atari was still the big platform for audio at the time. But it was programmed on Windows, for Windows, and the next version of Cubase (the famous one, which introduced VSTs) dropped support for Atari altogether.


Not quite - https://www.musicradar.com/tuition/tech/a-brief-history-of-s...

The Windows version didn't come until 1992; the Atari and Mac versions were released before that (Mac before Atari; however, the precursor by the same company was developed for Atari first).


There was a lot of Mac-first software too, though - Opcode Vision, for example.


Ableton Live, the only one used for live performance by any of my circles (admittedly, in the electronic music scene, not Broadway) is cross-platform.

Logic Pro is, of course, Mac-only as it's made by Apple.

We're not talking about an organization moving to Windows here, we're just talking about a single, appliance-like keyboard-input-to-audio-output computer for someone to play music on in a live Broadway show. TFA writes about buying the machine new for that single purpose. There aren't really "switching costs" in the traditional sense in that circumstance.

TBH I think it's just an ad. The circumstance (we have to buy two brand new computers for every run of a show!) and claimed impact are just too contrived.

EDIT: Further supporting the idea that it's just an ad, every other post from this domain on HN is promoting a product.


An ad? For what? Don't think too much. It's pretty easy to understand.

1. We use Macs because we need MainStage for these shows.
2. Mac mini fits the form factor.
3. Faster Mac mini means we can do cooler stuff.

Easy.


I think the impact is the value of Intel MacBook Pros will be extremely high for a long time. Last I checked, most musicians don't replace their rig & buy new software because Apple disrupted their computing platform. Drivers for Firewire adapters, old sequencing software, maintaining access to old projects are all things Apple doesn't care about.

Anecdotal: I sold my 2011 MBP to a musician. Best machine I ever owned, but will never buy another Mac.


The article is pretty light on detail and a bit hand-wavy. I'm not sure why the M1 is any better at disk-streaming samples than x86-based Mac minis either. In fact, with a 16GB RAM limit, you can't put more of those sample libraries in RAM. Disk streaming of sample libraries has been a thing for 20+ years. In my DAW the sampler is never the bottleneck. I don't use the huge multi-sampled orchestral libraries, but those too have been a thing for years on much more modest hardware than a recent Mac mini. A large SSD array should be plenty fast enough to support it. If they are using 50 instances of high-end software synthesizers, those are CPU bound.


Broadway can be a whole different beast. These folks run high polyphony instances of software like Omnisphere and that can still be very much CPU bound even on modern x86-64 chips.
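To make the CPU-bound point concrete, here is the time budget per audio callback at live-performance settings; the sample rate, buffer size, and instance count are assumed, illustrative values:

    # Why high polyphony becomes CPU bound at small buffer sizes.
    SAMPLE_RATE = 48_000   # Hz, assumed
    BUFFER_FRAMES = 128    # assumed live-performance buffer
    INSTANCES = 50         # e.g. many softsynth/sampler instances in one rig

    budget_ms = BUFFER_FRAMES / SAMPLE_RATE * 1000
    per_instance_us = budget_ms * 1000 / INSTANCES

    print(f"Each audio callback must finish within {budget_ms:.2f} ms")
    print(f"Roughly {per_instance_us:.0f} us per instance per callback on a single core")
    # ~2.67 ms per callback and ~53 us per instance: miss the deadline and the audio glitches.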


That's kinda the point I was making. The article hints at large sample libraries, and Omnisphere is a sort of hybrid sample library and synth. It's also a large library, but it's not the same as the huge multi-sample libraries that the article mentions.



