I believe they mean performance drop versus native.
Let's say you have a DAW with ARM and x86 binaries but a VST/AU/RTAS that's x86 only. You need to run the DAW as x86 under Rosetta, which will result in reduced performance versus a native binary, assuming the native processor was capable of equal performance.
Given how zippy the M1 is, that performance penalty may well still put you ahead of where your x86 perf was, but still slower than native.
Well yes, x86 Macs in those classes were using underpowered Intel CPUs :-). That said, I've yet to see any serious benchmarks of this kind of workload. We know the M1 is fast for single-threaded UI code (fast-as-heck atomics sure help with ObjC reference counting), but that says nothing about SIMD- and floating-point-heavy audio processing. Rosetta adds overhead to that, potentially a lot versus native ARM code, depending on the code. There is no way to make a blanket "the M1 is so fast it overtakes Intel even under emulation" claim.
I'd personally be more concerned about stability and drop-out issues. Apple sells Rosetta as an "ahead-of-time" translator, but that is necessarily best-effort, because complete static binary translation isn't a solvable problem in general (self-modifying code being the classic example). Thus there is always the possibility that the JIT gets invoked in the middle of the audio processing thread, and that won't end well for real-time guarantees. There are also programs that simply don't work properly under Rosetta (for unclear reasons).
Self-modifying code is just one example. The point is that there is no way to positively identify all basic-block entry points in x86 code (especially given its variable-length instructions). That means there is always the possibility that the AoT pass didn't catch everything, and then your plug-in takes a novel, uncached codepath at some point, the JIT fires, and you get a real-time constraint violation and a drop-out.
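To make the hazard concrete, here is a rough C++ sketch of the generic dispatch loop this kind of translator falls back to. This is not how Rosetta is actually implemented, just the usual shape of the technique: a block the AoT pass already covered is a cheap cache lookup, while a block it missed gets translated right there on whatever thread happened to reach it.

    // A hypothetical sketch (not Rosetta's actual design) of the fallback
    // dispatch loop in a dynamic binary translator.
    #include <cstdint>
    #include <cstdio>
    #include <unordered_map>

    using HostFn = void (*)();  // one translated basic block, runnable on the host

    static std::unordered_map<std::uint64_t, HostFn> translation_cache;  // guest PC -> host code

    // Stand-in for the expensive step: decode the guest x86 bytes at guest_pc,
    // emit equivalent ARM code, and make it executable. Latency is unbounded.
    static HostFn translate_block(std::uint64_t guest_pc) {
        std::printf("JIT: translating block at 0x%llx\n",
                    static_cast<unsigned long long>(guest_pc));
        return [] { /* the freshly translated code would run here */ };
    }

    // Called whenever guest execution reaches a block. If the AoT pass already
    // covered it, this is a cheap lookup; if not, translation runs on the
    // *calling* thread -- exactly the hazard if that thread is the audio callback.
    static void dispatch(std::uint64_t guest_pc) {
        auto it = translation_cache.find(guest_pc);
        if (it == translation_cache.end()) {
            it = translation_cache.emplace(guest_pc, translate_block(guest_pc)).first;
        }
        it->second();
    }

    int main() {
        dispatch(0x1000);  // cold path: pays the full translation cost
        dispatch(0x1000);  // warm path: cache hit
    }

The hit path is cheap; it's the miss path, whose cost the translator can neither bound nor move off the calling thread, that turns into a drop-out.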
Also, audio companies sometimes like to use "fun" DRM/copy protection systems.
Besides, audio apps themselves doing JIT would not be unheard of. For example, that'd be a very efficient way of implementing a modular synth. Those apps could JIT in a realtime-safe way; Rosetta can't (it doesn't have enough info).
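To make that concrete, here is a minimal C++ sketch of what "realtime-safe JIT" looks like in practice; the names and structure are illustrative, not taken from any shipping synth. The patch gets rebuilt on a non-realtime thread and published to the audio callback via a lock-free pointer swap, so the callback itself never compiles, allocates, or blocks.

    // Illustrative sketch of the realtime-safe pattern an audio app can use for
    // its own JIT: build the new render routine off the audio thread, then hand
    // it over with an atomic pointer swap.
    #include <atomic>
    #include <cmath>

    struct RenderFn {
        // In a real modular synth this would wrap freshly generated machine code
        // or a recompiled signal graph; a plain function pointer stands in here.
        float (*process)(float phase);
    };

    std::atomic<RenderFn*> active_render{nullptr};  // shared with the audio thread

    // Non-realtime thread: compile/build the new patch, then publish it.
    void rebuild_patch() {
        auto* next = new RenderFn{ [](float phase) { return std::sin(phase); } };
        RenderFn* prev = active_render.exchange(next, std::memory_order_acq_rel);
        delete prev;  // real code would defer this until the audio thread can no
                      // longer be touching the old render function
    }

    // Audio thread: never compiles, never allocates, never blocks.
    void audio_callback(float* out, int frames) {
        RenderFn* fn = active_render.load(std::memory_order_acquire);
        for (int i = 0; i < frames; ++i)
            out[i] = fn ? fn->process(i * 0.01f) : 0.0f;
    }

    int main() {
        float buffer[64];
        rebuild_patch();             // normally called from a worker/message thread
        audio_callback(buffer, 64);  // normally the driver's realtime callback
    }

The app can do this because it knows which thread has the deadline and when a new code path will be needed; Rosetta has neither piece of information.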
Insert "not sure if trolling" GIF here, but you get that what the OP is saying is that when M1 Macs run x86 code in emulation they're often running it as fast as comparable x86 processors run it natively, right?
Right, but if the conversation is about how this is going to change Broadway or anything, running just as fast means nothing changes (which is the case if you have to emulate).
The conversation started by the linked article is clearly suggesting it can change Broadway when software is recompiled to run natively with Apple Silicon, though, and this is a perfectly reasonable take. An M1-based Mac mini that's actually less expensive than the Intel one it directly replaced can be a lot faster.
The arguments in the thread about "but a lot of software right now runs in emulation" and "who knows when the software companies will get around to transitioning" aren't wrong, per se, but they're also arguments that long-time Apple users have heard variants of in the PowerPC to x86 transition. And in the (original) Mac OS to OS X transition. And in the 68K to PowerPC transition. It turns out that when transitioning your software to the new platform doubles your performance and/or is necessary to keep selling your product, you have a fairly strong motivation to do the work.