Hacker News

interesting. so this fixes the major performance issue by letting users install a user-installable rosetta inside user-created vms.

could still end up with some "works for me" but broken-on-prod issues from the underlying architecture being different (for intel-backed infra), and there are still questions about how it would integrate with production container workflows in practice, but it seems like a boon for anyone struggling today with intel vm or container performance on apple silicon.

nice!




Basic question: Why is this faster than running Intel Linux apps in an emulated Intel Linux VM? Because Rosetta is faster than QEMU, and you only need to emulate one application rather than the entire kernel?


Emulating at the x86 kernel level means you lose the hardware-assisted virtualization support you'd get with an ARM kernel, and emulating an MMU is slow (among other things).

Technically this would be replacing QEMU user-mode emulation, which isn't fast, in large part because keeping QEMU portable to all host architectures was more important than speed.


a lot of the performance gains in rosetta 2 come from load-time translation of executables and libraries. when you run an intel binary on the host mac, rosetta jumps in, does a one-time translation to code that runs natively on the m-series processor, and (probably) caches the result on the filesystem; the next time you run it, the cached translation runs directly.

if you're running a vm without support for this inside the guest, then you're just running a big intel blob on the m-series processor and doing translation at runtime, which is really slow. worst case, you'd trap on every instruction in order to translate it (although i assume it must be better than that). either way you're constantly switching between the target code you're trying to run and the translation environment; context switches are expensive in and of themselves, and they also defeat a lot of caching.


It’s because Rosetta is able to do AOT compilation for the most part. So it actually converts the binary rather than emulating it at run time.


Correct, plus Rosetta is substantially faster than QEMU because of AOT (as others mentioned) as well as a greater focus on performance.


There is a specific piece of x86-64 software I need for my job, a back-end service, which meant there was no way for me to use an M1 Mac. I wish this had been available last year; maybe I could have upgraded to an M1-based Mac.


when the m1 initially shipped, i was watching this issue like a hawk. rosetta was really cool technology, but they seemed to really miss the mark in terms of many developer workflows that involve intel vms and containers.

it was particularly interesting to me as i always got the sense that a lot of the success of the intel-era macs was carried by their popularity (and recommendability) amongst the internet development and engineering crowd. it seemed to me like a mistake to potentially alienate that group.


I figured it was largely a time management thing. They could only do so much (I mean they moved the whole base OS architecture… again).

Glad to see it.

Hope that by the time I get my next work laptop it’s still there, or better yet we’ve moved to a newer version of the software where this isn’t a problem.


IME podman (or docker) + qemu-user-static covers nearly everything. There are some issues emulating older Go binaries, but most C/C++ x86 code works as-is.
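For anyone wanting to try this setup, a minimal sketch of the qemu-user-static approach described above, assuming the commonly used `multiarch/qemu-user-static` image (the same commands work with podman in place of docker):

```shell
# Register qemu-user-static handlers with the kernel's binfmt_misc;
# the image writes to /proc/sys/fs/binfmt_misc, hence --privileged.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# After registration, x86-64 images run transparently on an arm64 host:
docker run --rm --platform linux/amd64 alpine uname -m
```

The `-p yes` flag registers the handlers with the "fix binary" flag so they keep working inside containers that can't see the host's qemu binaries.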


what's the performance cost vs. intel code on the host under rosetta2 and native arm code on the host?


I think it depends greatly on which instructions are being used (SSE, AVX, etc. vs. NEON). Some benchmarks I've seen claim Rosetta 2 is only a ~30% hit, but that's doubtful to me based on my own experience running Factorio =)


I am in the same boat, and this news makes me very happy. I hope it is reliable by the time I have to replace my x86 MacBook Pro one day.


I don't know about "production container workflows" on a Mac; however, the only thing that needs to change is using the new Rosetta binfmt handler instead of qemu's (which Docker Desktop already sets up for you). I imagine Docker will add an option to use Rosetta instead of qemu.
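For reference, today's qemu-backed path already looks like this from the user's side; if Docker Desktop swapped the registered binfmt handler for Rosetta, the user-facing commands would presumably stay the same (a sketch, assuming Docker Desktop's standard multi-arch setup):

```shell
# Pull and run an x86-64 image on an Apple Silicon host. Inside Docker
# Desktop's Linux VM, whichever handler is registered with binfmt_misc
# for x86-64 ELF binaries (qemu today) services the exec transparently.
docker run --rm --platform linux/amd64 alpine uname -m
```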


I don’t understand this part

> The remaining steps required to activate Rosetta in a Linux guest aren’t commands that your app can execute or that you can script from inside your application to a Linux VM; the user must perform them either interactively or as part of a script while logged in to the Linux guest. You must communicate these requirements to the user of your app.

That means Docker cannot do it magically for me?


Most likely Docker Desktop would have an option to toggle between Rosetta and qemu. Docker Desktop manages the vm, so it would be up to it to mount the Rosetta share into the vm and register it with binfmt.


they specifically say the user must do it themselves, for some reason? but let's see how it actually works


"The user" being the entity that sets up the vm.

It requires the share to be added to the vm and then a command to be run to register it with the Linux kernel.
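Apple's documentation sketches those two steps roughly as follows; the mount point, share tag, and magic/mask values below are taken from that doc as of its initial publication and may change, so treat this as a sketch rather than a definitive recipe:

```shell
# Inside the Linux guest: mount the Rosetta share that the host app
# exposes via the Virtualization framework (virtiofs tag "rosetta").
sudo mkdir -p /media/rosetta
sudo mount -t virtiofs rosetta /media/rosetta

# Register Rosetta with binfmt_misc as the interpreter for x86-64 ELF
# binaries, so the kernel hands them to Rosetta instead of faulting.
sudo /usr/sbin/update-binfmts --install rosetta /media/rosetta/rosetta \
    --magic "\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00" \
    --mask "\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff" \
    --credentials yes --preserve no --fix-binary yes
```

The magic/mask pair matches the ELF header of x86-64 executables, which is how the kernel decides which binaries to route through Rosetta.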


ahh I misunderstood that





