Running Intel Binaries in Linux VMs with Rosetta (developer.apple.com)
387 points by gok on June 6, 2022 | 169 comments



This in combination with the 3D acceleration work going into QEMU [1] could end up being a pretty compelling solution for running the x86 Linux version of Steam on macOS.

[1] https://gist.github.com/akihikodaki/87df4149e7ca87f18dc56807...


I'm hoping Alyssa Rosenzweig and Hector Martin's work will allow Linux to access the GPU directly.


Can you run Win32/Win64 binaries on ARM WINE / Proton?


Box86/Box64 lets you run the Linux x86 Steam client with x86 Proton. I'm not sure about ARM Proton.

https://www.youtube.com/watch?v=JuRZGf7Jqxg


I was under the impression that Box86 would not work on the M1 because it wants to run in 32 bit mode, and Apple did not implement 32 bit mode on their processors.


Rosetta on Linux is also an option.


Yes, with FEX-Emu


Games on Proton (WINE) on Linux on Rosetta on macOS?


If it's stupid and it works, it's not stupid :-)


You would need Apple to support Vulkan in their GPU drivers first.


Aren’t the people working on Asahi right now writing their own GPU drivers?


Using MoltenVK maybe, similarly to vkd3d?


A large issue with this will be that Rosetta doesn't support the AVX and AVX2 extensions, which are increasingly commonly used in games.

https://steamcommunity.com/discussions/forum/0/2976275080122...


I would like to see Hackintosh continue to flourish on x86 chips, but I may be asking for a lot here.


That depends, however, on what the 3D acceleration supports in terms of features.


Does anyone understand (or have any theories on) how this actually works? I don't understand how it's possible. Surely they didn't write a Linux version of Rosetta, it must be talking to the host OS somehow—but how? Where is the boundary?


> Surely they didn't write a Linux version of Rosetta

They did just that. The folder share is just for licensing as far as I can see.


I wonder how they handled TSO mode. Do they enable it for the whole VM? Otherwise I can't see how it would work safely, given that translated threads could be context-switched at any time by the guest kernel.


According to Twitter, as soon as you attach the Rosetta volume it switches TSO on.


Either that, or they're not relying on HW TSO; I haven't yet evaluated which of those two paths they took.


Apple picked always-on TSO.


Huh? Are you saying it can't toggle?

Are you saying that it's always on for VMs?


Always on for VMs with the emulator shared filesystem attached.

That's... a weird way of coding a feature flag, but I guess it "just works".


That's super neat! I wonder if the "licensing trick" could be patched out some day, for use in e.g. Asahi Linux.



It seems pretty clear from TFA - there’s a directory share with the host, and given that Rosetta isn’t an emulator, but rather a translation layer, they don’t need a Linux version: x86 instructions go in, arm64 come out.


But surely that would be too slow? Although Rosetta is great at caching instructions ahead of time, it does need to emulate a lot of code (ie, anything generated at runtime).


It’s doing an AOT translation of x86-64 opcodes to ARM64 equivalents. There isn’t really any back and forth, it just digests the binary all at once.

This would still be pretty slow (see: Microsoft’s version of this under Windows ARM) due to the need to issue a ton of memory fence instructions to make ARM’s looser memory model behave like Intel’s, except that Apple baked the ability to switch the CPU into an Intel-like memory model directly into the silicon.

So in practice it is shockingly fast.


> It’s doing an AOT translation of x86-64 opcodes to ARM64 equivalents. There isn’t really any back and forth, it just digests the binary all at once.

No it's not. Apple is not immune from fundamental computer science principles whatever their marketing team says, and even the original keynote acknowledged that Rosetta 2 emulates some instructions at runtime.

Imagine you're running Python under Rosetta. The original Python interpreter takes Python code, translates it into x86 assembly, and runs that x86 assembly. Those x86 instructions did not exist prior to execution! Even if Rosetta could translate the entire interpreter into ARM code, the interpreter would still be producing x86 assembly.

Other types of programs produce code at runtime as well. Rosetta 2 is able to cache a very impressive amount of instructions ahead of time, but it's still doing emulation.


Yes, it includes a runtime JIT component for apps that happen to dynamically generate x86-64, but in all other cases the binary is AOT translated. It does this by inserting (during the AOT translation) function calls to a linked in, in-process translation function whenever it sees mmap’d or malloc’d regions being marked for execute and then jumped to - this data dependency on the jump instructions can be entirely determined from a static analysis of the executable, no violation of fundamental computing science principles required.

So yeah, no real back and forth to the host platform.


AOT (ahead of time) really does work on whole binaries. That is also why the first launch of an Intel app seems longer sometimes. Intel binary in, ARM binary out.

You're still right that that's not sufficient; for example, anything that generates Intel code will definitely need JIT (just in time) translation. But presumably a lot of code will still hit the happy AOT path.

That being said, a JIT does not have to be super slow. The early VMware products, back before there was virtualization support in Intel CPUs, actually had to do some translation as well: https://www.vmware.com/pdf/asplos235_adams.pdf


> That is also why the first launch of an Intel app seems longer sometimes. Intel binary in, ARM binary out.

I mean, we can call it an ARM binary or we can call it an instruction cache. I generally prefer the latter term, because what Rosetta produces are not standalone executables, they're incomplete. I don't know how often the happy path is used, but Rosetta can always be observed doing work at runtime.

JITs are great and Rosetta 2 is incredible! I just can't imagine it working over any sort of shared filesystem, that would add an incredible amount of latency.


Note that the main Python implementation CPython actually does no translation. It's an interpreter with no JIT.


An interpreter is still producing x86 instructions at some point, right? Or else what does the CPU execute? Am I totally misunderstanding how interpreters work?


> An interpreter is still producing x86 instructions at some point, right?

Not dynamically. They just call predefined C (or whatever the interpreter was written in) functions based on some internal mechanism.

> Or else what does the CPU execute?

Usually either the interpreter is just walking the AST and calling C functions based on the parse tree’s node type (this is very slow), or it will convert the AST into an opcode stream (not x86-64 opcodes, just internal names for integers, like OP_ADD = 0, OP_SUB = 1, etc.) when parsing the file, and then the interpreter’s “core” will look something like a gigantic switch statement with case OP_ADD: add(lhs, rhs) type cases, “add” in this case being a C function that implements the add semantics in this language. (The latter approach, where the input file is converted to some intermediate form for more efficient execution after the parse tree is derived, is more properly termed a virtual machine, and “interpreter” generally only refers to the AST approach. People tend to use “interpreter” pretty broadly in informal conversations, but Python is strictly speaking a VM, not an interpreter.)
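
For a concrete picture of that switch dispatch, here's a toy bytecode VM in Swift. The opcode names and encoding are invented for illustration; this is the shape of the idea, not CPython's actual loop:

    enum Op: UInt8 { case push = 0, add = 1, print = 2 }

    // Toy dispatch core: fetch an opcode, switch on it, call host-language code.
    func run(_ code: [UInt8]) {
        var stack: [Int] = []
        var pc = 0
        while pc < code.count {
            switch Op(rawValue: code[pc])! {
            case .push:   // operand byte follows the opcode
                stack.append(Int(code[pc + 1])); pc += 2
            case .add:    // "add" semantics live in compiled host code
                let r = stack.removeLast(); let l = stack.removeLast()
                stack.append(l + r); pc += 1
            case .print:
                Swift.print(stack.removeLast()); pc += 1
            }
        }
    }

    run([0, 2, 0, 3, 1, 2])   // push 2, push 3, add, print -> 5

The only machine code the CPU ever executes here is whatever the compiler emitted for run() itself; the bytecode is just data being walked.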

In either case, the only thing emitting x86-64 is the compiler that built the interpreter’s binary.

> Am I totally misunderstanding how interpreters work?

You’re confusing them with JITs.

If every interpreter had to roll their own dynamic binary generation, they’d be a hell of a lot less portable (like JITs).


Have you tried Rosetta? It can be pretty impressive.


Rosetta is very impressive! I just don't see how they could maintain that by passing instructions back and forth over a shared drive, that would be a ridiculous amount of latency!


The shared drive is just a licensing trick. They do an ioctl over /proc/self/exe as the licensing mechanism. (and that's routed over with virtio-fs to the host)


Didn’t Apple also implement togglable memory models for the M1 (and I’m assuming M2)?


They are exporting some sort of Linux ARM binary under a virtual filesystem mount point that handles execution of x64 images.

Probably that binary passes the instructions into native macOS Rosetta code for translation, but it's also possible that the entire Rosetta codebase was ported to Linux.


If I’m not mistaken this is also available on WSL. I was surprised, while in WSL, to be able to run windows binaries.


Sort of, yes-- there is `binfmt_misc` handling of PE executables and virtual filesystems (akin to VirtIOFS) involved, but no binary/architecture translation like Rosetta.


Ahhh ok, thanks for the clarification instead of downvoting like others.


Does anyone know of work in the opposite direction, that is, running OSX applications on x86 CPUs? I'm seeing a lot of effort toward making Apple's hardware universal so it can run everything else (like Asahi Linux and Rosetta), while I can only imagine how impossible it is becoming to emulate the M1/ARM and run a hackintosh.



What for? There's just Apple software and a few niche things.


Sure. Just reverse the polarity.


> ...while I can only imagine how impossible it is to emulate a hackintosh considering the M1/ARM factor.

Simply not possible. Apple Silicon is ARMv8, but with a number of their own custom extensions on top of it.


They still make Intel macs, right? So it's not impossible.


If I am not wrong, this will allow much faster Docker with x86 base images?

Docker is “just” a binary with cgroups magic.


It’s a whole Linux kernel on the Mac that must run through the virtualization system, so it’s heavier weight than Linux docker.

Sounds like x86 binaries within an already running ARM Linux base image can run much faster.


"heavier" means more footprint in memory and probably more startup time but not necessarily slower once it's running.


> Sounds like x86 binaries within an already running ARM Linux base image can run much faster.

At the moment the Docker desktop VM is set up to use qemu when running amd64 containers (processes) on the arm64 VM.

Rosetta would replace that qemu binfmt setup, or maybe qemu-static wraps rosetta when available.


Hopefully! It's still a major roadblock for me. I have to rent a VPS with x86 Linux just for docker.


For performance reasons? Docker Desktop for macOS does run x86 docker on a VM. Has been doing it for a very long time.


Yes, I tried emulation and it basically doesn't work for me, I guess because of Java.


yeah me too, the x86 docker randomly segfaults and using it is really clunky in general


Semi-related question for those who might know: Is it possible to run, say, Debian for x86 in a VM on M1 Macs? If so, I assume the performance is low because it has to translate every x86 instruction to ARM?


UTM[0] does this, which uses qemu under the hood. It's pretty slow, but for some non-GUI workloads, the performance is acceptable.

[0]: https://mac.getutm.app/


Does this mean I can run MSSQL Server on an M1 Mac?


I mean, you can do that now. So, yes, but not because of this.


I know of Azure SQL Edge, which has significant limitations, but I don't know how to run MS SQL Server on an M1 at anything more than glacial speeds (i.e. full system emulation in Qemu). Is there something I don't know about? Please share!


The Drawbridge environment has some peculiar needs that most JITted envs don't provide. So it's unlikely to work, but I will test that out soon...


If you're speaking about the existing technique MS has developed to run MS SQL on Linux, it doesn't work on M1s in Docker under Qemu user emulation: https://github.com/microsoft/mssql-docker/issues/668

That's 2019. I haven't tested 2017 or 2022, but I assume they don't work either. AFAIK the official advice is to use Azure SQL Edge instead.


qemu-user is indeed hopelessly unsuitable for any kind of heavy use. Use anything else. :)

But that doesn't answer the question of "does Rosetta deal with it properly?".


SQL Server doesn't work under Rosetta.


Oh! Sorry, I didn't realize you were going to try it under Rosetta. That is very disappointing that Rosetta doesn't do any better. Thanks for testing.


How would this affect running an x86 container on a M1/M2 Mac?


Instead of running in qemu (as it currently would), it would run under Rosetta.


It explicitly states it doesn't support bootstrapping - you still need your OS to actually run on arm64


I think it’s already the case. The kernel is arm64 and the amd64 containers are running with qemu. Most containers are fine but I experienced some issues, with maintainers refusing to support this setup.

I understand that you can replace qemu by rosetta now.


Containers are virtualized on macOS. x86-64 containers have used qemu to run the whole system under emulation, which is pretty slow. This workflow continues to require QEMU; however, now you can create an arm64 container and run x86-64 code in that using this technology.


What is an arm64 container? A container is just a namespaced process, so if your binary is x86 it is still technically an x86 container, just run using emulation. And qemu-user-static already does translation a la Rosetta 2, so there probably won't be much difference in speed.


The whole point of providing Rosetta is that it should be significantly faster than plain qemu.


If by plain qemu you mean its user mode emulation, then I think it remains to be seen what the performance improvement will be. User mode can already be OK if it's not doing endianness translation.


I'm gonna guess that if Apple invested this level of effort into it, it's not a "remains to be seen what the performance improvement will be". You can take their graphs and charts and sort of de-reality-distortion-field them to know that it will be much faster than qemu for the workloads Apple knows people run frequently, but still slower than if you had a new x86_64 machine.


I've done a lot of work with running userland qemu for x86_64 in aarch64 vms on macOS 12. The experience is serviceable for simple things, but if I build anything really heavyweight, I find I need a native arch builder. I have little doubt that Rosetta will be an improvement.


Docker contains a Linux kernel, does it not?


Nope, docker just sets up linux cgroups/namespaces and spawns a new linux process, no kernel of its own.


Docker Desktop (ie Docker on Mac) absolutely sets up a VM.


Correct, on Macs it spawns a (usually aarch64) VM using Apple's hypervisor framework API (hw virt) and runs the docker daemon inside of it. Then it's what I described above - just a process running ARM instructions. For amd64 containers it uses the qemu binfmt_misc runner, which does “user mode emulation”, not to be confused with system emulation, which does full-blown hw emulation. So it's already doing similar things to Rosetta 2.


Yes, qemu and Rosetta perform the same basic role here (binfmt handler for x86 bins), but Rosetta should perform significantly better (otherwise there would be virtually no reason at all to build this).


> Docker Desktop (ie Docker on Mac) absolutely sets up a VM.

Docker runs inside the VM, not the other way around.


Something needs to set up the VM that Docker runs in. Docker Desktop is that something. The GUI even lets you tweak the VM resources. I suppose you could do it all on your own, but most macOS users probably just install Docker Desktop and let it handle the VM.


Docker Desktop has pretty bad performance. Vagrant seems to be the way.


Do you know if these changes will allow Vagrant to configure VMs with Intel operating systems to run on an M1?


OP of thread here: there seems to be a lot of back and forth on running arm64 containers and executing Rosetta in them to run something x86, but as far as I can tell (correct me if I’m mistaken, and I think I might be), that doesn’t really help: my production containers are x86, so I would need to compile my container for arm64 and then run Rosetta in it, which is a totally divergent image and kind of useless for running production images locally.


Really good question... is an x86 container already bootstrapping, or can we put it through Rosetta (instead of through QEMU) to get it to run?


The container’s kernel has to run on hardware (that’s what kernels do). If you have an x86 container that means an x86 kernel. Rosetta only runs user mode processes (and presumably in this environment knows how to translate linux system calls?).

If you /have/ to run an x86 kernel (I’m not sure what the reason would be?) you’re stuck with full system emulation a la qemu.

In reality what you probably have is a few x86 binaries that you need, and it sounds like this will let docker, VMware, qemu, whatever, leverage Rosetta w/o having to implement it themselves.

Now, if you’re running something that does codegen at runtime, then Rosetta is pretty much just as screwed as any other tech you might want to use.


Can Apple please just make a simple virtualization front end a la libvirt's Virtual Machine Manager? They seem to have all the bits; they just need to bring them all together.


Seems like they're leaving that part up to the dev community (which I'm fine with). They'll never be able to make every person and every use case happy, so better to just give us the tools to do so ourselves. Check out UTM if you want a GUI for hypervisor.framework powered VMs.


I investigated the Virtualization framework recently and I'm kind of impressed. They really implemented a lot of things. And the coming updates make it even better.

Here's what I've learned:

1. You can run console Linux. But you need to download an ARM ISO, extract the kernel and initrd, and find out the exact kernel parameters (see the sketch below the list). Not a big issue, but it probably can't be done automatically. You can't just boot an ISO.

2. Disk support is bad. No snapshots, etc.; just a single file. I think you could emulate snapshots with APFS CoW support, but that's not as good as something like qcow2.

3. No GUI support (for Linux; for macOS there's GUI support, but I have no idea how it works).

4. Network support is limited. You can run NAT, but then you can't communicate with the host. You can run a bridge, but I don't like that mode and didn't even try it. You can implement userspace networking completely by yourself, but that's a lot of work and I didn't try it either. I think anyone who wants to build a virtualization frontend would have to, though; otherwise basic features like port mapping won't be available.
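
For what it's worth, point 1 looks roughly like this against the framework's API (a sketch; the paths and kernel command line are placeholders for whatever you extract from your distro's ISO):

    import Foundation
    import Virtualization

    // Boot a console Linux guest from an extracted kernel + initrd.
    let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinuz"))
    bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")
    bootLoader.commandLine = "console=hvc0 root=/dev/vda"  // guest-specific

    let config = VZVirtualMachineConfiguration()
    config.bootLoader = bootLoader
    config.cpuCount = 2
    config.memorySize = 2 * 1024 * 1024 * 1024  // 2 GiB
    try config.validate()
    let vm = VZVirtualMachine(configuration: config)

A real tool would also attach a serial console (VZVirtioConsoleDeviceSerialPortConfiguration), storage, and networking before starting the VM.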

Now, from what I've read in this thread: ISO boot is coming, UEFI support is coming, Linux GUI support is coming. And I guess with enough tinkering it should be possible to run the other BSDs, as they just leverage standard virtio protocols.

Windows virtualization should also be theoretically possible. I mean, it's just another UEFI OS. You would need virtio drivers, but Red Hat wrote virtio drivers (for x86, but it should be possible to compile them for ARM).

I just wish this virtualization framework were more extensible. Right now it's use it or drop it. For example, I didn't find any way to implement qcow2 support while using it for other things.


How feasible is the x86 Windows virtualization? I've looked into UTM but it seems to be quite slow. How easy would it be to port this to Windows, then? "Theoretically possible" could mean anything, but it's a shot in the dark, haha.


I think this WWDC session will discuss that exact scenario. From the short description, I think they are still going to let the community handle the wiring of this into a VM management application. But, hopefully, this tutorial will make this process much clearer. I’ve tried to follow some of the older Framework examples, but I didn’t make it very far.

https://developer.apple.com/videos/play/wwdc2022/10002/

HN Thread: https://news.ycombinator.com/item?id=31645000


Is this the first time (at least this decade) that Apple is shipping Linux binaries?


This is probably the first time macOS includes a Linux binary, yes.

Outside of macOS, Apple has been providing Linux binaries of the Swift toolchain for years: https://www.swift.org/download/


To simplify my understanding: can we position Apple as a) doing WSL1, b) even less, just providing a tool to do WSL1, or c) the same two options for WSL2?

Or is there a simpler mental model, from a non-systems-land programmer's point of view?


Somehow, I wish there were an easy way to do the opposite: creating and running ARM images on Intel laptops. This would allow me to run images easily and safely on Graviton-based EC2 instances.



libhoudini is how Atom tablets ran ARM apps for Android on Intel ISA chips. It can be done. Rosetta is just a ripoff of libhoudini. Nothing new.


Isn't Rosetta 2 the opposite of libhoudini (translating from x86 to ARM, instead of ARM to x86)? I also fail to see how an application that has its roots in a PPC-to-x86 translator (Rosetta 1) from 2006 is somehow a rip-off of an application from around 5 years ago.

If anything libhoudini is a rip off of Rosetta.


See DEC's FX!32 (running x86 WinNT 3.51 binaries on WinNT 3.51 workstations with Alpha AXP CPUs) and HP's Project Dynamo. There are probably even earlier examples, but FX!32 was roughly equivalent to Rosetta, and shipping production code in the 1990s.


You realize that Rosetta is over 16 years old right?


If you're talking about Docker images, you can already do that with buildx multiarch builds.


interesting. so this fixes the major performance issue by making rosetta user-installable inside user-created vms.

could still end up with some "works for me" but still broken on prod issues resulting from the underlying architecture being different (for intel backing infra), and also some questions around how it would work in practice in terms of integrating with production container workflows, but seems like a boon for anyone who is struggling today with intel vm or container performance issues on apple silicon.

nice!


Basic question: Why is this faster than running Intel Linux apps in an emulated Intel Linux VM? Because Rosetta is faster than QEMU, and you only need to emulate one application rather than the entire kernel?


Emulation of an x86 kernel level means you lose the hardware-assisted virtualization support you'd get with an ARM kernel, and emulating an MMU is slow (among other things.)

Technically this would be replacing QEMU user-mode emulation. Which isn't fast in a large part because QEMU being portable to all host architectures was more important than speed.


a lot of the performance gains in rosetta 2 come from load time translation of executables and libraries. so when you run a binary on the host mac, rosetta jumps in, does a one time translation to something that can run natively on the mx processor and then (probably) caches the result on the filesystem. next time you run it, it runs the cached translation. if you're running a vm without support inside the guest for this, then you're just running a big intel blob on the mx processor and having to do realtime translation which is really slow. (worst case, you have an interrupt for every instruction, as you have to translate it... although i assume it must be better than that. either way you're constantly interrupting and context switching between the target code you're trying to run and the translation environment, context switches are expensive in and of themselves, and they also defeat a lot of caching)
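
as a rough sketch of that translate-once-then-cache idea (purely conceptual, not apple's actual pipeline; the translator function here is a stub):

    import Foundation
    import CryptoKit

    // stub standing in for a real x86-64 -> arm64 binary translator
    func translateX86ToARM64(_ x86: Data) -> Data { return x86 }

    // translate a binary once, keyed by its content hash; reuse it on later runs
    func cachedTranslation(of binary: URL, cacheDir: URL) throws -> URL {
        let data = try Data(contentsOf: binary)
        let key = SHA256.hash(data: data).map { String(format: "%02x", $0) }.joined()
        let cached = cacheDir.appendingPathComponent(key)
        if !FileManager.default.fileExists(atPath: cached.path) {
            try translateX86ToARM64(data).write(to: cached)
        }
        return cached
    }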


It’s because Rosetta is able to do AOT compilation for the most part. So it actually converts the binary rather than emulating it at run time.


Correct, plus Rosetta is substantially faster than QEMU because of AOT (as others mentioned) as well as a greater focus on performance.


There is a specific piece of x86-64 software I need for my job, a back end service. It meant there was no way for me to use an M1 Mac for my job. I wish this was available last year, maybe I could’ve upgraded to an M1 based Mac.


when the m1 initially shipped, i was watching this issue like a hawk. rosetta was really cool technology, but they seemed to really miss the mark in terms of many developer workflows that involve intel vms and containers.

it was particularly interesting to me as i always got the sense that a lot of the success of the intel era macs was carried by their popularity and (and recommendability) amongst the internet development and engineering crowd. it seemed to me like a mistake to potentially alienate that group.


I figured it was largely a time management thing. They could only do so much (I mean they moved the whole base OS architecture… again).

Glad to see it.

Hope that by the time I get my next work laptop it’s still there, or better yet we’ve moved to a newer version of the software where this isn’t a problem.


IME podman (or docker) + qemu-user-static covers nearly everything. There are some issues with emulating older Go binaries, but most C/C++ x86 code works as-is.


what's the performance cost vs. intel code on the host under rosetta2 and native arm code on the host?


I think it depends greatly on what instructions are being used (SSE, AVX, etc. vs NEON). Some benches I’ve seen boast that Rosetta 2 is only a 30% hit, but that’s doubtful to me based on my own experience running Factorio =)


I am in the same boat and this news makes me very happy. I hope it is reliable by the time I have to replace my x86 MacBook Pro one day.


I don't know about "production container workflows" on a Mac, however the only thing that needs to be done is to use the new Rosetta binfmt handlers instead of qemu (which Docker Desktop already sets up for you). I imagine Docker will have an option to use Rosetta instead of qemu.


I don’t understand this part

> The remaining steps required to activate Rosetta in a Linux guest aren’t commands that your app can execute or that you can script from inside your application to a Linux VM; the user must perform them either interactively or as part of a script while logged in to the Linux guest. You must communicate these requirements to the user of your app.

That means Docker cannot do it magically for me?


Most likely Docker Desktop would have an option to toggle between Rosetta and qemu. Docker Desktop manages the vm, so it would be up to it to mount the Rosetta share into the vm and register it with binfmt.


they specifically say the user must do it themselves, for some reason? but let's see how it actually works


"The user" being the entity that sets up the vm.

It requires the share to be added to the vm and then a command to be run to register it with the Linux kernel.


ahh I misunderstood that


Do any hypervisors support this yet? (i.e. can I try it out right now?)


You're asking about something that was just released in a beta version of an OS announced a couple hours ago.


Correct!


There will be a session posted tomorrow from Apple on how to use their sample to run a Linux VM with Rosetta support. I wonder if the sample on this page will do the trick by itself?

https://developer.apple.com/documentation/virtualization/run...


Virtualization.framework is the hypervisor itself; you only need a simple tool to launch it. You could probably just copy-paste the provided code into e.g. https://github.com/gyf304/vmcli. However, the macOS 13 beta currently seems to be available only to registered developers.


It seems that very little on the part of the hypervisor is required, so it shouldn't take them long to add it. They basically just have to call a function to create the virtual filesystem device for the guest.
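
Judging by the linked doc, the host side really is just a couple of calls. A minimal sketch, assuming an existing VZVirtualMachineConfiguration named config:

    import Virtualization

    // Expose the Rosetta runtime to the guest as a virtio-fs directory share.
    // "ROSETTA" is just the tag the guest references in its mount command.
    // (Real code would first check VZLinuxRosettaDirectoryShare.availability.)
    let rosettaShare = try VZLinuxRosettaDirectoryShare()
    let shareDevice = VZVirtioFileSystemDeviceConfiguration(tag: "ROSETTA")
    shareDevice.share = rosettaShare
    config.directorySharingDevices = [shareDevice]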


No, this requires changes to use new APIs. I expect the major VM developers to adopt it in the coming days and weeks.


So this lets me run windows steam games on apple silicon, with a low performance penalty?

x86/Win game -> Steam for Windows(?) -> Linux VM Guest OS -> MacOS 13 VM Host OS -> Mac HW


How would this help windows specifically, or are you thinking steam+wine? In which case presumably steam+wine on Mac would have lower perf overhead.

If your thinking involves a Windows VM under Linux you're SOOL, as that has the same problem as a Win/x86 VM on a Mac: it's full system emulation - e.g. qemu, etc.

Rosetta can translate user-mode code, because everything either can be translated to native code /or/ is a system call, and apparently Rosetta knows how to translate Linux syscalls.
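
Part of that is simply that the two Linux ABIs number their syscalls differently, so at minimum there's a remapping along these lines (a sketch; these three numbers are the real ones):

    // x86-64 Linux syscall numbers mapped to their arm64 equivalents
    let syscallMap: [Int: Int] = [
        0:   63,  // read
        1:   64,  // write
        257: 56,  // openat
    ]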


For some values of "low" anyway.


I guess what I don't get is the:

Steam for Windows -> Linux VM Guest OS

You can double-click the Steam for Windows installer in a Linux GUI and it installs, right now? Using WINE? What magic is this?


Is there any way to capture or view Rosetta's output, i.e. to use Rosetta as a tool for porting x86 code to the M1?


You’d surely only need that if you had significant raw assembly to translate, in which case you’d have to assume that there were technical reasons for exact memory layout in x86, or perf reasons where you’d presumably want handcoded arm64 assembly?


Yes, that is exactly the case. I am involved in the effort to port Clozure Common Lisp to the M1 and it has a lot of bespoke assembly code.

(The x86_64 version of CCL actually works under Rosetta (more or less) which absolutely blows my mind since it's a compiler that emits x86 code at run-time!)


One issue you'd hit is that Rosetta is going to do fairly linear translation, and isn't going to improve calling conventions to the native one (in practice not terrible, as x86_64 mercifully has enough registers for all arguments in the general case).

In a sense if you were to go through the hand assembly one at a time and do 1:1 substitution with ARMv8 equivalent you'd probably be able to do a bit better, just because you don't have to consider behaviour of generic x86 code.


Will this enable tensorflow in docker on apple silicon?


tensorflow is available natively on Apple silicon as of version 2.8.0.


Soooo does this mean that Docker will suck less on macOS, Real Soon Now?


I believe the main issue is file syncing between macOS and the container host OS.

I don’t have an M1 Mac, but on my Mac a standard Linux VM has no impact on performance or battery life. The minute I use Docker the machine becomes sluggish and (literally) halves the battery life, so I now avoid it as much as possible.


Have you tried enabling "Use the new virtualization framework" in experimental features? This removed the background CPU usage for me.


Try out Colima with the experimental 9p mount type. Network performance is improving. It's as good as it will likely get on the platform.


I have an M1 MacBook Pro with 16 GB RAM, and qemu balloons in RAM when I try to stand up our backend microservice stack locally - it uses RAM until there's none left, and falls over. My teammates running Linux report ~30 MB of RAM usage by the Docker processes to run the same containers.


Vagrant solved most of my problems. Docker Desktop was unusable to me as well.


This can't boot x86-64 containers, but you can create an arm64 container and run x86-64 code in it.


I don't think that's right - containers are just userspace code, nothing to boot.

By my reading of TFA, your container runtime (eg Docker for Mac) should be able to provide an ARM VM, and use this Rosetta feature to run amd64 containers directly.


The containers that you and everyone else think of use Linux distros as a base, so yeah, anything other than Linux actually has to boot a VM to use them. That’s what e.g. Docker does on macOS.


Why does the rosetta binfmt_misc setup exclude starting amd64 containers? It's the same functionality the Docker desktop arm64 VM currently uses to run the process for an amd64 container via qemu (or any non arm64 container).


Docker on macOS already has some kind of qemu based solution for running x86_64 containers on m1.

Potentially this could be used as an alternative?


It's QEMU emulation. It's slow. It also segfaults a lot. I am able to run a select few containers on it, but I needed to sort out native arm64 for the vast majority.


The main reason for segfaults is that not all syscalls (syscalls on linux are different per architecture - not even 32bit and 64bit x86 use the same) are implemented in a 100% compatible way.


Yes, this is exactly what Apple has done - created Rosetta that handles Linux ELF binaries, translates code and syscalls.


Yes, Docker on macOS can switch from qemu-user to this. It uses the same binfmt mechanism, so it should be mostly a drop-in replacement.

This is only available in combination with Virtualization.framework, but Docker is already migrating from Hypervisor.framework to Virtualization.framework.


I have to admit to a little bit of trepidation about the Rosetta mount point. Will this work for Docker without explicitly passing that mount point through to the containers?


The way this would work is Docker would be setting this up for you (maybe through some option to switch between qemu and Rosetta).

The containers should not need the mount. What happens is Rosetta gets registered with the kernel (binfmt_misc) to execute x86 binaries. This is the same mechanism that allows seamless qemu support.
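
For a sense of what that registration amounts to, here's a conceptual sketch written as a Swift-on-Linux snippet run as root inside the guest. The magic/mask are the standard x86-64 ELF header match; the /media/rosetta/rosetta path and the CF flags are my assumptions about the mount point and recommended flags:

    import Foundation

    // binfmt_misc rule: name, magic + mask matching an x86-64 ELF header,
    // the interpreter to invoke, and flags (C = credentials, F = fix binary).
    let magic = #"\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00"#
    let mask  = #"\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff"#
    let rule  = ":rosetta:M::\(magic):\(mask):/media/rosetta/rosetta:CF"
    try rule.write(toFile: "/proc/sys/fs/binfmt_misc/register",
                   atomically: false, encoding: .utf8)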


Yeah I'm not entirely sure how they're going to make this work, but I kind of understand why it's so weird, because there's really not any very good way to communicate between macOS userspace and some arbitrary kernel running inside the VM…


If I cannot use Apple Boot Camp to boot into another OS and avoid macOS, I'm not really interested in non-Intel macs.


The effort to support Linux has made decent progress. https://asahilinux.org Apple said Microsoft would have to cooperate to get Windows working. https://www.macrumors.com/2020/11/20/craig-federighi-on-wind...


Microsoft has a version of Windows (and even Visual Studio Pro) that runs on aarch64, specifically for Qualcomm SoCs, so it wouldn't be completely outside the realm of possibility. But I suspect they won't do it, because they're probably more interested in pushing Windows-on-ARM enthusiasts to Microsoft's and their partners' own hardware.


From what I've heard, Microsoft and Qualcomm signed an exclusive deal for Windows on ARM for a number of years. Microsoft could theoretically release a Windows version for the M1 but they'd be breaking their contract with Qualcomm which can only go down badly.

However, in a few years' time the deal runs out and it might be possible again, unless they can think of a reason to renew the deal (i.e. inclusion of Qualcomm's optimised x86/amd64 translation layer).


The former Apple M1 team is now at Qualcomm/Nuvia, so expect M1-equivalent Windows Arm laptops "soon".


This is a huge overstatement. A few engineers went to Qualcomm, not the "Apple M1 team." Market forces at work. However, I do suspect they will help Qualcomm come closer to being competitive. How close they will get in performance remains to be seen. The M2 is already about 20% faster with the same number of cores.


Qualcomm paid $1.4 billion for Nuvia, they must have hired more engineers and/or invented new IP.

https://www.androidauthority.com/qualcomm-nuvia-1192440/

> Nuvia was founded back in 2019 by former Apple silicon executive Gerard Williams III, along with Manu Gulati and John Bruno. Williams was the chief architect behind several major Apple CPUs and chipsets from 2010 to 2019 .. the Cyclone, Typhoon, Twister, Hurricane, Monsoon, Vortex, Lightning and Firestorm CPUs. These CPUs were featured in the Apple A7, A8, A9, A10, A11, A12 series, A13, and A14 respectively. The Nuvia founder’s profile also notes that he was the chief architect for Apple’s Mac hardware ... Going back even further, the Nuvia co-founder worked at Arm from 1998 to 2010, working on CPU tech like the Arm Cortex-A8 and Cortex-A15 CPU cores.


> Qualcomm paid $1.4 billion for Nuvia, they must have hired more engineers and/or invented new IP.

This assumes that valuation is rational. It is not, and recent history is full of companies over-paying or under-paying for their acquisitions. The proof of the pudding will be in the eating, and the pudding will still take a while to get ready.


They promised M1-level performance on chips that ship at the end of 2023. So they'd initially still be three years behind Apple.

https://www.tomshardware.com/news/qualcomm-confirms-nuvia-ar...


M1 is a fixed target, everyone will get there eventually. For example, AFAIK Intel has plans to have something competitive with the M1 in terms of performance per watt at some point in 2023. The question is whether Qualcomm will be able to get where Apple will be at that time, because nobody will care about “M1-equivalent” in 2 years.

And yeah, Nuvia is more “a handful of engineers” (though quite good at their job) than “the former Apple M1 team”. It took Apple billions upon billions and the better part of a decade. These things are very long term plans, and at this scale it depends at least as much on high-level strategy as on engineering.


Windows 11 for ARM apparently runs nicely in a VM on ARM Macs; a coworker of mine is using it for small projects, including x86 emulation - his project requires some x86 tools.



