VirtualBuddy: Virtualize macOS 12 and later on Apple Silicon (github.com/insidegui)
218 points by tosh on June 25, 2022 | hide | past | favorite | 62 comments



Is virtualizing aarch64 macos via qemu on aarch64 Linux within the realm of possibility? I know it can be done for x86-64, but would be pretty cool if an aarch64 kvm hackintosh would be possible. I have an Nvidia Jetson AGX and honeycomb lx2 which in theory could be up to the task.


I fooled around with this for a weekend, learnt a lot about qemu and friends.

So there is this: https://blogs.blackberry.com/en/2021/05/strong-arming-with-m...

And there is also work around getting iOS to boot, which could pave the way for future work. https://github.com/sickcodes/Docker-eyeOS


There are additional instructions available only on Apple chips, beyond the aarch64 standard, so I don't think this could be done in hardware (someone correct me if I'm wrong, because this isn't my area)

Of course it's possible in software in principle but obviously that would be awful.


Apple has their own custom conventions and ABI (https://developer.apple.com/documentation/xcode/writing-arm6...). I believe you are right that they have their own custom instructions too. The Asahi Linux project has good documentation on their wiki about the hardware.

Also important here is the bespoke ASIC functionality on their CPUs - as it's a system on chip, you'll likely need to emulate the full SoC, including all the peripherals the system expects to be there. Unlike x86_64, ARM isn't really designed to do device discovery on extensible buses in the same way. Your peripherals should all be in the same place, many being on the chip on internal interconnects.

Emulating anything would be possible with enough effort. But you'd likely need to emulate a whole host of extra bespoke silicon functionality on the SoC to get it working. For example, given every ARM Mac has a hardware neural engine, you might find unmodified Mac OS assumes it's there and usable for core functionality (or does so in future). It would probably be a lot of work to emulate all of the extra SoC stuff to the point you could boot the OS unmodified. But nothing is impossible - I think iOS has been run in third party emulators etc.


ABI, which calling conventions are one part of, only affects how different pieces of code within the operating system talk to each other. It’s not uncommon to customize it - for instance, Windows does too. [1] Having a custom ABI does not present any obstacle to emulation.

The rest of what you said is right, though.

[1] https://docs.microsoft.com/en-us/cpp/build/arm64-windows-abi...


Yeah, agreed. I think of every ARM SoC as more like its entire own architecture, for OS compat purposes. A Raspberry Pi SoC will require an entirely different level of compatibility to an M1 Mac, or a Snapdragon-based Chromebook.

I think we've been somewhat spoilt by x86 and how easily loads of different machines will boot the same OS image. It's really, really different in ARM land.


To ease execution of x86-64 applications on Apple Silicon, the Apple processor team implemented the x86 memory model (TSO) along with a series of additional extensions.


Those aren't an issue really.

macOS VMs eschew most Apple extensions anyway (no AMX there, for example).

The big roadblock that we've hit is that arm64 macOS has no software rasteriser for rendering the GUI. The result is that unless you implement Metal and the paravirtualised GPU infrastructure, you're stuck in console mode.


> The big roadblock that we've hit is that arm64 macOS has no software rasteriser for rendering the GUI. The result is that unless you implement Metal and the paravirtualised GPU infrastructure, you're stuck in console mode.

And this has been just as much of an issue for virtualizing x86 macOS… although it does have a software rasterizer, it's almost unusably slow. The only way you're going to get decent performance with that is with passthrough of a GPU that macOS has drivers for (which for modern GPUs, means some Radeon 500 series, Radeon 5000/6000 series, or Intel iGPU up through Coffee Lake).

Since the early 2000s, real Macs have practically never been configured in such a way that there's no usable GPU, and so the software rasterizer never got any attention since it's an absolute-last-resort fallback. With ARM Macs, all models have some form of usable GPU available (presumably, even the forthcoming M-series cheesegrater tower will have a few GPU cores in its SoC) and they don't care to support virtualization on anything but macOS so they dropped the rasterizer entirely.


Of course it's possible in software in principle but obviously that would be awful.

The Hackintosh community emulated SSE2/3 to allow OS X to run on CPUs that didn't have those extensions, and AFAIK it worked reasonably well:

http://www.tutilapia.com/2011/11/a-little-about-hackintosh-i...


That’s all well & good, but it’s a completely different game when you’ve got custom secure enclaves & “AI” blocks to deal with.


Then it may be possible. Pure software emulation would of course work, but it would be very slow.


It's probably about as possible as emulating anything else with qemu, which is to say, definitely possible after some more development effort.

Some interesting discussions on HN from when the M1 first came out:

https://news.ycombinator.com/item?id=25064593

https://news.ycombinator.com/item?id=24071371


I'd be way more excited about a stable and fully featured Linux port to M1+


That appears to be in progress: https://news.ycombinator.com/item?id=25649719


Lot of negativity in that comment section.


Apple provides their own Hypervisor framework and Virtualization framework. Roughly 50 lines of code and you can boot up a macOS VM that they support.
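For context, a condensed sketch of what those ~50 lines look like with Virtualization.framework. This is not a complete program: the bundle path is hypothetical, and real code restores the hardware model and auxiliary storage that were created when the guest was installed.

```swift
import Virtualization

// Hypothetical VM bundle layout; a real tool persists these files at install time.
let bundle = URL(fileURLWithPath: "/path/to/VM.bundle")
let hardwareModelData = try Data(contentsOf: bundle.appendingPathComponent("HardwareModel"))

// Mac-specific platform: hardware model, machine identifier, auxiliary (NVRAM) storage.
let platform = VZMacPlatformConfiguration()
platform.hardwareModel = VZMacHardwareModel(dataRepresentation: hardwareModelData)!
platform.machineIdentifier = VZMacMachineIdentifier()
platform.auxiliaryStorage = VZMacAuxiliaryStorage(contentsOf: bundle.appendingPathComponent("AuxiliaryStorage"))

let config = VZVirtualMachineConfiguration()
config.cpuCount = 4
config.memorySize = 4 * 1024 * 1024 * 1024 // 4 GiB
config.platform = platform
config.bootLoader = VZMacOSBootLoader()

// Disk image for the guest's root volume.
let attachment = try VZDiskImageStorageDeviceAttachment(
    url: bundle.appendingPathComponent("Disk.img"), readOnly: false)
config.storageDevices = [VZVirtioBlockDeviceConfiguration(attachment: attachment)]

// Paravirtualized Mac graphics so the guest can render its GUI.
let graphics = VZMacGraphicsDeviceConfiguration()
graphics.displays = [VZMacGraphicsDisplayConfiguration(
    widthInPixels: 1920, heightInPixels: 1080, pixelsPerInch: 80)]
config.graphicsDevices = [graphics]

try config.validate()
let vm = VZVirtualMachine(configuration: config)
vm.start { result in
    if case .failure(let error) = result { print("Boot failed: \(error)") }
}
```

In a real app you'd also attach the VM to a `VZVirtualMachineView` to show the display, which is roughly the extra layer VirtualBuddy's GUI provides.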


Comparing the example dagmx has linked and the source code[1], it looks like that's exactly what this tool is doing.

Except, of course, this tool also adds a GUI and easy configuration options.

[1]: https://github.com/insidegui/VirtualBuddy/blob/main/VirtualC...


This has been around for a while, but I recall it was incredibly slow compared to even qemu. Has that improved?


New as of WWDC22


Virtualization.framework exists since Big Sur?

https://developer.apple.com/documentation/virtualization


Any tutorials for this?



Note that this is using Apple's Virtualization.framework to do the heavy lifting.


What I’d really like to see is a lightweight VM using the Virtualization.framework to run 32-bit pre-Catalina apps on current macOS. (On Intel machines.)


I liked this session at WWDC22:

"Create macOS or Linux virtual machines"

https://developer.apple.com/videos/play/wwdc2022/10002/


Does Rosetta still work in the virtualized MacOS when using Apple’s virtualization framework?


Yes: Just now I checked TextEdit's "Open with Rosetta" box in Get Info, launched it, and saw it come up as an Intel process in Activity Monitor.

This was in a Ventura beta 2 VM run with Apple's virtualization sample project: https://developer.apple.com/documentation/virtualization/run...
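For anyone scripting that check rather than eyeballing Activity Monitor: a process can ask whether it is running translated under Rosetta via the `sysctl.proc_translated` sysctl, which Apple documents for this purpose. A small Swift sketch:

```swift
import Foundation

// Returns true if the current process is running translated under Rosetta 2.
// "sysctl.proc_translated" yields 1 when translated, 0 when native; the call
// fails (returns -1) on systems where Rosetta does not exist.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    let ret = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
    return ret == 0 && translated == 1
}

print(isRunningUnderRosetta() ? "Running under Rosetta" : "Native")
```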


One limitation I have observed is that this VM can't host another VM itself.


Reportedly M2 supports nested virtualization though.


But why would you ever want to?


Docker Desktop for macOS requires a Linux VM on the macOS host, so nested virtualization is required if you want to use Docker Desktop inside the macOS guest.

Other tools like multipass, kind, and minikube on the guest will not work either.


It’s helpful when spinning up labs, or testing infrastructure deployments


Works in Linux VMs too. (on Ventura)



Anyone using Parallels to virtualize macOS on M1 Macs?


Yes, I am. It's quite smooth. It has its problems, but for the most part it's alright. What did you want to explore further about this?

Edit: many people download the default Parallels build; you need to download the version from this page https://www.parallels.com/blogs/parallels-desktop-apple-sili... to get access to M1 virtualization.


Does Rosetta still work in the virtualized macOS in Parallels?


Yes.


Parallels runs a similar thin wrapper on top of the OS-provided VM API, which looks somewhat like: vm = createVM([device list]); vmWindow = createVMWindow(vm); vm.run();


Yes, and it's terrible.

You can't sign into iCloud and you can't maximize a VM to 4K resolutions. It's usable, but for $100 they could do much, much better.


Those are well-known limitations of Apple's virtualized OSes. Solving them would involve using a different virtualization framework and a lot of reverse engineering.


Yes, I use macOS as a development VM on a maxed-out 16-inch M1 MacBook Pro. It all works as expected, except you don’t have any VM settings (e.g. how much RAM/CPU you want to give the VM) and Docker doesn’t run inside the VM.


You can change some of the settings by editing an ini file.

https://kb.parallels.com/en/128842


Oh, I didn't know that. Thanks a lot!


I’ve been using it on Monterey. It’s not nearly as optimized as virtualized Windows or Linux on the same hardware (most Parallels features like auto-scaling not available yet), but I think the situation should improve with Ventura.


It works! I can even run x86 binaries for Windows in a VM. Don't ask me how that works though


Microsoft has their own x86 emulator.


anyone know if any progress has been made on running hackintosh on amd chips like epyc or threadripper?


AMD CPUs mostly work, with some exceptions: https://dortania.github.io/Anti-Hackintosh-Buyers-Guide/CPU....


AFAIK it's been possible for a while. I ran it on my first gen Ryzen 1700X and I've seen multiple threadripper builds


Oddly enough, I was a ThreadRipper early adopter and for me macOS was more stable than everything else. I dual booted macOS and Slackware current on it from 2017 until early this year. That machine is retired now.


It's a bit peculiar but yeah, if you find a hardware combination that macOS "likes", it can be surprisingly solid. Back in the early 2010s I had one laptop that no version of Windows or Linux distribution particularly liked, but ran great with hackintoshed 10.6-10.9.

The only way to make it usable in other operating systems was to disable GPU power management under Windows, which turned it into an oven, or to run Nouveau drivers under Linux which had serious performance problems, graphical artifacting, etc. The Nvidia drivers bundled with macOS had no such problems, running it at full performance with proper power management and no lockups.


Yeah, I increasingly believe every OS is picky. People are just accustomed to a lot of things never working quite right, to the extent that they sometimes don't even notice.


This looks like a better open-source option:

https://github.com/KhaosT/MacVM


UTM should support most of the same features (aside from ease of use for installing all macOS versions). It also now supports paravirtualisation using the Hypervisor framework.

https://github.com/utmapp/UTM


> This looks like a better open-source option

Comparing the code between the two, VirtualBuddy seems like the better option to me (albeit not by a lot). They are both lightweight wrappers around macOS’s built-in hypervisor, so I’m really not sure what you’re going on about.


Since you have an opinion about this, can you explain why one option is better than the other?


VirtualBuddy is still experimental


> VirtualBuddy is still experimental

Really? You’ve gotta be trolling, right?


Sounds like you're the one trolling when the site says this upfront.

> WARNING: This project is experimental. Things might break or not work as expected.


Look at the code for the one he’s suggesting. It’s essentially the same. One is just more upfront about giving its visitors a clear understanding. Taken out of context I can maybe see what you mean, but come on, it’s not that much effort to keep up, is it?



