So this is pretty misleading. It's really a full system emulator (qemu) running inside Docker, using root privileges on the container that make the isolation very weak (--privileged).
It also uses hardware assisted virtualization (KVM) which is not going to be available most of the time Docker is.
You can think of the Docker platform itself as a subset of the Linux platform, with many common features removed by default... SYS_PTRACE and cgroups come to mind as not allowed within the container. (This "Docker as a subset of Linux" is also what you end up getting from most "Docker as a service" platforms offered by clouds, including kubernetes. I'm referring to AWS Fargate, Google Cloud Run, GKE, AKS here.)
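For example, SYS_PTRACE isn't in the default capability set, so debugger-style tooling typically needs it granted back explicitly (the image and binary names here are just placeholders):

$ docker run --rm --cap-add=SYS_PTRACE my-debug-image gdb ./my-binary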
So don't think of this as macOS in docker wherever docker runs.
What would be a lot more analogous to macOS in docker would be running Darling in docker: https://www.darlinghq.org/ ... if that could be made to work for the entire system (highly unlikely)
Darling is more like Wine in that it runs native executables for one platform as native processes on another platform using a compatibility layer. Wine, by the way, definitely works quite well inside Docker.
Also, one final thought. I wonder if you could get macOS to boot in QEMU without hardware assisted virtualization. Then you could probably run this in a fully isolated container again. The performance would likely be abysmal though!
I don't care though. What I care about is that it's a pain in the butt to do CI/CD pipelines for an application with iOS/OSX support. So if someone has a headless OS X contraption on offer, I want to hear some more about it.
The last time I set this up, a manager decided he wanted a laptop like the rest of us instead of the iMac he got. He asked semi-jokingly if someone wanted the machine for anything and I said "Yes, I do" before he even got the sentence out.
There was just enough memory on the machine for me to set up a few Jenkins agents on it, one for Safari, the rest using the Selenium-maintained docker images.
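For reference, those Selenium images are basically a one-liner to stand up; the --shm-size bump is the only non-obvious flag, since Chrome tends to crash against Docker's default 64 MB /dev/shm:

$ docker run -d -p 4444:4444 --shm-size=2g selenium/standalone-chrome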
> So if someone has a headless OS X contraption on offer, I want to hear some more about it.
This project relies on OSX-KVM, which is a thing that already exists. The dockerization of that project (what this post is about) is a gimmick as described by GP.
> This Dockerfile automates the installation of OSX-KVM inside a docker container.
Except it automates the fetching of the macOS installation media and launching qemu, which is exactly what OSX-KVM already does. [1] This project does nothing additional to automate the actual installation of macOS inside the VM.
I wish Apple supported installation automation like Microsoft does with sysprep (or Linux with kickstart/preseed). The best I've found is Arduino USB devices that pretend to be a keyboard and mouse to manually advance the installer, which is super lame "automation"
> The best I've found is Arduino USB devices that pretend to be a keyboard and mouse to manually advance the installer, which is super lame "automation".
Definitely not great as an actual process, but this sounds like a super cool project!
If we are going to divert into Zen koans and non-dual philosophies, then it is definitively "yes it did". The falling away of the perception of an intrinsically-existing self doesn't change that there's still perception. It would make a sound the same way there is a sound made when one hand claps.
Back on topic though: it's too bad Apple doesn't allow licenses for running things headlessly like this.
You over-philosophized to the point of bringing their kitschy koan off-topic.
How I interpreted the use of the koan: Apple has no history of legally chasing those who virtualize their operating system; as this is a non-issue thus far -- who cares?
The hammer may fall one day, but so far the attitude is 'why worry about a legal response that doesn't seem to exist?'.
The answer, of course, is that anyone who builds product based on a legally grey area is at risk when that area begins to crumble.
>it's too bad Apple doesn't allow licenses for running things headlessly like this.
agreed, but I think Apple wants to drive everyone to a hardware solution.
At one point 'enterprise-ish' hardware was offered, but now it seems that it'd be in their interest to offer virtualization licenses while trying to smooth out whatever troubles exist between their software and the major VM hypervisor offerings out there -- mostly since there are huge holes in their hardware offerings for those seeking to do 'enterprise-ish' things en masse.
I'd interpret that koan to mean that those in isolated woods can make noise without being heard.
Say I have a personal project with a few dozen users. Somebody reports a bug on osx -- I don't do windows, I don't do macs, so I'd need to rely on my small community to fix it. With a pirate copy, I'd be able to do the fix -- and none would be the wiser.
Scale that up to a company, put it up on a public repo's CI, and that's when people might hear the tree fall.
With 40 years of Apple development under my belt, I can safely say that Apple used to be great about supporting older hardware.
When they were on rough times, everyone understood about needing to break compatibility with old hardware (which nobody really cared about anyway because, despite its horrendous price, it was super obsolete due to the rate of Moore's law back then). But nowadays, Apple is invalidating old hardware platforms for superfluous reasons, like abandoning 32-bit apps, enforcing their OEM cryptographic authority (with the T2 chip), getting into it with nVidia (granted, nVidia screwed up big time with those GPU chips that blow), or more recently getting into it with Intel (which has caused agonizing supply-chain issues for Apple). They are no longer that good about helping people support Linux either -- you would think that when they deprecate a machine, they would at least have the decency to open source all of its drivers... (and have pre-negotiated the legal rights to do so).
That is exactly what I was thinking of. I don't want to buy dedicated hosts from MacStadium. I have native iOS builds and Unity builds that would work well with this. Isolation is not as big of a concern.
I too would like to hear if anyone has been able to pull this off.
Licensing is impossible if you don't run this on Apple hardware. The macOS license only allows you to install and run macOS on devices that came with it, and this restriction covers virtualization too.
However, if you buy a Mac, install Linux and use it to run the pipelines, that'd be strictly legal I believe.
If we put aside the licensing issues, isn't QEMU notoriously slow? Building iOS apps seems pretty slow even on a high-end MBP, so I'd be curious how long it took under QEMU.
Well, the title "A full system emulator (qemu) running inside Docker, using root privileges on the container that make the isolation very weak" would be a bit too long for HN, what do you think?
If you have container orchestration in place, being able to use it to run VMs via qemu is actually incredibly useful and isn't really much of a yikes. Sure, you're losing the container's isolation features, but you have a VM there, which is even stronger isolation.
We do CI for our VM images in our kubernetes clusters. The build system already was in kubernetes, so putting the OS image testing in there was a big win.
The benefit of doing this is also that on a personal machine you can start playing with an OSX vm with a single docker run command with no other dependencies, and many people already have docker set up, whereas standardized qemu/virtualization tooling is now much less common on developer machines
I don't think you understand what this project is.
You have to fiddle with BIOS and kernel module parameters, install packages, configure KVM, etc on your docker host for this to work. It's not something that you can just throw into kubernetes, especially if you don't manage the kubernetes deployment yourself.
> The benefit of doing this is also that on a personal machine you can start playing with an OSX vm with a single docker run command with no other dependencies
There _are_ external dependencies that you have to set up manually. It's the same amount of work to set up on docker or to use a real VM, so I can't imagine why you would prefer this method.
To run 64 bit VMs, you always needed to turn on hardware virtualization in the bios. To configure kvm, all you need to do is "modprobe kvm" if it isn't already loaded. At that point, everything else is user-space and 100% of the user-space dependencies are installed in the image. All the docs about libvirt on the github page are unnecessary. So full steps are really:
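Roughly (the module name depends on your CPU, the image tag is just whatever you build from this project's Dockerfile, and --privileged could be narrowed to --device /dev/kvm):

$ sudo modprobe kvm_intel   # or kvm_amd; usually already autoloaded at boot
$ docker build -t macos-vm .
$ docker run --privileged macos-vm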
> It's not something that you can just throw into kubernetes, especially if you don't manage the kubernetes deployment yourself.
GCP and Azure support nested virtualization, so you actually could do this in a managed kubernetes cluster. It's plenty common to use privileged DaemonSets in kubernetes to load kernel modules for filesystems, storage, or iptables rules. If you're allowed to run privileged containers, it's trivial to run VMs like this in kubernetes.
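A quick way to check whether a given node can even do this (vmx is the Intel flag; AMD exposes svm):

$ grep -c -w -E 'vmx|svm' /proc/cpuinfo   # non-zero means the vCPU exposes virtualization extensions
$ ls -l /dev/kvm                          # present once the kvm modules are loaded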
I work at Sourcegraph and have been considering something like this for a while for running long-running jobs, things like CI pipelines and GitHub actions for example.
How would you feel about an app like GitLab, for example, shipping a docker container that required privileges for this, I wonder?
It'd definitely be a harder sell for third party software to require a privileged container and /dev/kvm mounted when you run it in your environment, especially since nested virtualization is largely unavailable in AWS. It also requires that the correct kvm kernel module is loaded, etc.
However, if it was a product that required virtualization and that was recognized as a requirement, then also distributing a docker image that could do it would probably be useful for people in the "and if you don't have virtualization infrastructure, but have container orchestration and nodes that support virtualization, our service will also work in a privileged container" camp
I would say docker is primarily used as an artifact delivery mechanism where artifact presupposes inclusion of required runtime.
I am not saying it's right or wrong. I think it's the most practical and thus most pervasive use of docker.
The company and the project provided a great service by creating a lot of shipping/container related metaphors though. That's invaluable in my book: it gave thousands of developers a means to talk about objects that are static and dynamic from an architectural perspective.
(Darling requires a kernel module, which also isn't a thing you are able to do in the context of Docker as you are often just working with the host kernel.)
Wow! This is highly surprising and totally unusual! They're embedding parts of the XNU kernel in Linux. I can imagine this was done as a last resort, as there is pretty much no chance this will ever get upstreamed -- not just because of licensing, but also simply because of how strange it is.
I have run MacOS in QEMU in emulation mode, that was around the time of the first Hackintoshes. You're right, it was very slow (on the hardware of the time.)
This thing is basically a bash script that runs the package manager of a distro pointing to the latest package feed; not even a specific stable version or anything like that.
It also depends on host kernel features such as KVM settings and hardware such as CPU (good luck on AMD) much more than whatever is packaged on the container.
This is about as "reproducible" as replaying my .bash_history on a different machine. I would bet that before the end of this year this script no longer works.
Why do we do this to ourselves? Words have meaning. This should go doubly so for engineers. Our language should be as precise as possible when describing systems.
English vocabulary rules - vocabulary is built of axiom schemata - are insanely complex, and still we have to rely heavily on disambiguation by context. It is the nature of NL semantics. I'm not arguing against more precise terms of art, just saying that expanding an insanely huge and complex vocabulary isn't a great way to accomplish that. Establish your context, then use your symbols in that context; thus are all utterances disambiguated.
It is definitely a recipe, but the "reproducible" claim is weak. These 43 lines of Dockerfile text are unlikely to work in a few months (or even weeks).
Okay, I know how docker can mean containers, and I agree that Dockerfiles are quite similar to makefiles, but how does docker relate to source control?
I'm not saying people use it for vc, it's how docker works. It optimizes your containers using layers and if you do "FROM ubuntu:18.04" in 3 dockerfiles, you only have one copy of the ubuntu stuff.
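You can see the sharing directly (ubuntu:18.04 just continuing the example above):

$ docker history ubuntu:18.04   # the base layers every derived image reuses
$ docker system df -v           # per-image shared size vs. unique size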
KVM will be available if you run Docker natively on your laptop, or if your docker-on-VM setup supports nested virtualization - I think these are pretty common setups?
I considered including a price tag (... and comes for only $6990 ), but decided against it as laying it on too thick. I also think that using /s markers kinda kills the idea of sarcasm.
> unless you go over the top with praising the price
most apple enthusiasts think that the price is spot on and in no way overblown. (i do not have an opinion on the matter as i neither own nor use apple devices)
i don't think you can convey any point about apple in a sarcastic manner while omitting a /s tag. there are always people honestly believing the point you're stating sarcastically.
this applies to both negative and positive statements
It's an unenforced provision of the license agreement. No attorneys are recommending it but it's happening. I wouldn't start your own CI firm with it though.
Unlike Microsoft, Apple has no motives to send the BSA after anyone. Pretty sure they've only used them for egregious copyright violations like the commercial Hackintoshes.
Probably the majority of macOS CI/CD use cases are materially beneficial to apple.
Most people just want to automate building software for macOS/iOS.
Making it easy to produce software for their products, just strengthens their ecosystem.
I've always wondered why apple gets a pass on the kinds of anti-competitive suits brought against google. Google gets fined billions of dollars for preloading a web browser in Android, but it's fine for apple to completely monopolize their hardware and software ecosystems.
As far as I know, it's because they do not have partners.
Chromium is competing with Samsung internet and others on Android, because several companies sell Android.
So it's not fine for Google to force the choice.
Apple is not competing with anyone else on iOS, you don't have a choice.
As long as iOS does not have a monopoly in smartphones, they're in the clear.
Doesn't that seem a bit backwards? Google faces anticompetitive scrutiny because they created an open(ish) platform that others can compete on. But Apple goes all in on a walled garden where nobody even has the opportunity to compete with them. Doesn't that feel anticompetitive?
I'm certainly no friend of Google and I'm not losing sleep over them getting fined. But it seems that Apple is just as much if not more anti competitive and anti consumer, but they get a pass because of what feels like a loophole.
At the same time, it kinda makes sense. No company with a product has to allow competition in. GM doesn't have to allow BMW motors in their cars.
When you join a market with a product, like Apple, then you don't have to allow competition in your product.
When you create a market like Google did with Android, then you have to allow competition in that market.
I think that's the key difference, product vs market.
Irrelevant. Both licenses rely on exactly the same laws, morality and principles. Either you think intellectual property can be owned and licensed, or you think it should not be.
I'm not sure that to run a CI you have to violate the Apple license. I mean most core components of macOS (the Darwin project) are actually open source, and thus can be built and used freely. And for a CI you don't need a GUI and other stuff, only the kernel and the compiler basically. So maybe it's possible to compile a version of Darwin so close to the real macOS that has the bare minimum to run the compiler to build and test software.
Of course the problem is the SDKs that you need to use to build most software; they are obviously proprietary, but does their license say they can only be used to compile software on a real Mac?
You can of course run the CI/CD system on Apple hardware just fine. You can even get colo hosted Apple hardware with a monthly payment Hetzner style from a few companies.
I used MacInCloud before Azure DevOps got hosted MacOS pipelines.
I've no affiliation with them, but can recommend them. Never had any technical issues, and the one billing issue we had was sorted out quickly by their support people.
Then you should look at either Bitrise [0] who have a CI/CD system tailored to solving that exact problem or GitHub Actions [1] that provides macOS build machines.
The typical answer is paying to those who do that for you, like Travis.
And yes, too much to ask. Apple's principal revenue source is selling hardware. They don't care if developing for their platform is not cheap; they explicitly target the premium segment, and are never uncomfortable with their well-known large margins.
Apple doesn't care if developing for their platform is a PITA.
The existence of companies like MacStadium filling rooms with Macs just to bend over backwards for inane license terms proves the paid demand is there. Apple could offer to license their OS for use not on Apple hardware — for a fee. They don't. The objection isn't to the price; the objection is that needing to manage physical hardware (as opposed to spinning up VMs like in the article) is a PITA to manage, comparatively. Companies would — and do — pay to not need to deal with that pain, but it would be a lot less painful to not need to get a third party involved / to be able to make use of the infra I have without having to stuff some Macs into a closet and wonder how I'm going to make that redundant.
I've been worried about Azure DevOps getting merged into GitHub since Microsoft bought Github, but thankfully I haven't seen any drop in the maintenance of Azure DevOps, not yet anyway.
As much as I like Github for OSS projects, I really love Azure DevOps for everything else. The CI/CD capabilities are amazing, and I haven't had any real issues with lack of documentation (a few small things here and there for sure though).
I doubt it, the valuable thing is the GitHub brand but the meat lies on Azure. Most developers still see Microsoft as the old Microsoft of the 90s but they are actively working on that image or brand by acquisitions like Github or NPM. The inverse is most likely where GitHub is just a web endpoint for underlying Azure services.
Not to jackknife the thread, but most developers were born in the 90s and do not remember this Microsoft.
The engineering and dev managers DO remember this Microsoft. So while I agree somewhat that they are trying to repair their image, the younger crowd only knows MS as Minecraft, VS Code, Azure, Github. More people probably got exposure to Linux via WSL than all of the previous installs combined.
90s me would have thought hell would have frozen over, and now me knows it has.
The only real change I've seen to Microsoft since the 90s is that it finally "embraced" open source in markets it was losing in when the only other alternative was irrelevance.
They're still playing dirty tricks on open source. They're just not stupid enough to use the ones that would do more harm than good.
Most companies are only using open source as a weapon [1].
When people say that "MS gets open source now", what it really means is the MS gets how OS can be used to further its goals. Not that they have fundamentally changed.
MS will have graduated to the next level of maturity when it open sources something that is strategic to the ecosystem and the ecosystem as a whole benefits (Windows NT 4?) or something that is obviously making them money.
> MS will have graduated to the next level of maturity when it open sources something that is strategic to the ecosystem and the ecosystem as a whole benefits
Dotnet Core is cross-platform and fits that bill.
It introduced a huge segment of Windows-only devs to Linux.
Yes, that furthers Microsoft's goals too, as Linux is very important in the cloud, and therefore Azure. But regardless, the move has also benefited many, many developers, and the Linux ecosystem.
Microsoft told my previous employer (a very large enterprise customer) that they can expect ADO to be de-emphasized at best and retired in favor of the GH offering at worst.
They've told my employer the exact opposite, that they expect Github and Azure DevOps to coexist peacefully indefinitely, so I guess neither of us should read too much into that!
I built a PoC of this before, and as far as I can tell, entirely legally.
End goal was building iOS apps w/o any mac hardware. Using some open source patches to clang, libimobiledevice, and a whole bunch of other tools, I was able to write an iOS app in "good ole C in emacs" on my linux laptop, cross compile it for the iPhone, and even code sign, upload and run it on the phone.
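Roughly, the compile step looked like this (the SDK path and target triple are from memory and illustrative; linking, packaging, signing and deployment went through the rest of the patched tooling, ending with libimobiledevice's ideviceinstaller):

$ clang -target arm64-apple-ios12.0 -isysroot "$IOS_SDK" -c main.c -o main.o
$ ideviceinstaller -i MyApp.ipa   # install the signed, packaged app onto the phone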
This was several years ago. If offered as a hosted service, do you really think anyone would pay for it enough to make it worth my while to code up and polish?
My CI/CD pipelines have always been a minmaxed affair of doing the math on how useful a piece of information is and how expensive it is to get.
As such, it's not unusual for the OS X tests to be farther down the list and not to trigger at all if earlier stages failed. It's easier to scale up multiple subprojects if you run the first couple phases on commodity hardware and then ramp up to the more peculiar stuff only if everything else already looks good. That way, your broken build can't slow down my green build very much.
It's not so much that I like doing this, as that it prevents a number of things I most definitely do not like at all.
Azure DevOps has the most reasonably priced Mac nodes I could find, I believe it’s £30/month/node (same pricing as their Windows and Linux nodes). Having to figure out another YAML config for CI isn’t my favourite task but it’s not too hard and we are generally pretty pleased with its performance (using it to build iOS app builds for every change and upload to AppCenter, which is a great free repository for them which hooks into Azure easily). Feel free to get in touch if you’d like more info.
Azure DevOps also has a generous free tier with lots of CI minutes, and I'm fairly sure you can use the MacOS pipelines in that tier.
I've used Azure DevOps on several projects, and its CI/CD capabilities really are fantastic.
I've also used AppCenter (and the previous incarnation, HockeyApp), and it makes for a great app distribution experience. It also has an integration with InTune, which is very useful in the enterprise, although we did find configuring it to be a total PITA and it required a domain admin to set it up.
Wrote a separate comment about this, but Azure DevOps seemed to be quite a bit cheaper for Mac nodes than GH Actions. Strange as they are presumably using Azure under the hood!
If you make games, Unity Cloud Build can turn a Unity project into a downloadable IPA that you can easily load into TestFlight/App Store (and presumably a device or simulator, but I've never tried).
It's hard to see what Docker is adding here since qemu is being run inside Docker. You could get almost identical functionality out of a bare VM image and not deal with the hassles of docker.
Aren't you being a little too harsh? The Dockerfile automates the installation of libvirt and custom components to launch the VM so while the title might be misleading it is hardly "not adding" anything.
Docker is not adding anything whatsoever. The "dockerfile" could be converted with a simple macro into a plain bash script and it would literally be the same thing. The container is not abstracting anything here; you are using KVM of the host system.
You even have to install more packages in the host than in the container.
You could even just run the original script the Dockerfile is wrapping directly which will even autoinstall the packages for you.
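Something like this is the whole thing, run directly on the host (package names are approximate; the boot script name is from the OSX-KVM repo, from memory):

$ sudo apt-get install -y qemu-system qemu-utils
$ git clone --depth 1 https://github.com/kholia/OSX-KVM.git
$ cd OSX-KVM && ./OpenCore-Boot.sh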
Yes, Docker is effectively acting as the supervisor process for this container, only managing its process lifecycle and allowing easy removal/persistence.
My (linux) desktop has a macOS icon on it. I click on it. That starts a QEMU/KVM process which runs macOS. When I quit out of that, it is gone. When I click on it again, it runs again.
I suppose there are a few files floating around that persist between runs, and are not utterly trivial to manage. Just ... trivial to manage.
> Doing anything else can take an act of Congress.
Does fiddling with BIOS parameters and installing kernel modules fall into "anything else"? Because this project doesn't work until you've done that on your docker host.
I often wonder what it is that perpetuates hammer problems. When I look at a problem, all technologies are options. If I'm doing the work, I'll obviously try to choose from those I'm more familiar with or choose what seems like the best established/mature hammer.
It often seems like someone wants to use a specific hammer and tries to tailor/turn/reshape problems into nails. Some of it may be resume driven development, perhaps choosing comfortable technologies, etc. Perhaps hammers are the only tools they're familiar with and risk aversion in business leads us to only use our hammers.
Something is amiss because I keep seeing hammer problems and it never seems to stop.
Luckily, a lot of the things are nails. The number of applications you can keep neat and tidy with a fleet of containers is so great. Even a relative novice can get some great things working by poking about a few guides. And when it all breaks you can start again without too much collateral damage.
I'd argue that I was able to do the exact same thing 20 years ago using VMWare Workstation - download a VHD image with OS and app(s) pre-installed and configured, optionally map to a drive on the host OS and get started instantly...
Maybe I'm too old to get the appeal of "a fleet of containers" in place of a single (and potentially throw-away) VM image along with maybe even a vagrant-script to make it easily reproducible... ¯\_(ツ)_/¯
Running multiple containers is faster and more efficient than multiple VMs. Running a container on the same kernel as the host is faster and more efficient than a VM. If you're trying to run an app across different OS kernels, you still have to use a VM unless you can compile your app on that kernel natively. This project really doesn't do much beyond what a VM gets you, and you still have to configure the host outside the container to enable virtualization.
Yeah, tooling/workflows are the (possibly only) reason that Docker is popular. It's like how the FreeBSD folks will say that they've had containers (jails) long before Linux, which is true but misses the point that without the ability to pull and run images trivially they're losing, badly, on usability.
It's just another packaging format, and if I had to choose just one then Docker is the least worst for me.
Least worst is obviously subjective but for me it includes running a bunch of random stuff in roughly the same way across Linux, OSX and Windows. The launch and manage UX is significantly better than vms also (not intrinsically, for any good reason, but in practice).
I get that this container is not actually portable tho.
Looks like a package for those who have no package manager. Highly insecure - do they review the Dockerfile on each installation? Does it store and display the pulled git hash?
Looks like work in progress:
$ git clone https://aur.archlinux.org/yay.git
$ cd yay
$ makepkg -si
I know it's against the HN rules to complain, but this comment really feels so generic. Like every time a JS project is posted, 'oh npm is so bloated'. Every time there is a new app, 'oh electron is so inefficient'. Every time something is packaged in a docker container, 'what's the point of containerizing this'.
It's just nerds being nerds, nothing new. Dropbox as ftp and rsync yadda yadda. Other tribes have "is that even lifting" and stuff like that. Take it as small talk (pun intended) and just collapse the thread, you can do that these days.
I sympathize with what you're complaining about, but I don't think GP is an instance of that. GP is (rightly, IMO) complaining that the linked article proposes something with extra steps that are not needed, as in: it could be easier than suggested. The typical HN gripes you describe are sort of the inverse of that: complaints that easy-to-use things (electron, npm) are flawed in comparison to their harder-to-use counterparts.
Can macOS virtual machines ever be performant enough to use as a workstation? So far I have only tried setting it up in VMWare and VirtualBox, the performance wasn't there but I haven't dedicated a GPU or drive to it yet. It would be so convenient to decouple macOS from Mac hardware.
I run macOS (currently Mojave) on an 8-core, 16GB KVM guest on top of Debian on a 16-core 64GB Ryzen. It's faster than the (older) Mac Mini I used to use for everything I've tried on it. I don't know how it compares to current apple hardware, but for my purposes (doing the bits of macOS-specific code development in a large native cross-platform application), it's about as perfect as it gets.
The problem is cocoa performs terribly without gpu acceleration and nobody has figured out how to get around that. There are some tweaks to get OSX running in vmware and wherever else, but you never wind up with working GPU acceleration, so not only can you not change the resolution once you turn it on (iirc, may be wrong here), its refresh rate is horrendous. If you ever used Windows before it installed the gpu drivers, where the window manager is all weird and unoptimized and glitchy -- it's that but worse.
It's been a long time since I played with macos vfio passthru stuff but maybe that's a way around it nowadays. There's a little /r/vfio community that tries to tackle it pretty frequently.
Hopefully someone else has more recent details than me, I'm back to using osx hardware now that the 16" mbp lets me have 32gb.
QEMU/KVM/VFIO has come a long way. If you have a MacOS-supported GPU and working IOMMU (AMD) or VT-d (Intel), then you can achieve near-native MacOS performance for your CPU/GPU combo.
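Checking that the IOMMU side is actually working is quick (on Intel it needs intel_iommu=on on the kernel command line plus the firmware toggle; AMD is usually enabled by default):

$ dmesg | grep -i -e DMAR -e IOMMU                # the kernel found and enabled the IOMMU
$ find /sys/kernel/iommu_groups/ -type l | head   # devices split into passthrough-able groups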
I have always wondered how Azure, BrowserStack and such support Safari or macOS. Do they have custom licensing with Apple that allows them to run it virtualised, or are they actually running it on Macs?
If I remember correctly you can only run macOS on their hardware.
Not a lawyer, etc, but I read this as requiring the VM to be run on a macOS host:
allowed "to install, use and run up to two (2) additional copies or instances of the Apple Software within virtual operating system environments on each Mac Computer you own or control that is already running the Apple Software"
IANAL either, and also I only read that PDF for a minute, but: maybe the Apple firmware running on the motherboard, a T2 chip etc, can satisfy the requirement that it is a computer "already running the Apple Software", even with Linux as the host OS. I think "the Apple Software" carries broad meaning in that agreement (and not very exclusive in its definition), e.g. clause 1: "The Apple software (including Boot ROM code)"... ?
The definition is broad (and self-referential, oddly): The "Apple software" is defined as "The Apple software (including Boot ROM code), any third party software, documentation, interfaces, content, fonts and any data accompanying this License"
If a subset such as "Boot ROM code" on the VM host was sufficient to allow for using "Apple software" beyond said subset in VM guests, then any other subset (such as "fonts") would also have to be sufficient, and no reasonable person would agree with such a thing. Therefore, it follows that the _entirety_ of "Apple software" must be "already running" on that "Mac Computer" before booting any VM guests that use "Apple software." This interpretation is supported by the use of the word "and."
Upon second look, it would seem that the spirit of what they mean by "the Apple Software" is 'the whole set of standard Apple software that comes pre-installed on the Apple machine'.
But actually, even more strongly in favour of interpreting that they don't specifically license macOS for use on a Linux host via their agreement, is the syntax of 2.B.iii itself (italicisation for emphasis):
> [you are granted a license] to install, use and run up to two (2) additional copies or instances of the Apple Software within virtual operating system environments on each Mac Computer you own or control that is already running the Apple Software"
Their clear syntax of repeating "the Apple Software" in the context of both guest and host environment indicates that what is used virtually must also be used on the host.
Additionally, you probably couldn't get out of it by dual booting with Linux and saying that 'aha, see, I have mac running on the host machine I'm fine', the grammar of the words "that is already running" indicates that macOS must be running while using macOS as a guest, under their license.
I can't imagine any serious legal implications that really matter, apart from for a major corporation making major money off virtualising macOS somehow. For anyone else, or from any other angle, I don't think there's any worry whatsoever. It appears the Hackintosh community hasn't been sued into oblivion...
The agreement is pretty vague on the subject but it seems that you can get permission to virtualise on non-Apple hardware. How one does this and if anyone ever has is another matter entirely.
Apple does it internally, since they can turn off the mechanisms that stop macOS from booting on non-Apple hardware (last I heard on HN, they ran it on HP workstations?). I'd imagine that if you got JAMF, Adobe, and Microsoft in a room one of them has the magic incantation as well, considering the amount of Mac code they put out.
Yes. But for testing on actual devices I am pretty sure you need a spare USB controller to pass through. When I tried a few months ago, there was no other way to connect USB devices.
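For the record, the two options at the QEMU level look roughly like this (the PCI address and ids are illustrative; 0x05ac is Apple's USB vendor id). Per-device forwarding is what didn't work for me; passing the whole controller did:

-device qemu-xhci -device usb-host,vendorid=0x05ac   # forward matching USB devices into the guest
-device vfio-pci,host=00:14.0                         # pass through the whole xHCI controller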
This was done by SpaceInvader for the UnRAID docker community months ago. As stated, it is just streamlining a process that has been available with OSX-KVM and various bash scripts for months or years before that.
OT: Has anyone found virtualization really resource-hungry in the latest OS X? It was eating up 25 percent of my RAM doing nothing. Both Docker and Vagrant were so resource hungry that I ended up ditching them.
The last version of Docker for Mac had some big resource issues, current latest version seems to be better, but still makes my 2017 MBP into a grill when running multiple containers.
Great work! I would add a disclaimer that you should only run this on Mac hardware. I.e. you can run Arch on Mac hardware.
Get an old mac and point to it if someone asks.
You can use Docker Machine and whatever backend you want, including xhyve, VirtualBox, and VMware Fusion, and enable nested virtualisation in the created machine.
hyperkit is a fork of xhyve, plus stuff like bridges between inotify and kqueue/fsevents, or transparent tcp port forwarding.
Hypervisor.framework is an API for executing guest code under the hypervisor, used to build virtualisation engines that can run unprivileged; you still have to write an actual virtual machine.
I misunderstood the angle this thread was taking - I thought the OP was going to ask if the Dockered MacOS could run Docker.
I was just reading up on Rancher OS, but that’s a containerisation too far at this time in the AM.
Edit: Okay, I understand my assumption was wrong. Thanks
Doesn't docker run a container in a single thread? So this would be running the entire MacOS in a single thread? Is there a way to tell Docker to execute this in multiple threads?
No, that isn't how containers work. Containers run processes, as many as you want, each of which can use as many threads as it wants (up to ulimits). Most hypervisors will allocate 1 thread per virtual CPU core by default, and since this is using qemu with KVM, that's likely the case here.
Looking at the Dockerfile in the OP, you can see it's using https://github.com/kholia/OSX-KVM/blob/master/OpenCore-Boot.... as the script to start the VM, and you can see the -smp 4,cores=2 in the qemu arguments, which configures how many vCPUs/cores/sockets to assign to the guest. The leading "4" is the total number of vCPUs; cores= (and optionally threads=/sockets=) describe the topology they're presented as.
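In other words:

-smp 4,sockets=1,cores=2,threads=2   # 4 vCPUs total, presented as one socket of 2 cores x 2 threads each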
It only works on Macs; it needs the toolchain and ROM, I guess? Docker is just one step closer to porting it to Windows and Linux.
Personally, if I wanted to run MacOSX that badly, I'd buy a Mac Mini or the lowest-priced Mac they have. Much easier and worth it for the AppleCare and Warranty.