This is absolutely awesome and something I always wanted (I'm on an 8GB M1 though, and I often regret not going for 16GB). I just want to sandbox some of the applications I need to use but don't trust on my computer (looking at you, Bambu Studio). I tried using the sandbox feature on macOS but it's unusable for me on Ventura.
With Chrome/Arc/Edge being what they are, being able to get 24GB was a game changer for me - it allowed me to go back to an Air, which is definitely my preferred form factor.
That's kind of Apple's tiered upselling plan, isn't it? ;-/
But as an Apple user I concur completely. The awful experience of bloated apps (or multiple VMs) slowing your system to a crawl isn't worth the few hundred dollars you saved by opting for a lower memory SiP.
$400, while indeed a significant jump, absolutely is "a few hundred," given that $200 is "a couple hundred." In my reckoning, "a few hundred" starts around $250 and arguably runs up to $750, where "almost a thousand" could be said to start.
- Docker, but the Docker-on-Mac experience has never been great (even if Colima or OrbStack makes it much better)
- A VM running on my Mac where I install whatever I need. Tried Debian and Silverblue, and used the opportunity to properly learn Nix and use NixOS
- My current setup, especially for hairy projects with dependencies I dislike: an Intel NUC on my local network, set up with NixOS, which runs everything I need. Because I install Tailscale everywhere, I can use it even when working outside my home.
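For anyone curious, the client side of a setup like this can be as small as an SSH config entry pointing at the machine's Tailscale MagicDNS name (the host alias, tailnet name, and user below are made up for illustration):

```
# ~/.ssh/config (sketch)
Host nuc
    HostName nuc.example-tailnet.ts.net   # Tailscale MagicDNS name
    User dev
    ForwardAgent yes
```

VS Code's Remote - SSH extension (or plain ssh + tmux) can then target `nuc` directly from anywhere the tailnet reaches.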
Depends on what you are trying to solve. If you don't like the fragmentation of dependencies on your system, that won't go away on a second machine. Also, I'd point you to hosted VS Code environments such as GitHub's Codespaces or Gitpod.
You may use something that encapsulates your dev dependencies. Some use containers like Docker, others use Nix (which can be run on macOS).
As Nix has quite a steep learning curve, there are abstractions such as devenv or devbox, which I haven't used.
Current setup on my M1 for dev is to use docker with containers for mysql, localstack, rabbitmq, redis, and ruby (debian bullseye), all arm64 images, and then VS Code with the remote container extension. Was pretty rough going when the M1 first came out but is now flawless and lightning fast. Onboarding new devs takes an hour instead of a day.
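For anyone wanting to replicate this, a compose file for that kind of stack might look roughly like the sketch below (image tags and the workspace path are illustrative, not the actual setup described above):

```yaml
# docker-compose.yml (sketch) - all of these publish official arm64 images
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  redis:
    image: redis:7
  rabbitmq:
    image: rabbitmq:3
  localstack:
    image: localstack/localstack
  app:
    image: ruby:3.2-bullseye
    volumes:
      - .:/workspace
    command: sleep infinity   # keeps the container up for VS Code to attach to
```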
VS Code remote so you don't have to pay the cost of volume mounts? Is performance that much better? I'm not a VS Code user; do you need things like the LSP server baked into your image to get IntelliSense to work?
Raspberry Pi or other compact Linux machine with code-server running in Docker. Gets you a web UI that works from anything (i.e. an iPad) and you can do docker-in-docker for other containers.
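A sketch of that arrangement (image and port follow the upstream code-server project; mounting the Docker socket is technically "docker outside of docker" rather than true docker-in-docker, which is usually what you want here anyway):

```yaml
# docker-compose.yml (sketch) for code-server on a Pi
services:
  code-server:
    image: codercom/code-server:latest
    ports:
      - "8080:8080"   # web UI, reachable from an iPad's browser
    volumes:
      - ./project:/home/coder/project
      - /var/run/docker.sock:/var/run/docker.sock   # lets the IDE terminal drive the host's Docker
```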
SBCs have less horsepower but you can always run them from a USB battery bank, letting you dev on the go.
I've been coding for the past 2 years almost exclusively through VSCode remote on Linux servers with few complaints.
Get a separate SFTP/network file client, as the built-in VS Code one is slow and unstable, but otherwise everything works great. If you're considering using any JetBrains IDE over a network connection, however, I'd encourage you to think again.
Works great for me. I run an ARM64 Linux VM using Apple's native framework. macOS is configured to get out of the way; I only need Firefox, iTerm2, and some custom shortcuts. I code and do pretty much everything in the VM. As a Linux user, this has made the work-issued M1 Pro a great laptop, which I wasn't sure about.
I kind of like this option. Running the front end/client natively and the back end/server in a VM works great.
In the future Asahi or a native Ubuntu port might be a good option, but a VM is almost better in some ways since it's portable/migratable, has easy snapshots, has better isolation, etc.
UTM is just QEMU, and QEMU is actually open source. UTM hides an enormous amount of what it does (just saying "not open source" again in a different way), and that's definitely not what you want in a base system.
So it would likely be much better to simply install QEMU instead.
There's the Hypervisor framework, which is the low-level framework that replaces the need for a kernel extension for type 2 hypervisors, and then there's the more recent Virtualization framework, a higher-level framework for running VMs. QEMU's HVF accelerator uses the Hypervisor framework, and UTM also supports running VMs with the Virtualization framework (which is what Vimy and Viable also use). Note that on Apple Silicon the Virtualization framework supports running macOS guests as well as Linux guests.
So, the Mac doesn't come with a package manager. There are things you can do with MacPorts and Homebrew to alleviate some of the pain.
Me personally, I fool around with a lot of languages, and sometimes I don't quite keep every project up to date with the latest version. For work, the situation is even more challenging, because I don't have the authority to go tell teams to upgrade or not upgrade, but sometimes I need to build their projects.
None of this is really Mac specific. I used to use VirtualBox for this, which worked pretty well. I could set up a baseline environment and snapshot new boxes with new language revs.
Docker can kind of cover this, but I tend to overdo it, breaking things out into the image you build with and the image you run with, and it's fine, but it's a lot to keep in my head. So I'm kinda flaky about that.
Now, I'm slowly building up some proficiency with Nix. I think this is what I really want: to swap out the whole chain of system dependencies, and build Docker containers or VMs out of that if I need them.
Perhaps I'm a poor sysadmin. Installing _everything_ locally makes things get real weird after a while (years).
My path was VM -> containers -> Nix. With Nix flakes and the direnv extension in VS Code you can get a clean per-project dev environment with zero Docker overhead and without dev tools cluttering up your global environment and causing issues.
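A minimal sketch of that pattern (assumes nix-direnv is set up; the package list is just an example):

```nix
# flake.nix (sketch)
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixpkgs-unstable";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.aarch64-darwin;
    in {
      devShells.aarch64-darwin.default = pkgs.mkShell {
        packages = [ pkgs.nodejs pkgs.redis ];
      };
    };
}
```

With a one-line `.envrc` containing `use flake`, direnv (and the VS Code direnv extension) loads that shell automatically whenever you enter the project directory, and unloads it when you leave.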
That looks really nice. I'm sure it's a great solution for lots of developers.
I think I'm going to stick with Nix. VMs, Docker, OrbStack, and Nix all seem to let me make a machine that looks like "X". The thing is, Docker and VMs provide an abstraction that works 95% of the time, but that last 5% is awful (I don't know about OrbStack).
Nix, by all appearances (and my limited experience), is much, much worse up front; there's a lot to learn. But in exchange I get to keep control. This abstraction is wrong, we're not doing that, we're doing this. I'm not asking, I'm telling.
Depends on why you're on a Mac in the first place. For me it was iOS-adjacent dev, and that meant upgrading the build stack every year, including straight OS upgrades. And thus screwing up my other dependencies every year.
After the third or fourth time, I switched to a VM that will stay stable basically whatever happens to the system.
Executing a bunch of npm modules locally, having a Mongo and Redis database running 24/7.
I don’t have a great answer. One thing I’ve noticed on the Mac, using Activity Monitor: before installing all the dev dependencies for local dev, nearly all processes seem to run under the local user, but after installing a bunch of stuff (with sudo), a ton of processes default to running as “system”.
I haven’t had time to research whether this actually has a meaningful impact on security, but TLDR I trust Mac’s out of the box security, but I instantly stop trusting it the moment I start installing a bunch of stuff via Homebrew and NPM.
You generally shouldn’t ever use sudo with either Homebrew or NPM.
Homebrew is specifically designed to be used without elevated privileges. This has the downside that packages are owned by the user which first ran the install (which might lead to those packages running with elevated privileges after a sudo install as well? I don’t know, and I’m not eager to find out).
NPM packages are typically either project local (and these definitely shouldn’t be installed with sudo), or “global” (which should be global in the sense of being installed on the user’s PATH, and thus shouldn’t require sudo for any normal setup either).
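For reference, the usual sudo-free arrangement for npm globals is a per-user prefix on PATH; a minimal sketch (the directory name is an arbitrary choice, and the actual `npm config` call is left as a comment since it needs npm installed):

```shell
# Sketch: a per-user global prefix so `npm install -g` never needs sudo.
mkdir -p "$HOME/.npm-global/bin"
# Put its bin dir on PATH (this line also belongs in your shell profile):
PATH="$HOME/.npm-global/bin:$PATH"
# Then point npm at it (needs npm, so shown as a comment here):
#   npm config set prefix "$HOME/.npm-global"
echo "$PATH" | cut -d: -f1   # first PATH entry is now ~/.npm-global/bin
```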
You’re right to be cautious about the security implications of this.
> This has the downside that packages are owned by the user which first ran the install
This is a bit of a security problem if Homebrew's .../bin is on your sudoers secure_path, because then your normal user can overwrite something that might be invoked via a simple `sudo whatever`, which doesn't specify the full path to whatever.
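The underlying mechanism is plain PATH shadowing; here's a harmless, self-contained demo (no sudo involved, and `whatever` is a made-up command name):

```shell
# Demo: a user-writable dir early in PATH can shadow a bare command name.
demo=$(mktemp -d)
printf '#!/bin/sh\necho hijacked\n' > "$demo/whatever"
chmod +x "$demo/whatever"
PATH="$demo:$PATH"
whatever   # prints "hijacked": resolved from the user-writable dir
```

Which is why secure_path should only contain root-owned directories, and why invoking `sudo` with a full path sidesteps the problem.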
> which might lead to those packages running with elevated privileges after a sudo install as well? I don’t know, and I’m not eager to find out
No, definitely not by any normal mechanism. Maybe there are some exceptions, like packages that set up LaunchAgents or LaunchDaemons, or which run the install scripts of .pkg installers that ask for elevated privileges. But those can set up programs that run with elevated privileges anyway.
The nice thing about the Mac is that you can run a lot of unix tools natively.
But of course, anything that runs on your Mac is a potential security hole. Obviously all third-party apps that you use can compromise security. But it can also be your own code: if e.g. your Rails app has a security vulnerability, which is common during development, and you run it with your local user, as is common during dev, then that vulnerability can potentially compromise all your data.
So if you want to be safe, run all your dev stuff in VMs or on a separate device, or in a container or something.
Of course, that is cumbersome, and whether it is necessary or not depends on what kind of threats you expect...
nvm doesn’t stop npm modules from installing locally. Many npm modules have pre/post-install scripts that execute binaries and such that I’d rather not execute locally.
Docker for Redis/Mongo is reasonable, but npm dependencies creeping into the system is something you can’t really undo easily, other than a full wipe and reinstall of the OS. Especially when certain modules require sudo to install.
Are you building apps for Mac/iOS? If not, I have a 12 core Ryzen with a 3090 that was the same price as a higher ram Mac mini ($1K). I recommend that over another Mac if you’re ok running *nix.
It's designed to make future Linux releases easier to run out of the box on Apple silicon; it's not really intended to be run as Linux in a VM. If you want VMs, use UTM: https://mac.getutm.app/
The very cool thing about Tart is how it uses OCI for OS images, so you can use your existing image registry infrastructure to host and pull down OS images.
These are more CI-oriented but I like how that makes it easy to manage state with them.
If you're focused on Linux VMs and maybe not on GUI stuff (although I'm sure you can make that work), Lima seems to be the go-to in the user 'community', as it were: https://github.com/lima-vm/lima
If you use ARM guests on Apple Silicon, you should get good perf just like with stuff in the OP. (Like UTM, Lima is based on QEMU.)
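For reference, `limactl start` accepts a YAML template; a minimal sketch (field names follow Lima's documented template format, the values are illustrative, and the image URL is a placeholder to replace with a real arm64 cloud image):

```yaml
# lima.yaml (sketch)
cpus: 4
memory: "8GiB"
images:
  - location: "https://example.com/some-arm64-cloud-image.img"
    arch: "aarch64"
mounts:
  - location: "~"
    writable: false
```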
I'm aware, but considering this provides virtual Apple hardware to run macOS inside the VM, I'm curious whether it can also be used to install/run Asahi inside the VM.
My motivation here is exploring Asahi on my MacBook inside a VM, without needing to install it on bare metal and modify partitions on the disk.
I think you can still take advantage of paravirtualization without running an OS built for Apple Silicon specifically. You can emulate peripherals and the motherboard and stuff without emulating the CPU, so you would probably do better just to run the regular ARM variant of whatever distro.
Both Arch and Fedora, which some releases of Asahi are based on, have regular, shmegular ARM variants.
Well, there's no easy way to make it work. I need to compile for Intel, not run Intel code. I would need to set up a whole toolchain + Homebrew. No idea how to get all of that set up correctly.
In theory, running a different version of the OS for testing. Be aware that macOS VMs cannot use iCloud services though, so if your goal is CI/CD in the VM, none of your tests for iCloud will work.