Ask HN: How are you dealing with the M1/ARM migration?
223 points by cedws on June 10, 2022 | 271 comments
I love the M1 chips. I use a 2021 MacBook both personally and professionally. My job is DevOps work.

But the migration to ARM is proving to be quite a pain point. Not being able to just do things as I would on x86-64 is damaging my productivity and creating a necessity for horrible workarounds.

As far as I know none of our pipelines yet do multi-arch Docker builds, so everything we have is heavily x86-64 oriented. VirtualBox is out of the picture because it doesn't support ARM. That means other tools that rely on it are also out of the picture, like Molecule. My colleague wrote a sort of wrapper script that uses Multipass instead but Multipass can't do x86-on-ARM emulation.

I've been using Lima to create virtual machines which works quite well because it can do multiple architectures. I haven't tested it on Linux though, and since it claims to be geared towards macOS that worries me. We are a company using a mix of MacBooks and Linux machines so we need a tool that will work for everyone.

The virtualisation situation on MacBooks in general isn't great. I think Apple introduced Virtualization.framework to try to improve things, but the performance is actually worse than QEMU's: you can try enabling it in the Docker Desktop experimental options and you'll notice things get more sluggish. Then there are other annoyances, like having to run a VM in the background for Docker all the time because 'real' Docker is not possible on macOS. Sometimes I'll have three or more VMs going, and everything except my browser is paying that virtualisation penalty.

Ugh. Again, I love the performance and battery life, but the fragmentation this has created is a nightmare.

How is your experience so far? Any tips/tricks?




I got an M1 MacBook Pro from work last year, and expecting to pay the price for being an early adopter, I set up my previous Intel-based MBP nearby in case I ran into any problems or needed to run one of my existing virtual machines. (I do varied development projects ranging from compiling kernels to building web frontends.)

In reality I have hardly turned on the Intel MBP at all since I got it. At all.

Docker and VMware Fusion both have Apple Silicon support, and even in "tech preview" status they are both rock solid. Docker gets kudos for supporting emulated x86 containers, though I rarely use them.

I was able to easily rebuild almost all of my virtual machines; thanks to the Raspberry Pi, almost all of the packages I use were already available for arm64, though Ubuntu 16.04 was a little challenging to get running.

I also had to spend an afternoon updating my CI scripts to cross-compile my Docker containers, but this mostly involves switching to `docker buildx build`.
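For reference, the switch is mostly a one-line change in the CI script; here's a rough sketch (the image name and tag are placeholders):

```shell
# Before: single-arch build on the host's native architecture
docker build -t myorg/myimage:latest .

# After: multi-arch build via buildx; --push sends the manifest list
# straight to the registry (the local image store can't hold both
# architectures under one tag)
docker buildx build --platform linux/amd64,linux/arm64 \
  -t myorg/myimage:latest --push .
```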

Rosetta is flawless, including for userland drivers for USB and Bluetooth devices, but virtually all of my apps were rebuilt native very quickly. (Curious to see what, if anything, is running under translation, I just discovered that WhatsApp, the #1 Social Networking app in the App Store, still ships Intel-only.)


Rosetta is definitely not flawless. This type of JIT/translation does have limits.

Oftentimes people experience different levels of difficulty with Apple Silicon precisely because my workload is not yours, and yours is different again from the OP's.

So I feel this particular Ask HN is more about wondering how different everyone's workflows are, and how that impacts M1 usage.

I envision that workflow options/pathways will start converging into one "way", which is the Apple way. You're already being shunted into relying purely on Metal for GPU acceleration, and you can see the plurality of GPGPU libraries converging on only the Apple-blessed/authorized and optimized versions.

There are people fighting against this, for example the Linux on Apple Silicon project bringing up the GPU, but it's slow going.

Give it another few years, and people will stop using x, y, or z frameworks, and only use whatever APIs Apple gives us, because that is the Apple Way.

Proceed at your own peril. The future is fast, but there is only one road.


I use "flawless" in the sense that I have not seen a single incompatibility or even regression in any of the ordinary macOS software I have used under its translation, which is exactly what it was designed to do. It has surprisingly few documented limitations, like lacking AVX vector instructions:

https://developer.apple.com/documentation/apple-silicon/abou...

There are a handful of apps that aren't supported, but few of these are popular apps. VirtualBox is notable, but unsurprising: Rosetta is not designed for x86_64 virtual machines, and VirtualBox doesn't support arm64. (I submitted a correction to the Wine entry, since wine64 has worked under Rosetta 2 for a year.)

https://isapplesiliconready.com/for/unsupported https://liliputing.com/2021/06/wine-6-0-1-lets-you-run-windo...


Just to note: you can do `docker buildx install` to make buildx the default backend of `docker build`, which saves you from having to switch commands everywhere. I haven't figured out how to make it build for multiple architectures by default, i.e. without having to pass the `--platform` flag.
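A quick sketch of what that looks like (as I understand it, the alias is recorded in `~/.docker/config.json`):

```shell
# Make buildx the default backend, so plain `docker build` uses it
docker buildx install

# Revert to the classic builder if something misbehaves
docker buildx uninstall
```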

On my M1 I see 16x performance differences in builds in favour of native over emulated. Even simple shell scripts run slowly, or seem to stall, when emulated.


If you open Activity Monitor you can see which processes are running as "Apple" or "Intel"!


Are there more details on "docker buildx build" that you can point us to? The command line reference doesn't seem especially helpful: https://docs.docker.com/engine/reference/commandline/buildx_...

E.g. if I wanted to start building ARM binaries on a x86 host, is that the sort of thing this would enable?


You can build multiplatform images for x86 and ARM (e.g. M1 Mac) like so:

docker buildx build --platform linux/amd64,linux/arm64 .

I constantly use buildx to build x86 images on my M1 so that I can run these images in my x86 Kubernetes cluster.

https://docs.docker.com/buildx/working-with-buildx/


Just to follow up on my own question (though I appreciate the siblings):

I found this article with some overview and example command lines: https://www.docker.com/blog/multi-arch-images/ . As best I can tell, you don't actually need a custom builder. You can just skip the `docker buildx create` and go straight to `docker buildx build` after one workaround below.

I needed the workaround from this comment to make it work (to install qemu? not sure): https://github.com/docker/buildx/issues/495#issuecomment-754...
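If I remember right, the workaround boils down to registering QEMU's binfmt handlers with the kernel via a privileged helper container (a sketch; `tonistiigi/binfmt` is the image the buildx docs commonly point to):

```shell
# Register QEMU interpreters for foreign architectures, so buildx can
# run e.g. arm64 build stages on an x86 host (and vice versa)
docker run --privileged --rm tonistiigi/binfmt --install all

# Verify which platforms the builder now supports
docker buildx ls
```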

Overall a very slick experience for going from zero to multiarch containers. Well done, Docker.


...want to get rid of your old Intel MBP? The video on my 2017 is dying.


Unfortunately it belongs to work, and not to me. Good luck finding a replacement, though.


I recommend the r/AppleSwap or r/HardwareSwap subreddit.


Why is your login green, mate?


That just means it is a user who signed up recently


> Curious to see what, if anything, is running under translation

There's a useful app called Silicon Info on Github (https://github.com/billycastelli/Silicon-Info) and also on the Mac App Store.

It adds a menu bar icon that switches according to the currently-focused app's architecture.


Hmm, Docker is very buggy for my team and myself on M1. Coming from an X220, where it was flawless, on M1 it quite often just 'dies'. If you paste the error message you then get from any docker command into Google, you'll see that many people have this issue, and the fix is: restart Docker. This didn't happen under Linux. I'm not sure if it happened on x86 Macs, as I never used Docker there.


To be clear I think the support for Apple Silicon has been solid, but Docker Desktop for Mac regardless of CPU architecture has bugs, and I've had it get stuck as you describe on both. Right now for me it seems incapable of auto-updating, which I assume is unrelated to Apple Silicon.


Rosetta's biggest flaw is lack of AVX support.

We had to put in a lot of effort just to get things running on Rosetta, because all of our compiled code had AVX enabled. We also needed to chase down pre-compiled binaries and recompile them without AVX; we still haven't finished this work.
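For what it's worth, the chase roughly amounts to finding offending binaries and rebuilding with AVX codegen disabled; a sketch for GCC/Clang-based builds (the library name is a placeholder, and the grep is only a crude heuristic):

```shell
# Rough check for AVX instructions in a binary (VEX-encoded mnemonics)
objdump -d ./libfoo.so | grep -E -c '\bv(movap|addp|mulp)[sd]\b'

# Rebuild with AVX code generation disabled
export CFLAGS="-mno-avx -mno-avx2"
export CXXFLAGS="-mno-avx -mno-avx2"
./configure && make -j"$(nproc)"
```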


What's also nice is iPhone/iPad ARM apps can run as desktop apps on M1, so when there was no native desktop app replacement sometimes there was that other native app.


Does VMware on M1 support x86_64 emulation?


No, and they've announced they do not plan to support it. However, you can use x86_64 containers with Docker, and x86_64 virtual machines through QEMU or Lima.


At least some aspects of this issue are getting better as we speak. The latest macOS (in beta) supports virtualizing ARM Linux, and also enables the ARM Linux system to use Apple's speedy Rosetta 2 x86 binary translator and JIT to run x86 programs within the ARM Linux VM. Based on descriptions, the rest of the hypervisor VM framework has also matured substantially this release.

https://developer.apple.com/documentation/virtualization/run...

https://developer.apple.com/videos/play/wwdc2022/10002/

If you are not familiar, Rosetta is how Apple Silicon Macs run existing Mac x86 binaries, and it is highly performant. It does binary pre-compilation and caching, and it also works with JIT systems. They are now making that available within Linux VMs running on Macs.
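Per the documentation linked above, inside the guest this amounts to mounting the Rosetta share and registering it as the binfmt_misc interpreter for x86-64 ELF binaries (a sketch; the share name "rosetta" is whatever the host app configures):

```shell
# Inside the ARM Linux guest of a Virtualization.framework VM:
# expose the Rosetta runtime that the host shares into the guest
mkdir -p /media/rosetta
mount -t virtiofs rosetta /media/rosetta

# Then register /media/rosetta/rosetta with binfmt_misc as the
# interpreter for x86-64 ELF binaries (e.g. via update-binfmts on
# Debian/Ubuntu); after that, x86-64 executables run transparently.
```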


> highly performant

Last thing I read, 70% of the native performance was shown by running GeekBench through Rosetta (with a few odd results noted).

If somebody has better info...

Edit: I see that Nov 2020 checks returned around 80% performance, and there was discussion on HN at (at least) https://news.ycombinator.com/item?id=25105597


Here are my numbers for the original M1 (not Pro or Max) soon after release:

ARM Geekbench single core on M1 macOS: 1734. ARM Geekbench single core on Windows-on-ARM in a VM on M1: 1550. x86 single core on i9 MBP macOS: 1138. x86 in emulation on M1 macOS: 1254.

Yes, 72% x86 Rosetta vs. M1 Native. However, x86 Rosetta on M1 was faster than the previous i9 2019 Macbook Pro x86 native. I consider that to be performant for running code that was compiled for a very different architecture.


When you compare it with the sad-trombone sound that Windows has produced for its ARM OS, it is speedy.


This might be an unpopular opinion, but I really think people should ignore bench scores and run the processes they need themselves. See what it feels like, and how comfortable you are with that.

Benchmarks are good for bragging rights and maybe convincing over-zealous accounting to approve a purchase (but even then that’s probably not all there is to it.)


All the telling similes I can briefly think of ("in Chad drinking-water availability does not reach 40% and in Namibia it exceeds 80%: but see what it feels like, and how comfortable you are with that", or "near that peak the temperature is 10°C and in the valley 20°C: but see what it feels like, and how comfortable you are with that") belong to realities far more complicated (the "ceteris paribus" constraint is less foreseeable) than that of "on that machine this defined code runs in half the time of that one". That is especially so when you are trying to get an idea of the world, not of how you will feel and how comfortable you will be.

You can decide beforehand if increased speed with respect to your experience on your machines is beneficial to you or not.


> […] I really think people should ignore bench scores and run the processes they need themselves. See what it feels like, and how comfortable you are with that.

And how do people without disposable income judge?


When I didn’t have disposable income, I don’t remember worrying about the performance of things I couldn’t afford.


In the case of Apple at least, they have a 14 day no questions return policy.


Those are terrific numbers for emulation.

Anyway, I can say that my colleague's M1 using Rosetta is faster than or equal to my MBP i9 2020.


That's because it's not emulation. It's binary translation, which is vastly more performant.


I have benchmarked x86 on ARM Linux VM with Rosetta, and while Geekbench 5 shows similar performance between ARM and x86 version (for both single and multi core), this does not translate to the actual real world use cases.

When benchmarking x86 and ARM containers, our application seems to be around ~5x slower with x86-rosetta, and similar slowdowns can be observed for mysql-server or even just `apt install`.

This is still significantly better than using qemu emulation, but it's not really usable in our case.

I've also encountered segmentation faults when running x86 `npm` inside Docker, so couldn't even install packages, but didn't dig further as to what's the cause.

(Note: I've created a simple macOS app using Virtualization framework, enabled Rosetta, and loaded Ubuntu Focal. I've installed the latest version of Docker, which automatically used `rosetta` when encountering x86 executables. Maybe this setup is not ideal.)


> It does binary pre-compilation and cacheing. It also works with JIT systems.

Much more impressively it also leverages a custom hardware x86-like memory model unique to the M1/Apple ARM chips. That's where most of the performance really comes from, as I understand it.


Not really; you can skip the barriers as Windows does and get mostly-decent emulation.


My understanding is the AOT won’t be available to Linux; it’s JIT only.


The WWDC video is unclear but seems to imply that it works the exact same as on macOS.

Hopefully, this is the right timestamp:

https://developer.apple.com/videos/play/wwdc2022/10002/?time...


Interesting. Where did you see that? I'm still trying to get a handle on the latest changes.


Says so here; it was posted earlier this week. I cannot verify its accuracy. I do work for Apple, but not at all on related stuff.

https://threedots.ovh/blog/2022/06/quick-look-at-rosetta-on-...


I'm in a similar boat: I love the performance/battery of my M1 MacBook Air, but the ecosystem is just too messy at the moment for me. I have a few tools I need to use that haven't yet been making official Apple Silicon releases, because GitHub Actions doesn't fully support Apple Silicon yet. The workaround involves maintaining two installations of Homebrew, one for ARM and one for x86-64, and then being super careful not to forget whether you're working in an environment that's ARM or one that's x86-64. It's too much of a pain for me to keep straight (I admit it: I lack patience and am forgetful, so this is a bit of a "me" problem versus a tech problem).
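For anyone who hasn't seen the two-Homebrew setup, it looks roughly like this (the prefix locations are the defaults; `foo` is a placeholder package):

```shell
# ARM-native Homebrew installs under /opt/homebrew;
# the Intel copy, run under Rosetta 2, lives under /usr/local
arch -arm64  /opt/homebrew/bin/brew install foo
arch -x86_64 /usr/local/bin/brew  install foo

# Check which architecture the current shell reports
uname -m   # arm64 natively, x86_64 under Rosetta
```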

My solution was to give up using my M1 mac for development work. It sits on a desk as my email and music machine, and I moved all my dev work to an x86 Linux laptop. I'll probably drift back to my mac if the tools I need start to properly support Apple Silicon without hacky workarounds, but until GitHub actions supports it and people start doing official releases through that mechanism, I'm kinda stuck.

It is interesting how much impact GitHub has had by not having Apple Silicon support. Just look at the ticket for this issue to see the surprisingly long list of projects that are affected. (See: https://github.com/actions/virtual-environments/issues/2187)


I'm having the same issue on Azure DevOps. The only way forward seems to be running your own ADO agents on ARM machines you've managed to arrange. ARM on Azure is a private beta that you have to apply for.

That wouldn't be too much of an issue if you could just cross-compile, like you can with Go. However, GraalVM can't do this yet.
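The Go comparison, for contrast: cross-compiling is just environment variables, no emulation or special build agents needed (output names here are made up):

```shell
# Build Linux binaries for both architectures from any host
GOOS=linux GOARCH=amd64 go build -o app-amd64 .
GOOS=linux GOARCH=arm64 go build -o app-arm64 .
```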


Can you describe this case in a bit more detail? I'm trying to understand, for myself, whether it would cause any problems for our dev team anytime soon. So far I've heard no complaints/requests from them about Apple Silicon. Should we get prepared for this, or is it a very specific case?

In a nutshell, I don't see how having Apple Silicon locally creates the problem: if your non-local envs (dev, prod, stage) are running on x86 Linux or even ARM Linux, there shouldn't be any issue building for those architectures on your build farms anyway.

I may be missing some important part here.


> It is interesting how much impact GitHub has had by not having Apple Silicon support

Putting on my tin-foil hat for a sec: GitHub is owned by Microsoft, who would really stand to benefit from slowing down Apple Silicon adoption a bit...


Alternative theory: Apple doesn't offer an M1 server. Github doesn't offer an M1 build server because M1 servers don't exist.


Apple also doesn’t offer an x86 “server” yet Azure DevOps offers Mac build servers.


Yes, because Microsoft got a special license from Apple that allows for the virtualization of macOS on non-Apple hardware...

The rest of us are still running on racks of Mac Minis.


This isn’t true.

https://devblogs.microsoft.com/devops/cloud-hosted-mac-agent...

> Please be aware that during this preview, our Mac hardware is hosted in third party datacenters in the United States and your build and release data could cross geopolitical lines. After each build completes, its macOS VM is reimaged, leaving no trace of your data on the agent. For more information, see where VSTS data is stored. Our Mac datacenters will expand to other geographies soon


Better link: https://docs.microsoft.com/en-us/azure/devops/pipelines/agen...

>Agents that run macOS images are provisioned on Mac pros with a 3 core CPU, 14 GB of RAM, and 14 GB of SSD disk space. These agents always run in the US irrespective of the location of your Azure DevOps organization.


It sounds like they're just using MacStadium.


Do you have a source for this claim? As far as I’m aware Azure racks mac minis and has no special license, but I’d be fascinated to be wrong.


AWS offers Mac M1 instances, using Mac Minis. It seems like Github could do it.


Mac Minis aren't servers, though; they suck in terms of redundancy, density, form factor, and "lights out" management. It's fascinating the effort made by some (like AWS, MacStadium, Scaleway) to try to force Mac Minis and Pros to be server-ish.

And Apple's EULA makes them basically unusable as short term rented servers, there's a minimum of 24h which is ridiculous.


True, but if they're "server enough" for AWS, I think that says something. The 24 hour thing is a problem though.


You can't cross compile on an x86?


Some things can be cross-compiled, sure.

But a lot of projects have tests they want to run on their target platforms.

If you build your Linux release on Windows, you might accidentally release a .zip where your binaries aren't executable. If you set up cross-compiling to Mac 5 years ago and tested it then, perhaps in the meantime code signing has become mandatory. If you build and test on Linux and release to Windows, perhaps you'll get tripped up by filenames becoming case-insensitive.

Some projects are keen on testing on all target platforms to flush out issues like that.


M1 Mac Mini farms easily solve that problem.


The comments on the GitHub issue claim Apple's ToS says you need to rent the machine to a customer for 24h at a time.

”(ii) each lease period must be for a minimum period of twenty-four (24) consecutive hours;

(iii) during the lease period, the End User Lessee must have sole and exclusive use and control of the Apple Software and the Apple-branded hardware ”

https://www.apple.com/legal/sla/docs/macOSMonterey.pdf (Section 3.)


There are many comments here that go: "we had to do all sorts of configuration for this to work, but it's been great and we like it".

As a primarily Linux user these feel like very familiar stories.

It's kinda refreshing to hear those stories from mac users. Maybe we are not so different after all.


Mac-using devs are mostly Linux-using devs who prefer a non-free distro ;)


That's not that far from reality. It's essentially like using a Linux with a really good graphical UI, an ecosystem and third-party integrations that just work, and fully supported, stellar hardware.

After all, macOS is certified Unix and Linux is Unix-like.


As a Linux user who has run a Mac daily for work before, I don't really feel this at all.

To me, the core of the Linux experience consists mostly of things that are not part of macOS:

  - uniform, comprehensive, robust package management, including for the system software
  - GNU coreutils and related utilities (sed, grep, find, etc.)
  - good filesystems
  - 'root is root'; no policy or other bullshit restricting what root can do by default
  - lots of featureful, performant terminal emulators (to the point that the DE default is almost always fine)
  - some choice w/r/t desktop experience
  - popular apps don't just entirely stop working between OS releases
  - pretty much everything is discoverable and configurable if you're determined
Those are given more or less in order of importance, or how essential they are to the Linux experience to me. They don't include any specific complaints about the macOS desktop on its own terms, or things that I miss from my favorite desktop environment (Plasma). But I have plenty of both of those as well.

Using Linux just feels good to me, and using macOS really doesn't, even after a year or more 'living in it' every day. Uniform, automated software management is at the heart of it for me. But regardless, using macOS doesn't really feel similar to using Linux to me, even with highly opinionated desktop environments which are not my favorites, like GNOME.


If it was the exactly same thing it would have been called Linux :)

You can bend the experience the way you like; you don't have to use it as-is. For example, Homebrew can take care of a few of the complaints here.

By the way, I thought that apps stopping working randomly was a Linux feature? Then you dive in to fix it; not really an experience I miss.

Also, hunting down software for Ubuntu 14.04 or 18.04, because something that works on one doesn't work on the other, is a Linux experience. Often you don't have software that simply works on "Linux"; you need to find it for a specific distro, and even a specific distro version, or compile it from source. Compiling from source often means installing the tooling and libraries for that specific version. Then you have distro- and version-specific instructions to install, configure, and run the thing.

Linux is a horrible experience for people who see their computers as tools, don't really like managing them, and want to spend their energy on using the tool to do other things (like designing apps, studying data, making videos, etc.).

Anyway, it boils down to your preferences.


> For example, homebrew can take care of the few of the complaints here.

Not really. Homebrew doesn't manage the operating system (which I understand is a desirable kind of separation for some users), and it's also just not comparable to Linux package managers in its technical aspects. The result is that it's just not as reliable, predictable, fast, or complete as virtually any distro package managers on Linux.

> By the way, I thought that apps stopping working randomly was a Linux feature? Then you dive in to to fix it, not really an experience I miss.

Apple takes a pretty radical position with respect to backwards compatibility on macOS, and this has some benefits for Apple as well as developers of greenfield projects on macOS, but it does have some downsides. One issue related to those downsides is that a huge proportion of macOS software aimed at power users whose purpose is to refine or extend the desktop experience relies on undocumented or unsupported APIs to achieve its functionality, because that's all that's available to that end. With every macOS release, such APIs are ruthlessly culled, and at least a few apps are either left without replacement or have to be completely rewritten. (The most annoying one which affected me when I was a daily macOS user was maybe the forced obsolescence of Karabiner, for example.) There's really nothing comparable on Linux, for a range of reasons.

> Also, hunting down software for Ubuntu 14.04 or 18.04 because something that works for one doesn't work for the other is a Linux experience.

That kind of thing can be a serious frustration with proprietary software that only officially targets a specific Ubuntu LTS release, and it used to be especially painful before the availability of containerized app platforms like Flatpak and Snap. If your work requires you to use software that doesn't support your OS, that is definitely a problem.

> Often you don't have software that simply works on "Linux", you need to find it for specific distro and even specific distro version or compile it from source.

As they say, Linux is not an operating system. Distros are operating systems. Fragmentation is just the obverse side of choice here, but when the Linux userbase as a whole is so small, I get feeling frustrated by that.

As for building software that's unpackaged on your distro of choice: per-distro variations in build instructions are generally not an issue unless you're just following those instructions more or less blindly or by rote. Packaging software can be annoying on very niche distros with small repos just because you have to do it frequently, but it's pretty rare and not very hard to do for the vast majority of software. To me, it doesn't make much sense to choose a platform with inferior tools just because some software you like might be pre-packaged for it, when you can just choose a platform whose package management tooling you actually like, and package the few outstanding odds and ends you need on the platform. (This is generally how I think about choice of distro.)

> Linux is a horrible experience for people who see their computer as tools and don't really like to manage it and want to spend their energy on using that tool to do other things(like designing apps, studying some data, making videos etc).

I think this is really overstated and stereotyped. Even though I'm a Plasma user who has strong preferences for particular distros, you can stick me on any major Linux distro with any desktop environment, and even largely sticking to the defaults there will still leave me feeling much more at home than I can on macOS. It's not about time spent configuring or customizing, and it's not just about a compulsion to tinker, either. It's different, even if you stick to defaults and even across distros. It's a lot of little things, but it really adds up to a whole different vibe.

I also want to say that I think the metaphor is misplaced. An OS is not a tool like a hammer is a tool, because it's a whole environment rather than an object. Your workstation OS isn't like a screwdriver or even like a toolbox. It's like your whole garage, your whole studio, the whole warehouse, the whole factory floor. It may not make a lot of sense to care about customizing one specific screwdriver. But caring about being able to freely arrange your workspace is pretty natural, and macOS' differences from Linux are a genuine, substantial culture shock when you're coming from Linux workspaces that have become cozy and efficient to navigate for you.


>The result is that it's just not as reliable, predictable, fast, or complete as virtually any distro package managers on Linux

I disagree, for example pacman will happily break your system if you have any third party packages or don't use it exactly as prescribed. Not all Linux package managers are perfect or even that good.


> Not all Linux package managers are perfect or even that good.

Definitely. And the good ones are sometimes the most frustrating for their remaining imperfections! But when you compare them to package management efforts outside of Linux and free Unix distros, efforts like Homebrew or pip or NPM, there are typically lessons that Linux distro package managers have learned from each other that the others miss, to their detriment.

> for example pacman will happily break your system if you have any third party packages or don't use it exactly as prescribed.

If you care more about robustness than speed or simplicity, pacman is arguably the worst in its class. On a technical level it's still on a par with Homebrew or better, depending on what we're comparing. But it can be more painful to use in practice because of the actual role that the AUR plays in the Arch ecosystem. Arch devs' denialism about that has led to a permanent state of affairs where everyone uses the AUR and pacman ignores the dependencies of AUR packages every time it runs updates, i.e., perpetual breakage.

I'm not a fan of that design or the Arch 'blame the user for holding it wrong; after all, we warned them this required manual attention' attitude. And there are better-engineered package managers available on macOS as well, like Nix and pkgsrc.

But the base system is still unpackaged (like in many Unix distros), and the package managers you end up needing for working on the OS are decidedly second class on the system and prone to being broken periodically by Apple. (If you follow along with Homebrew or Nixpkgs on GitHub, you can see the kinds of huge efforts they often have to go through when Apple releases a new macOS beta and it completely breaks things for them.) It's just too different to be summarized as a matter of 'like Linux but with a different GUI'.


> - uniform, comprehensive, robust package management, including for the system software

macOS uses Software Update to manage system packages, and the App Store to manage that world, and for everything else, MacPorts (if you care about your system, Homebrew is not an option).

> - GNU coreutils and related utilities (sed, grep, find, etc.)

macOS has a userland... you seriously can't find /usr/bin/sed in macOS? Seriously?

> - good filesystems

uh, if you actually think ext4 is any good, then I guess you just don't care about massive data loss. wtf??!?

> - 'root is root'; no policy or other bullshit restricting what root can do by default

Always good to have unrestricted access to root for all those necessary privilege escalations. Security policy just gets in the way. Who has time to activate this and authenticate that?

> - lots of featureful, performant terminal emulators (to the point that the DE default is almost always fine)

You do realize that every single beloved terminal emulator you use on Linux most likely also runs on macOS, right?

> - some choice w/r/t desktop experience

omg. You are not required to run Quartz. You can run X11 if you really want, and you can even run GNOME or Xfce, any window manager you want, in macOS. Oh, but what you mean is: because you don't know how, it can't be done.

> - popular apps don't just entirely stop working between OS releases

Because you're not aware of Linux breaking applications between releases, I guess it doesn't happen, and because upgrading system software the moment it is released is the new, state of the art sysadmin philosophy.

> - pretty much everything is discoverable and configurable if you're determined

kind of an empty observation, because this is true of all things

Stick with Linux, please. The less you know, the better off everyone else is.


I'm surprised and kind of impressed by the level of vitriol in this comment, especially given the relatively narrow focus of the comment of mine it's replying to, which didn't really consider the merits of macOS itself so much as the differences that give using it a very different vibe from using Linux for people who are used to desktop Linux.

I'm not going to reply to the ‘substance’ of your post but I will say that nothing you've written here is in fact news to me. A more curious person, or maybe just a person who is having a better day, might wonder what they're missing rather than lose their shit, when someone writes something that surprises them on the internet.


Just wanted to chime in that you're absolutely right. That comment was unnecessarily filled with hate and obsessive elitism. It's really weird that someone would take time out of their weekend to write that.


>> - 'root is root'; no policy or other bullshit restricting what root can do by default

> Always good to have unrestricted access to root for all those necessary privilege escalations. Security policy just gets in the way. Who has time to activate this and authenticate that?

On GNU/Linux there's SELinux for that, and has been since 2001. Red Hat, which implements security very well, has had it enabled by default basically forever.


Right, and SELinux, alternatives like AppArmor, and the various kernel hardening parameters available on Linux are things that get in your way relatively rarely. SIP is much more annoying.

And there are also configuration matters where rather than policy on what root is allowed to do per se, making things a pain in the ass is the security strategy. Like there's no way to non-interactively enable macOS' built-in SSH server without resorting to an enterprise endpoint management system.
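For what it's worth, the one-liner that looks like it should do the job illustrates the problem nicely (a sketch; exact behavior varies by macOS version):

```shell
# In theory this enables Remote Login non-interactively...
sudo systemsetup -setremotelogin on
# ...but on recent macOS it fails unless the invoking app (e.g. your terminal)
# has already been granted Full Disk Access -- which is itself an interactive,
# GUI-only step, unless you have an MDM/endpoint-management system to push it.
```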

Part of what gives the feeling that ‘root is not root’ on macOS is that you can't really administer macOS like a normal Unix system or like Linux. There's a bunch of things that require interactivity, or a cloud account logged in. There are files that are part of a normal POSIX filesystem which play a certain role in configuring Unix systems which are present on macOS, but literally just don't do anything anymore, in favor of some other format that macOS actually cares about and which is a bigger pain to edit or automate using normal Unix userland tools.

Things like creating users in a script is way more verbose on macOS than on Linux or any BSD I've seen. (The least annoying way to handle it IME is to use a wrapper that imitates NetBSD's utilities for this which is bundled in pkgsrc.)
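To illustrate the verbosity gap, here's a rough sketch of the same task on both systems (the account details are made up, and the macOS side is the commonly cited dscl incantation, which may need adjusting per OS version):

```shell
# Linux: one command.
useradd -m -s /bin/bash alice

# macOS: dscl, one attribute at a time (you must pick a free UniqueID yourself).
sudo dscl . -create /Users/alice
sudo dscl . -create /Users/alice UserShell /bin/zsh
sudo dscl . -create /Users/alice UniqueID 550
sudo dscl . -create /Users/alice PrimaryGroupID 20
sudo dscl . -create /Users/alice NFSHomeDirectory /Users/alice
sudo dscl . -passwd /Users/alice 's3cret'
sudo createhomedir -c -u alice
```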


All other complaints aside, claiming Linux has it better when it comes to “featureful, performant terminal emulators” is outright laughable. The number one thing I miss on my home Linux workstation is a good terminal emulator. Nothing in Linux land comes close to the featurefulness of iTerm2, to the point where the vast majority of Linux emulators don’t even support ligatures! Add “fast” to that and you only have one option: Kitty. Which, guess what, also runs on macOS. iTerm is generally slower than Kitty, but the trade-off is completely worth it, especially when you consider features like tmux CC mode, which makes working on a remote system almost as transparent and reliable as working on the local system.


At the time I was on macOS, I ended up using iTerm2, but I didn't feel great about it. I understand that iTerm2 has a wealth of features, including some that are virtually unique. But I wasn't happy with iTerm2. Annoyances I can remember at the moment are:

  - it was shocking to me that getting decent performance with tmux required installing an unstable build and enabling GPU acceleration (something I never had to resort to before)
  - I had a skeptical view toward many of iTerm's unique features because
    + the whole scandal with iTerm2 leaking private data via DNS had just happened (see: https://www.bleepingcomputer.com/news/security/iterm2-leaks-everything-you-hover-in-your-terminal-via-dns-requests/ , https://gitlab.com/gnachman/iterm2/-/wikis/dnslookupissue )
    + the tmux ‘integrations’ didn't work with my tmux setup, because they require iTerm2 to launch tmux and at the time I already used a different system for managing local tmux sessions ( https://github.com/sagebind/tmux-zen ) that I liked
    + when I temporarily changed things to let iTerm2 launch tmux, I found that it broke/ignored my keybinds, which is apparently intentional -- https://gitlab.com/gnachman/iterm2/-/issues/3997
    + shell integrations always seemed almost like a layering violation to me, but also just plain didn't (and don't: https://iterm2.com/documentation-shell-integration.html ) work with normal tmux or screen sessions anyway
  - iTerm2 displayed vertical spacing with the bar characters used for drawing vertical lines in tmux incorrectly unless I went out of my way to use a special font (Hack), something I had never encountered before
  - swapping the Option and Command keys inside of iTerm2 was required for getting in-terminal Emacs keybindings to work right, but it swaps them *globally*, so that if you use either modifier in your ‘Hotkey Window’ configuration, you have to use a different key to dismiss the window than you use to summon it
    + I had to supplement my iTerm2 config with a Hammerspoon config to essentially bind summoning it and dismissing it to two different key chords, which worked but broke the animations the app features for ‘Hotkey Window’ because I was summoning it via iTerm2 but dismissing it via Hammerspoon
  - I had to modify some keycode configuration stuff to get it to act more xterm-like to get some in-terminal apps to interpret the right keybinds, and it was a more annoying, fiddly process than on terminal emulators I'm used to, which provide comprehensive profiles you can just swap between for that
  - iTerm2's configuration windows feel really disorganized, cluttered, and just messy to me compared to Konsole's
  - I didn't need some of iTerm2's special features, because I already had them implemented in my tmux config, like regex search for find-on-page and copy mode, so I didn't really have a chance to be impressed by them
  - some of iTerm2's special features, like displaying images, were then implemented only in a non-standard way, which meant I couldn't count on them on other platforms (nowadays iTerm2 and Kitty share their special image protocol, I think, plus iTerm2 supports Sixel graphics, which is really cool) and didn't really want to use things that relied on them
I feel bad about posting much negativity about iTerm2 when it is clearly such a lovingly maintained, ambitious project that serves many people well. And I understand that for people with different expectations or who don't care to have a portable, cross-platform command line environment, letting iTerm2 ‘take over’ for tools like tmux and push them to the background is an amazing feature. I also get that iTerm2 is very creative and clever about getting crufty old protocols to do things they were never originally intended to do, and that the results can be really nice sometimes.

The landscape of cross-platform terminal emulators has also changed a bit in just the past few years, because some nice terminal emulators have matured substantially.

But as someone who, a few years ago, came to macOS with a fluent, agreeable, largely terminal emulator-agnostic workflow already and just wanted ‘a fullscreen dropdown terminal that won't lag out if I have a few noisy commands running in a tmux window’, iTerm2 was simultaneously overkill and underwhelming. On Linux, the dropdown terminals baked into the defaults of the big two DEs, Guake and Yakuake, both work just fine, as do several others that I've tried.

PS: Back then I only cared about programming ligatures in GUI Emacs, which was fine, but nowadays Konsole does have ligature support. And Wezterm is another cross-platform terminal emulator with ligature support and reasonable performance. :)

PPS: I do see an iTerm2 feature every now and then that I find myself envying until other terminal emulators I like catch up, and I'm happy to admit that the iTerm2 maintainer seems to pick up the new terminal hotness at an impressive pace. Lately it's OSC52 escapes for remote terminal Vim and Emacs sessions :)


Even those points can be argued around in various ways, so they're not even that clear-cut.

> uniform, comprehensive, robust package management, including for the system software

Every Linux distribution has its own package manager, so that somewhat fails on the 'uniform' point. Package management for system and apps being shared (comprehensive) is an arguable point as well, and 'robust' seems to not be the default on more traditional Linux distributions.

The approach seen in for example Fedora Atomic matches the macOS approach a lot more, including rollback and consistency, but also splitting the system image, overlays, and user apps.

> GNU coreutils

Not all Linux distributions have these either; busybox-based distros seem fairly common these days (and offer most of the popular switches you'd see with the GNU utilities).

In addition, of course, the GNU utilities run under any UNIX-like OS, so if you do need any of the convenience switches you're used to, they exist anywhere as well.

Again, of course, subjective.

> good filesystems

APFS seems 'better' than the ext* default seen on most Linux distributions, and has had less of a history of 'random corruption' than early btrfs.

> 'root is root', ...

Red Hat-like distributions (and even Ubuntu) seem to by-default enable some kernel security module, which have the same effect.

The implicit UID 0 bypass is nasty for security.

> lots of .. terminal emulators

I'm not a picky terminal user, but macOS Terminal.app nowadays seems to match, say, gnome-terminal or Konsole quite well in behavior.

Perhaps it doesn't support some fancy feature set, but I'm not sure if the main DEs do either.

> some choice in desktop experience

Valid point. However, if you prefer something as uniform as the macOS experience, you can't get that on generic freedesktop Linux.

> popular apps don't just entirely stop working

Depends. The typical Linux distribution has a nasty ABI as well, often only guaranteeing source compatibility.

This is indeed often easier to work around even for binary-only Linux apps (on a multilib amd64 system, you can usually run some libc5 app from the late 90s fine. On macOS, that's four CPU architectures ago and you won't even stand a chance).

> pretty much everything is discoverable and configurable if you're determined

Same goes for macOS and even Windows. Reverse engineering can be done with sufficient determination, and many jurisdictions have exemptions for these kinds of use cases.

macOS and Windows also offer symbol names for a fair amount of OS components, it's just perhaps a little more work than 'grep the source code for a flag'.

In the end, however, it's all a matter of tradeoffs and what you're used to (i.e. won't have to waste time learning before being able to get work done), and if you're used to something that varies from the default of any system, this is less and less likely with macOS or Windows.


> Every Linux distribution has its own package manager, so that somewhat fails on the 'uniform' point.

No, what I mean is that on every given system, you can manage all software on the system with a single tool, whether it's an application or a system component. Everything shares an installation mechanism and an update process— uniform in that sense.

> 'robust' seems to not be the default on more traditional Linux distributions.

The traditional ones are not robust to things like pulling the power during package installation, but their behavior is predictable and they're capable of reliably doing the basics, which is something Linux users will painfully miss any time they're on macOS or Windows.

> The approach seen in for example Fedora Atomic matches the macOS approach a lot more, including rollback and consistency, but also splitting the system image, overlays, and user apps.

I think this is an approach that has a ton of value for many users, and that kind of separation is common among Unices. Using just one tool for everything has always been a Linux weirdness. I like it, but one of the consequences that sucks for casual users or newcomers is that you're effectively always managing the whole system; there's no such thing as 'just installing $APPLICATION', because all installations are interconnected through the global web of dependencies. It's kind of cool but it's also kind of a mess, and it leads to the possibility of accidentally uninstalling your desktop environment when you mash the yes button trying to install Steam.

I think the split system approach is good because it can end that type of problem, but I think having the base system still be modular and flexible is valuable. Fedora Silverblue is like this and IIRC FreeBSD is actually like this these days, since the base system has also been packaged. SteamOS is famously like this, as well, where Steam is essentially the package manager that sits atop the base system which is packaged but which you have to 'unlock'.

Upgrading the macOS base system is not atomic, though. I know because I once bricked a Mac Mini that I thought was stuck during an OS update by unplugging it. :)

A macOS à la carte where you maintained the separation between the base system and user packages but the base system was still parceled out into individual packages would be really cool. I'd be interested in playing with that for sure.

> > good filesystems

> APFS seems 'better' than the ext* default seen on most Linux distributions, and has had less of a history of 'random corruption' than early btrfs.

Yeah, what I mean here is that because Linux is such a player in the datacenter, even desktop Linux users have choices of really excellent filesystems available.

APFS is basically just a backend for Time Machine. You can't even take persistent or named snapshots that you know won't be garbage collected later, never mind the performance of filesystems like ext4 and XFS or the interesting features of Btrfs or ZFS.
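The snapshot point is easy to demonstrate: macOS only exposes Time Machine-style local snapshots, named by timestamp and subject to automatic thinning. These are the standard tmutil commands; the retention policy is the OS's call, not yours:

```shell
tmutil localsnapshot          # create a local APFS snapshot of /
tmutil listlocalsnapshots /   # names look like com.apple.TimeMachine.2022-06-10-120000.local
# There is no supported way to name one of these or mark it 'keep forever';
# the OS may thin them whenever it wants the space back.
```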

APFS doesn't even support transparent compression, which is just baffling to me in 2022.

Once the OpenZFS port for Mac matures, this situation could get a lot better, and that would be awesome.

> > pretty much everything is discoverable and configurable if you're determined

> Same goes for macOS and even Windows. Reverse engineering is a thing that can be done with sufficient determination, and in many legislations has exemptions for these kinds of use cases.

There's something really depressing to me about having to spend most of my time on a platform that's so actively hostile to me that getting simple behaviors I want is a matter of reverse engineering.

----

I think I pretty much agree with everything else :)


I do not find this at all; I use Mac for the hardware (I stopped using it for years because the hardware got worse; with the M1 Air it is top-notch again). Coming from a Linux background of over 20 years, I do not like the OS. A good graphical UI I could not care less about; outside the hardware, everything is mostly worse. I will install Asahi immediately when it is solid enough. In the meanwhile I have to cope with half-baked solutions like Homebrew, but the hardware is simply unmatched (I have tried many alternatives), so overall it is still a positive. With Linux, this would be the best machine ever made. For me, that is; others like UIs of course, and then it's a different story.


> really good graphical UI

Honestly that's subjective. I preferred GNOME, but to each their own. There's too much flashiness for my taste, and some icons and UI just look like something for kids (big, bright, round).

> ecosystem/3rd party integrations

Eh. A big part of that ecosystem is paid for really basic features. The App Store requires an iCloud account to be used. That ecosystem also relies on various hacks to achieve things which are IMHO rather basic - like Karabiner which is a keylogger, just so you can do relatively basic key remapping. What's stopping Apple from saying keyloggers are no longer allowed in the next version, like how they broke all VPNs by deprecating the existing framework used by all of them? Also there are entire, sometimes paid, apps to do basic features the OS should support itself, like having a different scroll direction between the touchpad and mouse.

The hardware, in its current iteration (no butterfly keyboard), is very good. I don't particularly care about MagSafe (I keep knocking it off by mistake, which puts the device to sleep) or the giant touchpad that weirdly strains my palm when using it for prolonged periods, but those are purely personal. Can't wait to be able to run Linux on an M* Apple device; I might even buy one.


It's not that subjective, actually. Apple does usability research and collects feedback and usage data all the time. It's not an artistic endeavour but a methodological study. Free and open source projects often lack that, or don't do it at all.

Some people can prefer other systems and that's alright.


UX isn't an exact science. Google and Microsoft also do studies, with testing and what not. Would you say their UX is perfect?


Nothing is perfect, but Microsoft's UIs are also good. These studies are not subjective at all; they work. However, things can go bad when the company has other concerns like branding, upselling or compatibility. Apple tends to strike that balance better.


Some of us cut our teeth on BSD even before linux and Darwin is like a familiar friend’s cousin from across town.


> Mac-using devs are mostly Linux-using devs who prefer a non-free distro ;)

A working GUI, to be exact. Source: switched to Macs from Linux in 2013.


non-free distro and nice trackpad :)


and Adobe :/


I suppose that's one advantage of Linux, yes;)


My company gave me a choice between a Thinkpad running Windows and a Macbook. So yeah, of course I chose the MB.


Or work at a company where Linux isn't possible.


Mac-using devs will file tickets that CI/CD is not working, and don't know how to configure their system to use GNU coreutils.

They all just use VS Code or Jetbrains and that's about it, so hell if I know why they need a $3000 machine to run shell commands they don't understand on.

I desperately wish one of the big boys would push enterprise Linux dev machines hard.


Seems to be moving slowly but Lenovo, Dell and HP are all offering Linux as options - which is something I never would have bet on ten years ago.

Not quite _pushing hard_ though.


1. Run arm-based debian using Parallels, headless using `prlctl`. SSH in and use tmux.

2. Everything you install will be arm based. Docker will pull arm-based images locally. Most every project (that we use) now has arm support via docker manifests.

3. Use binfmt to build x86 images under emulation within prlctl, or have CI auto-build images on x86 machines.
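Step 3 can be sketched like this, run inside the ARM VM (the image name is a placeholder):

```shell
# Register qemu's x86_64 handler via binfmt_misc so amd64 binaries run under emulation...
docker run --privileged --rm tonistiigi/binfmt --install amd64
# ...after which amd64 builds work on the ARM host, just slowly:
docker buildx build --platform linux/amd64 -t myapp:amd64 .
```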

That pretty much does it.


Yup. We have a lot of complex dependencies so a couple of us got M1s so we could charge into it headfirst to get it sorted. It wasn’t too bad. We had a couple of 3rd party things stuck on x86 so we emulated them on qemu within the vm. Slow, but ok (eventually we replaced them).

We were using UTM but have recently switched to Parallels, which is nice.

Our prod stayed on x86 but we’ve started moving to graviton3 which is better bang for buck. Suspect it’ll end up being a common story for others too.

m1s are just such nice machines that I’d go quite out of my way to stay on them now.



How would you handle docker images in a team where some use M1 and others use intel?


A customer of mine has two different Dockerfiles, one for Intel and one for M1. Deploys are Intel and are built on a CI server. No image gets out of our laptops.


For now I'm ignoring it. I'm usually about two to three years behind the curve, and by then the bugs have typically been ironed out. I won't be running macOS anyway, but will wait until a fully supported version of Debian is out there that uses all of the peripherals properly. They call it the bleeding edge for a reason, and I see no reason to spend extra effort that isn't driven by an immediate need. I like tech; I can't stand fashion.


I'm in exactly the same boat. I'd love the quality of the hardware, but only if my current software experience (I'm on Debian) is not degraded. Let's see...


> I'm usually about two to three years behind the curve

This makes so much sense now for many workflows. I no longer complain about my computers being slow, so I don't even think of upgrading, and if something's annoying it's mostly about software rather than hardware anyway, so there's no point in upgrading, although the M1 seems to have convinced a lot of people otherwise. Looking forward to adopting this new tech... in 3-5 years or so.

This also makes it cheaper to upgrade through second-hand device, just stay one or two models behind.


You will be waiting far more than 3 years.


Asahi Linux has released an alpha version that supports quite a lot of features.


Simple: I never target specific CPUs to begin with ;)

I'm only half joking. I'm of the group of people who know that Docker is a security nightmare unless you're generating your Docker images yourself, so wherever I've had to support that, I insist on it. If you don't use software that's either processor-centric (and therefore buggy, IMHO) or binary-only, then this is straightforward and a win for everyone.

Run x86 and amd64 VMs on real x86 and amd64 servers, and access them remotely, like we've done since the beginning of time (teletypes predate stored program electronic computers).

Since Docker is x86 / amd64 centric, treat it like the snowflake it is, and run it on x86 / amd64.


Can you elaborate on what part of docker is a security nightmare?


Dockerd is a daemon that runs with very high privileges and does too many things. People hate typing "sudo docker", so they add themselves to the docker group. Congratulations: now you are effectively running as root all the time.
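The classic demonstration of why docker-group membership is root-equivalent, for anyone who hasn't seen it (don't run this casually on a machine you care about):

```shell
# No sudo anywhere: mount the host's root filesystem into a container
# and chroot into it. You now have a root shell on the host.
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```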


On Windows and Mac docker is run in a virtual machine without full access to the host's files so root is less of a problem.

Even though docker runs natively on Linux, perhaps a similar vm setup can be achieved for security reasons.

I use Podman on Linux which is compatible with docker but without the root issue.
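Rootless Podman is close to a drop-in replacement for the common cases; a quick sketch of what "rootless" means in practice:

```shell
# Runs entirely under your UID -- no daemon, no docker group.
podman run --rm docker.io/library/alpine id
# Inside the container you appear to be root (uid=0), but that uid is
# mapped to your unprivileged host user via user namespaces.
```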


There’s podman for that, it runs in rootless mode very well.


On the whole it's been good.

I work on scientific software, so the biggest technical issue I face day-to-day is that OpenMP based threading seems almost fundamentally incompatible with M1.

https://developer.apple.com/forums/thread/674456

The summary of the issue is that OpenMP threaded code typically assumes that a) processors are symmetric and b) there isn't a penalty for threads yielding.

On M1 / macOS, what happens is that during the first OpenMP for loop, the performance cores finish much faster, their threads yield, and then they are forever scheduled on the efficiency cores which is doubly bad since they're not as fast and now have too many threads trying to run on them. As far as I can tell (from the linked thread and similar) there is not an API for pinning threads to a certain core type.


Can you not do this using the CPU affinity environment variables and just ignoring the efficiency codes? I was under the impression you could bind to specific cores with:

GOMP_CPU_AFFINITY="1 2 5 6"

With thread 1 bound to core 1, thread 2 on core 2, thread 3 on core 5, thread 4 on core 6. I don’t have an M1 to play around on, but I’d have assumed the cores have fixed IDs.

Aside from that, if the workload is predictable in time, using a more complex scheduling pattern might help. You could perhaps look at how METIS partitions the workload, but see if it’s modifiable by adding weights to the cores reflective of their relative performance. Generally, to get good OMP performance I always found it better to treat it almost like it’s not shared memory, because on HPC clusters, you have NUMA anyway which drags performance down once you have more threads than a single processor has cores in the machine


Unfortunately, the thread affinity api on m1 doesn't work that way, at least based on what I've been able to understand by reading here: https://developer.apple.com/forums/thread/703361 and more specifically this linked source file: https://github.com/apple-oss-distributions/xnu/blob/bb611c8f...

I agree with your other points though!


Over at Airbyte, we had a project this quarter to update all of our build & publish processes over to building multi-arch (AMD and ARM) docker images. As Airbyte runs entirely within docker, getting a smooth local experience for folks on M1/(2?) Macs was important. We had a long lived support thread (1) where you can see us grow through all the phases - from "nothing works", to "our deps don't work", to "the platform works" and finally to "the connectors work"!

Assuming your base images are themselves already multi-arch, most of the tooling we needed was already built into the `docker buildx` build tool, which is awesome - check it out if you haven't (2). Docker has bundled all the tooling and emulation packages (qemu) needed into a single docker image that can publish multi-arch docker images for you! You run docker to emulate docker to publish docker... There are some interesting things that you'll need to do if you publish multi-stage builds, like publish a tmp tag and delete it when you are done, but it's not /too/ terrible. Since Airbyte is OSS, you can check out our connector publish script here (3) to see some examples.
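For anyone who hasn't used it, the core of the workflow is compact (a sketch; the registry and image names are placeholders):

```shell
# One-time: create a builder and register qemu emulators for foreign arches.
docker buildx create --use
docker run --privileged --rm tonistiigi/binfmt --install all
# Build both architectures and push a single multi-arch manifest.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```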

I'd recommend spending the time to get your multi-arch tooling working - not only does it make the local dev experience faster/better, it:

1. unlocks ARM cloud compute, which can be faster/cheaper in many cases (AWS)

2. removes a class of emulation bugs when running AMD images on ARM - mostly around networking & timing (in the Java stack, anyway)

Links:

1. https://github.com/airbytehq/airbyte/issues/2017

2. https://docs.docker.com/buildx/working-with-buildx

3. https://github.com/airbytehq/airbyte/blob/master/tools/integ...


Early reviews said that even the base 8GB RAM models were snappy, despite the small amount of RAM. I went in believing that, but it turned out to be anything but. Maybe it was my specific usage (Firefox with tons of tabs, Electron apps open all the time), but it feels much more sluggish than my previous 2013 iMac. The next one I get is definitely going to have 16GB minimum, maybe 24GB.


If you don't have enough RAM, no processor can save you. And if you aren't running out of RAM, adding more won't help. The M1 isn't magic, and I'm sure it does some things slower than Intel and some things faster. Maybe for your workload it's slower. For the workloads I run (Rails apps, Postgres) the M1 has almost identical performance to the Intel i9, but is more efficient in terms of battery life.


Apple does aggressive page-out to swap space.

Some months ago a guy bricked his own MacBook by running data analysis tools all day every day, until the SSD finally gave up (and you can’t replace it in a MacBook Pro, it’s soldered).


I've been using a base-spec M1 MacBook Air for about a year now. I have a 4K external monitor attached most of the time, usually four windows in Safari with perhaps 100 tabs in total, Mail, a bunch of terminals, Slack, Signal and Telegram open. It's snappy all the time, and consistently feels much, much faster than my previous laptop, which was a maxed-out Intel 13" MacBook Pro.

The only thing that ever makes it sluggish is if I open VSCode with a large workspace. Even with all plugins disabled. No idea why. I just stopped using VSCode.


Safari might be the big difference here, and you only really have one Electron/Chromium app, Slack, and it's generally better than others about managing its resource usage.


Yup, I switched back to Safari because I noticed Firefox uses significantly more energy (battery lasts a few hours less when I'm out). So it's likely that it has an impact on responsiveness too.


They may feel snappy if you go 2 GB into swap. Not if you go 12 GB+. My normal stuff uses 20 GB of RAM, and that's without any VMs running. No way that can run smoothly on 8 GB of physical RAM.


This is where marketing hits reality.

There are no magical computers, just a great Reality Distortion Field that's still at work.


I tried going from a 2018 MBP to M1 MBA in 2021 and had too many issues to make it my primary machine. Docker and Android development were particularly brutal to get going reliably IIRC. The M1 performed well for the things it could do, but I still needed the MBP (which constantly reached 100% CPU) for other stuff, so I ended up doing a horrible multi-machine setup with Synergy. That was a dark time in my life ;)

Then tried again early this year with a M1Max MBP, and it has been the biggest step change productivity boost of my life. Definitely still some pain points, but the way this thing handles anything I throw at it is incredible.

I'm mostly doing front-end dev (React Native). I have a minimum of 2 IDEs, 1 iOS simulator, 1 Android emulator, a Windows (ARM) VirtualBox VM, and 2 browsers open at all times. And then add a mix of Docker, Xcode, Android Studio, Zoom, Sketch, Affinity apps, Slack, etc. I haven't ever heard the fan spin up. I was carefully managing what I had open on the 2018 MBP, and now I don't even think about it.

The only thing I'm still running in Rosetta is Apple software: XCode and the iOS simulator, but they run smooth, so I don't even think about it.

The MBA setup I was just flailing my way through. For the M1Max setup, I found this guide very helpful in my initial setup (mostly focused on a RN Dev): https://amanhimself.dev/blog/setup-macbook-m1/


I've been doing React Native development on my M1 MBP and it's been pretty great overall.

But is anyone else surprised how long it's taking to get the iOS Simulator for ARM? I feel like it would make a massive difference to my developer experience (especially in battery). And I haven't seen any indication that it's coming anytime soon.


Isn't it... just in Xcode? I've used it. It's like, there, right?

I open up my Xcode 13.3 beta 2, go to the Xcode menu, down to Open Developer Tool, and click Simulator. Then File > Open Simulator, mouse over to iOS 15.4, down to the iPhone 13 (for example), click on it, and then I have a simulated iPhone 13...

And if I check Activity Monitor, nothing is showing up as Intel code except for Parsec right now...


Great point. I was conflating the Simulator and the app that React Native builds. Simulator is definitely running natively on ARM.

And your comment made me go look back at my React Native project setup. Looks like the main reason my app is still building to x86 is one Expo dependency that doesn't yet compile to arm64. So it's more of an Expo concern.

Thanks for checking on your end.


I didn't realize it wasn't already ARM. I found one day (https://stackoverflow.com/a/68929949) that running Simulator with Rosetta allows momentum scrolling to work, and everything else seemed perfect, so I left it that way.


> The only thing I'm still running in Rosetta is Apple software: XCode and the iOS simulator, but they run smooth, so I don't even think about it.

How old is your version of xcode? From what I can see they added M1 support 1.5 years ago.


I have pods that haven’t been updated to work with arm yet.


Give https://github.com/bogo/arm64-to-sim a try. Perhaps you'll be able to get the arm64-simulator slice for free.


> Docker and Android development were particularly brutal to get going reliably IIRC.

Android? Android Studio has problems? That would be vital to me if I went Mx.


Was only a problem with my first try (the Macbook air, right after release)

My 2nd try with the MBP in 2022, following the guide I linked above, Android Studio has been perfect.


I work at a SaaS vendor.

We are completing a project to upgrade to Java 11 for most of our microservices. This will also mean we do multi-arch container builds for our entire pipeline. Once that is complete, we will begin developing against ARM as the primary target for devs. This is needed because we are in the middle of a hardware refresh, so by EOY something like 80% of devs will be running on M1 Pros.

This should end up saving us money long term as we move all the cloud workloads to Graviton2/3 and Ampere A1 hosts.

We'll be multiarch for many years to come. I also don't see a timeline for us to sunset x86 support considering we also do on-prem installs and ARM rack mount servers are nearly impossible to source.


> We'll be multiarch for many years to come.

I think we're entering a long period of multiarch for the "traditional" server/desktop. I expect a lot of good to come out of this on the compiling side (easier to target any OS-arch), and (perhaps more importantly) on the general cross-platform front.


If you work in Java, why use containers? You already get a singular, portable deployment artifact.


Deployed onto what? Deployed how? Jars are great, but the artifact itself doesn’t answer these questions. Granted, images themselves don’t answer these questions on their own, but there are options available that easily admit deployments that don’t assume everything’s a jar.


The 8GB M1 versions don't have enough RAM for a lot of dev work; doing pretty ordinary things created a lot of RAM pressure when I had one. I later bought an M1 Max with 64GB, and that has been brilliant. I gave up on the idea of using Docker almost immediately.

Early on there were major issues if you were targeting x86-64 systems code in C++. A lot of common tooling was broken for months on end. Time has solved some of these problems but ultimately I wrote code to support native ARM targets as first-class code citizens. There was a significant learning curve (I was not fluent in ARM64 ISA) but now that everything is ported, running native, and the tooling has finally started to catch up, it works pretty smoothly and I don't have much to complain about. The x86 emulation has limitations so native is the only way to go for many things, and you'll still want to test some code on real x86 server hardware.

That said, the latest OS on (at least) the M1 is noticeably buggy. Lots of behavioral artifacts that I wouldn't expect. Not sure if it is the hardware or the software, or both. Nothing catastrophic, just annoying.


Why did you give up Docker? How is that brilliant? Docker is brilliant, giving that up is not brilliant.


I work on large-scale systems software, mostly high-performance database and analytics products. Docker's sole function in that kind of environment is easy replication of dev and functional test environments. No one deploys this type of software in Docker or similar due to the performance implications. Docker was a convenience, it wasn't providing anything critical. I replaced it with a small set of shell scripts and similar.

For this type of development, the number of external dependencies you have is very small. Aside from the compiler, build system, and testing, the number of third-party libraries linked into a release can usually be counted on one hand. Again, for good reason.

Basically, I am doing what I did before Docker, writing a small set of scripts that can perfectly reproduce the environment where I need it. This is made much easier because modern build systems have good dependency management built-in so writing those scripts is a lot less involved.


In my wife's case, the 16GB Macs don't even have enough memory to drive a 4K display. Her Mac goes crazy: overheats, fan at 100%, everything slows to molasses. Piece of junk. And it's x86.


Yes, well known, same here on an mbp-16" i9.

For whatever reason the M1 Macs are crazy different. They feel like they have more RAM (maybe a faster NVMe for swap?), they run the fan far less, and some don't even have fans (the MacBook Air). I've heard it speculated that they use hardware-assisted compression for swap. In any case, don't assume that a 16GB M1 will feel like a 16GB x86-64.


The x86 macs are noticeably inferior in a lot of ways to the new Apple Silicon macs. I have both. That is as much how macOS runs on x86 versus the M1 as it is the underlying silicon. There has long been an impedance mismatch between what Intel is optimizing for and what Apple wants their silicon to be optimized for, and I think the M1 is an expression of that gap. There are things that Intel is still better at, but Apple largely doesn't care about those things.


Just my 2c, results may vary, but my (short) experience with the M1 was so bad I switched back to a Dell XPS the week after I got it. Things may have gotten better meanwhile of course, but my local developer experience was dreadful. Some of the non-ARM-targeted images took ages to start, some didn't start, and others were straight up flaky. I'm not touching the M1 until I know all these issues have been resolved: Docker file system API complete, all-arch-targeted images, etc. It also doesn't make the situation any better that the majority of the images we run locally are really fat Java dependencies like Kafka etc.


It's been silky smooth for native desktop (macOS) and mobile (iOS/Android) development. I make it a point to keep the projects I'm responsible for running with the latest toolchains, though.

The amount of trouble that some seem to be having with backend dev on M1 makes me wonder if maybe it wasn't the best idea for the industry to put its collective eggs in the single basket of trying to perfectly match dev and prod environments. If nothing else, it feels weird for progress and innovation in the world of end-user/developer-facing computing to be held back by that of the server world.


It mainly seems to be Docker causing the problems. We run node.js apps, and all we had to do was update to the latest version node and a couple of dependencies (those with native modules). No app changes.

We run macOS aarch64 locally and Linux x86 in production and have yet to have a single compatibility issue (and we run a staging environment that's identical to prod, so if there were occasionally an issue it probably wouldn't make it to production).


To be fair: it's people using Docker that's the problem. For example, expecting to be able to run an x86 container image on an arm64 CPU. Docker is behaving perfectly in its role of being Docker.


People using Docker are not the problem. That's a perfectly normal workflow. If M1 doesn't support that, that's on Apple and not on the users.


How can a CPU make architecture differences (the main issue dealt with by Docker users) disappear? The only way that could happen is if the M-series were x86-64 rather than ARM, which dramatically reduces the possibility space for advancement in user/developer-facing CPUs.

The most that can practically be done is a software solution like Rosetta, which is a good holdover in the interim, but ultimately software needs to become more architecture-agnostic, not less. Treating matching archs between development and prod as a given is a crutch, not a long-term solution.


The people didn't understand what they were actually doing, and somehow believed that Docker is a magic wand that makes binaries for the wrong CPU run ok.


What's the other option besides running prod stuff on my local machine?


I don't use Docker so I have had no problems whatsoever. Getting all the Mac apps I code on building for Arm was easy.


It is quite interesting scrolling through other comments and seeing a large majority of problems are with Docker pipelines (hard coded to x86_64 images that must be emulated, tools), in other words, problems extrinsic to the platform/OS and the actual program code/dependencies.


Same. I waited a couple months to buy my M1, so when I got it everything I needed was running fine.

There were a couple libraries my company needed that didn't have ARM support but I ported them and made pull requests to the repos, and now they work alright. It wasn't difficult at all, since lots of stuff already had ARM code because of ARM-Linux, Android or iOS.

I go weeks without turning on my work-provided Intel Mac. I actually only use it for "personal" stuff (I help maintain some open source C/Asm stuff that use multiple OSs and architectures). My boss asked if I want an upgrade tho.


I'm an Eng for a startup using Rails, MySQL, ... Next.js.

The only problems we've had are with slow Docker performance for our databases. So much so that we've moved those out of Docker and back to native; performance is easily 6x faster. MySQL was also a headache because an official MySQL 5.7 Docker container for ARM doesn't exist, so we needed to use the slow emulation through QEMU.

We also have a CLI dev tool that is written in Python and distributed in Docker (x86), which has also been slow. Not enough time to build an ARM-based Docker image.
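As a stopgap while an image stays x86-only, a wrapper around the CLI could at least pick the right Docker platform flag per host. This is just a sketch under my own assumptions (the mapping and fallback are illustrative, not the tool described above):

```python
import platform

def docker_platform() -> str:
    """Map the host's reported machine type to a Docker --platform string."""
    machine = platform.machine().lower()
    if machine in ("arm64", "aarch64"):
        return "linux/arm64"
    if machine in ("x86_64", "amd64"):
        return "linux/amd64"
    # Unknown architecture: fall back to emulated amd64.
    return "linux/amd64"

# A wrapper script would pass this to `docker run --platform ...`.
print(docker_platform())
```

On an M1 this would pick the native arm64 image where one exists, instead of silently emulating.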


Isn't MySQL 5.7 a bit outdated?

Regarding slowness - I'm curious how that's a problem. From my understanding, on a local dev env datasets are small, and even 6x slower (say 1 ms on production becomes 6 ms on your machine) shouldn't be an issue? Can you provide some examples? (I may need to run a DB locally for tests one day; getting prepared.)


I'm happy with it.

Here's a tip for anyone with docker compatibility problems: If you add `platform: "linux/amd64"` to your docker-compose (there's also a similar command for Dockerfile iirc), it just gets the x64 images and emulates those.

There is emulation overhead of course, but it's not perceivable in my experience, compared to running native images.
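For anyone hitting this for the first time, the compose-file version looks something like the following (the service and image here are just examples; `platform` is part of the Compose spec):

```yaml
services:
  db:
    image: mysql:5.7            # a tag with no arm64 variant published
    platform: "linux/amd64"     # pull the x64 image; Docker emulates it via QEMU
    ports:
      - "3306:3306"
```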


Something really annoying about this is that for some reason docker can't seem to easily switch platforms for base images. If you have an x64 base image and try to run an arm64 image on top, it'll complain. Why doesn't it just download the right version automatically instead of forcing me to solve it? Seems like there are still some rough edges here.


> : If you add `platform: "linux/amd64"` to your docker-compose (there's also a similar command for Dockerfile iirc), it just gets the x64 images and emulates those.

I've never seen docker crash until I did this.


Afaict, this is the default behaviour? If there is no ARM-specific image it just tries the x86-64 image, which is then emulated. With Docker 4 and Podman at least.


Vagrant + vmware vagrant plugin (https://www.vagrantup.com/vmware/downloads) + vmware fusion tech preview (https://communities.vmware.com/t5/Fusion-for-Apple-Silicon-T...).

Currently running a bunch of Ubuntu (arm) virtual machines and my mbp m1 handles it really nice.


0 Problems: I'm a JVM developer, all of my tools work as intended. We deploy things as one-jar with no OS dependencies other than ENV variables for configuration.


The biggest challenge was getting multi-arch builds sorted. Ended up putting together a layer on top of QEMU to run both x86-64 and aarch64 VMs (https://github.com/beringresearch/macpine). Have pre-baked VMs with LXD installed inside each instance, with main software builds taking place inside LXD containers - works pretty well so far.


This works so long as your build isn't compute intensive. From my experience, you need real ARM (or cross compile) for stuff like C++.


I don't own an ARM computer (except the ones running Android, that is) but in my experience Linux tooling should work just fine on ARM if you pick the right distributions. That said, I have run Linux distros on Android a few times so I am somewhat familiar with what's out there.

Running x64 and ARM together on one machine will work through tricks like Rosetta but I don't believe that stuff will ever work well in virtual machines, not until Apple open sources Rosetta anyway.

I'd take a good, hard look at your tech stack and find out what's actually blocking ARM builds. Linux runs on ARM fine, so I'm surprised to hear people have so many issues.

What you could try for running Docker is running your virtual machines on ARM and using the native qemu-static infrastructure Linux has supported for years to get (less efficient than Rosetta) x64 translation for parts of your build process that really need it. QEMU is more than just a virtualisation system, it also allows executing ELF files from other instruction sets if you set it up right. Such a setup has been very useful for me when I needed to run some RPi ARM binaries on my x64 machine and I'm sure it'll work just as well in reverse.


See my comment on original question. Apple's latest beta system, Ventura, enables using Rosetta within Linux VMs. Looks like they have done a lot of work on the virtualization frameworks since last year.


For me personally (as a freelancer), it's been a pretty smooth transition. I have a dozen or more projects relying on node-sass (which fails to compile on M1), which has been annoying but easily remedied.

For my 9-5 employer, biggest drawback we've come across is that SQL Server can't be installed on Windows 11 ARM, which is preventing us from having a truly local development environment.

We've gotten everything else working via Azure SQL Edge running via Docker for Mac, but it lacks several features that we require (e.g. full-text search, spatial data types).

Despite a recent announcement (https://blogs.windows.com/windowsdeveloper/2022/05/24/create...) that Visual Studio will soon support ARM, there are no signs that SQL Server 2022 will support ARM.

My employer is still moving forward with provisioning M1 MBPs for developers.


There have definitely been some rough edges (on my end mostly related to Terraform modules; I don't have a big Docker/VM-dependent workflow anymore, so that might be why).

But apart from that it’s been incredibly smooth.


For terraform, I have been using tfenv to manage the different versions, and you can set a flag `TFENV_ARCH=amd64` so you download the Intel versions of terraform.

This will also download the Intel versions of all the providers when terraform executes. Which reduces the problems a ton since there are some providers that are definitely not aarch64, especially when it comes to older versions.
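A sketch of that setup, assuming tfenv is installed (the version number is only an example):

```shell
# force tfenv to download the Intel (amd64) Terraform builds,
# which run under Rosetta and in turn resolve amd64 providers
export TFENV_ARCH=amd64

tfenv install 1.2.0
tfenv use 1.2.0
terraform version
```

Putting the export in your shell profile makes it stick across projects.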



I just added creating and publishing ARM64 Docker containers to our automated release process, and the CI (GitHub Actions) time went from about 10 minutes to an hour and a half.

I don't expect many teams to volunteer to suffer this sort of slowdown and complexity in the near term.


I've been working on Depot (https://depot.dev) specifically for this reason: it's a hosted Docker builder service that runs BuildKit on managed VMs. When it receives incoming build requests, it routes them to a VM running the target architecture, x86 VMs run in Fly.io, arm64 VMs run in AWS.

Since it's all BuildKit, you can swap `docker buildx build` for `depot build` and it works exactly the same - I made a depot/build-push-action to drop in place of the docker/build-push-action in GitHub Actions.

It also has a persistent SSD cache disk for each builder, that was my other pain with GitHub Actions, time saving and loading layer cache was negating the speedups from cache hits - with a persistent disk, there's no saving or loading.

Anyways, combo of having a local cache and running on real ARM machines gives like an order of magnitude speedup to builds compared to the QEMU emulation.

Still a new project, not yet officially launched, and hosted services aren't for everyone, but exactly as you said, the status quo is amazingly painful.


That's because there are no ARM runners on GitHub Actions, so you end up emulating ARM, which is slow.

You can add self-hosted ARM GitHub runners, or register ARM hosts for Docker, and see down-to-earth build times.
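On the workflow side, targeting a self-hosted ARM runner is mostly a matter of labels; a hedged sketch (the runner labels and image name are assumptions, and the runner machine itself must be registered with the repo separately):

```yaml
jobs:
  build-arm64:
    # requires a self-hosted runner registered with the "arm64" label
    runs-on: [self-hosted, linux, arm64]
    steps:
      - uses: actions/checkout@v3
      - run: docker build -t myapp:arm64 .
```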


I think it has been mentioned a few times already, but ARM64 on servers is also quite compelling (Graviton on AWS and Ampere Altra on Azure). I've actually seen a few customers couple the move to new M1 MacBooks for their devs with a move to ARM in production as well, to retain that same "the same container image runs on both" property they were used to, even if they go multi-arch just in case.

In one case the devs were given the condition of getting their containers working on ARM to get the new MacBooks they wanted - and the cost savings of moving to ARM in cloud even subsidised the cost of them a bit too...


I'm working on refactoring a developer environment from using Vagrant/VirtualBox and Docker to only Docker.

The prior goal was to mock production as closely as possible.

The realization is that macOS as a host machine for orchestration is close enough for builds. Stricter validation can be done in CI and a staging env.

So for this project, the forced transition away from VirtualBox has actually led to simplification and to asking questions about why it was "required" previously.

It is a bit of a pain only because some team members will need more support than others so the entire setup kind of needs to be clean and carefully documented when there is other stuff to do.


We go one step further and don’t use Docker either!


You just use system installs? Mind sharing the stack?

What do you do to manage language versioning?


Yes. I'm on macOS, and our stack is Node.js + Postgres + Hasura.

I use Homebrew for Postgres where IME the version doesn't tend to matter too much as long as it's new enough.

Node is managed using either nodenv or asdf which both allow you to install multiple versions side-by-side and control which one runs in a given directory using a .node-version file containing the desired version number (I use asdf, most of my colleagues use nodenv - they're compatible with each other).
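As a concrete sketch of the asdf side (the Node version is just an example; nodenv users get the same effect with `nodenv local`, which writes `.node-version`):

```shell
asdf plugin add nodejs
asdf install nodejs 18.4.0
asdf local nodejs 18.4.0   # writes .tool-versions in the current directory
node --version             # resolved via the .tool-versions file
```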

Hasura is the only one that's a bit of a pain as they don't provide native binaries at all (only a docker image). So we compile that from source. They do at least provide comprehensive instructions (https://github.com/hasura/graphql-engine/blob/master/server/...), and I didn't have too much trouble even though I'd never used Haskell before.

The sub-second restarts are totally worth it!


Received my M1 last week. My dev workflow is Alacritty -> tmux -> Neovim + LSP + debuggers. I do mostly Golang work; I installed Delve and it went fine, followed all the steps and compiled from source. Did the same for Neovim and compiled from source, as I use Lua instead of Vimscript. All LSP setups work fine; unfortunately the native Delve debugger is not able to communicate with Neovim using the native backend. For now I'm debugging only in the terminal, not through Neovim. I'm in a rabbit hole, since this is neither Neovim nor Delve, so I'm thinking it might be the new Apple M1 arch. Not sure if anyone else has stumbled on the same. With regards to Docker, the workplace has licenses, so I use it with no issue. The entire team is using only Macs, no other operating system allowed, so safe till now.


We had an engineer in India get an M1 MBP last month and most things worked, but the docker container for MS SQL had big issues and he had to get an Intel MBP to be able to work. I suspect that those issues will dissipate with time. Right now, I find it annoying that the only OTP auth apps that are available for the Mac require Apple Silicon as they're essentially iPhone apps with new skins. I really want to be able to get something like Google Auth in my menu bar so I don't have to pull out my phone every time I need a code for Okta.


When your 2FA is on your computer, it is no longer 2FA.


Get an external key like a yubikey and that would handle multifactor authentication/otp like a champ.


We prioritized a well working dev env on M1 half a year ago and made our Docker images multi arch, etc. It mostly just works nowadays, only one small niche service needs x86 still and it’s in the process of being replaced. We also started taking advantage of ARM machines on AWS as part of the transition. Granted it was easier to justify this as a younger faster growing company because it doesn’t take as long for the majority of developers to be on M1s and less “stuff” has accumulated over less years.


We had a customer testing our video player application recently and asked whether M1 support was there. It was embarrassing to realize we hadn't formally tested on the M1.

Our application is Gstreamer based, which means it uses highly optimized codecs that eventually render to OpenGL. I was very worried it wouldn't work on the M1.

It works flawlessly. Rosetta is amazing. I'm not an Apple fanboy at all but Apple has done an amazing job with M1 and this is true even though many applications are just running x86 code via Rosetta.


I'm in much the same boat, and I've coped by just switching to a nice beefy Linux desktop for most things.

I like how ARM is progressing (I owned a second-batch RPi!), and M1 would probably be right for me if I wasn't a technical user, but it's simply too exhausting to fight the machine, architecture, package manager and product all at the same time. Docker is (and has been for a while) loathsome on Mac. Virtualization is usually pretty bad too, which makes regression-testing/experimentation much slower. I might give it another go if Asahi figures out GPU acceleration, but I'm not very hopeful regardless. The M series of CPUs doesn't really make sense to me as a dev machine unless you have a significant stake in the Apple ecosystem as a developer. Otherwise, it's a lovely little machine that I have next to no use-cases for.

> Any tips/tricks?

Here's one (slightly controversial) tip: next time you're setting up a new Mac, ditch Homebrew and use Nix. This is really only feasible if you've got a spacious boot drive (Nix stores routinely grow to 70-80 GB in size), but the return is a top-notch developer environment. The ARM uptake on Nix is still hit-or-miss, but a lot of my package management woes are solved instantly with it. It's got a great package repository, fantastic reproducibility, hermetic builds and even ephemeral dev environments. The workflow is lovely, and it lets me mostly ignore all of the Mac-isms of macOS.


For me it's been mostly painless. I've even used Time Machine to migrate from a 2012 Intel iMac to an Apple Silicon Mac Mini and it worked perfectly!

The two pain points:

1. No support for running older virtualized macOS. I like to test back to 10.9 and need an Intel Mac to do that.

2. One Python wheel which doesn't have Apple Silicon builds and doesn't build cleanly: https://github.com/CoolProp/CoolProp/issues/2003


Oracle sucks.

I mean in general, but they have also not released an ARM instantclient or even an ARM version of Java. I think it's crazy that I'm using Microsoft's build of ARM Java.

I'm also using Windows 11 ARM in Parallels, which does seamless emulation of Oracle instantclient / Java / PL/SQL Developer. So most of my workflow has not been interrupted.

Still, just another excuse to move to a better database. Now all I have to do is convince our heavily bureaucratic IT department to move away from Oracle. It'll be easy, right?


The Java ARM version (for Linux) has been around since Java 15 (released September 2020), and for macOS since Java 17 (September 2021) [0].

[0] https://jdk.java.net/archive/

And here is instantclient for ARM: https://www.oracle.com/database/technologies/instant-client/...


> but they have also not released ARM instantclient or even an ARM version of Java

Java has been available on ARM since the days of Nokia phone dominance. Not sure what you're referring to?


Higher Education IT here. For our users that we support, it's been great on the whole... except for those who need to use a VM for the occasional Windows-only desktop app. UTM[1] seems to be the best option (everything else is in technical preview or not supported?) but it's slow as a dog to emulate x86. ARM Windows isn't great either if you want to just virtualize. Suggestions welcome!

1. https://getutm.app/


Software is constantly evolving, so any worldview based on “if it ain’t broke, don’t fix it” is absent of reality when it comes to software.

My philosophy on most things is: if nobody else has done it, I’ll go first. I started compiling and bundling my Go applications as multi-platform universal binaries for macOS.

Last week, I spent a few hours learning how to build multi-architecture Docker images, and push them into Artifactory. That knowledge came in handy yesterday when one of the developers on another team got a new M1 Mac and could no longer build his Docker images.

Over winter break, I started putting together a build matrix for compiling RPMs, DEBs, and Alpine APKs for some software that some developers were building as part of their CI pipeline. We’ve been curious about the ARM-based EC2 Graviton instances for a while, and I only had to update a handful of lines of code to begin building arm64 versions of those same packages.

In short, necessity is the mother of invention. I enjoy inventing things. If nobody else has started adding support for arm64 to your internal pipelines, then you should go first.
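If you're about to go first on multi-arch images yourself, the core of it with BuildKit is roughly this (registry and tag are placeholders):

```shell
# one-time: create a builder instance that can target multiple platforms
docker buildx create --name multiarch --use

# build for both architectures and push a combined manifest list
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .
```

On an x86 host the arm64 half runs under QEMU (and vice versa), which is where the CI slowdowns people mention elsewhere in this thread come from.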


If you bring up a remote VM and set DOCKER_HOST to something like "ssh://root@$IP" and have key auth set up, the local docker CLI works as it always did but using a remote dockerd via ssh. I do all my container builds this way (on remote x64) because hotel/LTE internet sucks and I would rather download 47363367373 npm packages 4700 times on datacenter gigabit.
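The whole setup is a couple of lines (IP and user are placeholders; key-based SSH auth is assumed):

```shell
# point the local docker CLI at a remote daemon over SSH
export DOCKER_HOST=ssh://root@203.0.113.10

docker ps                  # lists containers on the remote host
docker build -t myapp .    # context is uploaded; pulls/pushes use the VM's bandwidth
```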


Waiting for GitHub actions to have ARM.


No problems here. Node, php, apache, mariadb, postgresql run native out of the box via homebrew. Java11 and Java17 have native aarch64 builds via homebrew and/or temurin (or the oracle openjdk project, which unfortunately doesn't seem to care about being a responsible security patch vendor at all). Android studio is fine except they don't support androidtv emulators yet. UTM with an aarch64 debian host runs mssql (azure edge sql) in docker natively, as well as anything you'd expect from a high quality debian distribution. UTM with windows 11 arm64 even runs vs2022 through its fairly efficient x64 usermode translator (WPF apps and everything). Xcode and the iOS simulator works great as expected, too.

Even the x64 java8 SDK for macOS runs without a glitch, I mean how impressive is that, with JIT and everything? Mind blown.

I didn't even understand the point of the new macOS 13 Ventura Linux Rosetta thing until I realized some people are still running x64 Docker containers. (Why, though?)


We had already eliminated Docker from our CI/CD and live deployments (using Kaniko for CI/CD and containerd for live), so we just modified our container build pipeline to run Kaniko twice (once for x86_64 and once for arm64) and then use manifest-tool to build and upload the multi-arch container manifest. I think we had it working within a day, and then it took a week or so to test and validate all our images, and one or two needed some attention due to having dependencies that weren't already multi-arch. Overall it was really painless.

For testing container builds on developer machines most people are either using Docker on macOS which already handles multi-arch cleanly, or using buildah on Linux which also handles multi-arch automatically if you set up qemu binfmt support. So that has been pretty painless too.
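Setting up the binfmt piece on a Linux dev box is close to a one-liner; this sketch uses the community multiarch/qemu-user-static image, which is one common way to register the handlers (not the only one):

```shell
# register QEMU user-mode emulators with the kernel's binfmt_misc
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# afterwards buildah can build foreign-architecture images directly
buildah bud --arch arm64 -t myapp:arm64 .
```

The registration doesn't survive a reboot, so it usually ends up in provisioning scripts.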

I would say if you are doing a lot of horrible workarounds, it's probably time to step back and look at improving the processes (like your pipelines).


According to Kaniko documentation [1], they don't really support cross-platform compilation. Do you solve that by having both amd64- and arm64-based CI/CD runners?

[1] https://github.com/GoogleContainerTools/kaniko#--customplatf...


Yes. We use Tekton on K8s for CI/CD, and our cluster has both arm64 and amd64 spot node pools. Each Kaniko build task runs on its native arch.


Thanks, I was really hoping for a different answer, but I guess I'll have to investigate this approach.


I use m1 max in work and personal projects and have done so since November. I primarily work in *ops, choose your flavor (dev/sec/ml/net).

The only issues I have ever really ran into were:

RKE had issues on arm early-on. Random containers didn't have arm image support. This went away quickly as an issue for me.

No nested virt. This one was painful for a few reasons, particularly when I was attempting to use the Canonical tooling to create preinstalled Ubuntu images, which I was doing in a vm via Multipass. Maybe M2/M3?

That's about it, really. I had to buy two Safari extensions when moving from Windows, but they were cheap and worth it (dark reader and some other one I can't remember rn)

I currently run Rancher Desktop every day as a replacement for Docker Desktop. Works spectacularly for me, and I can just not care about the environment. Just works.

I use Multipass when I need linux environments, and it's been spectacular.

Universal control has been the greatest enhancement in my workflow (and general daily use)


Our entire dev team switched from MacBooks to laptops running linux.


Yea, I honestly don't understand why many devs use a Mac to develop software for Linux. Sure, if you are developing native macOS or iOS software it makes sense, but why torment yourself otherwise?

The main pain point with running linux on a laptop is finding a suitable pairing of the hardware and software so that everything works (device driver support, no bios/uefi/acpi bugs, etc).

Finding good hardware to run Linux on only has to be done once maybe every 3-4 years, and then you can just carbon-copy the setup for all of your devs.

Heck, I will save you some work: the dell XPS 13 works great with the latest stable release of Ubuntu.


Same, and I'm not looking back. Docker performance is stellar, everything is dev friendly, and the OS actually treats you like an adult.


Which laptops?

Does everything work? Special keys on the keyboard? GPU acceleration? Wifi? Going to sleep when you close the lid? Everything "just works" when you reopen the lid? Gestures on the trackpad?


It would have made sense to simply ignore the M1 hype altogether: since the tools you require do not work on ARM, or run worse than on Intel, you're better off staying on Intel until the situation for VMs on Apple Silicon improves first: [0].

For developers using VMs, Docker, Multipass, etc., I think it is more trouble than it's worth to jump onto the new shiny thing and invest time in workarounds that break on a new update. At least you weren't part of the November 2020 launch-day chaos; otherwise you would have been waiting 6 months to do any work if you went all in on the M1.

Looks like Intel is (still) the way to go for VMs until Apple Silicon gets better (eventually).

[0] https://news.ycombinator.com/item?id=26159495


Intel MacBook supplies are decreasing which has actually caused them to go up in price. In a few years they will be difficult to get. Any company which uses MacBooks is going to have to make the switch at some point - better sooner than later.

Also, the post you linked is over a year old and the situation has changed since.


When I first joined my current employer late November/early December (employee 1 with an M1), we could not source Intel directly from Apple. The only option was to purchase a refurbished device.

If you aren't ready to switch to ARM, consider Linux.


> Also, the post you linked is over a year old and the situation has changed since.

Yes. Exactly. It tells us that had the OP jumped all in on Apple Silicon the day it launched, they would have been waiting months to do their work. Little of the Intel software was actually working in November 2020.

Thus, the sensible and smart action was to wait and stick with Intel. By the time the software ecosystem caught up to Apple Silicon, the M2 MacBooks were announced, meaning they can upgrade directly to M2, skipping the M1 altogether, with more working software than on release day.


I use Docker and Colima constantly on M1 and have had very few issues. Granted, my use case for those things is probably quite simple compared to someone in Ops.

For web development, I believe that Apple Silicon is really the place to be right now (especially if you also work on design projects!)


If this were a Debian machine, you could probably just crossgrade your existing amd64 install to arm64 and everything would continue to work. The process would involve qemu user mode until you move the SSD over to the new machine. Once the work of the Asahi Linux folks reaches Debian this will likely be possible for M1 machines, can they accept non-Apple SSDs?

https://wiki.debian.org/CrossGrading https://www.qemu.org/docs/master/user/ https://asahilinux.org/


We went multi-arch for Graviton a while back so our pipes are multi-arch anyway, not much of a problem. The tooling is mostly Go-based so after Go got ARM on M1 it was a reasonable switch. We had switched to podman too but once rancher is M1-ready we'll use that. So not much of a change here, except for some Electron-based apps that were slow to update in the beginning.

Most of the problems were foreseen because we had AIX and PowerPC systems in the past where we had to have multi arch pipelines already, I suppose most of the problems with the M1 were around monoculture setups that we see much more often around the world. Same architecture, same OS, everywhere. But that's actually much less 'normal' over the existence of computers than people think.


Two big issues with my workflow caused by M1:

-occasional Postgres failures (i/o errors, especially with parallelization)

-kernel panics when connecting an external Sandisk ssd (known issue according to Apple forums)

It’s a shame because the machine is so much faster and energy efficient than my 16” intel MacBook Pro


Haven’t had any major issues since switching. Still easily able to get on with my day job and toy around as I always would.

Glad that the world is catching up a little but it’ll take time like anything else.

Biggest issue for me early on was android emulator, once an M1 version was released it was all easy going.


I don't like using Apple computers, and don't have an M1, but I've been having similar fun with a Microsoft Surface Pro X (running insiders build of Windows 11 which has x86-64 emulation for NT processes, and runs Android applications, but doesn't support emulation in WSL2). Overall much the same experience with things assuming the only execution environment is inside an x86 docker container. I also found that the "stock stack" Haskell development toolchain install (in WSL2) won't work, again due to lack of build runners at GitHub. Eventually I was able to find workarounds for all the annoyances, mostly involving building components myself.


You say your job is DevOps work so you probably feel the pain more than most people do.

Not being able to run amd64 containers hit me hard. I fought it until I just gave in and made sure that everything we built could be built under amd64 or arm64. For specific builds on a specific architecture, GitHub action runner on a cloud box. (Or pick your flavor of CI/CD).

Once I looked past my machine into an ecosystem and embraced the arm as just another build artifact it was easier.

I also reject testing locally as a measure of working software. So that eliminates some pain. If your coverage is high then this is an easy shift. Have a dev environment that you can test that matches your assumed architecture, toolchain wise.
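For anyone heading down the same path, here's a minimal sketch of treating the arch as just another build parameter. The function name and image/registry names are made up; the buildx commands are shown as comments since they need a Docker daemon:

```shell
# Map the host architecture (as reported by `uname -m`) to a Docker platform string.
plat_for_arch() {
  case "$1" in
    x86_64)         echo linux/amd64 ;;
    arm64|aarch64)  echo linux/arm64 ;;
    *) echo "unsupported arch: $1" >&2; return 1 ;;
  esac
}

# Build for whatever machine you happen to be on:
#   docker build --platform "$(plat_for_arch "$(uname -m)")" -t myapp .
# Or build both at once in CI with buildx and push a single multi-arch manifest:
#   docker buildx create --use
#   docker buildx build --platform linux/amd64,linux/arm64 -t registry.example/myapp --push .
```

Once the pipeline does the two-platform buildx build, which arch a developer's laptop has stops mattering for images.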


Mine has been great, but it's not a fair comparison. I write native apps for Apple stuff in Swift, so I'm pretty much who the new stuff was optimized for.

I have noticed that some apps can get "hangy," including Xcode, SourceTree, and Slack. I sometimes need to force-quit the system (force-quitting apps seems to now have about a 25% success rate). SourceTree also crashes a lot. A lot of this happened after I got my MBPMax14. I don't know if it would happen with any other machine.

These are not showstoppers (I've been having to force-quit for years. Has to do with the kind of code I write), but it is quite annoying. I have faith that these issues will get addressed.


I just took a leap of faith and got an M1 MBA with 8GB of RAM on January 1st, around a month after its release.

And I never had any problems with it up until now. I use Chrome (Vivaldi) with tons of tabs, and VS Code for Node.js and Java development, and it has been snappy all the way.

The main problem I had at the time was whether there would be application support for ARM, but almost all the apps I use started supporting ARM as soon as the M1 came out.

I only had to use Rosetta a few times, and Node.js started supporting the ARM architecture as well.

But next time I'll go for more than 8GB of RAM, for longevity.

But I think I’ll be using this for couple more years and I think I’ll be skipping M2 since my M1 is good for now


I do most of my work in Go (with the very occasional splash of Swift or Kotlin) and the move to M1 has been utterly seamless for me. So much so that I often forget I'm working on an ARM64 machine until I forget to set GOARCH when compiling and then try to copy a binary to a remote machine.
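For anyone curious, the GOARCH dance looks something like this. Target list and binary names are hypothetical, and the loop only echoes the build commands rather than running them:

```shell
# Cross-compile the same Go program for several targets by setting GOOS/GOARCH.
# Echoed rather than executed here; drop the `echo` to actually build.
for target in linux/amd64 linux/arm64 darwin/arm64; do
  GOOS=${target%/*}
  GOARCH=${target#*/}
  echo "GOOS=$GOOS GOARCH=$GOARCH go build -o bin/myapp-$GOOS-$GOARCH ."
done
```

Forgetting to set these on an M1 just means you get a darwin/arm64 binary, which is exactly the surprise I keep running into when copying to Linux servers.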

The majority of Docker images that I use are available for ARM and the few that aren't perform fine under Docker for Mac emulation (although the big performance boost that I saw ultimately came from enabling VirtioFS accelerated directory sharing).

Just about all of the tools that I use are now available as universal binaries, but before that, Rosetta was utterly seamless.

I really can't complain.


It seems like if someone's workflow is heavily local-container-based, then the M1 has been a rough transition. Otherwise it's been pretty seamless.

I'm on my second m1 machine (m1 mba, now m1 max mbp), and I only had a few issues early on with terraform. My day to day software dev is web, go, and java.


I think I've commented before on this but I've had great success using VSCode Remote Containers. Essentially using the M1 as a frontend to an x86 environment.

Works great and I can move between a local and cloud servers depending on requirements


I just have a headless x64 linux machine running docker and use the docker cli from my mac to interact with the remote docker (via docker context), and use a synced directory structure for any funky volume mounts I need. Works great.
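If anyone wants to replicate this, the context setup is roughly the following (the host name is hypothetical, and this is a sketch from memory of the CLI, not a definitive recipe):

```shell
# Point the local docker CLI at a remote amd64 daemon over ssh.
# Wrapped in a function for reuse; not invoked here since it needs a real host.
setup_remote_docker() {
  host="$1"   # e.g. me@build-host
  docker context create remote-amd64 --docker "host=ssh://$host" &&
  docker context use remote-amd64
}
# usage: setup_remote_docker me@build-host
# after that, plain `docker build` / `docker run` hit the remote x86-64 machine
```

The only requirement on the remote side is that your ssh user can talk to the Docker socket.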


I don't have a Mac (I'm a Linux user), but many of my colleagues have Macs, some of them M1s. And the ARM thing is really a pain, much more than the differences between OSes. Just a random example: we have an app which uses MySQL 5.7, and we use MySQL in Docker for integration tests. Unfortunately MySQL 5.7 won't run on ARM (current workaround: they use a MariaDB image, which is apparently good enough, and the CI would catch any difference). There are many small things like this; I would currently not recommend using those new Macs until things improve, if you want to avoid wasting time on uninteresting issues.


Sucks you need 5.7 I guess. I have a native install of 8.0.29 on my Mac Mini for my gitea.


You can run x64 containers on M1, it's just slower. Just add the `--platform linux/amd64` flag


Yes, it does work, but then those tests take an unacceptably long time to run.


Not a Mac user but from running ARM64 both on servers and desktops for some years now, 99% of friction came from having to source-build projects which don't provide precompiled binaries. For me this is mostly a positive since it helped making me more consistent in actually building all container images and as much of the software I use as possible from source.

With good habits, it's rarely an issue anymore (though there is the occasional project when it turns out to be a hassle, usually something with an obscure node-gyp build).

If you rely on closed-source software it's a different story, I guess.


Azure has ARM in preview, and AWS has had it for ages. You should be able to create multi-arch builds in CI.

For actually creating multi-arch, I recommend you stay as far away as possible from Docker and use Podman and Buildah. The latter unbundles some of the Docker manifest commands, giving you far more control over how you create multi-arch images. I wasted 4 months on Docker tooling, and got it right in half a week with Podman. This meant switching from DCT (Podman doesn't support this at all) to Cosign, but Cosign is far more sensible than DCT.

There are a rare few containers that you can get away with running on x86.


> That means other tools that rely on it are also out of the picture, like Molecule

You can run molecule against an ec2 instance or Docker containers. Since you can run x86_64 docker containers on Docker for Mac, you can continue to use molecule. I run molecule tests against Docker containers or LXD in the cloud though just because of how much faster they run on large Ec2 instances.

As for everything else, I haven't really noticed many issues. Most of the work I do is built through CI/CD pipelines so what I use locally to build doesn't affect what is deployed to production.


I almost exclusively use FOSS. Most of it was ported a decade ago at least.


Great answer and definitely in keeping with the original vision of what computing should be - open and accessible.

Maybe people here haven't lived through 68K to PPC migrations, or to DEC Alpha, or Sun SPARC to Intel, or PPC to Intel, or any number of platforms and platform shifts - some lasted longer than others, but all had their ups and downs. The largest 'down' was predatory business practices in the 80s and 90s, which set computing back a decade (and still apparently continues today). It's unfortunate that many of these FUD-type articles pop up whenever a new platform/chip is announced. I'm excited for technological progress and think that every new announcement is another small miracle that I'm happy to be around for.


Multiarch builds are pretty darn easy to setup in my experience (exclusively Linux based images FYI) so I’d refocus the energy spent on Virtualbox, etc to just setting them up and then problem solved.


I am the only Mac user on a small team of web-based tool devs for my company. Everyone else is on Linux.

I occasionally have to do exploratory processes where I have to figure out how to setup the same environment locally that my teammates are using. It can be time consuming, but overall I’m able to replicate it just fine. We’re far enough into the transition that most stuff is supported out of the box.

I admit I don’t know much about what’s going on under the hood. I used podman for containers so far.


The desktop user experience has been quite good

Virtualization Framework’s VLAN support is not mature and getting more than 100 machines per rack has proven difficult. The need for additional switches, patch panels, uplinks and cooling makes multi-thousand machine installations slow due to the recent logistics unpleasantness.

Using Studios is hard because of massive delays to orders. Especially the ‘big’ machines in 1,000 unit quantities.

x86 and x86/GPU still seems to be the best approach for prod datacenter use.

Otherwise I am a fan


In our small backend engineering team, which also takes care of the DevOps work, we just have a multiarch docker image that we use as a base image to build the ARM local dev images and the CI/CD (running on amd64) builds the production images. Everything works flawlessly, docker knows what architecture to build in each case. There was some work we had to put into the base image to make it build multiarch, but then we basically forgot about it
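As a rough illustration of what makes a base image arch-aware (the file contents are a toy example, not our actual image): buildx sets TARGETARCH for you, so the Dockerfile can branch on it.

```shell
# Write a toy arch-aware Dockerfile; buildx populates TARGETARCH (amd64/arm64).
cat > Dockerfile.base <<'EOF'
FROM alpine:3.16
ARG TARGETARCH
RUN echo "building base image for ${TARGETARCH}"
EOF
# then, in CI:
#   docker buildx build --platform linux/amd64,linux/arm64 -f Dockerfile.base -t base .
```

Anything arch-specific (downloading prebuilt tools, picking package variants) keys off that one ARG, which is most of the "work we had to put in".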


Intel Mac -> M1 Mac this week.

Almost perfect, but I did switch development tools from Xcode/Swift/SwiftUI to VSCode/Dart/Flutter at the same time. So I am having a lot of problems! But nothing much unexpected.

I copied my system over with Time Machine. I think a lot of binaries got copied that I should reinstall.

> none of our pipelines yet do multi-arch Docker builds, so everything we have is heavily x86-64 oriented

Another data point for the fundamental principle: portability matters


We had the same issue and ended up using remote docker.

Pros:

- Setup is very simple.

- It can run dozens of containers without overloading the local machine.

- It's stable.

- SSL works.

Cons:

- I had issues with WebSocket support.

- Sometimes I get file conflicts.

I actually have a ready-made solution for running remote Docker with a Mac, but it needs a bit of work. If someone would like to support the project with some front-end work and a bit of Docker/nginx work, please get in touch.


Maybe this should be a wakeup call to stop using docker and virtualization for DevOps. They have their place in CI but should not be used for local development.


It wasn't clear from the post, but do you work with things that actually depend on the architecture a lot? I'm dealing with the opposite (still on x86, applications get deployed on arm) and the answer was: pretend it isn't happening. If there are obvious issues, they'll be caught by the CI which is running the target architecture. If there are non obvious problems, I can spin up a vm in AWS immediately.


Recently got an M1 Mac, I couldn’t be happier. I use it all day every day for dev (ruby, node, react)

Everything was far easier than I expected it to be. The only issues I had were with installing Python (a few CLI utils required it), but everything else has been smooth sailing and a much better experience than running things on my 2019 MBP

I’m not a huge docker user, but I run it for a few things and again, it was all smooth sailing.


My first task at my current job I had to port our local dev environment to M1, out of necessity. Docker was relatively straightforward but I ran into a hell of sorts trying to get deps to compile on my aarch64 container, especially for stale projects like leveldb and eleveldb.

In short it was painful but once you get over the attrition of compiling (mainly C) deps it's smooth sailing from there on out.


Dropped docker for local development and I just run stuff natively relying on tests and CI to catch any issues but I haven't really had any.


I still have trouble figuring out how Docker works on M1 (I don't have one and I'm not sure it meets my demands; infinite loop). Can I just pull a random image and expect emulation to do its thing without worrying? I can understand that 90% will work without issues, but it's always the remaining 10% that sucks up all of your time.


By default with Docker Desktop it only runs arm64 images, which are already quite popular (due to Raspberry Pis, Graviton, etc.).


When I started, I relied on Rosetta for almost everything, and I was able to do my workflow without Docker (which was super broken back then). Several months later, I reset my Mac and reinstalled everything, this time with far fewer Rosetta parts. Several months after that, I did it again, and this time I was completely free of anything needing Rosetta, because everything was native by that point.


> Then there's just other annoyances, like having to run a VM in the background for Docker all the time because 'real' Docker is not possible on macOS.

As far as I know, this has nothing to do with the M1 or ARM. This has always been the case. How else would you run Linux containers on a non-Linux OS?


I keep an x86 Mac mini to run my VMs. One of the VMs is an old version of OS X that runs some 32-bit programs that won't run on newer versions that I am still converting data from[0]. Apple is awful about running old programs.

0) still converting some stuff in Lineform. Shame there wasn't a 64-bit version before they stopped selling it.


At some point we're going to have the opposite issue. Stuff will work for ARM but not x86. Thanks Apple.


But apparently "thanks" unironically, right? I am not a CPU expert, but from what I have read about ARM during this transition (plus with more ARM options becoming available in the cloud), it seems to me like x86 is bogged down with a fair amount of baggage, and ARM/RISC is actually a better technology which has been held back by the inertia of x86.

Happy to be corrected if I am wrong.


You're welcome.


Some of our 3rd party dev packages don't have M1 support, and docker cross-compilation for M1 has been a nightmare. 8GB RAM is too low as well. Our team is staying on just Intel mac/win/linux computers until we have the resources to address the M1 issues.


If you need to work with amd64 Docker images on an M1, just SSH to an amd64 AWS instance and do the builds there while things get ironed out. Otherwise, you can do the builds with `docker build --platform linux/amd64` but it'll be slower since it's emulated.


Multipass for an Ubuntu arm64 vm. Podman inside of there to create and run x86 docker images.


After reading these comments, I am so glad I upgraded my old MBP 2015 to a Razer laptop. Battery life is horrible, but I don't care. Running out of battery while coding in bed or at a bar forces me to take a pause anyway.

So, so glad. Running Plasma.


We resorted to building multiple images manually and pushing them to ECR. Then just have an override compose file that people with M1s have to use. Fortunately, have only had to do that with a couple images that aren’t updated all that often.


React Native/iOS dev here. Everything working flawless here. Also some backend coding in Parallels + Windows 11 ARM + Visual Studio 2022. Runs great.

I also semi-professionaly use Photoshop, Sketch, DaVinci Resolve, XD, After Effects.

All butter smooth.


Losing eGPU really hurts.

I had three-up before, and now I’m back to the laptop and one central display.

I could get a cheap DisplayLink hub but the performance is poor and I’m not happy with granting their driver screen recording permission. Or trusting it at all tbh.


I really don't want to give up my four external monitors, so I will have to wait until I can get the M1 Max or something.


It’s unfortunate. In every other way it’s been an almost shocking leap forward, on the order of the HDD -> SSD transition when that happened.

For me, the performance and longevity I’ve been getting from a machine this light and without fans feels borderline unbelievable. Even now almost two years later.


TL;DR - I tried it when the first m1's came out and it was a huge pain, ended up going back to x86 for my primary machine.

I got an m1 right when they came out because I started a new gig right around that time, literally happened the same week. Trying to get all my dev tools installed became a rat's nest of issues. I work as a backend / dist sys / systems engineer for my day job and so I have to write and use things that are fairly close to bare-metal. Brew hadn't been forked yet, so that added a whole new layer of issues.

Docker still doesn't work, Rust libs compile in weird ways... just all kinds of stuff that I'm not smart enough or paid well enough to figure out. My title is "Developer", not "M1 developer advocate" so after about a month of running into issue after issue, I went back and found a used MBP with an intel chip. I'm excited about the future of Apple silicon, and ARM as a whole, but it needs another couple years of refinement.

I will say that I've been using an M1 Mac mini for general office work as part of my side business and it's quite good.


I just learned how to run our apps outside of docker and virtual box. Setting up two Postgres DBs, a NodeJS process and a Python process wasn't completely trivial, but it wasn't all that difficult either.


Of those, I feel Python is the big problem, at least if one deals with multiple projects needing different versions. When you've finally got the paths and venvs correctly set up, many packages won't install correctly because there is no wheel for M1 / your architecture. So then you have to compile everything yourself, which then fails at some new step.


Whatever this year's macOS version is called, it includes support for running Rosetta in Linux VMs, which sounds like it will solve many of the problems in these comments once VM apps adopt the appropriate APIs


At my company, we have been using Github Codespaces for development and it has been quite nice, since it offloads all your actual development to a simple VM (and uses VSCode's remote feature for that)


I switched basically at the very beginning of M1 release. It was an absolute nightmare until I got everything working (was using Docker Desktop extensively), and then I haven't thought about it since.


iOS dev here. Things have been pretty smooth for my team except for some differences with where homebrew installs packages on ARM64 and that we still have to run the iOS simulator via rosetta due to some of our 3rd party frameworks not providing ARM64 builds for iOS-simulator.

In my personal hobby stuff I do miss being able to virtualize x86 machines but have been able to get by with arm versions of Windows 11 and Linux running in parallels, and qemu for operating systems that don't have arm builds.


I've been using an M1 since it came out, I'm a platform / automation engineer, lots of aws, cdk, typescript, golang, terraform etc... I haven't had any problems at all.


I do a lot of devops work and never had any issue. If I have doubts I just use a VM on datacenter or an EC2 through ssh. I guess maybe some people don't have access to these resources.


M1 has meant shifting some dependencies, slower builds that need to build from source, updating databases out of band, and container crashes from poor x86 emulation.


I'm lucky enough that I need high performance and have native dependencies that don't provide M1 binaries. So, I can worry about other problems


Vagrant + parallels on m1 and intel. Works beautifully.


It's still a year or two before things will be ready enough to own an M1 for development involving the types of topics listed above.


There are comments with a lot of detail here. I’m just sitting here annoyed that I can’t use Bootcamp to dual boot windows.


Adopting M1 has been virtually pain-free for us. Projects are all Rust; just specify the CPU target in the Docker builds.


Worst thing I've had to deal with is Terraform Providers not supporting ARM64


I've had 0 issues. Everything has worked for me out of the box, so to speak.


Here's a few pain points I had:

* VMs don't solve my problem every time. There's software that still requires x86, and in a few cases a VM isn't going to solve that problem. I wish I could get into more detail here but I'm kind of a noob in this realm. (TL;DR: I need to use something called UAExpert, and to resolve this I have both a separate Linux machine and a Windows machine in case I need them)

* I had to install Homebrew and a separate x86-64 version of Homebrew to run the right software. Homebrew does not document this, so the solution was based on Stack Overflow posts.

* While Docker states that it supports multiple architectures, I don't find that to be fully true. For our codebase I need to push up x86 Docker images, but I accidentally pushed up arm64 ones instead. There's a solution for it, but it's definitely not an out-of-the-box one at the moment.
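On the two-Homebrews point: the installs end up in different prefixes, which is what the Stack Overflow recipes lean on. A sketch from my own notes, not official Homebrew docs (the helper function is my own invention):

```shell
# ARM Homebrew lives in /opt/homebrew; the Intel one in /usr/local.
brew_bin() {
  case "$1" in
    arm64)   echo /opt/homebrew/bin/brew ;;
    x86_64)  echo /usr/local/bin/brew ;;
  esac
}
# Install the Intel copy by running the installer under Rosetta:
#   arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Then install Intel-only formulae with:
#   arch -x86_64 "$(brew_bin x86_64)" install <formula>
```

An alias like `alias brew86='arch -x86_64 /usr/local/bin/brew'` makes the second install less painful to live with.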

Overall, still pretty happy with it. My older macbook pro had gotten sluggish so the tradeoff for me was worth it.


I did buy an M1 MacBook.

I installed Asahi Linux, which made it possible to keep the OS running all the time and keep HexChat IRC running, instead of shutting down when away from the keyboard like on macOS.

But that M1 only lasted 4 days. Then it would not boot anymore. So I sent it in for warranty repair and canceled the purchase.


Asahi is still in alpha status, I think it’s only fair to give it some more time before running it as your main OS.


GDB still doesn't work on MacOS M1.


I miss counter strike 1.6


It's simple: I don't



