M1 dev setup with a virtual Linux box (kristiandupont.medium.com)
211 points by kristiandupont on April 11, 2021 | 169 comments



Everyone is talking about running a separate Linux dev environment, but I'd actually love to run a separate macOS dev environment in a clean way (i.e. without messing with partitions for dualboot, especially on the M1).

I get my stuff from Homebrew, just Rails + MySQL, no Docker, no fancy stuff. I'd love to have a fast macOS VM to run software I don't trust like Zoom or Skype and a second one to run my work projects, so they don't spill over my personal stuff, but AFAIK the virtualization story is still pretty incomplete on the M1 (or is there a way to run an arm64 macOS VM without a gigantic performance hit?).


If I recall, VMware Fusion and Parallels can run macOS VMs with little or no speed hit under Big Sur. Big Sur added GPU virtualization support, so VMs aren’t slow; previous macOS versions lacked this.


IMO the graphics performance when running macOS Big Sur VMs under Fusion is better but still atrocious. Not anything like a native experience.

Not sure about Parallels.


3D acceleration is not enabled by default for macOS Big Sur in VMware Fusion because it's not considered stable yet. It's possible to activate by tweaking the vmx file: https://kb.vmware.com/s/article/81657

In Parallels it's on by default but you need to allocate a lot of RAM to the VM: https://kb.parallels.com/en/125105#section6


Just tested Parallels latest beta on M1 and it couldn’t find an OS on a Big Sur ISO that I’d just created. Doesn’t appear to support virtualizing macOS on M1 yet.



Parallels only supports arm images on M1, such as Ubuntu arm and Windows arm. There is no macOS arm image available as a guest.


Yes, Parallels runs great for Ubuntu arm on M1, both headless terminal and X11, since December 2020.


Fusion doesn’t work on M1s.


I’ve been looking for this for a while; not sure when or if it will get done. x86 emulation on an M1… I wonder if there will be a massive performance hit.


Been dealing with this this week. It’s about a 40x performance hit to run x86 via qemu in our estimates.


Qemu isn’t representative of anything else perf-wise. You may want to use user-mode emulation too.


I don’t entirely understand your comment (actively looking for solutions for these issues so all info is good).

My initial attempt at getting a running system was to install an x86 VM via qemu / utm. It worked, but was about 1/4 the speed of my old MacBook.

Round 2 was to run an arm64 VM via qemu / utm (utm is using qemu here, but I guess it’s almost running purely on the M1?). That’s blazing fast, but I don’t have everything I need compiled for it. So I’m recompiling what I can, and for whatever I can’t, I’m using qemu user mode within the arm VM to run the x86 binaries. The VM itself is really fast; the x86 code obviously isn’t, but I’ve at least minimised the times I need the emulation.

Not sure of a better approach, but as I say, definitely open to hearing what else I can do.
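The round-2 setup described above can be sketched like this (package names assume an Ubuntu arm64 guest; ./legacy-tool is a hypothetical x86_64 binary):

```shell
# Inside the arm64 VM: install user-mode emulation plus the binfmt glue
sudo apt-get install -y qemu-user-static binfmt-support

# binfmt_misc now dispatches x86_64 ELF binaries to qemu-x86_64-static
# transparently, so an x86 binary "just runs" (slowly):
./legacy-tool

# Equivalent explicit invocation:
qemu-x86_64-static ./legacy-tool
```

Only the processes that can't be recompiled pay the emulation tax; everything else runs natively on the M1.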


You should try running an arm64 OS and then using ExaGear (for a Linux VM), or just running your x86_64 app directly (on Windows).

Forget about full system emulation, especially with Qemu’s approach which doesn’t leverage the host MMU. Emulating an MMU alone is very costly perf-wise. That means that you should use an arm64 OS and then use an x86 compat layer on it.

For Qemu’s user mode emulation, it doesn’t exactly provide good performance either. But it is still much better than full system emulation.


Isn’t that what I’m doing?

Mac OS is arm64, with an arm64 Ubuntu VM, then I’m running qemu in that for specific single x86 processes.

To clarify, I’m using UTM to run the VM, so it’s using qemu - but it looks like that doesn’t give you a performance cost in itself.


Yes, that’s what you’re doing; you might want to look at options other than Qemu, though, for more performance.

(I assumed FSE from your 40x worse perf figure, which is a very bad case)


Ahh, so it’s just Parallels then. I don’t yet own an M1 Mac myself, but I saw Parallels had made the jump and assumed VMware followed.


It’s not smooth enough for me to consider regularly using for most work, but you can set up macOS VMs from Parallels/Fusion inside of macOS, since Apple allows it on their hardware (...or a hackintosh).

It’s not as easy to get working compared to a Windows VM, because I believe it uses the OS recovery image, but it’s not that hard to set up and works well. I’ve used it in the past to test my dotfiles setup on a perfectly clean install.


I still haven't tried it, but what if you downloaded the iOS apps on the M1? Wouldn't it be more containerized if you ran those apps from the phone "emulator"?


You're absolutely right, but I would assume Zoom has disallowed its iOS client from being installed on macOS. Since Apple closed the loophole allowing sideloading, we're out of luck on that front.


Just decrypt the app and run it.


I use a separate account when I need to split something.


Homebrew is sending analytics as well, FYI.


“brew analytics off” from your terminal.
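Both the opt-out and a way to verify it are built-in brew subcommands:

```shell
# Turn Homebrew's analytics off, then confirm the setting took
brew analytics off
brew analytics state
```

Setting the HOMEBREW_NO_ANALYTICS environment variable disables it as well.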


I think running your dev environment in a VM is the future on all platforms.

As developers we trust so many different libraries. And it is important that they are safe when used in production code.

But we shouldn't have to worry about accidentally installing a library which uploads our emails or our browser data. By working from a VM we can prevent that.

The worst a malicious library can do from a VM is upload our SSH keys or source code (which is still bad).


I hear what you're saying, but the last time I worked through a linux VM on my mac laptop, everything got much slower. Our nodejs service took 2-3x longer to start up or to build. I think the issue was filesystem overhead for some reason. The future might involve working from a VM when we have CPU cycles to spare. But right now I need every iota of speed my computer has for my rust compiler.

I'd much rather we solve the security problems using better local sandboxing for software, like how it works on our phones. That would help end users as well, and it would stop crypto ransomware and all sorts of other attacks. Or alternatively, run my dev tools from a solaris zone or a freebsd jail or something, both of which have no performance impact.


This is why running something like VSCode on your Mac with the remote ssh dev setup is the best of all worlds.

https://code.visualstudio.com/docs/remote/remote-overview

Virtualized Linux on x86 AND being able to run VSCode on your Mac for great fonts, WiFi, screen etc without having to fuss with drivers.

Of course ssh and tmux also work well for this purpose.
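A minimal sketch of the plumbing: the Remote-SSH extension reads your regular ~/.ssh/config, so a host entry is all it needs (devbox, the address, and the user are placeholders for your VM):

```shell
# Add a host alias that both plain ssh and VS Code Remote-SSH will use
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host devbox
    HostName 192.168.64.2
    User dev
    ForwardAgent yes
EOF
```

After that, "Remote-SSH: Connect to Host..." in the command palette lists devbox, and the extension installs its server component on first connect.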


This is pretty neat, but I think the overhead saved with complete remote development might be cancelled out by using VSCode instead of a JetBrains IDE.

People are advocating remote development heavily, so there must be some *huge* benefit that I don't understand. So much so that people are willing to give up amazing features of an IDE. To get VSCode to a usable state requires 16 third-party plugins, and you're left with about 1/3 of JetBrains functionality, where the features aren't quite as good. YMMV I guess?


I've been paying for the JetBrains toolbox for a few years now, I'm a huge fan.

However, JetBrains unfortunately has nothing coming close to the transparency and speed of the VSCode remote connection, with your source living only on the remote side.

In cases where you need to work on a remote environment, for example with very particular configurations, docker containers, or WSL2, this makes a huge difference.

For me this is mostly Python and TypeScript these days, where VSCode has grown particularly strong in terms of IDE features.


The huge benefit comes in when your project involves loading large datasets or talking to cloud APIs. Both cases can be a non-starter on a local Mac (no local disk space, no bandwidth, or the latency overhead of a 100 ms RTT vs. 5 ms RTT adding up over thousands of requests). I would also point out that Docker on Mac is far less efficient than Docker on Linux, since it runs a separate VM.

VS Code Remote is a game changer. It's how IDEs should work. It allows you to run all the GUI chrome locally for responsive editing, while letting the remote do all the heavy lifting (including building, debugging, testing, and deploying). It finally overcomes the latency and other usability issues of using a wimpy local box to connect to a powerful remote box to write software.

I never used VS Code much and otherwise prefer JetBrains myself. But the remote development extension changed my workflow permanently, and I now recommend it to all my colleagues who develop cloud/data intensive code.


And Jetbrains is working on a solution: https://blog.jetbrains.com/blog/2021/03/11/projector-is-out/

It's still beta so they have a lot of work to do.


Oh wow, I'm a JetBrains fan, but didn't know about this - if it delivers in the same way as the remote extensions for VSCode, it's going to be great!

Have you tried the Projector beta yet?


I have. It's very impressive but not ready for daily use IMO. (I tested it with IntelliJ).

However, the 2021.1 releases now support running code via WSL2 or SSH, which is closer to VS Code's setup. https://blog.jetbrains.com/idea/2021/02/intellij-idea-2021-1...


Yes. It's based on toolkit remoting. Think RDP.


If only Jetbrains added something more modern to sync files with a remote host, instead of the SLOW SFTP way they do it now, I'd be an instant convert.

Recently, I've settled for using my remote machine as the source of truth for my codebase (a monorepo) and only sync specific directories I work with locally.

Remote docker interpreter (python) + this is a workable-enough solution. It's not ideal - I still have to switch to the terminal to run git, tests, etc. Compared to this, the VSCode way is just simply better; especially if you don't use all the features of a full-blown IDE.
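For what it's worth, the sync-only-what-you-touch part can also live outside the IDE; a sketch with rsync as a stand-in (host and paths are placeholders, and this is not something JetBrains integrates):

```shell
# Pull one service's directory out of the remote monorepo;
# --delete keeps the local copy an exact mirror
rsync -az --delete dev@remote:/srv/monorepo/services/api/ ./api/

# Push local edits back before running tests remotely
rsync -az ./api/ dev@remote:/srv/monorepo/services/api/
```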


I hope IntelliJ eventually figures out remote dev.

Benefits are more obvious on massive code bases that take a long time to build, test (or really just do anything).

IntelliJ will get stuck indexing forever and can be frustrating to use - having the code live on beefy servers that can run the computationally expensive bits faster can make a big difference.


I think Jetbrains IDEs can do the same thing:

https://www.jetbrains.com/help/go/creating-a-remote-server-c...


I think what you linked to is different, but they recently announced working on a real remote dev option: https://blog.jetbrains.com/blog/2021/03/11/projector-is-out/

Their old server config is just pointing to a repo I think, but isn’t remote dev.

I looked into this last year before their recent announcement and at the time they said it wasn’t on their roadmap, but looks like that thankfully changed.


> To get VSCode to a usable state requires 16 third-party plugins

You mean this literally? It took just that one plugin for me, when experimenting with using another machine on my network as the dev environment.


Believe them: VS Code is a strange beast in that it's really powerful and has a very flexible extension system... but that's a double-edged sword, because it lets the devs easily ignore features with the excuse that they should be done with a 3rd-party extension.

EDIT - child comment is right, the following paragraph is not true! I was writing from my phone and remembered installing this extension [0] but now that I've checked it is for counting offsets from the beginning of the file, not lines and columns. Which I'd concede is not so much of a basic functionality expected in any editor.

[0]: https://marketplace.visualstudio.com/items?itemName=ramyarao...

  And I mean *really* basic stuff... like a status bar label 
  that shows the line and column numbers where the cursor
  is! Yes, you already need an extension just for that.


I mean, that's just not true. The basic VS Code install will tell you the line and column numbers in the status bar without any plugins.

Usually, the only plugin I need to install is the language server plugin for whatever language I'm using. At least, that works for Go and Python, while Node and JS/TS work right out of the box.


You're right. I've since arrived home and could check my VSCode, which led me to edit my comment above.


I meant to get VSCode to work as a decent development environment, more generally.


would you expand on this? i'm curious what i've been missing, and what possibilities there are!

for me, a TS using node nerd, vs code does these things well:

  1. ts language server
  2. decent plugins (that i can commit to scm)
  3. yarn 2 support 
  4. nice debugger interaction
is there more for me to have? is the core difference i use so much m$ stuff?


There are too many to list. I'd recommend using a JetBrains IDE to see what it has.

Off the top of my head, VSCode lacks a decent code formatter and a "search everywhere"-type command palette. And the Git and database support is not good either, but that's kind of a given.


I love the gitlens extension personally - I find it to be a big productivity boost when navigating a complex/legacy codebase.


Ugh, I don't want to descend into "that thread" again, but Wifi/Screen have not been an issue on linux (as long as you don't use nvidia) in over 10 years.

Mixed scale DPI screens are not a problem if you use Wayland, which is basically the default now.

Font rendering though? YMMV.


Nope, you people are living in a fairy tale. I’d love to have a decent linux desktop installation. I spent hours yesterday and gave up.

I have a laptop running Ubuntu and connected to my 4k monitor in front of me. Has an Intel gpu. I do not even want to use multiple displays, I’m gonna go with the external only.

Wayland can do fractional scaling, but then VsCode, Firefox, Discord and whatever apps I tried became blurry shit. Firefox has an environment variable to fix it. VsCode has an experimental build and command line flags. I have no idea about Discord. I don't care if it's the app developers' fault. It is just plain bad.

X11 can do fractional scaling but fonts are a little blurry for some reason and I have the worst screen tearing I’ve ever seen in my life. It is unusable.

macOS and Windows do this perfectly. It is not something you think about. Windows has some weird-looking apps here and there, but the shittyness is not even close.


I don't know what to tell you, maybe I won the lottery on hardware. (though, admittedly I buy computers with linux support in mind). Everything works completely flawlessly for me (even nice font rendering, though idk if I did something manual for that).

For context, I currently have a Precision 5520 laptop.

I run arch with sway; I have a big USB-C 4k at work and two 16:10 FHD Dell USB-C monitors at home.

The only issues I have with linux is that I don't have Microsoft Office, and Zoom+Wayland is buggy as hell.


> Wayland can do fractional scaling but then VsCode, Firefox, Discord and whatever apps I tried became blurry shit.

X11 apps do that. Firefox can run in Wayland mode, so run it in Wayland mode. vscode and other electron apps do not support wayland yet, so until they do (yes, they are taking their sweet time), you either need to run with integer scaling, or tolerate them being blurry.
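Concretely (the Firefox variable is well known; the Ozone flags for Electron apps are version-dependent, so treat those as an assumption):

```shell
# Run Firefox as a native Wayland client instead of via XWayland
MOZ_ENABLE_WAYLAND=1 firefox &

# Electron/Chromium apps: the usual opt-in flags once the bundled
# Chromium supports Ozone (availability varies by app version)
code --enable-features=UseOzonePlatform --ozone-platform=wayland
```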

In Windows, the shittyness is much worse. Basically any Qt app is unusable with hidpi in Windows.


> In Windows, the shittyness is much worse. Basically any Qt app is unusable with hidpi in Windows.

In my experience, it isn’t. On my setup (200% + 100% screens, Retina MacBook Pro in Boot Camp) KeePassXC, qBitTorrent, Qt Designer all look great on both screens; VLC has slightly weird fonts on the HiDPI screen but still works fine.


Native Windows 10 on a threadripper machine; display at 175%. QGIS, FME completely unusable.


Don't use fractional scaling.

Use the closest integer scaling and adjust default font sizes (and if needed other sizes like window title height or panel height).

It's also best if you just don't buy monitors that have dpis that are not close to integer multiples of 96.
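On GNOME, for instance, that combination can be set like this (keys from the org.gnome.desktop.interface schema; the values are illustrative, for a display that wants roughly 1.5x):

```shell
# Integer UI scale first...
gsettings set org.gnome.desktop.interface scaling-factor 2
# ...then trim the font size back down to taste
gsettings set org.gnome.desktop.interface text-scaling-factor 0.75
```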


Yeah I tried this but I used 1x ui scaling + 1.5x font scaling. It works but then all the buttons and stuff are tiny.

Maybe I should try 2x gui scaling and 0.75 font size? It never occurred to me that I could go below 1 on the scales; it might work.

edit: 2x on wayland is still blurry. 1x + 2x font works but as I said, tiny buttons and stuff.

Still, my point is; why the fuck am I dealing with this?


Because it’s free and nobody is willing to put the effort (and pay the cost) to make a pleasing UX for desktop users.

In mobile, where there is a market, Google did it.

I stopped trying a couple of years ago (after I got a 4K monitor) to convince myself that desktop Linux is workable.

Linux for desktop = console


Ubuntu Mate just works; I've been using it with two 4k monitors for over five years.


Not according to my experience with a top-spec Dell XPS "developer edition" with Ubuntu pre-installed about 2 years ago. Not only did I have intermittent Wifi connection drops, but I was never able to get multi-monitor mixed DPI to work at all, with or without Wayland. Even closing the lid/entering sleep mode was broken: the battery would continue to be drained (at about half the rate of non-sleep mode, but still unusable).


Was the sleep issue not resolved by a BIOS update?

Tangentially, I want to mention that the archlinux wiki is a terrific catalog of machines and their known quirks (and sometimes, fixes for the quirks!).


If you use fractional scaling on Wayland, Chromium-based applications will render blurry. Obviously it's a Chromium issue but it's been around for a while and is a bad experience.


This just got patched, so electron apps are slowly able to update and fix this. VSCode has a beta version with the fix already.


Linux generally works great. I'm typing this on Linux Mint, using mixed scale DPI screens, and generally it all works pretty well. I think all my hardware is fully supported; I haven't had to mess with drivers at all. ... Well, not recently. There were some zen3 bugs in the stable kernel releases a few months ago, but it all works fine now.

And that's sort of the theme everywhere. It's super fast and I love it, and it's mostly stable and mostly great. But tiny bugs shine through all over. I've been totally spoiled by Apple's spit and polish.

For example, I get random graphics bugs after waking from sleep sometimes. Sometimes my mouse cursor is either invisible or for some reason duplicated, so I see a second stationary cursor hovering over my windows. And for some reason my second display doesn't vsync, so I get obvious tearing when I scroll or move windows around.

I use a trackpad, and smooth scrolling works properly in most apps. But Firefox needs an obscure XInput environment variable to make it work. (That trick is only mentioned deep in some bug tracker.) Smooth scrolling doesn't work at all in IntelliJ. IntelliJ also doesn't let me use the Meta key (Cmd / Start) as a shortcut modifier, so my muscle memory for navigation is all messed up and I can't rebind keys to fix it. I hate how Ctrl+C is copy everywhere except the terminal, which needs me to Ctrl+Shift+C instead. Etc etc. Forever.

It's unbelievably responsive compared to my 2016 MBP though. If you haven't upgraded in a while and you can afford it, it's a fantastic time to get a new system. But linux on the desktop still isn't entirely pain free. Way better than it was a few years ago though.


Do you know what that firefox variable is? It's the only reason I'm not using firefox right now.


Probably MOZ_USE_XINPUT2=1 (note the '2' at the end)

...or use Wayland where smooth scrolling should work by default


> Wayland, which is basically the default now

In Fedora maybe. Not in most distros. And recall many apps have to run under xwayland for compatibility so all of those apps won't support scaling properly.

It's still not there yet.

Battery life also takes a huge hit on the Dell XPS and HP Elitebooks I've tried, even after trying to apply some tweaks (not that I should need to). For this reason I use Windows+WSL on laptops even though I'd prefer Ubuntu.


Wayland is "basically the default now"? Since when?


> the last time I worked through a linux VM on my mac laptop, everything got much slower. Our nodejs service took 2-3x longer to start up or to build. I think the issue was filesystem overhead for some reason.

Docker on Mac has a lot of IO overhead. This is mentioned in the article:

> The way I made it work is by having a full “dev environment” running virtually. That means that I check out repositories on the virtual disk and run everything from there. This has the slight inconvenience that I can’t easily access those files with Finder, but the upside is that there is no noticeable IO latency issues like when running Docker for Mac.

If you just want improved performance with bind mounts instead, you can use :cached or :delegated flags.
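For example, with a plain docker run (the image and command are placeholders; the flag is a no-op on Linux):

```shell
# "delegated" lets the container's view of /app lag the host slightly,
# avoiding the worst of Docker for Mac's per-write sync cost
docker run --rm -v "$PWD:/app:delegated" -w /app node:14 npm ci
```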


I have been wondering if VMware Fusion's docker-compatible solution, which even comes with the free version of Fusion, has better IO/disk performance when running containers: https://blogs.vmware.com/teamfusion/2020/05/fusion-11-5-now-...

I think Fusion comes with a driver for Docker


Since the earliest days of Docker for Mac, I've found docker-sync to be very effective. (I haven't done much follow-up over time to see if using those flags matches its performance.) To gain more speed, add folders you really don't care about to docker-sync's ignored folders, especially if they are high churn.


Fedora Toolbox is quite interesting from this perspective. It does not provide a proper security boundary, since it shares the home directory, but it is really nice to make ad-hoc development environments. Since it is built on podman/buildah, it does not require the Docker daemon and can be used by regular users.


Did you use a native filesystem inside the VM or a host filesystem remoted into the VM?


> I need every iota of speed my computer has for my rust compiler

If so, why do you need the rust compiler in the first place? Maybe you don't? Maybe it's the wrong compiler?


A properly secured unprivileged Linux container is not particularly worse than a VM from a security point of view, but its impact on performance is very minimal. The drawback is that one cannot use Mac or Windows as a host, but as long as one is OK with running Linux on the machine and accessing, say, Windows occasionally via a VM, this is a very nice setup.


My first attempt at this configuration was using a podman container (my laptop runs Fedora).

It worked well, except that I occasionally want to run a container within my dev-environment. Running a container inside a container is possible, but not easy.

I settled on running a Vagrant Libvirt VM. I do not know what the performance impact is, but I have not noticed either my laptop or the VM being slow.


I'm just now learning about the container technologies that are alternatives to Docker. More precisely, LXC, in case you had it in mind in your comment.

Do you know of a set of settings or tips to follow for making containers as safe as a VM?


I use systemd-nspawn containers with max security options, like user namespaces, syscall filters, and no-new-privileges. Even with user namespaces, I do not run as root inside the container unless I need to install extra software in it.
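A sketch of such an invocation (flags per systemd-nspawn(1); devbox is a hypothetical machine image under /var/lib/machines):

```shell
# A hardened, unprivileged container:
#   --private-users=pick       allocate a user namespace automatically
#   --no-new-privileges=yes    block setuid privilege escalation
#   --system-call-filter=      seccomp-filter syscall groups
#   --user=dev                 don't stay root inside the namespace
sudo systemd-nspawn -D /var/lib/machines/devbox \
    --private-users=pick \
    --no-new-privileges=yes \
    --system-call-filter='~@obsolete' \
    --user=dev
```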


The need for VMs or containers for security (on end user machines) is the fallout of insufficient OS security mechanisms.

A better solution would be a desktop OS with proper application sandboxing. Mac OS is taking many steps in that direction.

Linux - as usual - has a multitude of solutions, all of them problematic. (AppArmor, SeLinux, Firejail, Snap, Flatpak)


I believe the real problem is that devs install random binaries, or modules, without any real concern for what malicious activities, even if unintentional, these artifacts may perform. Forget the fact that it’s on the dev’s machine, they move this stuff into production without proper security vetting.

Many devs just blindly install things from npm, copy random snippets from GitHub gists, execute remote shell scripts via curl that they just copy and paste into the terminal because they read it in some blog post, and on and on.

Anyway, just my worthless < 2 cents.


Microsoft tried to push for a better model in the Windows Store apps. But the world reacted with apathy. Now they seem to try to leverage their built-in hypervisor instead, for example with Sandbox [1] for temporary VMs with minimal overhead.

1: https://docs.microsoft.com/en-us/windows/security/threat-pro...


I believe VMs are not just for security: they might be the only way to run completely different operating systems (like macos on linux, or linux on windows, etc)

Additionally, VMs might be a great help to complicated debugging. You can crash a VM without taking down your desktop. I'm not sure about things like kernel debugging, but it might be easier in a VM.


Actually I think VMs for dev are evil. I hope it's a stopgap measure until tooling like nix catches up enough. Running an entire separate OS just for development is completely bonkers to me.


If you think that's bonkers, I'll give you an even worse idea: a dedicated desktop machine. I have a dell optiplex running Intel i3-2100 with 8GB RAM and 1TB Samsung 860 Evo SSD. No display. Fedora runs like a dream with dnf automatic. Use cockpit if you'd like.

You can use your windows or Mac laptop with visual studio code to do web development. ng serve works as if you ran it from your laptop.

The biggest drawback I can think of is if you ever needed to leave the house. Thankfully, I have no life so I don't have this problem.


Having both your laptop and desktop on a WireGuard VPN, with the gateway on your choice of cloud host, would solve that problem. It would also spare you from dealing with wandering IP addresses, as you may find with a home internet connection.


> The biggest drawback I can think of is if you ever needed to leave the house.

Add ZeroTier to the equation and as long as you have Internet connection when you're outside the house you'll probably not even notice it.


I see WireGuard and ZeroTier mentioned.

I just use ssh. SSH to a VPS and map 22 to a port on the VPS.

I have a really small VPS for this.


I think the reason they mentioned more complicated setups is that I'm using an actual old machine at home, and unless you know what you're doing you don't want to expose it to the world.

I think poking a hole on port 22 is mostly ok if you only allow key authentication and no password authentication, but I don't know enough to give advice on security.


Just posting alternatives.

I use a VPS with SSH. I have to ssh in, then I can ssh into the machine at home.

For safety, key authentication and fail2ban would cover a lot. I mainly have the 1 port.

If I need to expose another SSH port to the internet, I can do it, but yes, it would need extra protection, since logins would be coming from machines ssh'ing in.


> For safety, key authentication and fail2ban would cover a lot. I mainly have the 1 port.

Changing the port from the default 22 to something else is also recommended, if only because it makes fail2ban logs way less verbose.


Running desktop applications inside a web browser is bonkers too. And the cloud. It's better to embrace the madness. I recommend watching Dr Strangelove.


I have it easy, I think. My development for Linux, except some RPi projects, is strictly limited to some high-performance business servers. I use C++ with a couple of multiplatform libraries for such development. So I simply develop and do main testing on Windows using Microsoft Visual Studio C++ (the big one, not VS Code). When the time comes for release, I just check out the project on Linux, run the build and tests, and that is it.

I do have a Linux VM on my development computer (VMware, full Linux OS with the GUI and dev environments) in case Linux tests fail due to some compiler differences / whatever. During the last 3 years I had to use it maybe twice for that purpose. I mostly use VMs to test my server setup-from-scratch scripts.


Nix has been something I’ve played with for a while, but increasingly I’ve been using it. I would love to see Nix working on Windows - there was a project to make that happen but I don’t know what the current status is.


I don't know much about nix.

Does it solve the security problem by preventing access to the majority of your home directory?

Or solve it by only allowing you to install vetted libraries?


You cannot access the home directory at all during a nix build.


Interesting, what about when your software runs? Is it still in a chroot or similar?


It's not running in a sandbox. That's not the goal of nix. It's only a package manager/build system.


Oh, then why can it not access the home folder?


The build process is sandboxed. Not the environment you run the application in afterwards.


I'd phrase it this way: it is completely bonkers that this isn't bonkers.


I started doing exactly that.

I've been reluctant to contribute to nodejs (e.g. electron) projects for some time, because I just don't want to run npm on a computer with any kind of remotely private data.

Lately there were just too many itches to scratch, so I went for a VM replicating my normal setup (dotfiles etc.), and I just use x2go, locally. Quick and dirty setup which is good enough when used infrequently.

My ideal setup would probably be closer to https://blog.jessfraz.com/post/docker-containers-on-the-desk..., but it's more setup than I could be bothered with at the time. Maybe one day.


I was going to say container or vm, but maybe ...

Are containers a hack that will go away as VMs become lightweight or will containers replace VMs?

I run proxmox, and when I first set things up I used VMs, but over time I moved most server kinds of things to containers.

EDIT: docker is a special thing - creating an entire environment from one Dockerfile is pretty powerful.


I've been playing with this stuff on and off for the past few weekends, getting an environment set up on my M1 air.

I ended up going with a VM, as it still allows (IIRC) the thin client to be thinner. There's definitely bits of containers I miss, like I'm back to keeping notes in a markdown file for commands that would be added to a Dockerfile, but I'm also not having to do weird things to get something like my shell history to persist.


> I think running your dev environment in a VM is the future on all platforms.

We probably have a long way to go before we get there, and it does come with its own set of challenges and usability quirks even if the technical implementation is good.

For example, 8 years ago I used to run Windows 7 with xubuntu running in a graphical vmware VM using Unity mode[0]. Basically a way to seamlessly run graphical Linux apps in Windows. Each GUI app you launched from the Linux VM would have its own floating window that you could move around like any other Windows window. As an aside, this feature has been removed from vmware for years now when it comes to Linux guests.

It worked well enough for then, and I spent 99% of my time in that VM (browser, code editor, everything) and I only used Windows for recording / editing videos and playing games to avoid having to dual boot.

But even with vmware's really good disk performance there were performance issues at times, and you're also splitting your memory up between your main system and your VM; it's not that efficient. Then there are little quirks like your main OS not really being able to fully integrate with files and apps from the VM, so you have to do hacky things to get apps to launch from a taskbar, search doesn't work because your stuff is in a VM, etc. Plus you always feel like you're split between 2 worlds, the main OS and the VM. It doesn't feel really nice and cohesive.

To a lesser extent nowadays we have WSL 2 on Windows which is what I use. It solves a lot of the VM problems from above and running an X-server lets you run graphical apps very well, but you still feel like you're running 2 operating systems and the user experience suffers because you don't feel like you're running 1 streamlined OS.

A prime example is having to put all of your files in WSL 2's file system to get good performance but having certain types of files there is an inconvenience or you may not want to do it because it doesn't make sense to put 100GB of storage files on your SSD. That happened to me because I have a podcast site which is mostly code, except for a ton of raw wav file recordings + mp3s. Instead of just having a git ignored directory in my project, I had to create a 2nd directory outside of WSL to store these files. There's many other examples like this too.

I don't know what the Mac story is like, but I would imagine at minimum you're dealing with files being split into 2 worlds and will experience the unfriendly split OS feeling overall. Does Parallels let you seamlessly run floating windows across your VM and macOS?

[0]: Here's a really old video of that set up https://nickjanetakis.com/blog/create-an-awesome-linux-devel...


It's a shame it didn't work out for you.

> we have WSL 2 on Windows which is what I use

My new work laptop runs Windows, so I'm interested in WSL 2. I gather that it has good integration with Windows (e.g. you can type `notepad` and Notepad will open), which is convenient but removes any security boundary.
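For anyone curious what that interop looks like in practice, a few examples from inside a WSL 2 shell (these are standard WSL features; the project path is illustrative):

```shell
# Windows executables on PATH run directly from a WSL shell:
notepad.exe              # opens Windows Notepad
explorer.exe .           # opens the current WSL directory in Explorer
wslpath -w ~/project     # translate a WSL path to its Windows form
```

Note that this convenience is exactly the lack of a security boundary mentioned above: anything in the WSL environment can invoke Windows programs.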

> Does Parallels let you seamlessly run floating windows across your VM and macOS?

I don't use Parallels, or Mac. But I believe so- they call it Coherence https://kb.parallels.com/4670

I agree that not everything is as convenient. My papercuts were not being able to type `code .` to make a new VSCode window. And not being able to use the new tab shortcut in my terminal to make a new tab in the current directory.

I've made a little project to solve both those issues for me https://github.com/ccouzens/ssh-nicety It works, but is fairly bespoke to my setup. It uses Unix Domain sockets (which probably excludes Windows). The only new terminal it can launch atm is gnome-terminal.


I still do use WSL 2 btw. It's pretty good for general development. It's just not ideal due to the split file system concerns.

If you decide to use it, I have another more up to date video with my whole WSL 2 / Docker / all the tools I use / etc. set up at: https://nickjanetakis.com/blog/a-linux-dev-environment-on-wi...


I agree. We need something like Firecracker with quick boot times and something like Bottlerocket (an immutable OS) as the host. That would help my workflows very much.


Personally, keeping the edit-compile-run cycle time as low as possible has always been the reason I have stayed away from dev VMs. For 99% of computer uses a VM is fast enough, but unfortunately for many programming tasks it is not.


I’ve been doing this with minor variations for a while now (from my iPad, from my Mac, my netbook, etc.) towards VMs in various places (your favorite flavor of cloud, my favorite closet, etc.).

It has become remarkably seamless and trivial to switch any of the local/remote pairs over time, and definitely cleaner than managing various app runtimes on my local machines (I have cloud-config templates to bootstrap fresh Go, Java and Node boxes as required).

edit: forgot to mention I'm posting this from another of those combos, a Windows VM I remote to from my iPad whenever I need a desktop browser


Can you explain more about your setup, in particular the iPad part? What apps are you running on your iPad to facilitate dev work? And when talking remote instances, do you mean something like a droplet/vps?

My dream is to code web apps from my iPad from the couch/bed as I sit at a big desk and monitor all day for work and I just want to chill in the evenings on something smaller and more comfortable.


Sure. I use Jump Desktop for RDP/VNC over SSH (with a Citrix X1 mouse) and Blink/Prompt for tmux sessions. A typical setup of mine has a remote container with Xrdp, Firefox, VS Code and barely enough window management to do full-screen windows and workspaces (typically blackbox).

Remotes can be anything: I have a KVM host at home (that I remote to from my Mac for Docker dev) and plenty of Azure VMs.


Not OP but I use an iPad occasionally to remote in to a linux box and work on projects.

I use Blink for the terminal app and connect using mosh instead of ssh. I found that mosh handles the connection (and reconnection) way better, since iPadOS is pretty aggressive about killing the terminal app if I switch to a different app. I also use tmux on the server and just detach when I'm done, in case I want to work on it from my laptop or desktop. Overall it works great; my only issue is that the 10.9" iPad screen is a _little_ bit too small for my liking, so I don't do work like this that much. If I had the 12.9" iPad it would probably be something I use daily.



Ha! I could never do that, but kudos to you


I wrote pojde[1] a while back to solve that exact usecase, using my iPad (or any device with a web browser) to code from anywhere. It creates multiple isolated code-server instances on a remote server and handles toolchain installs, updates, authentication and TLS for you :)

[1] https://github.com/pojntfx/pojde


To give an additional data point for people who are interested how a setup like this performs for daily use:

I am currently running Parallels Tech Preview on the MacBook Air M1 and primarily use PyCharm (remote interpreter and deployment to the VM). The whole thing works better than expected considering it’s still a preview release. Battery lasts around 12 hours, sometimes an hour or so more depending on what else I run.

I am currently working on a Django app. When I save while the debug server is running, I can command-tab to my REST client and make an API request, and the change has already been deployed and the server restarted. Despite dealing with a VM, the whole thing is just fast.


If you don't absolutely need a local VM, I've found it much nicer to have a beefy EC2 instance be the Linux VM that you connect to in order to work on Linux on x86.

Recently I’ve been doing this with VSCode which has a remote dev mode that works amazingly well. Before that I was just using ssh and tmux/screen which, as we know, also works and has worked for decades.
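The ssh + tmux variant can be as little as one command; a sketch, assuming a host alias `devbox` is defined in `~/.ssh/config`:

```shell
# Force a TTY (-t) and attach to a persistent tmux session,
# creating it if it doesn't exist. Work survives disconnects;
# reconnecting with the same command picks up where you left off.
ssh -t devbox 'tmux new-session -A -s dev'
```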


In our basic testing on M1 performance this week we’ve found that an arm vm on the M1 runs about 2x as fast as a c6g.2xlarge graviton2 instance. So you’re probably looking at about $0.50 / hr to compete with the Mac.


I am curious what you find "much nicer" about using an EC2 instance compared to a local VM.

When running remote VMs I usually run them on my ESXi box in the basement and VPN back home when traveling. This is especially nice when a project needs more resources than whatever device I’m working on has to offer. But beside this very specific use case I haven’t personally found any advantage of this setup.


I'm curious what the author means by

> I had two requirements for developing that I wanted to achieve: macOS UI, Linux-based dev environment

What exactly is meant by a Linux-based dev environment? Seems like the idea is to run the whole dev environment in a virtual disk in a VM. I'm puzzled, but OK. It then goes on to set up Ubuntu Server in this VM, which is then used to host the dev environment.

Wouldn't simply running a Docker instance be less cumbersome, far more resource-efficient, and quicker to iterate with than literally installing an OS on a virtual disk?

---

To summarize, unless I missed something, this post explains that it is possible to run a VM on macOS. Add "M1" and it's the top post on Hacker News? What's going on here?


That this is 'M1' is relevant for me personally, as my current Mac is dead old, I 'need' a Mac for iOS development, the Intel ones are a dead end and seem to run hot often, and it is unclear if all my dev needs will be met by the M1.

Any piece of information that untangles this mess is helpful to me. Of course this may not be the same for others, but it could be 'what's going on here'.


We’re going through the transition at the moment - tried to find info online but decided the only route was to try it on a real machine to see what works / doesn’t work.

Have built an Ubuntu 18 environment running through UTM (https://getutm.app/ - running qemu). It took a bit of tomfoolery: I had to install using a console-only display and then flick back to full graphics to get the machine to boot.

We use port forwarding to talk to the machine; I haven't figured out a way to do bridged mode like I can with VirtualBox.
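For reference, UTM is a front end over qemu, and this kind of port forwarding corresponds to `hostfwd` rules in qemu's user-mode networking. A rough sketch of what that looks like in a plain qemu invocation (file names, sizes, and ports are illustrative; UTM exposes this through its own settings):

```shell
# Boot an arm64 guest with user-mode networking and host port
# forwards: host 2222 -> guest 22 (ssh), host 8080 -> guest 80.
qemu-system-aarch64 \
  -machine virt -accel hvf -cpu host \
  -smp 4 -m 4096 \
  -bios edk2-aarch64-code.fd \
  -drive file=ubuntu.qcow2,if=virtio \
  -netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::8080-:80 \
  -device virtio-net-pci,netdev=net0
```

The guest is then reachable from the host with `ssh -p 2222 user@localhost`.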

We’ve got a bunch of crazy dependencies that I’m in the process of rebuilding. Most seem to be ok. There’s a third party one we’re a bit trapped with but we’re running that emulated x86 within the vm. You can also use docker the same way within the vm.

Performance wise the arm vm is blazing fast. Seems to be about 20x faster than an x86 vm on my old MacBook 2015. It’s about 2x the speed of an 8 core graviton2. When running emulated x86 code, the speed drops to about 1/4 the speed of my old 2015 MacBook. Not ideal but in our case we’ve only got a single non arm dependency and it’s not used often, so it works for us.

It’s a bit of a leap, and if you have more crazy binary dependencies like we do, you will have to do a little work to get things running.

Having said that, the machine itself is _amazing_. It’s a real joy to be on a machine with this level of responsiveness.
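For the emulated-x86 dependency case, one way to run it inside the VM (not necessarily the setup described above) is Docker's multi-platform support, assuming qemu user-mode emulation (binfmt_misc) is registered:

```shell
# Pull and run an amd64 image on an arm64 host; qemu user-mode
# emulation must be set up for the container to execute.
docker run --rm --platform linux/amd64 amd64/ubuntu:20.04 uname -m
# should print x86_64 when emulation is active
```

Expect roughly the same emulation penalty described above; it's a fallback for occasional use, not a daily driver.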


I weep for you.

As an alternative could you use React Native and borrow an Apple product during compile day?


Rebuilding your app in a completely different technology that you may or may not know seems like overkill for short-term computer woes.


Fair point. I find front-end work therapeutic, so it doesn't seem daunting to me.

It's definitely a long-term strategy, but people love eating candy even if it gives them a stomach ache.


> Wouldn't simply running a Docker instance be less cumbersome, far more resource-efficient, and quicker to iterate with than literally installing an OS on a virtual disk?

On Mac, Docker works by installing a VM, so the two aren't so different.

I (not the author) prefer using a VM as a development environment, because at some point I'll want to run a container and nested containers are tedious.


Docker isn't portable. Docker runs on Linux and only on Linux.

All the Docker pseudo-ports just run a Linux VM on the host OS and set up Docker inside it, for you.



No. Can you run those images on Linux? :-)

Docker images are not portable, they run on Linux or on Windows, but they can never run on both.


Is this because you consider WSL2 to be Linux, and not Windows?


> Windows requires the host OS version to match the container OS version. If you want to run a container based on a newer Windows build, make sure you have an equivalent host build. Otherwise, you can use Hyper-V isolation to run older containers on new host builds. You can read more on Windows Container Version Compatibility in our Container Docs.

https://hub.docker.com/_/microsoft-windows-base-os-images

And WSL2 is Linux running in a VM on Windows.


Ah, my bad. I thought you meant not being able to run Linux-based images in Docker when run with WSL 2. There is also a significant difference between the VMware-like implementation and WSL 2. Spinning up a Linux machine in VMware and running Docker on that, as mentioned in the article, is quite a different ballpark, especially when considering the limited CPU and RAM resources.


> Seems like the idea is to run the whole dev environment in a virtual disk in a VM

Pretty much.

I had bad experiences running Docker directly on MacOS. The IO latency was unbearable. I know they are working hard on it so maybe it's better now, but this setup works well for me.


I run Oracle DB (which is a scary piece of software) inside Docker on an Intel Mac. It runs perfectly fine.

What exactly are you guys running that causes it to be "unbearable"?


Are most of the database's files within the container's filesystem (as opposed to a volume mount)?

Most of the performance issues with Docker on Mac happen in setups where source code or other files are volume mounted from the Mac filesystem into the container filesystem.


As far as I can see everything goes into the massive ~/Library/Container folder. No volumes.


"but the upside is that there is no noticeable IO latency issues like when running Docker for Mac"

Docker on Mac can be dog slow. This is appealing to me.


I've been doing something like this for about 3 months with very good success. This is also pretty much the only "complete" solution I could come up with that doesn't involve duct taping 3-4 different things and keeping them all in my head.

A simpler solution I had: one Linux VM, the SSH connection plugin in VSCode, and a simple 4-line SSH config file (~/.ssh/config) does magic.

Here's my config file:

  Host <hostname>
     HostName <Hostname IP>
     User <User>
     IdentityFile <Identity File Path>
     LocalForward 127.0.0.1:8000 127.0.0.1:8000
     LocalForward 127.0.0.1:7000 127.0.0.1:7000

The LocalForwards are key to setting up any tunnels I need working locally - you can tunnel as many ports as you need.

I use the terminal inside VSCode - which means I can manage docker(-compose), microk8s, etc and anything I spin up, I'll just be able to access from my local host during testing.
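For reference, each `LocalForward` line in a config like the one above is equivalent to a `-L` flag on the ssh command line, so the same tunnels can also be opened ad hoc (the hostname here is illustrative):

```shell
# Open the same two tunnels without a config file; -N means
# "no remote command", i.e. just hold the forwards open.
ssh -N \
  -L 127.0.0.1:8000:127.0.0.1:8000 \
  -L 127.0.0.1:7000:127.0.0.1:7000 \
  user@devvm
```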


I am looking for a quiet and fast machine for development. I've been trying to find a reasonable AMD laptop, but they are all out of stock, and I think those will still have fans buzzing under heavier load. I personally hate Apple's practices and I never clicked with macOS (I was forced to work with it for many years), but if I could install Linux on the M1 then, hard as that would be to swallow, I might consider using it. My Intel laptop has its fans buzzing even when idle. It drives me crazy.


Do you absolutely have to have a laptop? I realized years ago that I spend 99% of my time in a single place and have since built custom desktop systems for my primary development machine. They are faster than any laptop I ever owned, quieter and much easier to live with. And because I can put together a system to my own specifications I can end up with something that works perfectly with Linux. I haven't bought a Mac in years and even with their new ARM hardware I don't see enough of a benefit to go back.


You may already know this, but there is active work happening now to port Linux to the M1: https://asahilinux.org/


Initial support was also just merged a couple of days ago.

https://news.ycombinator.com/item?id=26746983


I'm in a similar situation. I want a new x86 laptop for development, but it's not super urgent.

Some laptop built around the new Ryzen 9 5900HS cpu [0] seemed like an obvious choice. But although it seems like AMD has released it, I'm having trouble finding any actual laptops that have it as an option. Maybe I'm just not looking hard enough?

[0] https://www.amd.com/en/products/apu/amd-ryzen-9-5900hs

UPDATE: Maybe I just needed to wait a little longer: [1]

[1] https://www.ultrabookreview.com/35985-amd-ryzen-9-laptops/


I got an HP OMEN 15 for $1200 and added some memory - 32gb with 8 cores, 16 threads and a nice IPS 144hz screen. I run VirtualBox Ubuntu 20.04 VMs with docker inside them and connect with VS Code SSH - and I have no performance complaints.


Unless you are doing Linux GUI development, you have several options.

1. Low-end Chromebook (good battery life), remote server, VPN.

2. High-end Chromebook (there are a few i3 and i5 models with 8GB RAM), Linux environment.

Are you often in locations where you don't have Internet access?


High-end Chromebooks have an i7 and 16 GB of RAM fwiw, and run Linux VMs just fine.


I recently Dockerized a Rails development environment for working on an M1. It’s gone well so far and may provide some guidance for other development workflows (YMMV). https://gist.github.com/hopsoft/c27da1a9fda405169994a0049575...


Does vagrant + virtualbox work on the M1s?


Vagrant works great on Macs with M1, the issue is finding a compatible 'provider' (VirtualBox, VMWare etc).

For my personal projects I've been able to switch from using VirtualBox to Docker as a Vagrant provider, and it works well enough for what I need it to do.

I created a cookiecutter template for Django projects at https://github.com/tmiller02/cookiecutter-django-react-ansib... that I use for development on my M1 mac using Vagrant + Docker.


(I work for Docker on the M1 support) I'm glad it's working for you! There's a bug in the recent Docker Desktop on Apple Silicon RC build which affects some users of vagrant at the provisioning stage when the new ssh key is copied into the machine. It turned out that the permissions of `/dev/null` inside `--privileged` containers were `0660` (`rw-rw----`) instead of `0666` (`rw-rw-rw-`) In case you (or someone else) runs across this there's an open issue with a link to a build with the fix: https://github.com/docker/for-mac/issues/5527#issuecomment-8...
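For anyone wanting to check whether a given build is affected, the symptom described above can be observed directly (a sketch; the alpine image is just a convenient small test image):

```shell
# Inspect /dev/null's permissions inside a privileged container.
# An affected build shows crw-rw---- instead of crw-rw-rw-.
docker run --rm --privileged alpine ls -l /dev/null
```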


Hey, thanks for all your hard work, it's much appreciated!

Thanks for the tip, that's good to know. I'm running RC2 and haven't come across any issues like that, although I don't run my Docker containers in 'privileged' mode when using Vagrant.


Thank you so much for all the hard work. Very much appreciated!


Docker makes things really slow; VirtualBox is much better on my Intel Mac.

Any idea if Parallels would work better on the M1?


There is some official claim somewhere in the vbox forums that vbox will never support ARM, so you might just as well consider it dead.

I'm currently using Docker for Mac but will move to UTM (a.k.a. a nice UI atop qemu-hvf) when I have some time at hand.

Vagrant I only used for some other OS VMs (e.g. SmartOS), but the base images are x64 so there's no chance it works well (if ever) on ARM either.



> vbox will never support ARM,

Is it because of some kind of beef with ARM, or do they think it is technically impossible?


It’s strictly x86 only, and wasn’t written with other platforms in mind at all.

A port to ARM wouldn't really be a port but a rewrite.


The article author uses VSCode remote support to work with the VM. This is not ideal from a security point of view: VSCode is huge, typically uses a lot of extensions, and all of that has access to all local files and SSH keys. So for this reason I run VSCode inside a VM or a container with a VNC server to provide an X session. This works OK without a GPU, even on a 4K screen, while providing much better isolation.


There are plenty of ways to mount a filesystem via SSH.

Then the author could access the files via Finder.
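One such option is sshfs, which mounts a remote directory over SSH so it shows up as a normal folder in Finder (a sketch, assuming macFUSE and sshfs are installed on the Mac; hostname and paths are illustrative):

```shell
# Mount the VM's home directory at ~/devvm over SSH.
mkdir -p ~/devvm
sshfs user@devvm:/home/user ~/devvm -o follow_symlinks,reconnect

# ...browse in Finder, then unmount when done:
umount ~/devvm
```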


Ah, echoes of the "why use this new Dropbox thing when I can use FTP, SVN, and some FTPFS" attitude. You're right, what you've suggested works.

It would still result in more of VS Code running on your client than when using VS Code Remote.

Port tunnelling, while totally possible with an SSH command in a new terminal, is something VS Code just sets up automatically (and makes it easy to add your own).


So he’s basically recreating WSL but on an Apple Silicon Mac?


> That means that I check out repositories on the virtual disk and run everything from there. This has the slight inconvenience that I can’t easily access those files with Finder, but the upside is that there is no noticeable IO latency issues like when running Docker for Mac.

Yes. Except on Windows it is easy to access WSL files.


I used to use a setup similar to this before WSL was a thing (Hyper-V VM). The "accessing your files" issue trips people up, but there's a trivial solution: install Samba in your dev VM and mount it on the host.

Unsurprisingly, WSL2 does something similar (it uses the 9p protocol instead of SMB though).
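One possible version of that setup, assuming an Ubuntu dev VM (share name, user, and paths are all illustrative):

```shell
# In the dev VM: install Samba and export the home directory.
sudo apt-get install -y samba
cat <<'EOF' | sudo tee -a /etc/samba/smb.conf
[dev]
   path = /home/user
   read only = no
   valid users = user
EOF
sudo smbpasswd -a user        # set a Samba password for the share
sudo systemctl restart smbd

# On the Windows host, map the share as a drive letter:
#   net use Z: \\<vm-ip>\dev /user:user
```

After that, the VM's files behave like any other network drive on the host.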


Why the universal misuse of "weary"?


I don't understand this HN infatuation with Apple's new chip laptops. People are doing everything under the sun to make it work in random ways that aren't supported yet, etc.


>I don't understand this HN infatuation with Apple's new chip laptops. People are doing everything under the sun to make it work in random ways that aren't supported yet, etc.

(a) HN has people who like new technology.

(b) This particular new technology is also very good.

What's difficult to understand about it?

People have been tinkering with WSL and Linux to have all kinds of things working "in random ways that aren't supported yet, etc."; M1 is not that different...


The chip is insanely fast, runs without a cooler, consumes very little power.


Who do you think turns unsupported things into supported ones? Developers. Developers who also happen to hang out on HN and like to talk about new developments around this chip.


It’s new tech with a lot of promise, in a decent laptop. Experimenting to see what you can do with it is very solidly within HN’s typical audience.


It's almost like the people who try to make things work in random ways are the same ones who frequent sites like this...


Personally I can't wait to upgrade my iMac to the one with the new chips as soon as they come out, but the fact that setting up a dev env is still a pain (whether it's Docker or a VM) makes me hesitant.

I want to run multiple VMs at the same time and my current quad-core iMac struggles. So I was thinking of getting a beefy Lenovo workstation to use as a dev environment and using the VSCode remote SSH thing. But in that case I don't really need to upgrade my iMac.


> Working keyboard. And the feature of not having a touch bar is not only included but it’s the cheaper option — I would have paid money for that!

I've been using Linux development laptops for the past decade and have had all of these benefits!



