Things I Wish I'd Known Before Using Vagrant (zwischenzugs.com)
304 points by zwischenzug on Oct 27, 2017 | 147 comments



Here's what I wish I'd known before using Vagrant:

1. How to properly do cross-platform, high-performance two-way synced folders between host and guest. Most providers only support one-way syncing. Virtualbox has shared folders, but their performance is pretty lousy and they have issues with relative symlinks. In fact, I still don't fully know what the correct setup is for a dev environment where the files are edited on the host and the guest immediately picks up on them...

2. This idea that with vagrant you'll never again say "It works on my machine" is a lie. So many inconsistencies between Vagrant on Windows, Linux and macOS. Internet connection sharing, line endings issues, symlinking issues, ...

If anyone wants to see how we're using Vagrant:

https://github.com/hearthsim/hearthsim-vagrant

I don't want to make it sound like Vagrant isn't solving a real problem though, it is. It's just not the unicorn it claims to be.


The same is true for cross-platform Docker.

1. On macOS, host FS volumes are two orders of magnitude slower. There are a series of hacks that try to mitigate this problem and progress has been made, but it’s still too slow for a dev environment. Linux doesn’t have this problem. I don’t know about Windows.

2. Also on macOS, the host networking bridge doesn’t work the same way it works on Linux. There’s a different hostname that’s used to access the host from within the container that’s inconsistent with Docker on Linux.

Docker is working through these issues, but they’ve been ongoing for years.

I’ve written about Docker dev environments on macOS at bradgessler.com, but have mostly given up because FS performance is so poor. Now I spin up what I call “Dockerish” dev environments where my service dependencies run in Docker and the app runs on my host.


Regarding 1), Docker implemented a cached volume mount option to mitigate the performance issues on MacOS [1], but last time I tried it, file system events stopped working. I'm not sure if this is an inherent limitation or if it's something specific to my environment. I'd appreciate if someone more knowledgeable could clarify.

As for Windows, it doesn't have the same fs performance issues on volume mounts, but AFAIK file system events have never worked there [2].

My current reluctant compromise to make volume mounts work seamlessly across platforms is to use polling instead of relying on file system events. That way I can turn on caching on Docker for Mac, and everything just works. The downside is of course polling is inefficient, but I've found when you take care to exclude all non-source directories from the polling, it's usually not as bad as you'd think.

[1] https://docs.docker.com/docker-for-mac/osxfs-caching/#tuning...

[2] https://github.com/docker/for-win/issues/56#issuecomment-242...
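
For what it's worth, when the watcher itself is written in Ruby, the same compromise is only a couple of options to the Listen gem (mentioned further down the thread). A rough sketch, with the watched path and ignore patterns as placeholders:

  require 'listen'

  # Poll instead of relying on fs events (behaves the same over osxfs, vboxsf, NFS),
  # and skip the big non-source trees so polling stays cheap.
  listener = Listen.to('.',
                       force_polling: true,
                       latency: 1.0,
                       ignore: [%r{node_modules}, %r{\.git}, %r{tmp}, %r{log}]) do |modified, added, removed|
    puts "changed: #{(modified + added + removed).join(', ')}"
  end
  listener.start
  sleep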


Given Docker for Mac controls an internal VM instance to actually run the containers, is there much benefit over just using Docker in a Linux VM?

It's pretty easy these days to use tools like Vagrant to spin up the VM and then run a Docker Compose file or whatever to get your development environment ready, and that would result in identical cross-platform environments.


I think it's easier for most people to get started using Docker for Mac, but yeah – I run Debian headless in a VM, SSH in and use that for Docker. Way less magic, and I have full control of the underlying OS to test out new kernels etc.

I also like that my underlying code is within an encrypted VM disk, not lying around on my host macOS.


Do you use a GUI editor in the VM? I have a similar setup (Mac hosting a headless VM running Docker) and can't stand the performance when using X forwarding and XQuartz.


Nope, neovim and tmux are all I use regularly. I tried to proxy X11 to XQuartz and it was way too slow.

When I use Reason I'll sometimes mount over SSHFS to edit using neovim in visual studio, though right now I'm building native iOS apps with it so it's actually not in my VM so that xcode can build faster


You could use emacs with tramp from outside: https://www.emacswiki.org/emacs/TrampMode


These days I’m just using docker-machine instead of Docker for Mac, as the cost of the latter’s magic is simply not worth it.


>Given Mac, is there much benefit over just using Linux?

FTFY


Would you please stop posting unsubstantive comments to HN?


I love VSCode, but moved back to AWS+VIM for dev because the FS is just too slow in Windows and I am forced to use a Windows machine for work. VNC is too slow with 4k monitors as well.


I hesitate to call it the correct setup because I think only a small number of people use it, but the only solution I've found for VirtualBox that (1) does two-way sync between host and guest, (2) propagates fs events (so you can run an auto-reloading service like webpack-dev-server on the guest that picks up changes on your host), and (3) is not significantly slower at disk reads and writes, is Unison, which is basically a two-way rsync.

There's this Vagrant extension [0] that works pretty well, on Mac at least (I actually forked it and pushed it to RubyGems a few years ago when I discovered it, because it had been abandoned and no longer worked).

Unfortunately the Vagrant team seems to have no interest in having all 3 of the properties I mentioned just work out of the box; I'm not sure why. Maybe it's just a lack of resources at HashiCorp. I'm not sure how much dev time they put into Vagrant these days, since it's otherwise very mature and they have such a large number of open source projects.

[0] https://github.com/dcosson/vagrant-unison2


This looks very interesting - unison is a great tool. I remember fighting to get plan9 networking working with qemu/kvm and Linux, and never quite figuring out how to connect the dots. On paper it appears 9p should be a perfect fit for the use-case. (However, I just found [1] - maybe I'll give it another shot).

But it seems kind of obvious that the functionality needed to get this stuff to work is an entire network filesystem stack. It seems the obvious choices would be to build into VirtualBox either nfs or webdav - or leverage host/external support for nfs, cifs, or something like OpenAFS. With sshfs as another pragmatic option. Maybe in the future ipfs will be a viable option too.

But I think CIFS is still the "mostly works, with or without complex auth, cross-platform" option, with WebDAV a distant second.

[1] https://www.ueber.net/who/mjl/plan9/plan9-obsd.html


Thank you for your work!

I used your vagrant-unison2 plugin for developer environments at Airbnb for a time, although we've now moved on to a more complex, home-grown Unison wrapper with Watchman for filesystem watches.


Are there any plans to open-source this Unison wrapper?


>1. How to properly do cross-platform, high-performance two-way synced folders between host and guest. Most providers only support one-way syncing. Virtualbox has shared folders, but their performance is pretty lousy and they have issues with relative symlinks. In fact, I still don't fully know what the correct setup is for a dev environment where the files are edited on the host and the guest immediately picks up on them...

I set up a TCP server: one light process watches file changes and the other side reacts to them (Rails). It was one of my silent victories that the entire company used, but I had to do it because I was the guinea pig on the Vagrant migration.


How do you watch for file system changes on Linux? Spinning loop?

On Windows you can register a handler/callback, is that available in Linux?


I remember we tried to use something native like inotify, but it didn't work for Vagrant or something. I ended up using the Listen gem.
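
For the curious, the host-side half of a setup like the parent describes is only a few lines with Listen. A hypothetical sketch (the guest IP, port, and watched directories are made up, and the guest-side process that reacts to the messages isn't shown):

  require 'listen'
  require 'socket'

  GUEST = '192.168.33.10'   # hypothetical private-network IP of the Vagrant box
  PORT  = 4001              # hypothetical port the guest-side process listens on

  listener = Listen.to('app', 'lib', 'config') do |modified, added, removed|
    # Push the changed paths to the guest, which reacts however it likes
    TCPSocket.open(GUEST, PORT) { |s| s.puts((modified + added + removed).join("\n")) }
  end
  listener.start
  sleep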


Check out inotify (built into the kernel).


I've had much better results mounting host directories via sshfs than I ever have with shared folders. Cygwin sshd works on Windows; there are probably faster ones, but I've never felt the need to bother.


I've been meaning to look into vagrant-sshfs but the fact it requires extra setup (quite a bit of it on Windows) has put me off. Our vagrant setup right now is "Install Vagrant, Virtualbox, run this one command and you're done".


We get around the shared folder performance problem using NFS. Adds one extra step to setup (mount a network share), but we've never had performance issues with it.
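
For anyone who hasn't tried it, the Vagrantfile side is only a couple of lines. NFS synced folders need a private network, and Vagrant will ask for sudo on the host to update /etc/exports on first up (the IP below is an arbitrary example, and NFS isn't available with a Windows host):

  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder ".", "/vagrant", type: "nfs"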


Depends on the case; for me NFS performance (macOS as host, Linux as guest) is a major PITA and a source of instability in our Vagrant dev setup.


Not so familiar with Vagrant on Mac (though my time is coming), but having used loopback KVMs on RHEL I can say that fiddling with mount options can drastically improve stability/performance (though it's still much slower).

e.g. TCP mounts, getting read/write block sizes matched up between client and server and sized to be digestible but big enough to move data, etc.

Also, NFS is mainly suited for 'NAS-like' operations; things like RDBMSes do way better on iSCSI or just eating the vdisk performance hit.

Last I messed with macOS nfsd (which has been a while), it was way happier with smaller block sizes (e.g. the 8-64k range); modern Linuxes will attempt 1MB, which is too much for the older 4.4BSD-based code.

Another thing to look at is timeouts/backoffs; it's easy to kill performance by setting these too aggressively, so that the system double-chokes when it gets bogged down.


I used NFS when using vagrant shared folders a lot a couple of years ago. Didn't really pay much attention to the performance, I must admit, which was fine for my purposes; for me it only really needed to crash less than the default shared folder stuff...

I did have to supply an extra "mount_options: ['actimeo=1']" option to the synced_folder call to make file-watcher-type systems work tolerably. With the defaults I'd often be waiting several seconds before any changes made by the host would be picked up by the guest.


I've been struggling with 1) for years. My current attempt is to exploit ZFS snapshot shipping, since modern ZFS ports are available for both Mac and Linux (and apparently one for Windows is on the way).

Unlike rsync, ZFS always knows what's changed since the last snapshot, so there's no directory scanning.

Unlike inotify, there is (seemingly) no(?) risk of dropped events, which I've experienced with tools like watchman. My ZFS solution still needs a prompting event to start the snapshot, but it no longer needs to be told _what_ changed.

Critically the sync flows in one direction. This is a deal breaker for applications that need to communicate file changes back out to the host, but that hasn't been an issue in my use cases.

Unlike NFS, it also means we must (and can) relax the strong consistency model, and so using the filesystem is not so latency bound. This becomes important if the virtual machine is not running locally..


2 has been a big issue with a recent client because Virtualbox is a fucking pig in terms of features and performance compared to the commercial alternatives.

My advice is to cough up an hour or two's salary and pay for Parallels or VMware (if on Mac).


I've been using Vagrant + Virtualbox and have been happy with it. Is Vagrant + VMWare really that much of a step up in performance? Could you explain which features have been helpful? Thanks.


Not the parent, but anecdotally, Vagrant + Parallels runs MUCH faster on my 15" MacBook Pro than Vagrant + VirtualBox. VM startup time is shorter, and CPU usage seems to be lighter. For instance, I worked on a project where I had a long-running process that would do a lot of file I/O and periodically collect the results and run some calculations. On VirtualBox, this basically pegged a CPU core while it was running, but when I switched to Parallels, CPU usage hovered around 5%. (I'm guessing this particular example is more to do with VirtualBox's dog-slow shared folders, but still relevant, I think.)


VBox shared folders don't support POSIX permissions (so you can't chmod/chown in the /vagrant mount).

VBox (or the Vagrant use of it) is hard-coded to use NAT as the default interface.

The common "Vagrant synced folders are slow" complaint relates exclusively to VirtualBox shared folders.


Also a recommendation here against Hyper-V.


Can you elaborate?


Well, one of the killers I found very recently was that if you open the console of the VM before it has booted, it turns caps lock on, which instantly shafts entry of the GRUB command line, so initial configuration of that is pissing in the dark because you can't observe it without affecting it.

Then there’s the periodic refusal to talk on the network even if the virtual switches are configured properly.

Then the inexplicable “slow boot from hell” which happens randomly where it’ll just hang starting up the kernel at 0% CPU for up to 8 minutes.

This is with CentOS 7 on a gen2 machine.


At least for me, using Hyper-V is problematic because I can't also run VMware Workstation at the same time. I have to boot into different configurations to run them side-by-side. My macOS experience is that running both on your machine is straightforward.


But how did Hyper-V perform when you did run it? Did you use it with Vagrant or just on its own?


I did that a few years ago, and I found that VMware support lagged VBox. There were some plugins that looked like they might help, but I could never get them to work.


Could you elaborate on what features the VMware provider didn't support?


at least in the US, anything work-related that you pay for out of pocket is tax-deductible if your employer doesn't reimburse you or you're self-employed.


Doesn't make a lick of difference if you lack enough itemized deductions, otherwise you take the standard deduction like everybody else.

Honestly though, VMWare is pretty cheap. VMWare Workstation Player and VMWare Fusion aren't cheap upfront, but the upgrade pricing to stay current is decent - I spend more on my JetBrains licenses every year. It's worth it to have a fast and mostly headache-free virtualization experience.


I find that Vagrant works relatively similarly between Mac and Linux to the point where "it just works", but Windows is a whole different ballgame with shared directories and symlinks. On the current software that I'm working on, for example, the first step of the onboarding process for new developers using Windows is to install Linux, which seems pretty insane for something that lives inside of a VM!


What I do on some projects (not using vagrant) is install samba on the dev VM so that it can be mounted from a Windows workstation. Obviously much simpler and easier to do if you're developing on Linux (nfs / sshfs).


You can just use NFS instead of VirtualBox's integrated folder sharing; it pretty much completely resolves the performance issues.


Resilio Sync? (Formerly BitTorrent Sync)


Check out Syncthing. It's way better and open source.


Thanks for the tip. Trying out Syncthing and I love it. It's made for me.


Are you talking about symlinks on synced folders? I absolutely love synced folders, but the whole idea is fundamentally incompatible with symlinks. Think about the VM actually interpreting the link and resolving the path against its own filesystem. Just doesn't make sense. If you want that, share the directory on the host and mount it in your VM so it's your host resolving the link.

My experience with Vagrant is that it's amazing, but I've only used it for a web application with Github and Jenkins for CI. Dev team doesn't have to spend a second thinking outside the git repo.


A couple more for the list:

11. Shared folders get set up before provision scripts are run

12. You can detect provision has already run (https://github.com/hashicorp/vagrant/issues/936#issuecomment...) using:

  if File.exist?(".vagrant/machines/YOUR_BOX_ID/virtualbox/action_provision")
13. Vagrantfile configs are actually Ruby scripts, so you could do things like storing your box configs in JSON (like I did in https://github.com/wildpeaks/boilerplate-vagrant-xenial64 ) instead of hardcoding them in the Vagrantfile.

14. Virtualbox needs Hyper-V disabled whereas Docker for Windows requires Hyper-V enabled.


> Vagrantfile configs are actually Ruby scripts

I took advantage of this by storing all my settings in YAML and then using the same YAML for both Vagrant and my Ansible provisioner:

Here's the Ruby Script that loads the YAML:

https://github.com/BigSense/vSense/blob/master/core/vagrante...

That I call from my Vagrant file:

https://github.com/BigSense/vSense/blob/master/core/infrastr...

That is also loaded in all my Ansible roles:

https://github.com/BigSense/vSense/blob/master/ansible/bigse...
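
For anyone who doesn't want to dig through the repo, the pattern boils down to something like this; the file names and keys here are placeholders rather than the actual vSense layout:

  require 'yaml'
  settings = YAML.load_file(File.join(File.dirname(__FILE__), 'settings.yml'))

  Vagrant.configure("2") do |config|
    config.vm.box = settings['box']

    config.vm.provider "virtualbox" do |vb|
      vb.memory = settings['memory']
      vb.cpus   = settings['cpus']
    end

    # Hand the same hash to Ansible so both tools read one source of truth
    config.vm.provision "ansible" do |ansible|
      ansible.playbook   = "site.yml"
      ansible.extra_vars = settings
    end
  end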


For 14 - just use Hyper-V as your provider with Vagrant. You'll probably get better synced folder performance as a bonus.


I like 13).


One thing I can add to this that was driving me nuts (though not strictly Vagrant's fault):

The DHCP client on Ubuntu 16.04LTS doesn't always play nice with multi NIC vagrant machines. All my vagrant boxes are dual NIC (eth0 is the standard 10.0.2.x NAT interface and eth1 is a private interface in the 192.168.56.x range with a static IP - which makes it easier for various vagrant machines to talk directly to each other).

I had an infuriating issue where my boxes would startup and then randomly (it seemed at the time) stop responding to network requests. Initially I thought they were hanging for some reason and would tear them down and re-init them.

Finally thought to enable GUI mode and I noticed that even though a box had stopped responding, I could login fine via the virtual box GUI.

It turned out that Ubuntu's DHCP client was ignoring /etc/network/interfaces (auto generated by vagrant) and wrongly refreshing IP leases on both interfaces (eth0 and eth1).

The trick is to kill the running dhclient during provisioning and restart it with switches that force it to only maintain leases for eth0:

  machine.vm.provision "shell", inline: "kill $(pidof dhclient) && /sbin/dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -I -df /var/lib/dhcp/dhclient6.eth0.leases eth0"


That's not even a little bit Vagrant's fault - that's 100% Ubuntu nonsense, the sort of thing that convinced me about a decade ago never to install Ubuntu again. Sure, running Debian means if you want the latest and greatest you might have to build it yourself. But it's also much less likely to hose you for stupid reasons. I'm glad to see that the state of play has improved so much in the intervening years!


I'm also a Debian fanboy (been my preferred distro since "potato" days) but I had to switch to Ubuntu for the first time for this current project. It's not all bad but I'll return to the warm embrace of Debian for my next project.


Oh yeah, two other handy networking related commands I use are:

  vb.customize ["modifyvm", :id, "--macaddress1", "auto"]
Which forces a fresh MAC address.

  vb.customize ["modifyvm", :id, "--nictype1", "Am79C970A"]
Which changes the virtual NIC type (for whatever reason, I find on my current host Am79C970A gives much better transfer speeds than virtio)


I wish I'd known that Vagrant + Windows is always full of surprises. So many problems with Windows developers because Vagrant will not work as expected there :(

Anybody know any best practices for what to do with Windows developers? I'm thinking of going to Docker... but I'm not sure if this really helps.


> Anybody know any best practices for what to do with Windows developers?

I always just run a Linux VM and do most things there, with host directories cross-mounted via Cygwin sshd and sshfs (faster, more reliable, and better permissions mapping in my experience than Virtualbox shared folders) and anything that needs to take advantage of filesystem capabilities not supported in NTFS, such as the symlinks mentioned in a sibling comment, done outside a mounted directory.

This lets me get work done, but it's a suboptimal experience to say the least, and relies heavily on my prior systems administration experience to stay working when it goes weird. Absent such experience among your developers or at least someone you can put in support of them, about the best I can recommend is to struggle through with Vagrant and your local handful of best practices - at least with Vagrant your dev environments are reproducible, so that when things go too cattywampus you can just burn down the VM and pop a fresh one out of the oven and get back to work. (If your Vagrant boxes aren't reproducible, that's the first thing you want to fix.)

Docker for Windows is not going to be an improvement here; on the one hand, it's newer and less battle-hardened, and on the other, it has to run in a VM anyway. (Either Hyper-V or VirtualBox, each of which has its own idiosyncrasies, and only one of which can be used at a time - enable Hyper-V, VirtualBox doesn't work any more.) Docker for Windows, like Docker for Mac, comes with some plumbing that tries to smooth over the impedance mismatch between platforms, but it's lossy and brings its own headaches, so you're just adding more complexity to the dev env for no real gain - you can just run Docker in your Vagrant boxes, if you actually need Docker, and have one fewer headache that way.


Similar approach here.

I usually have a "management" vagrant box that exposes my gitrepo back to the Windows host with samba. I then have two or three other vagrant machines that can access the same gitrepo via an NFS mount to the "management" machine.

This way I can still code in VSCode on my windows host but I do all my git commits via the CLI on the "management" vagrant box. The advantage with this approach is file permissions don't get messed up and symlinks work (i.e. you can follow and edit them on the windows host).

The only major disadvantage is if your repo has symlinks you can't create new symlinks on the windows host and you can't use Git for Windows (and, by extension, the git addon for VSCode). This is because Samba "fakes" symlinks on windows and doesn't truly support them (although it might be able to do so in the future). More info here: https://github.com/git-for-windows/git/issues/1195


I believe that mainstream docker development is in a much better place nowadays on windows, so it's worth a shot...

That said, anecdotally: repeatedly for the last 3-ish years I've been running into Docker-on-Windows issues that have developed into complete showstoppers during development. This has repeated across multiple projects and employers. I have an arm's-length list of issues that are purely installation related, not even touching my code. I'm currently using two dev machines that had their Hyper-V installations irreparably broken because they were not US-EN Windows, rendering them useless for most virtualization. "Full of surprises" is a nice way to put it :)

In those instances, after burning a ton of time, I've landed on a simple conclusion: it is a _lot_ easier to get a VM on OS X to be Windows than it is to get Windows to pretend to be a competent POSIX environment.

I am optimistic that Windows will come around. I am hopeful their POSIX layer will work out, their local bash shell will become nice, and the ecosystem will work. I have some faith that next-gen Windows will handle this much better...

From experience, though, and for my own projects and developers in 2017: if you want to do container development on Windows, get a Mac.


WSL along with Docker for Windows works amazingly well.

It's what I use as my primary development environment for every day web development.

I have a full write-up on how to get everything working here: https://nickjanetakis.com/blog/setting-up-docker-for-windows...


That looks like an interesting guide :)

One thing you might want to add is a bit of a warning about the risks of exposing a Docker daemon to the network with tlsverify=false as that would enable anyone who can reach the port on the network to run docker commands and likely take over the host OS.


VMware Player is free and has been working perfectly for a decade.

Hyper-V should be alright, but I don't think it comes with the desktop license for Windows 7+.

VirtualBox never caught up with VMware when it comes to supported features and stability.

Vagrant is a wrapper around VirtualBox so it suffers from the same issues. They could never use VMware because the free edition doesn't provide an API for integrations.

Docker on Windows is totally experimental. They ported to Windows for the PR and next round of funding. Don't expect anything to work.


I have been using Docker on Windows 10, and it works. It did install VirtualBox.


Agreed, symlinks and file permissions always seem to bite the people working on Windows. There are parts of our build process that people working on Windows simply can't run because symlinks are created and VirtualBox/Windows complain.


If you work with other developers and store your Vagrantfile in source control, then you can allow per-developer settings using the method shown here: https://www.glenscott.co.uk/blog/allow-per-developer-vagrant...


I prefer this method: https://gist.github.com/stephenreay/2afd4205e76836f20e176722...

Same basic idea, but with plain Ruby constants, so there's no need for YAML parsing.
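
In case the gist link rots, the constants approach is roughly this (file name and values are just examples, not necessarily what the gist does):

  # Vagrantfile: an optional, git-ignored Vagrantfile.local overrides the defaults
  load 'Vagrantfile.local' if File.exist?('Vagrantfile.local')

  VM_MEMORY = 2048 unless defined?(VM_MEMORY)
  VM_CPUS   = 2    unless defined?(VM_CPUS)

  Vagrant.configure("2") do |config|
    config.vm.box = "ubuntu/xenial64"
    config.vm.provider "virtualbox" do |vb|
      vb.memory = VM_MEMORY
      vb.cpus   = VM_CPUS
    end
  end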


I was a long-time Vagrant user but recently switched almost completely to Docker (Compose). What features does Docker miss that make people keep using Vagrant?


To give a counter-point to all the praise here, Docker for Mac has been one of the most frustrating pieces of software I've had to use for work. It has improved significantly over the first time I used it (almost immediately after public beta), but I still occasionally run into various bugs, such as inotify events ceasing to work arbitrarily. Any kind of networking that makes your containers visible to the host/on the local network is also a PITA to set up through the VM that Docker for Mac uses. It's also supposed to abstract away the fact that it's actually running in a VM, but I have had to tweak the VM's CPU/memory settings multiple times, and it's not obvious when you have to do this. On top of that, last I checked, Docker for Windows is still missing some features like inotify events, and requires you to enable Hyper-V, which of course removes your ability to have other VMs running simultaneously.

The other side of it is that Vagrant is just easier. Docker requires everything to fit into a "one container per service, one process per container" model, which is a really good idea in production, but makes setting up background services (such as the Flow server) in development way harder than it needs to be. I'm not a devops person, so the extra overhead of trying to figure this all out is significant. Vagrant isn't without its own technical problems, but it mostly Just Works™, and I can do anything to the VM that I'd do on a regular Linux machine. As someone who really doesn't care how these things work under the hood, Vagrant has been a significantly smaller source of friction for me.


> Docker requires everything to fit into a "one container per service, one process per container" model, which is a really good idea in production, but makes setting up background services (such as the Flow server) in development way harder than it needs to be.

No, it does not. I use Docker as a Vagrant replacement too, and use a self-written bash script as an "init script" that:

1) launches all services required (e.g. apache, mysql, sshd, elasticsearch) by running "service XYZ start"

2) records the pidfiles of each service (/var/run/xyz.pid)

3) sleeps 10 seconds and checks whether all the pids have exited - if yes, it exits the init script; if not, it goes back to #3

4) on SIGTERM/SIGINT, gracefully shuts down the services in the correct order (e.g. apache/php first, then elasticsearch, then mysql); the watcher loop of #3 will detect all services having shut down and exit cleanly

This way I have an exact reproduction of a real target server and don't have to deal with the myriad of issues that arise with docker-compose and "custom networks". Also, the documentation on how to set up a production server, especially the required OS packages, is embedded right in the Dockerfile (or, as I put the setup script in its own file, in this one). Build time for an image with identical functionality is approximately equal to what it was with Vagrant.

In contrast to vagrant, once you build that image initially it starts up way faster (10s for a LAMP+es stack), and provisioning it is a breeze compared to puphpet or whatever is the trend now. And you don't have to update your setup script unless the base OS changes, as compared to puphpet+vagrant - in fact I can use nearly the same setup and init script across stuff as old as Debian 6 (some ancient proprietary software I had to dockerize) over Ubuntu 16.04 and as brand new as Debian nightly.


I used the wrong wording: I should have said that Docker encourages one process per container, not that it requires it. It's possible to do most of the same things with Docker as you can do with a VM, including implementing a full-blown init system, it's just a matter of effort.

Having said that, what you did is definitely non-trivial (at least it would be for me), and you've basically re-implemented all of the stuff Vagrant gives you for free, for a pretty marginal benefit (IMO). Maybe that setup works better for you, but I don't understand why I should go through all that effort when Vagrant Just Works™ most of the time, and Docker for Mac runs everything through a VM anyway.


> but I don't understand why I should go through all that effort when Vagrant Just Works™ most of the time

Problem with Vagrant is you can't take the VM you created and deploy it on any random Linux server (or random Docker-hosting cloud provider) - while a pure Docker solution can be deployed literally anywhere with Docker support, as long as you give it a way to persist the data directories of the services. docker-compose is a hit-and-miss across hosters, and you can't use it on DC/OS or Kubernetes environments.

I use my script collection mainly for dev environments, but it's useful when you want to spin up QA/dev instances without having to provision real servers.


Can I see the init script? Seems useful.


Unfortunately it's corp stuff and needs a bit of cleanup. But I'll try to get it open sourced - can you send me a reminder email? My email's in my profile.


I'm far more familiar with Vagrant than Docker, so there may be Docker solutions for these problems. But I use Vagrant to:

- Test my software on a "fresh" Linux installation, to ensure it doesn't have any hidden dependencies

- Test my software with different filesystem layouts and (soon) different filesystems, without having to pollute my own filesystem with hundreds of test files

- Automatically install a set of global commands in the VM (such as "b" to do a complete build) and display help text as soon as a user runs "vagrant ssh", so a new developer can quickly get up to speed
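
The last point is easy to reproduce with a plain shell provisioner; a minimal sketch, where the "b" command and the message are obviously placeholders for your own build tooling:

  config.vm.provision "shell", inline: <<-SHELL
    # Install a global "b" command that runs the full build (placeholder command)
    echo '#!/bin/sh' > /usr/local/bin/b
    echo 'cd /vagrant && make build' >> /usr/local/bin/b
    chmod +x /usr/local/bin/b
    # Shown on every "vagrant ssh" login
    echo 'Welcome! Run "b" to do a complete build.' > /etc/motd
  SHELL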

It's been pretty good!


It seems like the author was using Vagrant to set up their base Docker machine.

So yes, the author could have run everything on Docker locally, but this was to setup those Docker clusters (Kubernetes and OpenShift).

I did the same thing when experimenting with DC/OS. Running a small cluster on my developer box with 32GB of ram was cheap. Running the same cluster on DigitalOcean gets very expensive:

http://penguindreams.org/blog/installing-mesosphere-dcos-on-...


Vagrant gives you a full VM with a traditional init system. Docker is usually used without an init (or a very dumb init), so you'll have different containers for each service. It's a very different way of working, and I'd argue vagrant's is simpler and easier to understand, but less scalable.


Yeah I understand the difference between the two but I was wondering why people prefer Vagrant over Docker for development environments.


If I'm deploying to VMs then I prefer Vagrant, if I'm deploying containers then I prefer Docker. It's all about dev/prod parity, at least at the 30,000' view, once you get into specific use cases there may be reason to mix and match.


For me, because docker doesn't do jack to help with developing in the kernel.


I think this is one of the edge cases where vagrant is useful over docker. Most ppl here don't do kernel dev though.


At my company we need to have virtual machines as close as possible (software wise) to our production environment, including the kernel version.


Because we want a full VM, to very closely mimic production.


I use both Vagrant and Docker for local dev work. I've found that we have far fewer problems with the Vagrant box (we link a bunch of projects into a single box to save RAM) than with the Docker Compose setup. We're now looking into a minikube + rkt option, since it's the Docker daemon itself, not the Compose part, that makes it painful. Hopefully we can find the sweet spot between working nicely and resource preservation that we want.


Me too. I use docker4mac now. Never had to use vagrant again.

I know Docker gets a lot of hate here but docker4mac is one of the best pieces of software I've used in my 10-year career. It totally changed the way I set up my dev environments and think about software development/deployment.

I am so glad they are going to support Kubernetes now so I can do local -> staging -> prod seamlessly.


Same here, been using docker-compose for development and testing and it's been great. Spinning up 4 servers with docker uses much less space and memory than 4 vagrants. It's also much quicker to provision and get started.


Snapshots are pretty useful, I miss those.


I'm not super familiar with Vagrant or docker. But I've used vagrant before to do linux development on a windows machine. AFAIK that's not even related to anything docker claims to be able to do.


If you're a PHP dev (or even not) check https://puphpet.com/

I generate my Vagrant files for all my side projects with it and it's a real time saver, especially if you're not savvy in those fancy provisioners or sysadmin in general.


Vagrant's private network is really cool as well. I use it to test ansible provisioning scripts for server infrastructure.

    config.vm.define "ums-01" do |machine|
          machine.vm.network "private_network", ip: "192.168.1.10"
          machine.vm.network :forwarded_port, guest: 22, host: 2210, id: "ssh"
    end

    config.vm.define "ums-02" do |machine|
          machine.vm.network "private_network", ip: "192.168.1.20"
          machine.vm.network :forwarded_port, guest: 22, host: 2220, id: "ssh"
    end

-- https://www.vagrantup.com/docs/networking/private_network.ht...


I'd advise caution with the use of landrush. At my company it was very quick to get up and running with it, but we encountered several problems over time. We have several Vagrant boxes coming up and down at various times on each developer's machine, and it would tend to get out of sync and hold on to records for machines that no longer existed, and the macOS DNS cache also played a role.

Eventually we replaced it with Dnsmasq and a static IP setup with each development box getting an immutable static IP. Dnsmasq runs on a guest VM that needs to always be up for other purposes as well.

As always, the effort from the landrush developers is much appreciated, and it may be suitable for a limited number of boxes, but it didn't scale to our use case.


Here's my advice: go to GitHub and type:

filename:Vagrantfile thing you're looking for

You'd be surprised how many hidden gems are on GitHub that you can use to figure out how to use Vagrant. This tip applies more generally.


I often find Google searches for GitHub gists to be super useful.

<stuff you care about> Vagrantfile site:gist.github.com


- Vagrant Triggers plugin is super useful https://github.com/emyl/vagrant-triggers

- I've found VMware Fusion as a provider worth the extra cost/disk space


Oh and while it is in the docs, the sendfile/virtualbox gotcha has been the cause of many hours of anger

https://www.vagrantup.com/docs/synced-folders/virtualbox.htm...


Do you find vmware base box availability an issue? Is it just you or a team using VMware?


Creating your own base boxes is actually pretty straightforward since Packer came along.

Maintaining them is admittedly a bit of a time sink though.


Haha I'm aware. I'm the maintainer for https://app.vagrantup.com/koalephant - I was curious about vmware+Vagrant usage, we don't support it right now but I'm considering adding it.


There's between two and five of us, and we use the puphpet/ubuntu1604-x64 box.

https://app.vagrantup.com/puppetlabs/boxes/ubuntu-16.04-64-p...

Also here's a template for an initial LEMP WP setup

https://github.com/alexwybraniec/bwu-vagrant-wordpress


Also big fan of vagrant global-status. I have it aliased to vgs in my terminal.

Also very helpful for those using Vagrant on OSX is Vagrant Manager [0] which gives you menu-bar integration and a quick interface to turn VMs on and off. It's useful even to remind myself when I've left some VMs on, especially if I'm running on battery.

[0] http://vagrantmanager.com/


We've avoided many of the problems (and solutions) in the link by standardizing on Ubuntu laptops for the host OS, and using vagrant-lxc to run our vagrant guests.

https://github.com/fgrehm/vagrant-lxc

At least one or two of our new hires started with OSX hosts, but switched to Ubuntu after a while, to avoid virtualbox pain.

[edit: added vagrant-lxc link]


For me Vagrant works best with declarative configuration and a runner: something like this, with sane defaults and many options available by changing fields in a dictionary.

https://github.com/Attumm/vagrantfile_example/blob/master/Va...


For me, something that inclined me to use Vagrant was the share feature, so I could let anyone in the world check out my development environment. Now they've deprecated that feature, and https://ngrok.com does the job very well.


Of course this depends on the kind of project, but my last experience with a big Laravel project was that I was running the project under Apache on my local machine while the rest of the team was losing a lot of time because Vagrant was not working as expected.

I can understand the need for reproducible environments. But when so much time is lost I doubt the first thing you need is something like Vagrant. To me Vagrant is a tool that you use when the team starts to struggle with "works on my machine". But not before that. Because most of the time (well for PHP at least) it's very easy to make it work on all machines.


I've had quite the opposite experience. Vagrant has saved my team lots of time fighting environmental issues. The approach of running Apache locally falls apart pretty quickly as you add more components to your application: Elasticsearch, Mongo, one person happens to have PHP 7 instead of 5.2 and wrote everything with short array syntax, etc.

We also work on many different projects, often getting dropped into something new without much of a primer. Being able to "vagrant up" and not having to know all the dependencies to get up and running is very handy.

Do we spend time troubleshooting vagrant weirdness? For sure, but compared to the time saved it's a no-brainer.


I tend to agree with the parent comment, a small team can get away without it if the software stack is stable. PHP 5.2 to 7 is quite a change and should have been documented upfront. I assume that the project must conform to certain software requirements. On the other hand, npm is full of surprises.


I don't disagree that if you can get away with just a local development environment that it is likely easier to get up and running. However, it requires good documentation of project dependencies and discipline by the team to not accidentally upgrade their version of PHP without telling anyone, both things that are easier said than done. Vagrant lets you codify that through "code". In the case of my employer, it's a necessity, but obviously everyone is different.


Agreed, but the “Vagrant doesn’t start” saying is a real thing imo. Maybe it’s my experience but it doesn’t feel very robust.


By providing your team with the same vagrant environment (usually as simple as committing your Vagrantfile to your repository), then you know everyone is working from the same page and whatever you are building is pretty much portable from one development machine to another.

If you have one dev running MAMP, another running from the default Apple (or Ubuntu/Linux) environment, another running something else, there is no consistency.


We have our Vagrant config in Git and every developer has the same environment. We run on VBox under Mac OS X. It has been a big time saver and is reliable relative to hosting the dev env natively on Mac OS X. LAOP stack with the PHP being provided via Zend Server. We connect to an enterprise test instance of Oracle.

We are starting to build simple docker containers for python via RHEL SCL version and will have other docker containers to "play" with for a while. Maybe someday we will get rid of Vagrant. Not likely anytime soon.


I've used Vagrant extensively in the past but since I found docker-compose, I haven't reached for it over the past year.


Docker and Vagrant are two very different tools.


Are they? Very different? Not really.


I'm pretty frustrated reading comments like these, because there are use cases where these tools are pretty much the same and use cases where they are wildly different, and you guys are going back and forth arguing semantics that are totally meaningless without the context or use case.


I think the parent's point is that for the majority of web development use cases (which I'd wager was a large percentage of the Vagrant user base) Docker has eliminated the need for Vagrant. Agreed, though that there are times when you need to simulate software running on a particular machine and for that Vagrant is still useful.


Of course there are use-cases for these tools where they can be applied to wildly different requirements but in essence they're not "very different tools", they're very similar in more ways than not.


Another thing to add to the list is that you can set the Paravirtualization Interface if you're using VirtualBox.

If your defaults are currently on "Legacy", empirically we saw 10-30% speedup by switching to "KVM".
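
In Vagrantfile terms that's a one-line customize (the rest of the provider block is whatever you already have):

  config.vm.provider "virtualbox" do |vb|
    # Same as setting System > Acceleration > Paravirtualization Interface to KVM in the GUI
    vb.customize ["modifyvm", :id, "--paravirtprovider", "kvm"]
  end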


There used to be a problem with running Vagrant on VirtualBox with more than 1 CPU: https://github.com/rdsubhas/vagrant-faster/issues/5

Explanation and workarounds: http://www.mihaimatei.com/virtualbox-performance-issues-mult...

Did they fix this?


My top tip: install your dotfiles into any Vagrant VM without modifying anything in the VMs Vagrantfile/provisioning scripts: https://gist.github.com/tadas-s/0cd468a4cc9fa4cafce6fe57a5dc...


Because you said "without modifying anything in the vms Vagrantfile" I assume you put this in $VAGRANT_HOME/Vagrantfile?

Also, nice username. You'll need a tray.


No, you put that Vagrantfile, with a provisioner installing your dotfiles, into the $HOME/.vagrant.d/ folder. Every time Vagrant creates or starts any VM it will sort of "extend" its configuration by reading $HOME/.vagrant.d/Vagrantfile (if it exists).
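
A minimal version of such a $HOME/.vagrant.d/Vagrantfile, assuming a plain file provisioner is enough for your dotfiles (the gist linked above may do it differently):

  # ~/.vagrant.d/Vagrantfile - Vagrant merges this into every project's Vagrantfile
  Vagrant.configure("2") do |config|
    config.vm.provision "dotfiles", type: "file", run: "always",
      source: "~/.vimrc", destination: "~/.vimrc"
  end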


Um yeah. That's the default location for $VAGRANT_HOME if you don't set the env var.


11. VirtualBox sucks. Try out alternatives; I like vagrant-lxc.


Could you elaborate on why you think VirtualBox sucks? Or why you think vagrant-lxc is better?


VirtualBox is slow (especially sharing files), we've had lots of problems with it becoming corrupted, and requiring a fixed partition of RAM makes it very hard to run more than one VM without starving your VM's or host of memory.

vagrant-lxc has none of the above problems, it is just containerization with no virtualization penalties.

I've had lots of people tell me that Vagrant sucks. When digging into their problems it's almost always been VirtualBox causing their problems.


This is being unfairly downvoted.

I don't use a Linux host so I haven't played with the lxc provider yet, but VBox most assuredly does "suck" compared to even the commercial hypervisor alternatives.


It's being down-voted because it's a low-effort comment that provides criticism and opinion without any substance or real information and does not contribute to the discussion in a meaningful way.


Thank you for "vagrant global-status". There are still a lot of boxes I don't even remember.


You're welcome!


I've done development with all the databases, languages and packages installed on the host machine, as well as with vagrant. With the VM, I have to ssh in, or switch windows to run tests, whereas before I could just run the tests in vim. How do people achieve fluency with these?

I suppose I could install some more of my tools on the VM, but if you take that to its absurd conclusion, I'm just running a clone of my host on the VM.

It seems more useful, maybe, to just run databases and other services in the VM. Those are the more difficult bits to manage usually. A good programming language already has version and package management facilities.

Or perhaps am I missing something? Anyone have another workflow?


JetBrains IDEs have good SSH & Vagrant support, so mostly I create tools/tasks that launch through SSH (or use the built-in SSH client, but that's like any console except you can stay in the IDE); same for the database tool.


We don't use Vagrant, but we use VMs. The workflow is a little bit more convoluted than directly on the host, but we get the benefits of having a consistent way to create environments that are closer - if not identical - to what our customers have.

In essence, we do most of our development on Windows, but deploy our solution on Linux. For databases, our former development environment relied on Windows drivers that didn't have the same bugs as the Linux implementations. Hence, we caught these bugs much later in the development phase.

Another advantage is that we can deploy different releases of our solution simultaneously.

Of course, the cost to get there was to define a new pipeline to build artefacts that we ship on the boxes, but that was a small price to pay.


I have automated from-scratch builds for cluster setups, eg:

https://github.com/ianmiell/shutit-openshift-cluster

https://github.com/ianmiell/shutit-chef-env

I'll be blogging about some of these soon as well


I have never been able to find a performant way to mount host files on a VM, so I just use a headless Ubuntu running in VMware as my dev environment. File mounting in docker containers doesn't have the same speed penalty on a Linux host as it does on win/Mac. I use tmux and vim. I've been using this setup happily for years now.


Another thing that's good to know: vboxsf (the default VirtualBox file-sharing filesystem) doesn't support mmap().

There is quite a lot of software that uses mmap to access files, especially databases. When that happens, either avoid using the shared folder or switch to an NFS mount.

Here is the ticket about it that was opened 10 years ago: https://www.virtualbox.org/ticket/819


Adding swap really is critical. Machines running out of swap were causing my MBP to crash ~once a day, enough that I ran Linux on it for a while (it was more stable for whatever reason) until we figured out what the issue was.
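
If it helps anyone, adding swap can live right in the Vagrantfile as a shell provisioner; a rough sketch, with the size and paths as arbitrary examples:

  config.vm.provision "swap", type: "shell", inline: <<-SHELL
    # Create a 2G swapfile once, then make sure it's active on every boot
    if [ ! -f /swapfile ]; then
      fallocate -l 2G /swapfile
      chmod 600 /swapfile
      mkswap /swapfile
      echo '/swapfile none swap sw 0 0' >> /etc/fstab
    fi
    swapon -a
  SHELL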


I highly recommend lando. Basically an easy to use CLI wrapper for docker apps.

https://docs.devwithlando.io/


From a development perspective, what's the advantage here over just using Docker Compose? Is it the baked in recipes that it provides?


Is anyone really using Vagrant for anything more than quick tasks on your local dev machines? I always perceived Vagrant as a hack.


What would you do instead if you want to test things that need to run on actual VMs? Configuration management or whatever. Is booting a VM at a public cloud provider any less of a hack?


Last time I did this I just created a base VM that had Ansible set up and anytime you wanted to get the latest environment you just ran a script. Although I admit that I'm sort of an infrastructure fanatic and love Ansible, so that might be why I steer away from things like Vagrant.


It's a ruby app to standardise configuring and provisioning VMs on any one of a number of hypervisor/container platforms.

How would you provide reproducible dev environments?


Vagrant is definitely wonderful to quickly provision VMs across multiple hypervisors, but that's where I also see it as a hack.

It will not always replicate exactly what you need on each hypervisor, and so many times you will end up with something not working (example: synced folders with host).

I have always managed my VMs directly in KVM. Takes more time but I don't rely on a middle layer/wrapper.


If things like synced folders don't work it's usually an issue related to the "guest tools" not being installed/updated/running for that given hypervisor.

If you have well built boxes it shouldn't be an issue.


I use Vagrant to let me test on multiple Linux and BSD distros. The large number of available boxes makes this easy.


Yup yup this is a good list. Thanks for mentioning landrush, it is something I could have used.


Vagrant's documentation leaves something to be desired. It's perfectly possible to control many VMs from one Vagrantfile, but good luck figuring out how from the docs.

I mean, with a bit of Ruby know-how and some digging you can figure most stuff out, but Vagrant's documentation still lags behind HashiCorp's other stuff, which is usually very well documented.
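
For anyone hunting for it, the multi-machine pattern ends up looking something like this (box name and IPs are just examples); then it's vagrant up node-1, vagrant ssh node-2, etc. per machine:

  Vagrant.configure("2") do |config|
    config.vm.box = "ubuntu/xenial64"

    (1..3).each do |i|
      config.vm.define "node-#{i}" do |node|
        node.vm.hostname = "node-#{i}"
        node.vm.network "private_network", ip: "192.168.50.#{10 + i}"
      end
    end
  end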


Yeah, that's been my experience too.

Which is a shame, as it's an incredibly powerful tool; it just lacks the 'last mile' stuff that Docker did so well.



