Setting Up A New Machine for Ruby Development - as used at 37signals these days (37signals.com)
117 points by ChrisArchitect on Sept 6, 2011 | 32 comments



My preference is still a *nix virtual instance running locally (or on a LAN server/my desktop) so that I can keep even better control over environment, keep discrete projects separate and clone dev environments quickly to peers.

VMware Fusion works great for this, but the FOSS VirtualBox works too if you want the free route.

The rest of the 37signals advice is sound; just do it within the virtual instance.


I agree, but I would like to point out that if you use VirtualBox, you can script the whole setup of the VM with Vagrant. The only downside I have with using VMs is that it's hard to use some of the nice OS X GUI tools (GitX, Pixelmator) on the files in the VM.
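
For example, a minimal Vagrantfile along these lines is enough to rebuild the VM from scratch (the box name and provisioning script are placeholders, and the syntax assumes the Vagrant 0.x-era config format):

    Vagrant::Config.run do |config|
      # base box to build the dev VM from (any Ubuntu box works)
      config.vm.box = "lucid32"
      # run a script that installs the toolchain (build-essential, rbenv, etc.)
      config.vm.provision :shell, :path => "setup.sh"
    end

`vagrant up` then builds an identical environment for anyone who clones the repo.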


Actually, I find using VMware Fusion to be a great way to use all the good dev tools available on the Mac -- SourceTree git client, CSSEdit, BBEdit, etc. -- while actually developing on the same Linux environment you'll use in production.

The key is to build and install the latest version of Netatalk on the Linux VM, and then mount its filesystem on the Mac via AFP. Netatalk has a config option (-admingroup) that lets members of an arbitrary group mount a volume with root privileges, which is very handy for development.
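
For reference, a rough sketch of what that looks like with Netatalk 2.x-style config files (the paths, group name, and volume name here are made up; adjust for your distro's layout):

    # /etc/netatalk/afpd.conf -- default server line, including -admingroup
    - -tcp -noddp -uamlist uams_dhx2.so -admingroup devadmins

    # /etc/netatalk/AppleVolumes.default -- export a project tree over AFP
    /srv/projects "Projects" options:usedots,upriv

After restarting afpd, the volume should be mountable from the Mac via Connect to Server (afp://devbox/Projects), with members of the devadmins group getting the root-equivalent access described above.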

(Netatalk has plenty of bugs[1], but works well enough for development.)

I've tried to get this same setup with CIFS (Samba), as well as sshfs. However, apps like TextMate or CSSEdit freak out with permission errors (even if you set up ACLs or something so that you really have write permission, the Mac apps don't believe you do).

Netatalk is the only magic sauce I have found that lets you mount a Linux server's disks as volumes that your Mac sees as simple writable volumes, but where you actually have root privs as far as the Linux box is concerned.

Not something I'd use on a real server, but great for development.

-- [1]: the most insanely annoying one being that it currently can't serve the root volume; you have to configure it to make /etc, /var, etc. available as separate shares


At my last job, I was successfully using NFS to share a directory from OS X to a FreeBSD VM running under Parallels. I went that way so that Time Machine could back up the shared directory (which contained source code). It seems like it would still work if you did the reverse and made sure the directory on OS X was backed up via Time Machine, but I'd love to hear about your experience.


Vagrant should auto-mount the directory containing the Vagrantfile as /vagrant in your guest OS. I normally add my Vagrantfile directly to my git repository, checkout the git repository on my OS X machine, and edit the files directly on the OS X filesystem (and use Github's OS X client). The files will still be available in your vagrant machine, and if you really want to you can automate a symlink from /vagrant to /home/vagrant, or wherever you want.
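
If you do want that symlink automated, a one-line shell provisioner in the Vagrantfile covers it (0.x-era syntax; the target path is just an example):

    Vagrant::Config.run do |config|
      # link the auto-mounted shared folder into the vagrant user's home
      config.vm.provision :shell, :inline => "ln -sfn /vagrant /home/vagrant/app"
    end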


This is slick, but be warned that you do pay a performance penalty for it last I checked. So keep your main work on the totally virtualized filesystem and just use /vagrant for stuff you need to share.


Interesting. Well, I don't normally have to push any large files around, mostly python source code, so it's worked well for me. Will keep an eye out.


One thing that works well for that is using shared folders on the VM. I have a folder 'projects' on my windows drive that is a shared folder on all my VMs. It's mounted with fstab on boot, and works like a charm mostly. I say mostly because the last few Linux kernels/VirtualBox extensions/VirtualBox releases haven't been playing nice, where the VM filesystem is "out of sync" with the host filesystem, especially when creating and deleting files (updating is fine).
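
For the record, the fstab entry for a VirtualBox shared folder looks roughly like this (share name, mount point, and uid/gid are examples; it needs the Guest Additions' vboxsf module in the guest):

    # /etc/fstab -- mount the 'projects' shared folder at boot
    projects  /home/dev/projects  vboxsf  defaults,uid=1000,gid=1000  0  0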

Haven't played with vagrant, but it sounds great.


MacFusion works pretty well for mounting the VM filesystem in OS X.


> Homebrew: Remember how painful it used to be to get imagemagick installed?

Uh... no? https://trac.macports.org/browser/trunk/dports/graphics/Imag...

edit: ah, so any comment not gushing over how fantawesomastic homebrew is is now verboten. Noted, I guess.


Don't assume you're getting downvoted through simple fanboyism.

Your comment is thin and unproductive, and flies in the face of the experiences of many.

We've been using ImageMagick in our product for about 6 years. Across 4 new Mac laptops and 3 versions of OS X, `sudo port install imagemagick` has never worked for me.

I've had similar woes on Ubuntu with `apt-get install imagemagick`.

I don't know if the fault lies with ImageMagick itself, the package managers, or just the simple fact that ImageMagick has an extraordinary number of library and environmental dependencies which are always moving.

Even today, my local development environment loads and runs ImageMagick with no errors or warnings, yet emits only black and white graphics -- no color. At this point, I'm tired of trying to fix it, and I've given up.

I've never tried brew, but I'm more than willing to give it a shot.
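
For comparison, the Homebrew invocation is just:

    brew install imagemagick

which pulls in the library dependencies for you. Whether it fixes the black-and-white output is another question, but it's a cheap experiment.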


Macports helped a lot with open-source on the Mac, but I wouldn't say it solved the imagemagick problem.

ImageMagick, or one of its many dependencies, would often be broken, and when it worked it would still take a ridiculously long time to download and compile (unless you remembered to specify the "no-x" variant, setting aside the fact that you had to know that variant existed in the first place).

I used Macports for a long time, and it worked well for installing mysql, git, and all the other open-source software I used, but the imagemagick install was painful enough that I still kept around my own script to build it until homebrew came around.


Yeah, I remember how painful it was before MacPorts, but MacPorts definitely sorted most of it. I prefer homebrew now but MacPorts doesn't deserve to be swept under the table.


I remember how many times DarwinPorts/MacPorts didn't work when installing that package... It looks sorted now, but for many years it was a tricky process, ensuring the right order of package installation to get certain features to work without crashing. What a pain, and in this case it looks like that's over for everyone.


I voted you up out of sympathy :)


I wonder if they'll do one about deployment too (Capistrano, Passenger, etc.). I heard they used to use Capistrano, REE, and Phusion Passenger but have been moving to Ruby 1.9.2 and Unicorn; it'd be interesting to hear more.

In terms of test framework, however, the answer is well known ;-) http://www.rubyinside.com/dhh-offended-by-rspec-debate-4610....


Our setup for new deployments is rbenv, 1.9.3dev, Unicorn, nginx, Capistrano.
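
Not their actual config, but a Capistrano 2 deploy.rb for that kind of stack has roughly this shape (app name, repo, server, and pid path are all placeholders):

    require "bundler/capistrano"

    set :application, "myapp"
    set :repository,  "git@example.com:myapp.git"
    set :scm,         :git
    set :deploy_to,   "/var/www/myapp"
    set :user,        "deploy"
    set :use_sudo,    false

    role :web, "app1.example.com"
    role :app, "app1.example.com"
    role :db,  "app1.example.com", :primary => true

    namespace :deploy do
      # Unicorn re-execs itself on USR2 (see the zero-downtime discussion below)
      task :restart, :roles => :app, :except => { :no_release => true } do
        run "kill -USR2 `cat #{shared_path}/pids/unicorn.pid`"
      end
    end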


And what about system configuration? Your chef cookbooks repo is not updated for a long time. Do you still use chef, switched to puppet or use something else?


We are still (happily) using Chef. We'll update the public repo shortly. We just need to remove some of the more "private" bits before publishing.
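
For anyone who hasn't looked at Chef: a cookbook recipe is plain Ruby built from declarative resources. A trivial sketch, not taken from their repo (package names and template are examples):

    # cookbooks/devbox/recipes/default.rb
    %w[git-core imagemagick libxml2-dev].each do |pkg|
      package pkg
    end

    # drop a gemrc that skips rdoc/ri when installing gems
    template "/etc/gemrc" do
      source "gemrc.erb"
      owner  "root"
      group  "root"
      mode   "0644"
    end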


Thank you for the clarification!


1.9.3dev? Impressed! I thought I was being progressive with 1.9.2.. :-)


1.9.2 doesn't have any GC tuning options, so it proved slower than REE in production for us. Thus we needed to go to 1.9.3 to get the GC tuning.
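
For context, 1.9.3's GC knobs are plain environment variables in the same spirit as REE's (the values below are purely illustrative, not a recommendation):

    # e.g. exported in the app's environment or the Unicorn init script
    export RUBY_HEAP_MIN_SLOTS=500000
    export RUBY_GC_MALLOC_LIMIT=50000000
    export RUBY_FREE_MIN=100000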


What benefits does Unicorn give over Passenger?


> What benefits does Unicorn give over Passenger?

Zero downtime deployments come in handy.
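
The standard pattern from Unicorn's documented examples (not necessarily 37signals' exact config): send USR2 to the master on deploy, and let the new master's before_fork hook quit the old one once it starts forking workers.

    # config/unicorn.rb (pid path is a placeholder)
    preload_app true
    pid "/var/www/myapp/shared/pids/unicorn.pid"

    before_fork do |server, worker|
      # a new master started via `kill -USR2 <old master pid>` renames the old
      # pid file to <pid>.oldbin; once we're forking workers, retire the old master
      old_pid = "#{server.config[:pid]}.oldbin"
      if File.exists?(old_pid) && server.pid != old_pid
        begin
          Process.kill("QUIT", File.read(old_pid).to_i)
        rescue Errno::ENOENT, Errno::ESRCH
          # old master already gone
        end
      end
    end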


Is bundler like pip + virtualenv in python land?


Think bundler = pip, virtualenv = rbenv
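
Concretely, Bundler pins a project's gems in a Gemfile (its answer to a requirements file) and `bundle exec` runs commands against exactly that set, while rbenv picks which Ruby you're running. A minimal example (versions are illustrative):

    # Gemfile
    source "https://rubygems.org"

    gem "rails", "3.1.0"
    gem "unicorn"

    # then:
    #   bundle install    # resolve and install exactly these gems (locked in Gemfile.lock)
    #   bundle exec rake  # run rake with only the Gemfile's gems visible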


Further refinements to the self-licking ice cream cone.


Cue the inevitable "why not use X?" etc., but it's always interesting to hear what the heads are using day-to-day.


I really want to post "why not use Vagrant to automate all of that?" but I stopped myself.


Vagrant forces you to run in a virtual machine, which is a bit more inconvenient than running on the local physical host.


I've found it's pretty seamless. You're editing files on your local dev machine, which vagrant auto-mounts in the guest OS. Ports are automatically forwarded to your host dev machine, so my workflow is literally exactly the same as if I developed directly in OS X. The only difference is first having to run "vagrant ssh" to run commands in the dev environment, but for having a repeatable, share-able, isolated environment with all dependent libraries automatically installed, it's well worth typing "vagrant ssh".


I find that memory-hogging server processes running a different OS than my server come with too many inconveniences.




