Why doesn't every company buy developers the best hardware? (programmers.stackexchange.com)
189 points by utkarshkukreti on July 19, 2011 | 220 comments



Good questions to ask when being interviewed:

- What is your standard developer hardware configuration?

- How often is it refreshed?

If you get bad answers to those questions, it's a definite warning sign (IMHO).

If you're working for a cash-strapped startup (possibly bootstrapped), you may want to negotiate a deal whereby you supply your own hardware (in exchange for higher salary and/or stock/option grants) if they're only in a position to provide substandard hardware.

One side note: what constitutes good hardware depends on what you're doing. If all you're doing is writing PHP/Python/Ruby in vim/emacs and running a local Apache/nginx and possibly a MySQL server then it almost doesn't matter what hardware you have... apart from the monitor.

But if you're compiling huge C++ (or even Java) projects then you probably want good I/O (possibly an SSD, preferably in RAID1 config for redundancy), lots of RAM and a good CPU.

As one data point: I have a 6-core machine with 12GB of RAM and two 24" monitors (some opt for a single 30") plus a MacBook Pro 15" with SSD. You can refresh your hardware every 18 months if you want to (but most don't unless they have a pressing need; I know some people with 4-5 year old workstations because those are fine for what they do).


>If all you're doing is writing PHP/Python/Ruby in vim/emacs and running a local Apache/nginx and possibly a MySQL server then it almost doesn't matter what hardware you have... apart from the monitor.

OK. If you're developing on a local instance of Apache/nginx, sure. But as a professional web developer I usually have a fully virtualized dev environment running. That means at least one of our app servers, our database server, and possibly a Windows VM (for testing in IE - usually I can save this for when I'm in the office and have all this stuff running on another machine).


Yep, this is true. At the very least I've got to be able to knock up an instance of the server OS version quickly without interfering with the rest of my desktop config.


"But if you're compiling huge C++ (or even Java) projects then you probably want good I/O (possibly an SSD, preferably in RAID1 config for redundancy), lots of RAM and a good CPU."

For huge builds it makes sense to go with a compile farm. Working on a god box that sits idle 99% of the time is a bit wasteful.


Quick aside: does ESXi still impose too much overhead to move a compile farm onto VMs?

A friend of mine tested it 3 years ago and it was something like 300% slower (no idea about the disk config, but he's a smart guy). It would have been worth it at anything under about a 50% hit...


We tested it at my last job maybe 5 months ago and it was a complete failure. Granted we didn't spend a lot of cycles on it, but we saw how slow it was and went back to native hardware pretty quick.


What exactly would be gained by using VMs, assuming that there were no performance problem?


The boss gets to put "lead the conversion to a virtualized environment" on his resume. True story.


Two reasons:

- Easy migration between hardware configurations of a running system.

- Snapshots to revert quicker after a failed upgrade/change. (Typically in addition to traditional backups.)


My case was for a heterogeneous build environment, where separate environments needed to be maintained for different developer groups.

It would have been a good fit for VMs.. except for the performance.


For local compiles with fast I/O I use RAM drives, hosting most of the project, the IDE, and the compiler/JVM there. The files are backed up in a git repository on the hard drive, so I don't have to worry about data loss if the RAM drive has problems.
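
A minimal sketch of that git-backed RAM drive safety net (the paths, the tmpfs mount point, and the "autosave" branch are illustrative assumptions, not the setup described above):

    # Periodically commit the RAM-drive working tree and push it to a bare
    # repository on the hard drive, so a RAM-drive failure costs minutes,
    # not hours. Paths and the "autosave" branch name are assumptions.
    import subprocess
    import time

    RAM_WORKTREE = "/mnt/ramdisk/myproject"          # project checked out on tmpfs
    DISK_BACKUP = "/home/me/backups/myproject.git"   # bare repo on the HDD

    def backup_to_disk():
        subprocess.run(["git", "-C", RAM_WORKTREE, "add", "-A"], check=True)
        subprocess.run(["git", "-C", RAM_WORKTREE, "commit", "--allow-empty",
                        "-m", "autosave"], check=True)
        subprocess.run(["git", "-C", RAM_WORKTREE, "push", DISK_BACKUP,
                        "HEAD:refs/heads/autosave"], check=True)

    while True:
        backup_to_disk()
        time.sleep(300)   # every five minutes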

If you have to use a laptop, throw away the useless DVD drive and replace it with a second HDD/SSD.

Many of the better laptops also have a second mini-PCIe slot for an internal 3G modem. Install a modem there. This will help you a lot: 1) Internet access everywhere, 2) you don't have to worry about signal (the internal antenna is bigger and better), 3) you don't have to worry about crappy USB modems and their crappy drivers.


Regarding compiled languages, I wouldn't really mind a poor man's workstation setup as long as

* The compilation is done remotely on a beefy server with no CPU/Disk IO/Disk space issues.

* I get several monitors. They don't even have to be big - 17 inch is fine - as long as there are at least two.

* The dev box is able to handle its software responsively. I shouldn't have to reboot it. Ever. Hell, any dual-core with 2GB of RAM, a Linux distro + Emacs would do.

Currently my 600 euro i7 box at home would probably put my work box and the team's remote compilation VM to shame. Both. At the same time...

Interestingly, from reading the comments I feel that startups are actually more inclined to supply devs with good hardware than established players. I was expecting quite the opposite.


> I feel that startups are actually more inclined to supply devs with good hardware than established players.

In most startups, the CEO's desk is at most a couple dozen steps away from the devs. It's the bean-counting professional managers who are brought in after that stage that make the developers miserable.


The reason startups are more inclined to do so is that extra hardware costs almost nothing compared to salaries. The only reason not to is bureaucracy.


I suspect the reason is that at a startup the person making the final decision on hardware actually has to use it and is intimately familiar with what's being done on it. At a big org with, say, 1000 developers that's not the case, and at ~$1k savings per machine there's a big "look what I saved us" opportunity for that decision maker.


I know a place that gave new employees (even coders) machines with 256MB of RAM... I shuddered in horror.


> possibly an SSD, preferably in RAID1 config for redundancy

Since the code should be in a source control system, I don't see the point of this. RAID5 (increased redundancy and throughput) maybe, but even then you're increasing the chances of failure (the RAID hardware itself can fail) and you're not gaining much (if any) in the way of IOPS, which will be your biggest issue.


It's not about source code backup. It's about a developer not losing half a day when an SSD dies. With RAID1, at least in theory, when one dies you should be able to replace it and keep going.


That's what Time Machine with a relatively slow external FireWire HDD is for. It would only lag by an hour at most.


Are you serious? The goal of RAID is to eliminate downtime; a drive fails and you pop in a new one. All without ever having to reboot or even stop working.

With Time Machine, you still have to manually rebuild your system after your drive fails. This is not what Time Machine is for; it's for, "oops, I shouldn't have deleted that file". (Which, incidentally, RAID is not for.)


RAID + SSD = no TRIM (except for recent Intel drives). That's a problem. It's easier (and much cheaper) to just have a bootable backup that's updated nightly.

Even a RAID1 failure requires you to have a replacement (i.e., a trip to Fry's). A bootable backup, however slow, is just there.


Pretty sure this is not the case for Linux. LVM supports discard, so md probably does as well.

Also, you don't immediately need a replacement drive: RAID1 boots fine with only one device. Assuming your company policy is to have RAID1'd SSDs for all developers, you're probably going to have extras on the shelf anyway.

Time Machine is for accidental data loss. RAID is for disk failures.


Exactly.

A nightly differential bootable system image backup a la SuperDuper, Macrium Reflect, Acronis or the like + cheap external 3TB drive = zero downtime... if the SSD fails, boot from external, and schlep it while the SSD is repaired/replaced.

Time Machine is a bonus for lost/overwritten local files. Source control guards the jewels (you don't fear committing untested code to a local branch, do you?)


Going half a day without committing code is missing the point of version control.


I think a lot of people would disagree with that statement. If you are committing that frequently then there are probably instances where you are checking in half written code that does not even compile. If you are using DVCS chances are your commits are still only stored on your local machine and just as susceptible to loss. If you are using something centralized or pushing your changes you are potentially going to disrupt the rest of the team unless you are working on a private branch. In that case, it seems as if version control is being used as a backup system where a substantial number of revisions cannot even be built. I think that is poor use of version control. It becomes difficult (at least for me) to analyze past changes (maybe months after the fact) when some of the versions are half-baked ideas that I committed for safety before going out to lunch.

I think there are two requirements here: 1) redundancy of work in progress and 2) change control management. I want my work in progress to be relatively safe from failures, I want it to be automatic and I want to be able to choose which changes are worthy of their own revision number in version control. Personally, when committing work in progress I want my revisions to represent a logical stopping point and not necessarily dictated by how much time has elapsed since the last commit.


I think the parent meant that the developer would not be able to get any more work done during that day because their machine is without a hard drive, not that the code on the hard drive would be lost.


The better solution here is to have a couple of "gold disks" of a default developer machine; with all the required programs etc.

That way if you go down you can just swap the drive out and be on your way again.

This is what we do (and we have a lot of hard drive churn due to heavy use) and it works well.


I think I would rather have the RAID so I don't have to take as much time. Setting up the personal info stuff on a machine takes time too.

For Linux / Windows folks, SuperMicro has a nice tower that has 4 hot swap hard drive bays that I really like.


Things like Chrome and Dropbox have made this less of a chore.

Speaking only for myself... I have a gold disk plus a little script that syncs scripts and configs to backups. And I use Chrome logged into my Google account.

I do see your point, but a lot of it can be mitigated.


Between BitBucket and Dropbox, it takes me no time at all (well, a little time to sync Dropbox, but I'm busy in those times).


This recently happened with our main content server. I had been bugging my boss for almost 3 years to do backups of the system data as well as the content data, but his logic was that since the system was RAIDed, even if we had a loss of data, the RAID would allow us to swap in new drives.

Well, our air-conditioner went out for 3 days in the middle of a Japanese summer, and the RAID controller itself shorted out, taking the drives with it. My boss still doesn't understand how we lost our data, because "RAID was supposed to protect our data."


I empathize with you, the developer. I still want the best machine money can buy.

But I have also been on the other side. Let me give you a simple question to ponder: assuming an engineering team of 50, and an upgrade cost of roughly 2% of what the team costs, would you rather upgrade everyone, or hire one more developer? As the manager, I can tell you that everyone is asking for an extra headcount: "we need a full-time person to handle builds", "Tom could really use an extra hand with the XYZ module", etc.

It happens to cost the same: hire an extra developer, or upgrade. So, as a manager, you handle the trade-off. Life is all about trade-offs, not absolutes. I'd love to have a faster machine. I'd love to have more engineers. I can't have both.
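
To make the "it happens to cost the same" claim concrete, a back-of-envelope sketch; the $100k/year fully loaded cost per developer is an illustrative assumption, not a figure from this comment:

    # Back-of-envelope for the trade-off above. The $100k/year fully loaded
    # cost per developer is an assumption for illustration only.
    team_size = 50
    cost_per_dev = 100_000                          # $/year, salary + benefits + overhead
    team_cost = team_size * cost_per_dev            # $5,000,000/year

    upgrade_fraction = 0.02                         # "2% to upgrade machines"
    upgrade_budget = team_cost * upgrade_fraction   # $100,000 -- about one extra hire
    per_machine = upgrade_budget / team_size        # $2,000 per workstation

    print(upgrade_budget, per_machine)              # 100000.0 2000.0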


I'd say you're making the wrong choice there.

Developer productivity doesn't work the way you think it does. 51 developers on crap hardware won't be 2% more productive than 50 developers on good hardware. It won't even be equal.

Somewhere in that stack of 50 developers are a few of those mythical 10X-100X guys, who are currently spending a lot more time than you think arguing politics on Reddit. When you're writing code and your machine bogs for whatever reason, it kicks you out of what you were doing and almost forcibly alt-tabs you over to look at lolcats.

Give those guys good machines that don't piss them off, and you'll find them a lot more productive. And not, like, 2% more productive. More like 2X more productive as a group and 10X more productive for some individuals.

Try it yourself one day and let us know how it works out for you.


Having worked as an indie dev for a decade and a manager of 150+ developers for a decade, I find this comment misguided. Here's "how it works out" for me.

I wrote code as fast on an Apple //c as on this late-2010 MacBook Pro 17". I'd argue faster, as context switching to something unproductive was actually a chore. For writing, Appleworks 2 on the //c was, if anything, more efficient than the latest MS Word, for the same reasons some hardcore devs prefer EMACS over Visual Studio.

In my experience, the single most meaningful programming productivity boost in the past 25 years is a second screen.

Running PageMaker on an SE/30 with an external Radius Pivot, for example, ran productivity circles around working the built-in screen alone.

But before second screens, devs managed to get by. In the Apple II coding days, an Imagewriter printout of code thus far taped to the wall behind the monitor served as the "second" screen, and physical books lying open around the desk served as reference tools. In many ways, I'd argue the simplicity and thoughtfulness of that approach was more efficient. It's as though the mind can keep separate threads for each type and physical location of media.

For horsepower, as the rest of the comments here point out, a team does want a source code and build system that screams, but each developer workstation could arguably be a text terminal provided it can handle a few windows.

The psychology of latest toys contributing to job satisfaction contributing to productivity is a separate argument. I have found giving dev groups machines tuned for LAN parties (latest graphics, high GHz CPUs) more relevant to productivity than providing machines tuned for compiling (striped raptor HDs, quad xeons).


My Apple II development workstation was actually two Apple II's - a II+ where I edited the code (because it had a Videx keyboard enhancer and 80-column board attached via a soft-switch to a 14-inch white-phosphor monitor I had to convince a CCTV company to build for me), and a IIe enhanced with a 12-inch green screen to run and test what I built (moved via floppies). That and a pretty-printed listing. The II+ motherboard video out was also piped to a 16-inch color TV via composite. I wrote a couple programs that used both the Videx 80-column text and the motherboard video at the same time.

My dual-headed Apple II+ was something to behold.


That's fantastic! Btw, do you know if that Videx board is what the Korean Apple ][+ clones used to make it do upper/lowercase? The Korean clones had that before Apple IIe came out.


Lower case could be done with a proper keyboard. The II used a 7-bit parallel ASCII keyboard and IIRC the II+ could use it. The Videx keyboard thingie would be overkill.


It really depends on your build and testing times. If you have to wait another fucking 15 minutes so the entire build system can compile the project, or 30 minutes to run through all of the tests before you can compile, it does kick you out of coding.


I find it's not the 15 or 30 minute builds that are a problem; I can usefully context switch to something else, get it done and come back in that time. It's the 5 minute builds that are the problem. If I know something is going to take between 2 and 5 minutes, chances are I'll crack open Chrome, hit "news.y<enter>" and lose the next 30 minutes until noprocrast kicks in.


You've polarized the issue pretty intensely here. A developer using the '10 MBP is not really going to be less productive than one using the '11 MBP. But that's where this logic leads.


I think you're straw-manning the premise. He's not talking about the difference between a '10 MBP and an '11 MBP, more like the difference between a mid-to-high-end '08 MBP running Leopard and a mid-to-high-end '11 workstation running Lion. Try going back to a 3-year-old machine for a week, and I think you'll find it's kind of annoying, and your morale will die the death of a thousand papercuts.


I'm typing this on a 3-year-old machine... I'm doing just fine :)

I should elaborate:

I'm running an almost 3 year old MacBook Pro. I do all of my development within VirtualBox (Ubuntu 10.11). I have it connected to a *gasp* single external monitor. I work every day on a MongoDB/Python stack.

It could be faster, sure... but I'm hardly unhappy with it.


I dunno, man. I do most of my ruby/node/c development on an 11" mba, which is about as powerful as an '08 mbp. Sometimes, I am annoyed by slow test suites, but overall I don't feel the machine is holding me back. If you asked me to work on a Windows box, I'd end the conversation politely.


I work on a pretty big C++ project and had a ~5 year old Thinkpad until about a month ago. I can't say that I notice a useful difference. Incremental builds mean that I rarely have to rebuild the entire solution, and when I do, even on the old machine, it's not a big enough deal that I would bitch about it.

I think that for most of us, a few gig of RAM, 2 cores and a 22" monitor (that was my setup last month) is enough that more horsepower wouldn't make us substantially more productive.


It is a total straw man. But tell me you don't know developers who would make the argument anyways. I've employed people who made similar arguments.


But that's a minor update of an already pretty good machine, especially if both are kitted out with SSDs and maxed out on RAM.

jasonkester mentioned "crap hardware" versus "good hardware"; a pretty minor revision of a machine likely won't move it from one to the other.


>But that's where this logic leads.

Clearly you get diminishing returns here. And in your example you selected the comparison points (MBP'10 vs '11) so high and close to each other that the effect is negligible. jasonkester on the other hand is comparing "crap hardware" and "good hardware"...there the effect is huge.


MBP = MacBook Pro, for anyone else wondering. Took me a minute.


Well the '11 comes with Thunderbolt, and that Promise RAID sure beats the other external drive options on that box.

// haven't tried putting the VM's on the Promise RAID yet


What's the point in having engineers who can't engineer because their tools are inadequate? Would you hire a carpenter to work on your office who offered to give you a 2% discount, because he has to stop several times a day to fix broken tools?

I find it bleakly amusing that so many blue-collar companies have no trouble putting a $10/hour worker in front of a $100,000 machine, while white-collar companies will do anything to avoid putting a $50/hour engineer in front of a $2000 machine.


Engineers that can't engineer, can't engineer. Are we talking about tools that can't do the job, or "the best tools money can buy" vs your average computer? I could care less if my carpenter has a DeWalt power drill or a Festool power drill.

Seems like something you could quantify pretty easily. Ars Technica or Phoronix or someone could build up a $1000 machine, a $2000 machine and a $10000 machine, build various open source projects on each, and publish the times.

IDE reaction time might be harder to quantify but I bet you could do it.

My hunch from 10,000 feet: you'll see an appreciable difference between, say, $1000 and $2000, but between $2000 and $10000 it won't be that substantial, or it could be attributed to a $300 upgrade (SSD vs. spinning platters) or something.

If the difference were that big, nobody would ask this question; everybody would have $12000 workstations on their desk at work and $5000 laptops to carry home, that's just how it would be.


That ignores the diminishing returns on additional staff. Also, your math is off: let's say one person costs $100k/year including benefits, Social Security, energy, hardware, office space, etc., and you upgrade every 2.5 years; that's $5k/person above the base cost of having any hardware/software.

Now, it can be close if you're comparing 150 people vs 151 people, but I can tell you better hardware would be more productive in that case.


> Now, it can be close if you're comparing 150 people vs 151 people, but I can tell you better hardware would be more productive in that case.

No you can't.


Give your engineers credits each year/month to buy a new computer, and give them a way to 'buy' extra credits. So some of them will wait longer, some will buy a new machine each year.

Give them the liberty to choose!


You're forgetting the cost of maintaining a separate machine type for every developer. When models are standardized, the support team can keep identical replacement hardware on hand for quick fixes. They can swap in an identical memory stick or GPU if yours has glitches. They can even swap entire machines by moving the hard drive. Having non-standard machines greatly increases this cost for companies with more than, say, 6 developers. And of course standard hardware typically means a volume discount and a significant support agreement with Dell, or HP, or Apple, or whoever else.

Let's also not forget that when developers are choosing their own machines, they're doing so on company time. It's certainly not cost effective to have developers doing comparison shopping on machines. And it's absolutely not cost effective to have developers putting together their own machines from parts (as some would happily do if allowed).


There are certainly benefits to having everyone upgrade at the same time, though. From a technical standpoint, it seems better to have everyone on the same setup so that you only have to deal with the same set of problems. You might also be able to get a deal for buying in bulk.

Additionally, it would be frustrating to use your credits too soon and then the guy next to you gets something twice as good one month later. So just play it safe by keeping everyone on the same playing field.


Or suddenly all the machines in the office are bluescreening at the same obscure graphics driver bug and nobody can do any work...


I've pitched this idea before, and the standard argument is that by doing something like this, you lose the tax benefits around capital expenditures that become depreciating assets.

However, I'm sure the tax benefit is small compared to the potential for increased productivity and output. But try explaining that to the average CFO.


But that would give up the really good Dell discount for buying 30 identical machines.


As a father of two, I heart this idea. This takes the budgeting headache away from the manager, and is completely fair.

It implies that every developer is responsible for his/her own workstation maintenance, though, and that might be a risk/pain you don't want.


Ask the team which they'd prefer.


Maybe you should just cut salaries, healthcare, or other benefits by 2% and see how well that flies. Benefits are there for retaining the best talent and maximizing that talent's efficiency. Treating PC hardware as any less a concern is ridiculous. To think of it another way: if you manage to save even 1.3 hours of developer time a week (say, 20 minutes off each daily build), the hardware has paid for itself.
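
A rough payback check on that claim, assuming the $50/hour engineer cost mentioned elsewhere in the thread and a hypothetical $2,000 upgrade:

    # Rough payback period for a hardware upgrade; both inputs are
    # illustrative assumptions, not figures from the comment above.
    hourly_cost = 50                 # $/hour, fully loaded engineer cost
    hours_saved_per_week = 1.3
    upgrade_cost = 2000              # $ per workstation

    weekly_saving = hourly_cost * hours_saved_per_week   # $65/week
    payback_weeks = upgrade_cost / weekly_saving         # ~31 weeks
    print(f"pays for itself in about {payback_weeks:.0f} weeks")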


The manager should ask the team, or himself, if having SSDs, twice the RAM, etc. would give them enough extra time to manage the builds.

OR, if Tom is having trouble with a module, maybe he would feel more comfortable handling CI...


To any development manager who wishes they could offer better hardware within their budget: If you haven't already done this, buy an SSD for every workstation. Nowadays, you can get 128GB for $200 or less.

Do that, and make sure the system has 4GB+ RAM, and 2 monitors at high resolution, and your hardware will feel brand new to the guys on your team.

I did this a couple years ago as one of my first acts after being promoted, and it worked great. I paid a bit more to get the drives that came with an upgrade kit (that is, an enclosure and imaging software). Gave them to each dev to take care of themselves.

I didn't have to wait on and work with the bureaucratic IT department, and a dozen SSDs was cheap enough I could put it on my card without needing CFO approval.

You don't need "The Best" hardware. In fact, it reminds me of the saying that "People buy horsepower, but they drive torque."

What matters to developers isn't how many FLOPs you can do, but how quickly can you load your VMs and start Eclipse and grep your local filesystem.


Echoing this. Most workloads don't need dual Xeons, or even more than 4GB of RAM, but an SSD and two big monitors can make even a low-end machine feel like an absolute Ferrari when you're not waiting for Outlook to launch or wasting time otherwise. SSDs are the best productivity upgrade you can buy: $100, maybe $200 per developer and suddenly no one's sitting around just waiting for their IDE to launch.


I'm not a developer, and I still think an SSD is the most significant marginal upgrade I've seen in the last 10 years. If you told me I could use a machine with Core Duo and an SSD or a modern Xeon and a standard HD, for almost all of what I do, I'd choose the former.


My five-year-old Macbook with a new SSD is very noticeably faster than my brand new Macbook Pro with an HDD. I don't know why people still use HDDs.

Don't answer, it was rhetorical.


I'll answer anyway since it's an interesting question.

Space is still cheaper on HDDs, and there are data volatility concerns. Is the latter still an issue with SSDs, or have they reached the point now where you can store your music collection on them and not worry about losing it in two or three years?


It's probably just space, as volatility isn't a very good excuse. Sure, it's worse with SSDs, but you need to always have backups.


If your backup strategy doesn't account for your main disk failing at any given time you've already lost.


You should always worry about losing it, traditional spinning disk or not. That said, my SSD is doing just fine, but I still have a full backup of it.


With Time Machine, there's no reason for anyone on OSX not to have backups, it's so incredibly easy to do, and you set it up once and forget about it until you need it.


My original macbook has an SSD in the optical drive slot with all applications and the OS. In the HDD bay is a large hard drive containing all user data.

Fun oddity after switching to SSD: If the laptop gets REALLY hot from long term summer usage, in the past I would just slap a cooling pad under it to get the fans to stay quiet. Now, the cooling pad somehow causes the SSD to freeze, so I have the cooling pad sitting only under the left half of the machine, and all is well.


Yes, SSDs have a write limit, but the average time it takes to hit it is longer than the mean time to failure of HDDs.


I have mixed feelings about whether the best hardware yields better productivity.

I used to work with a top-of-the-line MacBook Pro, with 2 x 24 inch monitors attached, an ergonomic keyboard, a wireless mouse and all that crap.

Now I work with a $500 ASUS that doesn't even have a backlit keyboard, with Ubuntu Linux installed, and I'm using it directly (no external monitors or keyboard) since I'm on the move a lot.

As far as my productivity is concerned, I still get things done at the same pace. Hardware is not my bottleneck.

Of course, this has more to do with my other preferences. I like keeping my toolchain as light as possible. I don't use bloated IDEs, even when working with languages that require an IDE, such as Java. I don't do heavy processing often and when I do, I prefer offloading that work on AWS. I am also proficient with manipulating virtual desktops.

As far as startups go, I think that spending money on expensive hardware is not frugal spending. If you're my boss, I would prefer a bonus to the latest and shiniest crap -- if I want shiny crap, I can buy it myself while respecting my own priorities.


My meager single-core 1.6GHz laptop has been fine for development, until the time I had to use Eclipse instead of Vim. Why, oh why, does it freeze while I'm just writing code!


Eclipse shows you build errors by compiling your code in the background as you type. I believe you can disable this, or make it happen less frequently. This is the cost of red wavy underlines for compile errors. I don't remember the exact setting, as I haven't used Eclipse in years, but it's in there.


But the Right Thing™ would be for Eclipse to lower the priority for such checks, so that they wouldn't conflict with the interface. I often have processes using all my available CPU time, but since they run at a 'niceness' value of 19, they don't affect the interactive tasks.
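
For what it's worth, the OS-level version of that idea is straightforward; a minimal sketch that launches a heavy background job at niceness 19 so it yields to interactive work (the make command is just a placeholder):

    # Launch a CPU-heavy background job at niceness 19 so it doesn't
    # compete with interactive tasks. The build command is a placeholder.
    import os
    import subprocess

    def run_niced(cmd, niceness=19):
        # preexec_fn runs in the child process just before exec, so only
        # the child (and its children) get the lower priority.
        return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))

    proc = run_niced(["make", "-j8"])
    proc.wait()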


It's easier said than done -- whatever Eclipse is doing in the background, it also has to update the interface to reflect the latest results, which means synchronization with the UI thread.

Also, a big project is hard to optimize, especially when threads are involved; so whatever shortcuts you take to make it work at first do come back to haunt you in a big way, and you can't fix it easily.

For example, people are telling me that Eclipse should run fine, but on my laptop it freezes a lot, even for small projects, while IntelliJ IDEA works flawlessly. So I'm not sure what the problem is, but it does something on my box that it shouldn't -- and throwing more money at hardware (instead of searching for a better alternative) just seems like the wrong approach to me.


I am under the impression that async UI updating is a rather solved problem. I'd wager Eclipse has even done it since... probably forever. No, I suspect the real issue is that it doesn't 'renice' its compilation processes correctly or sufficiently.


Not necessarily -- if you're pushing thousands of update events in a short time, those events will also trigger a waterfall of other events from the UI components, and so the effect gets multiplied; the end result is that it doesn't matter if the operations themselves are async, as the UI will get sluggish.

Another problem is that some operations are blocking in nature. Consider the case when an intellisense dialog is triggered (after typing a dot or pressing Ctrl-Space) -- to give you intellisense, Eclipse has to compile your code, make guesses about your intent in case the code has errors (since intellisense also has to function in case of simple errors, otherwise it is useless) and then present you with a dialog with options available for completion. This is not something that can be done async.

Of course, this opinion was just some random guess. I have no idea why Eclipse has the tendency to freeze the UI on my machine.


Disable "Build Automatically" under the Project menu.


Why do you need Eclipse?


The project (an internal tool) had a dependency on an Eclipse plugin and platform.


For me, long compilation times put me out of "the zone". If compilation takes too long I'll start reading e-mails or web browsing. When compilation is done my mind is elsewhere and I have to get into the problem solving "zone" again.


This is very true; that's why I hate the Scala compiler and when working with Java I take special care to have a lean and mean compilation strategy.

For example, with Java I use manually written Rakefiles (I prefer them over Ant since I have more control), I make sure nothing compiles unless files have actually changed, and if the project is getting big I start separating functionality into multiple projects, producing multiple JARs as a result.
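
Not the Rakefiles being described here, just a sketch of the "only recompile what changed" idea using mtime comparison; the directory layout and javac invocation are assumptions:

    # Recompile only the .java files whose .class output is missing or older.
    # Directory layout and the javac invocation are illustrative assumptions.
    import os
    import subprocess

    SRC_DIR, OUT_DIR = "src", "build/classes"
    os.makedirs(OUT_DIR, exist_ok=True)

    def stale(java_file):
        rel = os.path.relpath(java_file, SRC_DIR)
        cls = os.path.join(OUT_DIR, rel[:-len(".java")] + ".class")
        return (not os.path.exists(cls)
                or os.path.getmtime(java_file) > os.path.getmtime(cls))

    sources = [os.path.join(root, f)
               for root, _, files in os.walk(SRC_DIR)
               for f in files if f.endswith(".java")]
    changed = [f for f in sources if stale(f)]

    if changed:
        subprocess.run(["javac", "-d", OUT_DIR, "-cp", SRC_DIR, *changed], check=True)
    else:
        print("nothing to compile")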

Then, I'm using Emacs, and in Emacs I can start a build whenever I hit "Save" on a file. And in case of compilation errors, Emacs even highlights the errors for me.

You have to work on it a little and you lose time on the actual build process, but you can achieve a lean and mean setup (unless the compiler really sucks).

Of course, this is the advantage of an IDE - it takes care of annoying details for you; but then you have to put up with all the bloat that brings. And for humongous projects, your IDE will choke anyway, even if you have the latest state-of-the-art hardware; try loading the Firefox codebase in Eclipse CDT or in Visual Studio sometimes.


In the 'old days', compile time was a chance to print out your code on fanfold paper and do a top down thoughtful code review, refreshing the map of the whole project in your mind.

It's quite rare these days for developers to literally see their whole program's code at once, and I think we're the worse for it.


In the 'old days', you must have been writing small programs. Printing a 1K-line program on paper might give you a nice perspective on your code. Printing a 10K-line program on paper is a waste of paper. Printing a 100K-line program on paper is ridiculous. Beyond that it just gets worse.


> In the 'old days', you must have been writing small programs.

On the contrary, 10,000 lines is still only 150 pages at 66 lines per page, and fanfold paper flips through easily.

I've read through several projects that took at least three reams of fanfold paper. I'm not saying that was fun. Fortunately one didn't generally have to read through the whole project, only the module one was working on.


I don't think it's useful to attempt to hold 150 pages of code in my head. Even with a long compile time, it's not possible to do more than just barely skim that much code.

I have printed code and reviewed it before. Sometimes it's useful for small programs or classes. I don't think it's useful to waste 150 (or more) pages to print an entire large program, though.


> I don't think it's useful to attempt to hold 150 pages of code in my head

Not sure it was about holding the actual code in one's head so much as the structure, flow, or shall we say, "plot".

The Chronicles of Thomas Covenant runs 4948 pages, Song of Ice and Fire is 4195 pages so far, and even LotR is 2144 pages.

This is one of the reasons I think there's a high correlation between great developers and developers who love history -- leveraging the ability to envision and hold a complex sequence of interlocking details in mind.


I feel like you're really reaching for a comparison here. You're not reading LotR in the 10 minutes it takes your code to compile. You're not even skimming it. The fact that your code is shorter than LotR isn't really meaningful. You also aren't reading 150 pages of code while your code compiles, and if all you want is to review the high-level flow, skimming the code is going to miss a lot.

I just can't see the value in printing 150 pages of code to barely skim it. Especially since those 150 pages will be increasingly out of date as time goes on. It's just such a waste of paper.

I'm not sure where you get your "great developers" and "developers who love history" link, either. Liking history has nothing to do with coding. Nor does it have anything to do with reading fantasy novels. And none of the history buffs I know are even coders. This is such a random tangent.


> "I'm not sure where you get your "great developers" and "developers who love history" link, either."

I provided credentials above. I've been managing hundreds of developers over the past decade and working as an independent developer for a decade before that. (And as a hobbyist developer the decade before that.)

> "Liking history has nothing to do with coding."

My experience hiring and managing hundreds of devs indicates the exact opposite. You may have found differently, but I will continue to focus on hiring people who find learning a rich tapestry of interconnected context fascinating, and preferring to hire those with history (or linguistics or other complex humanities) degrees with formal CS electives over those with pure CS degrees.


Sorry, I should have been more clear: Your personal anecdotes are not sufficient to establish any meaningful connection between these two. Moreover, this is entirely tangential and has nothing to do with what we were discussing.


> "Your personal anecdotes are not sufficient to establish any meaningful connection between these two."

“Now, I don't want to get off on a rant here ...”

By your definition, most science is "personal anecdotes" if building knowledge through systematic observation and study of hundreds of test cases is merely "personal anecdotes". (See "empirical research"[1].)

By contrast, you wrote "Liking history has nothing to do with coding" but you supplied nothing whatsoever to substantiate that statement, just as you supplied no basis for the statement that working closely with hundreds of developers is not sufficient to establish a connection. Ok, I've managed hundreds, and we've collaborated with thousands. How many developers would be enough to establish any meaningful connection?

You challenged "I'm not sure where you get your link" and I provided the basis for that link: observation of several hundred developers I have hired and employed. That's a reasonable number considering many studies use pools of just a couple dozen test subjects.

Throughout this thread, you have countered remarks I've based on experience, with your own unsubstantiated assertions, sometimes insulting in nature. For example, you wrote "In the 'old days', you must have been writing small programs" when the opposite was true.

Just because you "can't see the value" in a printed code review back in the 80's, or haven't noticed a link between coders with an appreciation for history and an exceptional ability to architect software systems, that doesn't obviate the need to provide at least as much foundation for your arguments as I've provided, particularly if you're trying to call me out for lack of basis.

You said this is "entirely tangential and has nothing to do with what we're discussing", yet this concept of holding the complex in mind is precisely what I opened with, and the theme I've stayed with.

In my experience – which I've scoped so readers can decide for themselves if it's relevant – reading well, taking time to contemplate and be thoughtful, remembering and understanding complexly woven tapestries of information (whether multi-volume literature or world history or computer code), and being able to read long code (or threads) and form a structure of it in one's mind, are all skills signaling good developers.

In my experience, a love of reading, stories, and history in particular, signals a desire for learning, a sense of proportion and place, and a respect for 'the shoulders of giants' likely to help a good developer become great.

“... of course, that's just my opinion. I could be wrong.”

1. http://en.wikipedia.org/wiki/Empirical_research


I have serious doubts about the quality of your "systematic observation and study of hundreds of test cases". I don't believe at all that you've been systematically tracking which of your programmers are history buffs and how it correlates to performance. I think you're a history buff and so you assume that it must correlate meaningfully, just as programmers who are into music, or art, or whatever else do.

No number of developers is enough to turn offhand observation into meaningful correlation. That would require measuring and tracking. It would require more than your gut feeling and confirmation bias. I frankly find it worrisome that you actively choose history majors over CS majors for development work. That says nothing to me except that you have a personal bias.

I stand by my statement that you must be writing small programs if you're willing to print them in entirety on paper. You can find that insulting if you want (it wasn't intended to be), but it's a fact. 10K lines is not a large program in modern terms (though I say it's too large to waste the paper on). It's definitely not large when your team involves multiple developers.

As for code reviews in the 80s, the value is in the review. Whether it's printed on paper or not isn't really very meaningful. It can be nice to have a paper copy sometimes, but it's just that, nice. It's not really a substantive change.

And yes, this history argument is extremely tangential. You did not start with that. You started by saying that you used to spend compile times reviewing the printed code. That has nothing to do with being a history buff. Someone could love doing paper code reviews and hate reading about history. And any link that might exist was certainly not established before you transitioned into talking about history buffs being good architects.


> I have serious doubts about the quality of your "systematic observation and study of hundreds of test cases".

You still haven't provided your basis for positions, and you're still littering your responses with assumptions or accusations.

> I don't believe at all that you've been systematically tracking which of your programmers are history buffs and how it correlates to performance.

You are mistaken. When, out of hundreds of hires, a handful match the criteria, it's easy to look back at data collected at hiring time, and find out if there are correlations (not causations).

> I think you're a history buff and so you assume that it must correlate meaningfully, just as programmers who are into music, or art, or whatever else do.

Wrong. History isn't a primary interest. I enjoy a variety of things more. I believe most are unrelated to ability to architect software.

> No number of developers is enough to turn offhand observation into meaningful correlation. That would require measuring and tracking. It would require more than your gut feeling and confirmation bias.

See above. Your assumption was mistaken.

> I frankly find it worrisome that you actively choose history majors over CS majors for development work. That says nothing to me except that you have a personal bias.

Preferring to hire a History major with a minor in CS over a pure CS major typically results in better rounded individuals more capable of software architecture, dealing with clients, and collaborating with peers. Unfortunately, out of hundreds of hires, again, only a handful have fit that bill. But none of those who did fit that bill had to be let go.

This was data collected at interview time, and in fact, on the first such individuals, I was as skeptical as you. Looking back at the collected hiring data about outperformers revealed this correlation. Since then, I've confirmed this curious correlation with several peers.

> I stand by my statement that you must be writing small programs if you're willing to print them in entirety on paper.

Again, you assume that I was printing the large programs. I was not. I was involved with software developed and used by scientific research organizations, hospitals, and universities. They would hand me a box of fan-fold paper (once even on a hand truck), and say, "Here's the code." I found learning the overall picture faster reading through the stack than scrolling the code on a CRT, particularly thanks to the ability to use a highlighter and Post-Its.

Today, when confronted with a similar task, I prefer an iPad and a good reader with markup tools.

> As for code reviews in the 80s, the value is in the review. Whether it's printed on paper or not isn't really very meaningful. It can be nice to have a paper copy sometimes, but it's just that, nice. It's not really a substantive change.

Given today's technology, mostly agreed. However, research suggests tangible artifacts cement concepts more firmly in our organic brains than digital exposure alone.

> And yes, this history argument is extremely tangential. You did not start with that. You started by saying that you used to spend compile times reviewing the printed code. That has nothing to do with being a history buff. Someone could love doing paper code reviews and hate reading about history. And any link that might exist was certainly not established before you transitioned into talking about history buffs being good architects.

You're right, I didn't bring up history first, I brought up the concept of understanding "a map of the whole". To me, that equates closely with a long multifaceted narrative. You brought up the inability to hold a few pages of code in one's head, and I countered with literature and history. Both code and history are something like the Bayeux Tapestry, where the local is most useful when understood in context of the whole. Come to think of it, the word for our oldest long historical texts, and for what you do reviewing code on the screen, is the same: scroll.

My ravioli's done baking. Enjoyed the discussion. Cheers.


To be fair, you haven't provided any legitimate basis for your claims about the link between history buffs and software architects. I'm just saying that your claims are unfounded, and so I'm not making any claims that demand support. Your claim that interest in history is correlated closely with ability to architect software is not obvious prima facie, so it demands support to be believable.

I also don't think your handful is a very large sample size, regardless of what correlations you think you see. Honestly, I can't imagine how you're even attempting to gather this info. Are you just randomly asking people if they've read 1776 during the interview?

But no, hiring "hundreds of developers" is still not the same as actually measuring. I do not believe that you have files that track how history-oriented your developers are (make them take a survey?) vs how productive they are, so that you can find a proper correlation. A proper study of this might yield a strong correlation (though I doubt it), but it would require a lot more than casual observation during your hiring.

I did assume that you were the one printing the programs, because that's how I read your reply: "In the 'old days', compile time was a chance to print out your code on fanfold paper ...". If that was a misunderstanding, then I guess it changes the situation. Sure, if someone's already handed you a stack of printed code, why not look through it while compiling?

Hope you enjoyed your ravioli. Cheers.


I think about what the Old Masters programmed with just a single 80x25 green-screen terminal, and I weep for programmers today.


The expectations have increased over time. Batch processing banking transactions on a mainframe is a lot easier than writing a banking website today.


Not sure if you're being sarcastic... Expectations might have increased; quality hasn't.


Quality and scope are always at odds. Comparing a MUD to an MMORPG, you might find programmers producing similar amounts of bugs, but individual productivity has clearly increased.


If you define individual productivity as the amount of misuse of hardware resources, then yes. In all other cases, no.

A few pretentious programmers wanting fast hardware to play with is one thing, but demanding it and proclaiming that programmer productivity has increased and thus all programmers deserve the best possible hardware is laughable at best.


Feel free to limit yourself to a green-screen terminal, but good luck building a GUI.

Anyway, fixating on HW like it has meaning outside of the software that runs on it is a common misconception, and assuming there is any reason to buy a PC with less than $200 of RAM is simply premature optimization. Today that works out to 16GB; in the past that may have been less than 16KB, but that does not mean 16KB was enough, just that it was a point of diminishing returns.

PS: Assuming that the same assumptions will always apply even as HW gets 1000 times as fast is ridiculous. I know someone who prevented $1 million in computing hardware from being purchased with a few weeks' work. Today doing those same optimizations would be a waste of time because computing power has literally increased by that much.


Should we build buildings with primitive tools that we used hundreds or thousands of years ago? I mean, builders back then built things just fine, we're so spoiled with our power tools and motor vehicles.


What they built lasted until now, that's how we know about it. How much modern stuff would last even a hundred years, let alone a thousand?


Exactly what has lasted a thousand years without maintenance? I live in a country with very old constructions (Portugal) and all have had regular (heavy) restoration works across the years.


I disagree. I think the overall quality of software built by software professionals has increased over time. It's the natural maturation of an industry (although I believe we are far from mature).


And often connected over a 300 baud dial up.


I offload to AWS too for personal stuff, but is there a way for a company or organization to manage a team of developers spinning up new instances for testing/compilation/etc.? The costs of that could be highly variable, depending on the instance type and the nature and frequency of the work it's used for.
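
One possible sketch of that, using boto3 and assuming instances are tagged with Owner and Purpose tags at launch; this only gives visibility into who is running what, it doesn't enforce budgets:

    # List running EC2 instances tagged as dev boxes so someone can keep an
    # eye on who has what running. Tag names are assumptions; requires boto3
    # and configured AWS credentials.
    import boto3

    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:Purpose", "Values": ["dev"]},
                 {"Name": "instance-state-name", "Values": ["running"]}])

    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            print(inst["InstanceId"], inst["InstanceType"],
                  tags.get("Owner", "unknown"), inst["LaunchTime"])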


We have a single LTSP server for the office which everybody works on. It's a 12-core, 44GB server with about 1TB of platters in various configurations attached. I think it cost around $8000 all in.

We do a lot of VM spinning, and it's a fantastic way to do that sort of work, plus managing a single server is much less overhead than managing 6 workstations.


So I shouldn't buy a new MacBook Air for the fun of it? I guess the answer is no for PHP programming :)


That wasn't my point -- do buy a MacBook Air just for the fun of it, if you have the money.

Whatever makes you feel better if you have the cash; plus, with a MacBook Air the portability gained is a great bonus and you may need it.

I was arguing about this notion that companies should buy the latest and most expensive hardware as that supposedly leads to better productivity, but I have my doubts about that. Better hardware does give better productivity, but it depends a lot on what you do with said hardware -- if developing PHP-stuff, no, you don't need a 20-core processor with 50 GB of RAM. You don't even need a MacBook -- any crappy laptop will do.

It's also debatable if you need big dual monitors and an office that looks like the cockpit of a plane. Yes it's cool. No, it won't help with your ADHD problems.


Good points. Regarding the dual monitors, I find being able to move quickly between workspaces in Linux/OSX to be a good alternative (I use three workspaces for programming).


I feel the same way. I have a Dell Inspiron that cost me something like $500 two years ago, and I have no problem doing Python development on it with vim. If I had a compile step I might feel differently.


What language(s) do you work in?


How much memory do you have?


3 GB


You're going to get irate at the next point...

I use the Eclipse IDE on VMware Ubuntu on a 32-bit Windows Vista computer, with no claims of a bloated IDE slowing me down. It's not the IDE but the operator; more often than not it's an improper eclipse.ini config that produces such bias, rather than knowledge.

Shouldn't a Java developer know how to use VM settings to set up their IDE? It's about like doing J2EE but not knowing how to do the VM settings on the server, is it not?

--rant off--


My point about bloated IDEs was to highlight my specific setup and preferences, which may not apply to your setup.

I also think that proper configuration should be preferred over hardware-investments. Heck, I lived in a time when optimizing Config.sys and Autoexec.bat was required for playing games.

I do get annoyed by arguments that take a small part of what I said and comment on it out of context. Don't take this the wrong way, but your rant is not warranted -- if Eclipse floats your boat then I have no problem with that.


My company proactively offered to upgrade hardware for whoever needed it. They even bought a huge monitor for everyone, which I didn't need and which doesn't fit on my desk.

But a $99 software license I asked for 4 months ago that would make me more productive every day? I'm still waiting for it.

One time I was at a large company that gave all developers ridiculously fast and powerful machines. By far the highest specs I'd ever used. But the machine crawled because of all the million background processes IT put on there, and the crappy tools in our software chain.

Software is so much more important than hardware.


Is that $99 for tasktop? I've been waiting a week for that one, and will just buy another license myself if it hasn't come through by Thursday. Just as I did for Fusion and Igor...

Now, if I could just afford a CX1...


No, it's an SSH client.


Ah, interesting - never considered paying for one of those...


SecureCRT has some really nice features. I spend a lot of time in terminal sessions and never liked PuTTY.


Actually, yeah - looks really nice. I know some ME/ChemE types that might go for it.


I spend 90% of my day in terminal sessions (Python development on remote servers), and while I like PuTTY, I'm open to anything that can help me develop faster. What features of SecureCRT do you feel would help you the most?


SecureCRT has tabbed windows, so I can have 5 sessions on one machine in one window, 5 sessions to another machine in another window, and so on. I can find and navigate to the right session with the least effort.

It can automatically keep logs of all sessions. Grepping through these logs to find a complicated bash command or sql query or the output of a query/command that I ran in the past has been invaluable.

And there are little things it handles better than other clients I've used. It restores broken connections better. The session profile management is easier to use. Quickly opening an SFTP tab for the current session is handy. Resizing a window does the right thing. It has a million other options, far more than other clients, but doesn't feel bloated at all.

I'm sure other clients have some of these features but this is the only one that has what I think is the best implementation of all of them, and they add up to a great experience.


Thanks for the feedback. Looks like they offer a free trial, giving it a shot.


If $99 is a little steep (and for me, it was), PenguiNet is a pretty good, mostly-modern SSH client. I purchased a license and it's been worth it.

http://www.siliconcircus.com/


Doesn't it seem that everyone is avoiding the "elephant in the room" here? Virus scanners are a severe drain on productivity. Perhaps I'm in the minority by running a Windows box, but when I open up Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645) it sickens me to see how much I/O is going toward stupid virus scanning operations. Not to mention the 500MB of real memory that the various scanner binaries are consuming. My virus scanner consumes more memory and I/O than my IDE (Visual Studio 2010).

Maybe I'm in the minority here, but I would expect that a developer should know enough about computing to not get a virus. This, of course, is assuming the organization has a competent IT department and is "securing the perimeter" with firewalls/proxies, and only allows approved software to be loaded on machines.

Sure, this would be a radical approach to IT, but I think it's a policy that needs to be implemented.


IT person here who has tried this. Completely agree: AV tools are pigs and they never seem to stop viruses. Intuitively, virus writers must test against common AV tools before releasing.

A locked-down network without AV on the endpoints worked better in my experience, and the users loved it. The only reasons I still use AV for some offices are these:

1. A manager will inevitably ask "omg why wasn't there AV on this user's station?! Are you negligent!?"

2. Locking down the network can get in the user's way too. Just because I have good network practices doesn't mean that the software you require does too. All it takes is one business need/damn-the-risks moment, and you are left scrambling.


What's interesting about this is that you now have a new virus... the virus scanner itself. They are difficult to uninstall and take up CPU and RAM, much like a real virus.


>500MB of real memory that the various scanner binaries are consuming.

I think you need a new AV. MSE uses <50MB and does an acceptable job. Another 5MB for a firewall and you're set.


I worked for a huge company in their IT dept. We tried to convince our management to upgrade all the development machines using some of the arguments already mentioned in this thread. They did not want to believe that a hardware refresh would actually improve productivity ("show us the ROI"). A couple of us decided to build a tool to gather some metrics on how long it took to build and deploy our app on localhost (if you're interested see http://lopb.org). We then compared results before and after a simple RAM upgrade on a few machines. We were able to show hard numbers to support the claim that better, faster hardware would save time waiting for build & deploy to complete.

Although building in less time did lead to happier developers, it did not lead to more features getting built in the given time frame. We did eventually all get RAM upgrades at least, but the development process and technology stack we were using were the real time sucks that we could not fix with better hardware.
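For anyone who wants to run the same before/after experiment without building a whole tool, the core measurement is easy to script. A rough sketch in Python (the build command and run count are placeholders, and this is just the general idea, not what lopb.org actually does):

    import statistics
    import subprocess
    import time

    # Placeholder build/deploy command -- substitute your real one.
    BUILD_CMD = ["./build.sh"]
    RUNS = 5

    def time_build() -> float:
        """Run one build and return its wall-clock duration in seconds."""
        start = time.perf_counter()
        subprocess.run(BUILD_CMD, check=True)
        return time.perf_counter() - start

    if __name__ == "__main__":
        samples = [time_build() for _ in range(RUNS)]
        print(f"runs:   {RUNS}")
        print(f"median: {statistics.median(samples):.1f}s")
        print(f"mean:   {statistics.mean(samples):.1f}s (stdev {statistics.pstdev(samples):.1f}s)")

Collect those numbers on the same machines before and after the upgrade and you have your before/after figures.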


I've made this comment before on here. I purposely develop on older hardware because of performance concerns. Pretty much any code you write performs well on an i7 but will it perform well when it gets to your clients who are all stuck running Celerons? I always keep my development machines back a few levels so that I know if it runs well for me, it will for the end user.


Surely the best way to do it is to develop on the best machine available but then occasionally test it on inferior hardware just to make sure you're still on track?

Intentionally crippling the entire dev environment seems a bit drastic to me.


I don't see it as crippling at all... I don't develop on a 486 :) But you don't need to sit on an i7 either, which was the thought in the article. If your target audience is using typical hardware, then most of the time you can easily use typical + 1 to develop. Then you can spot issues as they arise, not later when you kick to the test environment. The more you can catch in-process, the less you have to re-architect later. Of course every situation will vary. Just my thoughts anyhow.


That doesn't really work when you have a complex system that spans servers, clients, databases, custom libraries, etc. And a large test suite.


That's often my thought too. Surely we can have the best of both worlds though? How far has virtualisation come in _simulating_ slower machines, whether it be cpu, network, memory or storage constraints?


Not good enough. Caches are the killer. A coworker of mine committed code that ran fine on his machine, but was rejected by our nightly tests on older machines. It turns out it wasn't that long ago that sub-1MB caches were standard, but his machine had 8MB of cache. That meant about a 20x drop in performance for this algorithm on the older machines (though some of that was also ALU performance).


In a lot of organizations, software development is secondary, so the machine you get is going to come out of some system that does a mediocre job of procuring machines for salespeople and secretaries. If you're lucky you get a new mediocre machine, if you're not you get a hand-me-down laptop from a salesperson who couldn't sell.

It doesn't make any sense, but 70% or so of people who hire developers don't seem to look at this rationally. On the other hand, they usually don't provide you good specs either.


Even in shops where software is the business I've been given a standard Dell laptop for development. The IT folks didn't want to bother with procuring different laptops for different business functions, so everybody got the same thing.


What constitutes 'best hardware'?

I have a friend in high finance whose desktop is a (dual?) quad core with 12GB RAM, which he regularly maxes out. I have a colleague who writes firmware on Windows XP, which, with his various drivers and Eclipse problems, causes a BSOD about once a fortnight. I keep offering to get him a new computer, cards, whatever, and he refuses because of the cost to him of getting his environment set up 'just right' again - he's quite picky (at my previous workplace, a digital circuit designer on six figures didn't want to move from his PIII to a Core 2 for the same reason).

Another colleague runs a quad core on an SSD and complains that his CAD program runs slow... but when we run it through its paces, it doesn't seem to hit any bottlenecks bar the initial load (which isn't his complaint); it just 'feels' slow.

as with all things computery, the answer is: "it depends"


>I have a colleague who writes firmware on Windows XP, which, with his various drivers and Eclipse problems, causes a BSOD about once a fortnight. I keep offering to get him a new computer, cards, whatever, and he refuses because of the cost to him of getting his environment set up 'just right' again - he's quite picky (at my previous workplace, a digital circuit designer on six figures didn't want to move from his PIII to a Core 2 for the same reason).

I've seen this before (and was guilty of this kind of thinking a few times) and I believe this is simply misguided. No matter how good an "old-school" developer someone is, she needs to keep her tools fresh. To me this is no different from having a very manual build or deployment process (maybe not as bad, but still).

I believe when it comes to HW + toolset for development, you simply must consider:

-Computers break - you need to be able to set up a new one in a matter of hours (or tens of minutes). Maybe it does not need to be this drastic everywhere, but if your computer breaks and it takes you a few working days to get it just right, something is wrong - you need to automate it (see the sketch after this list) or reconsider the tools (which may not always be possible, especially in the embedded/hardware design world). If you get this right, upgrading will not be (such) a problem.

-You need to keep up with the tools - maybe more than you need to keep up with the libraries. By this I don't mean only the newest versions, but constantly being on the lookout for better alternatives - blogs, forums, etc. There is simply no excuse for not doing this if you are a developer, other than laziness. This will also help you get rid of stuff on your machine you don't actually need, because you may end up with better alternatives. I am a firm believer in getting rid of clutter in your life.

Not all devs in all companies can follow these, of course, due to policies, the nature of work, "legacy stuff" and whatnot, but it helps a lot if you can, and point 2 means you are learning - so you are not bored.
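To make the first point concrete, "automate it" can be as unglamorous as a script that installs a declared list of packages and pulls down your dotfiles. A deliberately tiny sketch in Python - the package names, the use of apt-get, and the dotfiles repo URL are all assumptions for illustration; a shell script, Puppet, or Chef would do the same job:

    import subprocess

    # Illustrative list of what a fresh dev box needs -- adjust to taste.
    PACKAGES = ["git", "vim", "build-essential", "openssh-client"]
    DOTFILES_REPO = "git@example.com:me/dotfiles.git"  # hypothetical repo

    def run(cmd):
        """Echo and run a command, failing loudly if it fails."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def provision():
        run(["sudo", "apt-get", "update"])
        run(["sudo", "apt-get", "install", "-y"] + PACKAGES)
        run(["git", "clone", DOTFILES_REPO, "dotfiles"])

    if __name__ == "__main__":
        provision()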


I still use Win 2k3 on my work laptop. I've been using this OS since RC2, and I'm way more productive on that old OS than on W7. I tried customizing a W7 install to look and feel just like it and failed to get the same feeling. I use W7 at home; it's very nice for entertainment, but I just can't get into 'the mood' with it.

For the other applications, I went for the 'portable' version of everything I need. This way I can move from one machine to another without spending days setting up everything - I just copy the 'tools' directory and I'm ready.


You should ask him to look into the disk2vhd tool. It lets you make a VHD of his current setup, which he can run on the new machine as a VM until he's comfortable setting up the new machine to his requirements.


I don't care so much about the hardware as long as it does its job and there are no crazy wait times. Having good and recent software on the other hand is very important to me.

I simply hate having to put up every day with bugs that were solved five years ago - especially when there is no reason for it beyond the slowness and conservatism of the IT dept.


My philosophy.

Buy a nice monitor (2 if you can afford it).

Buy whatever hardware is reasonable for ~1k or less. Repeat yearly.

This will always keep the developer with a good machine without spending too much money. With the way HW is nowadays, it's possible to go 2 years on each machine.

The story changes if you need laptops. After using LOTS of laptops over the years, if given a choice I'll only use MBPs now.


I concur with all the above. It doesn't take a fortune to keep developers in really nice machines. I do prefer to get my developers laptops, though (unless they explicitly prefer not to). Convenience is just too nice (not to mention the occasional time working at home always pays for the extra cost).


I agree and will personally never buy another desktop. With laptops I budget them to last 3 years as my primary machine. I've owned 2 other MBPs, both of which are still in use by the people I either sold them to or gave them to. The original G4 PB is still kicking strong with a guy from work, my mom loves my old SR MBP, and my current i7 runs great to the point where I could see it lasting 5 years by simply replacing the HD with an SSD when prices are right.


Something I always wanted to do as a manager was to offer not just great hardware, but essentially an allowance. That is, the developer is expected to buy and maintain their own hardware, but they get a budget. Say, $3000 at signing, $1500 annual refresh, and you have to pay it back if you quit or get fired for cause within the first 12 months. Coder gets to use whatever hardware they please, and keep it. I have predictable support costs, and in the grand scheme of things, it's a pretty cheap perk.

Never could convince my superiors. But maybe when I start my own...


Sounds like a good idea, but I would not want to risk getting stuck paying $3,000 for a machine I don't need anymore.


Don't... need? What? ;)

So, if you already have hardware you like, use that and take the money as bonus. Or get a big monitor.


That's a very nice idea, but it works only for 10% of the developers. Not all the developers are hardware geeks.

I usually upgrade my own machines, but I have to answer awkward questions from my managers (ranging from "why the hell do you need an internal 3G modem? don't you have the USB stick?" to "can you really have two hard drives in a 15" laptop?"). Sometimes I just buy the hardware and ask for a refund later; it's easier to prove that it's useful.


C'mon, everyone knows enough to buy a laptop. They would do it for themselves anyway. Even if that's just going to the Apple Store and saying "give me a laptop" (which BTW is what I do, I have no pretensions about knowing anything about laptops).


If you're working in an in-house IT dept. then giving the staff of a cost-centre better equipment than the "people who make all the money" is a big issue.

If you're working in a big company then central IT probably have a standard PC supply agreement and standard image and will oppose allowing anything non-standard on the network until someone signs off on 2 or 3 extra support engineers to "support" this non standard stuff.

If your company isn't making much money, any capital expenditure like this is hard to justify.

So, you really want to work for a small software company that's making lots of money.


Another important piece of 'hardware' that doesn't get enough attention is the chair & desk. I believe that having an ergonomically designed, adjustable chair and desk is as important to programmer productivity as the PC specs. In my current job I have a bad chair/table and it is breaking my back. This issue probably also gets worse with age. Investing in good furniture might also save the company on health care costs; from personal experience, I have spent quite a bit on fixing my back.


From the other side, as founders, when did you switch everybody over from using their personal laptops to company laptops? A funding event? Certain size of the team?


Even before paying for salaries, we buy hardware for founders or employees as needed -- early on, it's pretty flexible (I got a desk and chair for home so I could WFH; my cofounder got a new 2011 MBP to replace a 2009, but since I have a 2010 of my own, I didn't feel any need to upgrade myself).

One of the big advantages of being a small startup is flexibility -- no need to have policies for stuff like this, just handle it on an ad hoc basis.


Am I the only one who thinks developers asking for good hardware is an extremely bad idea? All the greatest software ever written was developed on extremely slow hardware (especially by today's standards). Fast compile times are an oft-cited reason, but you can always work on the program while it compiles. Are 'developers' these days really so perfect and busy that they need to build their 10GB codebase in 10 seconds? And they have /nothing/ else to do while it builds?

In my opinion, all programmers should be given the bare minimum hardware to program on. This way we can eat our own dogfood and hopefully reduce program bloat. The primary reason programs suck these days is that 'developers' have terabytes of RAM on their development boxes, so consuming 1GB of memory for an applet isn't a big deal for them.

I say give all these developers asking for more hardware a 386!


There is this concept of "flow" - a break in your concentration (for example, because your IDE is too slow to keep up with your typing) can often be enough to push you out of that state.

Sure, if I know my compile is going to take a few minutes I can plan for it, and work on other stuff in the mean time.

But for me the tiny interruptions really add up, and contribute to me not being as productive as possible. This is especially true when you're hunting a bug, or working in a tight TDD loop where you're constantly editing/compiling/testing. Having to wait for the machine is a real morale-killer.


Devs should be required to _test_ on that older machine, and benchmarks should be recorded and distributed, to keep everyone aware of the base-supported-config performance.

The software can also suck because hundreds of hours of developer time get wasted on the older gear. If your competitors don't think the same way, you have a problem.


It would be way too easy to ignore and rationalize away numbers in a table; making it personal for the devs means it's automatically fast for every user.


In the past I have been content with hardware which is several years old. One advantage with slower machines is that benchmarking becomes easier, such that any slow operations are really obvious at the development stage. The other advantage is that customers are also often using similarly aged hardware. Older hardware forces you to make your algorithms efficient, rather than relying upon faster machines to hide the bloat.

In my case I was also often developing for embedded target hardware which was a good deal slower than a typical PC, so older PCs were more realistic for testing.

The advantage of always being on the latest hardware is if you're developing large software systems which take a long time to compile, or if you're doing something which fundamentally requires significant number crunching - such as games or computational chemistry.


I work with 135,000 co-workers and we all get rather slow, outdated Windows 7 boxes with 2GB of RAM. It's the standard config regardless of what you do. Supposedly it's much cheaper for the IT dept to manage than if we all got our own stuff individually, and I imagine it's much cheaper to buy 50,000 of the same computer from Dell than to order a whole bunch of variety.

Seeing as developers are a small percentage of the total workforce, even if all of them complained, it would be drowned out by the mass of people who have computers good enough for their jobs. The cost of having to deal with ordering and supporting different computers (beyond just laptop versus desktop) is not 0. The quantifiable gain from having some people have better computers is very difficult to calculate. Thus, it's easy to just give developers the slow boxes and listen to them complain.

Related: I've previously emailed to ask why my company has a policy that every PC must be shut down every night even though it can take up to 8 minutes to start up in the morning (we have a lot of required anti-virus/spam/malware and disk scanners that run). I was told that the company expects us not to be fully efficient all the time - go get some coffee while your computer boots. Also, different budgets cover PC cost versus payroll.


"I was told that the company expects us to not be fully efficient all the time, go get some coffee while your computer boots."

Who told you that? Well, actually, it doesn't much matter to me, but if you were told that by IT, you may discover that forwarding that email up to management will have exciting and entertaining consequences. As long as you don't mind making an enemy or two.

On a more practical note, some machines can be set to turn on at a certain time, via BIOS settings, or hardware that takes advantage of Wake-On-LAN or other such things. If you're really personally bothered, you may be able to take advantage of that. You may also want to consider trying to simply suspend the machine. It'll mostly look and be off, but should come up more quickly... if it works.
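For what it's worth, the Wake-on-LAN "magic packet" is simple enough that you can send it from a scheduled job on any always-on box: six 0xFF bytes followed by the target MAC repeated 16 times, broadcast over UDP (port 9 by convention). A rough Python sketch - the MAC address is a placeholder, and the target machine's NIC/BIOS still have to have WoL enabled:

    import socket

    def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("expected a 6-byte MAC address")
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    # send_wol("00:11:22:33:44:55")  # placeholder MAC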


Management told me that ;)

Our IT is some low paid people answering the phones in a foreign land. They don't much care if I complain. The IT managers higher up who make the decisions aren't any better at listening to employee complaints. Their motivation appears to be save as much money (that they decide to count) as possible. Their impact to costs outside of the IT budget aren't high on their priority list.

My laptop gets locked in a cabinet at night; I just suspend it and everything appears to be off. If you have a desktop, that's not necessarily possible, and the Dells we have usually have some blinking lights still going when the machine is asleep.

BIOSes are generally locked down so employees can't change them. Laptops have mandated full disk encryption that uses the MBR in special ways, so dual booting is not possible.

I actually was told once that I had left my 21" CRT on overnight. The little stand-by light was still on when I left one day. That's a big no-no. There's somewhat random checks for these types of things. Fun, eh?

Thankfully, the work I'm doing is interesting and the pay's good. The politics and budget antics are rather annoying but I get the impression that's the way the world works at large companies. Maybe I'm wrong?


Real question: I know these are huge companies with diverse teams and projects but can anyone share what type of hardware Google, Apple, and Microsoft provide for their devs?


Don't trust me at all but I've read about Microsoft's machines (probably here on HN) and as I recall they have really top-end hardware.

I think Google has a lot of MacBooks but I don't know where I get that from.


I spoke with a Google sys admin a few months ago. He said he had a MacBook and a Lenovo Thinkpad with Ubuntu. It was in the news a while ago, that they only use Mac or Ubuntu. Any Windows OS has to have some type of signoff from management.


That's what I've heard too. Also, according to this article, 70% of the laptops run OS X and 30% run Ubuntu. Most desktops are Linux, though.

http://digitizor.com/2011/07/12/google-android-linux-dream/ and http://news.ycombinator.com/item?id=2755050


The economics in the article are faulty because they do not include the considerable amounts of staff time entailed in swapping computers out. That includes not just the obvious time spent acquiring a computer - determining what is available, comparing specs and pricing, ordering, receiving, and physical installation - but also all the productivity losses that configuring a new computer entails, e.g. installing and configuring all the various pieces of software for the new OS installation which invariably accompanies those MacBooks the author advocates (the same would hold true for Windows machines, and even the drivers on Linux would have to be tweaked for new hardware). And lest we forget, there's handling and disposing of the old computer, which also takes time.

It's the sort of thing which can easily consume 40 person-hours - even without considering the inevitable time lost playing around with the new toy.

Finally, there's dealing with the inevitable pissing and moaning which accompanies any change - some people just want their damn computer left alone because it works fine, thank you very much. Others wanted the 15" MBP, not the 17", while the OSS fanbois cannot believe that they were once again thwarted in favor of commercial software.


Beyond just the hardware speed, I think for some it would also improve motivation, and you may gain more from an employee through that alone.

It also depends what you're doing. If you are after someone to produce great pixel-perfect designs, get them some decent screens. If you have an app where the latest i7 and an SSD can cut compilation time in half, you can probably make a good gain there by not having the programmer get distracted each time they compile.


This is a reason why I'm interested in the "Bring Your Own Computer" idea. I still bring my laptop to work anyway, and normally my personal computer is more powerful than what I have at work. If the company would pay for it or give me some bonus, I'd be really happy.

http://www.zdnet.com/blog/sybase/the-year-of-bring-your-own-...


IANAL, but in the modern world, that can result in a lot of potential legal hassles. E.g. company gets sued, and suddenly your personal computer gets subpoenaed in the process. You're without your laptop AND other people are looking at potentially embarrassing personal data, etc.


If you work at a big company, it's because most people are bad at programming, and the assumption is that you are too. Therefore, doing anything other than the bare minimum to prevent employees from burning down the building is a waste of money: a new computer is never going to increase productivity if the person using it doesn't know how to program.

The reality is that organizations don't change, and if programmers are considered code monkeys at yours, you need to GTFO if you aren't one. The reason your coworkers don't do more to change the status quo is because it's great for them: no real obligations and a nice bump in titles every five years. They don't need a better computer because they don't do any work. If you actually want to program computers, though, then you need to look for other opportunities.

<jedi hand wave> This isn't the employment opportunity you're looking for.

If you work for a small company and have this problem, it's simply because they're cheap.


Another point to make: if you're a developer, don't be afraid to ask for the best hardware.

I've seen many developers sit around staring at laptop screens while I'm working on my 2x24" monitors.

Do not feel like you're being greedy. Ask your boss. The worst he can do is say no, but the likely thing he'll do is 1) ask why, then 2) say yes.


I once had to watch a startup skimp on dev hardware, only to splurge on 'launch' parties and (no kidding) billboards...

The founder, a marketing major, had a hard time explaining things to their investor once the inevitable end was clear.

I'd ask the question: why wouldn't a company trust the developers' specs for adequate hardware?


Those few seconds of delay between performing actions can mean the difference between keeping focus on your work, and getting annoyed and distracted. I simply can't understand why anyone wouldn't want to give their developers at least a fast CPU, plenty of RAM, and an SSD.


I've been reading a lot of Paul Graham's essays, and one of the big things he points out is that being cheap is good.

"8. Spend little.

I can't emphasize enough how important it is for a startup to be cheap. Most startups fail before they make something people want, and the most common form of failure is running out of money. So being cheap is (almost) interchangeable with iterating rapidly. [4] But it's more than that. A culture of cheapness keeps companies young in something like the way exercise keeps people young."

http://paulgraham.com/13sentences.html

Take those few moments of lag or downtime or whatever and enjoy your day--do something else that's productive or have a drink or something; make the best out of life.


Due to all the costs of adding extra developers (management overhead, communications complexity, inertia), spending $20k per person (every year or two) on hardware (desk/chair, home and office setups, laptops, phones, etc.) is still a win, if it lets you have smaller teams for the same overall productivity.

The bigger hassle for me is that upgrading machines causes some downtime, so it's better to buy loaded boxes and replace them slightly less frequently (every 18-24mo) vs. a new machine of lower spec every year.

Tools also are a great place to spend money; having a great build/provisioning/tinderbox/etc. system saves developer time, and doesn't add communications complexity.


I don't know if it's anyone else's motivation, but developers should have machines that approximate the machine of their software's customer base. Otherwise what runs OK on a developer machine can be painful to use out in the field.


Remote debugging works pretty well on Windows, and I imagine it's the same story on anything Unix-like. You can work on your decent PC, debugging your program as it runs on some rubbish old castoff PC that was destined for landfill. If the budget isn't even enough for the necessary peripherals, you can get away with just a kettle lead and a network cable and then use Remote Desktop to interact with it.

The remote debugging setup process isn't very slick, but it doesn't take long to figure out, and with a bit of folder-sharing you can keep all the files on your work PC so that it's all nice and convenient. I did this for quite a while, working on Windows, and it worked well. I don't recall any significant problems with it.

If cost is what's preventing you from getting a nice PC, then that's one thing, but if the issue is just ensuring that slow code doesn't go unnoticed, this approach will probably keep you/your programmers happier...


That's the reason we have test VMs, to approximate the customer's environment. Sometimes I just get a VM clone of the production machine with everything set up.

Right now I write code for a 32-CPU, 64GB RAM, 300MB/s machine - should I request one exactly like it for myself? And it's not even a hardware issue: the OS, DB and middleware licenses to use on that HW cost >$1M.


Depends on what you are developing. If your tests and compile times take 5 or 10 minutes, you are losing tons of productivity.


Only if you write plugins for the same IDE you use to develop in.

If you write Java code, you need 2GB for the IDE on a large project.

But then, many people don't write software to run on customers' machines.


Definitely a no-brainer. Always buy the best machine you can get.

I'm surprised that most developers don't take the same approach to tools. If you're using Eclipse or (god forbid) a text editor to write code, spend a minute and tally up all those 5-second chunks of your life you've spent this year looking up the names of variables, objects, whatever, and running into runtime errors from typos. Multiply by $$$/hour and see what you could have spent on a decent IDE.

JetBrains makes IDEs for pretty much every language out there by now, and any one of them will pay for itself in about four days.
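If you want to sanity-check the "pays for itself" claim, the back-of-envelope arithmetic is easy to run with your own numbers. A quick sketch (every input below is a made-up assumption, not a JetBrains figure - plug in your own):

    # Back-of-envelope: how long until an IDE license pays for itself?
    lookups_per_day = 60      # assumed: times per day you hunt for a name or chase a typo
    seconds_saved = 5         # assumed: seconds an IDE saves you each time
    hourly_rate = 50.0        # assumed: what an hour of your time costs, in dollars
    license_price = 200.0     # assumed: rough price of a commercial IDE license

    saved_per_day = lookups_per_day * seconds_saved / 3600 * hourly_rate
    print(f"saved per day: ${saved_per_day:.2f}")
    print(f"payback after: {license_price / saved_per_day:.1f} working days")

Run it with your own numbers and see where you land.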


"Best machine you can get" would often be overkill. Consider the "best" on the Apple Store today: a 12-core Mac Pro with RAID and 2 27" Apple Cinema displays, you're looking at more than $12,000. And unless you're overflowing in VC cash like it was 1999 it would be insane from a business point of view to equip every developer with this kind of hardware.

By the way many folks are highly productive using a text editor to write code. They might start off slower but end up actually learning the language and libraries they are using. IDEs that step in and try to take over while I'm typing drive me insane.

Back to the primary topic, certainly it's a false savings to skimp out on buying an adequate machine for the task, but for most devs who spend most of their time in an IDE or editor, something like buying last year's best is often completely adequate and much more economical. To expect the "best machine you can get" without regard for cost is not realistic.


> If you're using Eclipse or (god forbid) a text editor to write code, spend a minute and tally up all those 5-second chunks of your life you've spent this year looking up the names of variables, objects, whatever, and running into runtime errors from typos. Multiply by $$$/hour and see what you could have spent on a decent IDE.

This is a bit of FUD. There are plugins for most text editors that give them features that most IDEs have. For example, my Vim setup has tab and code completion, snippet management, syntax error highlighting in real time, code folding, document and file search, etc. I can also traverse a file faster in Vim than an IDE, my fingers do not ever need to leave the keyboard, and it's completely free (as in beer and freedom).


I've been a hard-core emacs devotee since 1988, and I used to think the same way - who needs freakin' autocomplete? But a good friend made me try one out, and indeed, a decent IDE really does make a big difference in certain environments.

For example, being able to click through an entire app from Spring XML config, all the way through your own source code, into library files, and even the source for the JRE makes a huge, huge difference in productivity.

And languages/libraries/frameworks change all the time; the IDEs tend to follow these things closely. Having to maintain your own hand-crafted elisp (or whatever) takes a lot of work. I've been there, and I don't plan on going back any time soon.


For some reason, I use IDEs for Java and emacs for everything else. The only other area I find emacs lacking is web pages that mix languages (e.g. HTML/JavaScript).


I use Vim and don't really need all the fancy code/whatever completion things, but how on earth do you get them? It seems complicated enough that I haven't bothered.


It's not too difficult. There are a ton of plugins listed on the Vim site, and some on GitHub. Most of them are just dropped into your .vim directory, and some need to be set up in your .vimrc. Command-T (this one can actually be kind of a pain to set up if your Vim was not compiled with Ruby support), SuperTab, and SnipMate are the three I recommend the most.


Thank you, I'll give them a shot now.


For something like on-the-fly highlighting of syntax errors it's usually as easy as downloading the plugin and dropping it into your ~/.vim/ folder. For some things you might have the extra step of having to remember a new command or keybinding, but that's about as complicated as it gets.


Emacs users should check out the flymake minor-mode.


Ctrl-p (previous) or Ctrl-n (next) is built-in auto-completion of what you are typing. It can be configured to use tags, files, only open buffers, etc.


Ah, I do use that (rarely). I was looking for something more intelligent that might perhaps help with development, rather than with just typing.


You might want to look into tags. They tend to be what gives Vim its IDE powers.


If you're using Eclipse or (god forbid) a text editor to write code, spend a minute and tally up all those 5-second chunks of your life you've spent this year looking up the names of variables, objects, whatever

Nope, I love my text editor and I see no reason to start using an IDE. I've used IDEs before and they're all monolithic, slow, bloated tools that slow me down and make my life harder[1].

[1] Hi! I'm <strike>Clippy</strike> NetBeans. It looks like you're trying to write a program. Can I autocomplete your words wrong, reformat your text in ways that don't make sense or follow your formatting guidelines, and then crash?


My work dev laptop is bitchin' fast, but runs slower than my 3-year-old personal machine.

Because of useless software (cya-ware) installed by corporate IT.

Useless anti-virus crap (ever hear of sudo?), ridiculous hard drive encryption, remote monitoring/management stuff.

Just working in Eclipse, I often wait on every single keystroke. Yes, Eclipse is mostly a pig, and I've disabled/closed everything I could. But I have zero hassles working on my personal laptop, even when I have video (or audio) running too.


I had to buy myself a couple of SSDs. When building your project takes 2-3 hours, every improvement really matters.

Our management just tells us to context switch onto something else; there's always lots of thrilling email answering and documenting to do. At other times you can switch to fixing bugs or working on another feature, though I personally cannot stand continuous context switching, as it decreases the quality of my work.


Well, if it is a 2-3 hour task, I believe 15 minutes wasted on a context switch is still better than 2-3 hours lost waiting =)

Question is: will a 2-3 hour task become a 30-minute task with SSDs?


Minimum two large monitors, say, 24in 1920 x 1200. Preferably three. This, in my opinion, enhances productivity far more than the difference between a 3GHz and 4GHz machine.

That said...fast is good...faster is better...ridiculously fast is just fine!

All of our workstations have a minimum of two monitors, some three. A few are overclocked and use fluid cooling to keep them from going up in flames.


You're a programmer making $100K/year. If you're confident that a new whatever will demonstrably increase your value to the company, spend $1K/year on the new whatever and make the case later that you are more valuable and should be paid more.

And, yes, this doesn't take into account that the company might not want to support your idiosyncratic hardware choices.


Not everyone can do it, but for smaller purchases, it's much easier to just get it on your own and file an expense for it. 8GB memory SODIMM kits are about $75 and probably have the biggest impact on performance. I used to upgrade company-issued hardware all the time -- IT never knew and my manager didn't care.


It's probably easier for smaller companies to make hardware refreshes but in big corporations there's the whole issue of "standardised desktops" and so on to get around.

Getting a new machine isn't just a case of plonking a new one on your desk; I'd imagine a lot of red tape goes on in the background.


As long as your machine provides reasonable performance, I doubt the marginal return of a better machine amounts to much additional productivity. I'd rather spend that money on 2 or 3 monitors, better software tools, or a quiet work environment.


I'd rather have developers use slow machines. This prevents a lot of foolish performance problems. The developer's laptop should not be faster and have 5x faster storage than the servers that will host the app.


I believe that most developers will gain 1 hour per day of productivity by having at least 2 monitors. Any company that isn't paying for multiple monitors is making extremely foolish management decisions.


Do you think John Resig needs the latest hardware to run VIM and Firefox?


Do you think all developers are running VIM and Firefox and developing JavaScript?


I believe the real cause is that managers don't know the math, plus bureaucracy and an inability to see the big picture. Simple rule at our company: we buy the best tools (software/hardware) that money can buy, because mathematically it makes sense (there are a few exceptional cases).


I got the best machine Alienware has because I got fed up with managers and customers sending me 200MB PDFs containing 8 raw bitmap files. I don't regret it, although I still have my Indy from Silicon Graphics for nostalgic reasons.

One of the best benefits? I can look something up in no time.


A rather silly post - there are a lot of reasons why companies don't buy the best hardware - the comments on the thread do a great job of listing them.


Isn't that the point of the question - to get those answers? It's a Stackexchange site, not a blog.


Even then, it's still a silly post. The reasons for not buying the latest hardware are rather obvious.


If the productivity gains outweigh the cost is it still obvious?


The point is that productivity gains are only one obvious metric - businesses constantly battle capital spending vs. productivity.



