Jeff Atwood is wrong about performance (compaspascal.blogspot.com)
62 points by vaksel on July 5, 2009 | 30 comments



"Hardware is cheap, programmers are expensive."

Mediocre programmers are expensive.

Good programmers are the bargain of the century.

If companies would just wise up enough to pay a good programmer 3 times as much as a mediocre programmer to do 10 times the work, do it right, and do it so that it can be maintained reasonably, Jeff Atwood's tradeoff would become moot. But companies generally don't do this, which is probably one of the main reasons the best programmers go off and do their own thing.


100% agreement, yet:

> If companies would just wise up enough to pay a good programmer 3 times as much as a mediocre programmer to do 10 times the work...

Very few people have such a good eye for talent. It's really, really hard to do. Jack Welch, one of the better HR people of all time, said his hiring success rate was only 2 out of 3 hires working out, even towards the end of his tenure. It's really hard to identify good people, know how they'll change and adapt over time, create a good environment for them, and so on. It's vastly underrated how hard this is. One of the reasons good people are underpaid is that it's often quite hard to tell them apart from "just okay" people. People are bad at picking talent - they mistake overconfidence and gregariousness for ability, they put premiums on likability and agreeableness, they look for people who "look like a winner" when that only maybe slightly correlates... hard stuff. Companies whose managers have technical backgrounds have a better chance, but it's still quite a difficult challenge.


They can also mistake busyness for productivity. For example, there's the classic joke (at least in the sysadmin world) about breaking something about a month before reviews, then staying over the weekend to fix it. You will be the savior of the company and be rewarded in kind.

Really good work is hard to see sometimes because it doesn't look like work. If you had two teams, one that delivered on time and on budget and another that spent a heroic month overdue fixing bugs before finally shipping over budget, it might appear that the people on the second team are better. They worked more and worked harder than the first team. You might even decide that the first team was given too large a budget and too much time, when in reality they paced themselves and had talented people who could give good estimates and deliver on their promises.

YOU might not make that mistake, but I can guarantee that many managers would give greater rewards to the team working overtime to fix (their own) bugs than to the one that consistently delivers on time and on budget.

I think it has more to do with a culture that believes software is inherently buggy and impossible to schedule, and that rewards the people seen battling those things.


"Very few people have such a good eye for talent."

That's OK because it's not necessary.

Management doesn't need an eye for talent. It needs an eye for demonstrated performance.

If Programmer A consistently delivers excellent software in x% of the time of Programmers B thru G, the users/customers love it, and the maintenance costs are a small % (if they have a way of measuring this), then what the f*&#%@ else does management need to know? Treat Programmer A appropriately or lose him/her.

Talent is in the eye of the beholder.

Demonstrated performance is on the bottom line.


But do managers know how to measure performance? Really?

In an enterprise situation, for example, do "customers" really know whether they love the software itself, or just the presentation that sold them the idea? Or perhaps they asked for the wrong thing to be built in the first place.

I would suggest that "very few managers have such a good eye for performance" no matter the metric.


When I was a low-level employee using custom-built SAP crap, I could have easily lost my job if I had dared to suggest an "improvement" to the software.


Demonstrated performance on what? A padded resume? A minimal contribution to a well known large project?

Or is your ideal company one that hires 100 programmers to perform the same task and then fire the bottom 95% after a code review? You're assuming the company has already hired all-star talent (accidentally?) and just needs to pay off their demands. What happens when this all-star leaves to balloon across the world - how do you find a replacement?


This is why projects with large teams contributing to one system or application make it exceptionally hard to pick out and reward good programmers. Their contributions just won't stand out functionally in the mass of a mediocre-performing system.

Moreover, their code may be considered unreadable or even dangerous by less experienced team members, leading to situations where management considers them less capable than they actually are.

It seems to me that building systems from loosely coupled components, rather than with tight, rigid interfaces, and keeping the overall size small, is the only way to assess individuals by the performance of what they produce. Not an easy way, for sure.


what separates a good programmer from a mediocre one?


Great question. Ask n programmers and get n^2 responses. This could easily be the subject of another post or even a book. Just off the top of my head, in no particular order:

  - understands the problem at hand before writing any code
  - uses the right tool for the right job
  - follows accepted standards and protocols without sacrificing creativity
  - names variables & functions what they actually are for the next programmer
  - anticipates what could go wrong before relying on a debugger or testing
  - understands the underlying architecture and how to best utilize it
  - never writes the same code twice
  - never writes in 150 lines that which could be written in 100 lines
  - Poor code: uncommented.  Mediocre: commented.  Good: doesn't need comments.
  - understands the entire code life cycle & writes it to last
  - has pity on the poor soul who has to maintain it & leaves a clue or 2
  - writes flexibly enough to be easily changed before the project is done
I could go on and on, but you get the idea. In general...

A good programmer writes it right, once, in a week.

A mediocre programmer writes it OK, in 2 months, and then futzes with it forever.

A bad programmer never gets it done.
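
To make the naming and comments items in that list concrete, here's a toy before-and-after sketch in Python (invented for illustration; the names and data shapes are made up):

  # Poor: the reader has to reverse-engineer what f, xs, and n mean.
  def f(xs, n):
      return [x for x in xs if x["total"] > n]

  # Good: no comments needed, because the names carry the meaning.
  def orders_over_minimum(orders, minimum_total):
      return [order for order in orders if order["total"] > minimum_total]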


I agree with a vast majority of this, except for the qualification of time.

There are many brands of mediocre programmer, some of whom would deliver a finished product well before the good programmer. But the code will prove a nigh-unmaintainable disaster that will require all sorts of time later, when a new feature is requested or bugs start creeping in.

There are times when a good programmer will take considerably longer than a bad programmer, and might spend time futzing around with different implementations, moving code around, creating various data structures, etc.

Depending on the task, I would argue that can be a good thing. The good programmer is attempting to understand and present a clear conceptual model of the problem and solution in his code; the mediocre programmer might just create a litany of special cases for the cases specified right now just so they can get that particular program behind them.

Although the mediocre programmer might produce code faster, I would argue the good programmer is still 10x more productive, because he has done that much more thinking than the mediocre programmer. Perhaps when you refer to futzing, you mean changing the code just for the sake of another commit to show management, where nothing is learned and nothing is improved.

Also, a good programmer might come back two months from now, with the knowledge gained from some other project, and make a change to the program that tangibly improves it. The mediocre programmer will have put all that behind them; it is not on their list of projects and requirements, therefore they won't bother, nor will they even bother to mention it to anyone.

But, I suspect this sort of comparison of timespans varies with size, and with the difficulty of the problem space.

And yeah, N^2 different answers. :-)


There's a rather simple metric, actually: a good programmer's stuff works.

You don't have to go back and fix it, you don't have to rewrite large parts of it to extend it.

One day I want to write a VCS history analyzer to quantify these things. Svn blame ($yourtool annotate) works to an extent, but there's a lot of untapped potential in that data.
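
For what it's worth, here's a minimal sketch of that analyzer idea, assuming a git repository and using per-author added/deleted line counts as a (very crude) proxy for churn; the metric itself is a placeholder, not a real measure of quality:

  import subprocess
  from collections import defaultdict

  def churn_by_author(repo="."):
      # --numstat emits "added<TAB>deleted<TAB>path" per file changed; the
      # @%an format line marks which author the following numstat lines belong to.
      log = subprocess.run(
          ["git", "-C", repo, "log", "--numstat", "--format=@%an"],
          capture_output=True, text=True, check=True,
      ).stdout
      stats = defaultdict(lambda: [0, 0])  # author -> [added, deleted]
      author = None
      for line in log.splitlines():
          if line.startswith("@"):
              author = line[1:]
          elif line and author:
              parts = line.split("\t")
              # Binary files show "-" instead of counts; skip them.
              if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                  stats[author][0] += int(parts[0])
                  stats[author][1] += int(parts[1])
      return stats

  if __name__ == "__main__":
      for author, (added, deleted) in sorted(churn_by_author().items()):
          print(f"{author:30s} +{added:8d} -{deleted:8d}")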


good luck with that ;)


I get the idea. However, I would be severely challenged to actually measure any of that. To me that means that "only a good programmer knows another good programmer", which is a hard situation to start from when you are trying to reward good programmers.


> never writes in 150 lines that which could be written in 100 lines

I'd disagree on this one. Oftentimes the 100 lines of code is an unintelligible mess while the 150 lines isn't.
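
A contrived Python illustration of that point (both functions compute the same thing; the short one is denser, not clearer):

  # Fewer lines, but dense and hard to modify.
  def summarize(rows):
      return {r["id"]: sum(v for k, v in r.items() if k != "id" and isinstance(v, (int, float))) for r in rows}

  # More lines, but each step is obvious and individually testable.
  def summarize_readable(rows):
      totals = {}
      for row in rows:
          total = 0
          for key, value in row.items():
              if key == "id":
                  continue
              if isinstance(value, (int, float)):
                  total += value
          totals[row["id"]] = total
      return totals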


I would like to add:

- does not over-engineer

A good programmer abstracts and generalizes as much as reasonably necessary and no further. A mediocre programmer will over-abstract the system for pointless 'extensibility' that will never be used. A bad programmer just outright doesn't do anything.

A mediocre programmer always codes with the "kitchen sink" mentality, targeting specifications that exist only in their own mind.


The same thing that separates a good plumber from a mediocre plumber.


and that would be?


Pipes/abstractions don't leak?


If you have upgraded all your servers without consolidating, year after year, you will notice that your electricity bill is going through the roof.

I don't know, you can buy an awful lot of electricity for an hour of my time, and I'm cheap... I think the largest deployment my day job supports probably produces an electricity bill in the hundreds-of-dollars-a-year range. If you held a meeting between one PL and two engineers to discuss what to do about the electricity bill, you could cancel the meeting and pre-pay the next few years instead.
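
Back-of-envelope, with assumed rates (a $100/hour loaded engineer cost, $0.10/kWh power, a 300 W server; none of these figures are from the comment itself):

  engineer_hour_usd = 100.0   # assumed loaded cost of one engineer-hour
  usd_per_kwh = 0.10          # assumed electricity price
  server_kw = 0.3             # assumed draw of one mid-range server
  server_hours = engineer_hour_usd / (usd_per_kwh * server_kw)
  print(f"one engineer-hour buys ~{server_hours:,.0f} server-hours of power"
        f" (~{server_hours / 24 / 30:.1f} months for one box)")
  # -> one engineer-hour buys ~3,333 server-hours of power (~4.6 months for one box)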

Adjust if you're Google... but you're not Google.


It's not cheap, and you don't have to be Google for it to matter. Most of the time, the costs are just non-obvious to engineering.

Here are some rough numbers.

Power for a single 48U rack usually runs $200-$300/month for two 20amp circuits (possibly +2 for redundancy). A modern 1U server draws anywhere from (roughly) 1.66 to 2.5 amps, and you can only use 80% of a circuit's maximum supported load due to safety regulations, resulting in a total supported count of between 12 to 19 mid-range servers per rack. Higher-end servers have a significantly increased power draw, but potentially higher efficiency, and there can be a significant advantage to leveraging multi-core systems.
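
Those per-rack counts check out; here's the same budget arithmetic spelled out (all inputs taken from the figures above):

  circuits = 2                  # two 20 amp circuits
  amps_per_circuit = 20
  usable_fraction = 0.80        # safety-regulation derating
  budget_amps = circuits * amps_per_circuit * usable_fraction   # 32 A usable
  low_draw, high_draw = 1.66, 2.5                               # amps per 1U server
  print(f"{int(budget_amps // high_draw)} to {int(budget_amps // low_draw)}"
        " servers per rack")
  # -> 12 to 19 servers per rack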

In addition to rackspace and power, each server must obviously be connected to the network. The switches draw enough power to potentially decrease the number of supported servers.

Each rack costs roughly $500/month (plus power), and often requires advance provisioning to ensure that rackspace is available and, if possible, located in the same area. It's not unusual to buy an entire block of space well ahead of actual use to ensure that the business' growth can be accommodated.

Additionally, one requires an operations staff to maintain the servers. Hard drives fail, backups must be done, hardware must be ordered, tested, and finally, deployed. Simple deployment of a single server, including the networking configuration, travel time, and physical work can easily consume 4-8 hours of operations staff time.

When you add together the operational and capital expenditures, I'm not really convinced that the idea is so simple as "servers are cheap, programmers are expensive". Servers, electricity, physical space, and the labor necessary to install and maintain them are not particularly cheap.

I'd propose a new mantra: programmers are expensive, and so are servers and IT, and so let's try to not be monomaniacally flippant about it. Software can be easy to write AND perform reasonably well, and to say otherwise simply appears to be a false dichotomy fronted by those who benefit from it most.


You're still talking about a minimum of 12-19 servers, though. I'd imagine that the average website doesn't use even that. I think that SO in particular is at 3 servers. Maybe if SO explodes, this will become an issue, but an optimization would likely have to save a few racks before it's even worth considering based solely on power concerns.


Whether an optimization is worth pursuing depends on the business requirements, how much it costs to implement, and whether there are any additional gains beyond avoiding power, space, and personnel costs.

Simply picking an efficient (both for the developer, and in terms of implementation) language/platform can be enough to start with, and will help you avoid being boxed in later.


Nice straw man. Did anyone ever try to say that it's not worth fixing a 1000x-slowdown error because "we'll just buy 1000 times the hardware, that's cheaper!"?

Obviously there is a balance somewhere between pointless, unmaintainable, expensive over-optimisation on one end and programmer incompetence, bad tool choices and ignorance of hardware limitations on the other.

There will be a sweet spot in the middle, possibly different for every project, which just has to be figured out by experienced people who know what they're doing. Why all this trying to take "sides"? It's like getting all fired up about which is "better", planes or ships. They are both good for different purposes; choose the one that fits your need!


I guess that when the problem is truly scaling to large sizes, the big cost may be the energy rather than the hardware, so the performance per watt a coder can get you is crucial.

And to use few CPU cycles you need a programmer smart enough to know both computer science, to select the best algorithms and data structures, and low-level programming, to lower the constant factors involved in those algorithms.
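
A toy Python demonstration of the algorithm-and-data-structure half of that point (timings are machine-dependent and illustrative only; the constant-factor half typically comes from memory layout and language choice):

  import time

  items = list(range(200_000))
  needles = range(0, 200_000, 1000)

  # O(n) per lookup: scanning a list.
  t0 = time.perf_counter()
  hits = sum(1 for n in needles if n in items)
  list_time = time.perf_counter() - t0

  # O(1) average per lookup: the same data in a hash set.
  item_set = set(items)
  t0 = time.perf_counter()
  hits = sum(1 for n in needles if n in item_set)
  set_time = time.perf_counter() - t0

  print(f"list scan: {list_time:.4f}s   set lookup: {set_time:.6f}s")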


One item left out is floor space. This is a serious consideration, maybe more serious than electricity. Real estate for your server racks has a cost associated with it, and it is not so easy to buy/rent in rack-sized increments.


It's a short-term vs. long-term analysis. If you focus on the long term, programmers are cheaper. Short term, it is easier to throw more hardware at the problem. The part that makes it difficult to measure is estimating how fast it will run on the new hardware. Usually, if you use good algorithms and data structures with an architecture that matches the problem, adding hardware will scale the performance. Mediocre programmers rarely get this right unless they are lucky.


This is good advice. ankp's comment is also good advice.

A well-run technical organization shouldn't have to choose between good programmers, code that performs, and the ability to quickly operationalize new hardware when necessary. If you're having to choose, you've got more work to do.


Excellent points. Plus, the author's writing style is direct and, if not humble, then certainly not arrogant.


"Jeff Atwood is wrong on the Internet", film at 11. Why is this news?



