There are no 10x developers, but there are 1/10 ones (techfounder.net)
154 points by pytrin on April 5, 2013 | 165 comments



These discussions are worse than useless. People with made-up numbers confront people with unreliable anecdotes. Productivity means, very simply, an average of X widgets per hour. Does such a measure even exist for developers? The discussion can apply to such varied situations as:

- how fast can a developer add new features to a system they built themselves

- how fast can they add features to a system designed by somebody else

- how fast do they fix simple bugs

- how fast do they find horribly complicated bugs

- how fast can they architect a reliable, multithreaded backend

- how fast can they work with program management to design a good UI

etc. etc. etc.

These are often conflicting requirements. People who develop a very elaborate personal coding style will be fast on single projects, but slow to work with other people's code. Those who are methodical and question all assumptions will find hard bugs much sooner, but waste time on simple bugs. Those who can construct elaborate systems in their head often have trouble tweaking hundreds of little CSS details for a single ticket. So I would venture that, for any pair of moderately experienced developers, it's almost always possible to find a pair of tasks such that one of them is twice (or even ten times) as fast as the other. Let's not even bring up how much the incentive structure can vary, even among people in the same department.

The example that the article brings is terribly vague and uninstructive. Is that guy, for his whole life, going to be a "1/10 developer"? We never get an idea why he was slow in the first place. Was he lazy? incompetent? concentrating on his studies instead? not motivated by the incentive structure? risk averse because breaking the system carried harsher personal consequences than developing it excruciatingly slowly? The author doesn't even figure out the roots of the problem in the one case he's familiar with, and purports to give advice to everyone else based on it...


> So I would venture that, for any pair of moderately experienced developers, it's almost always possible to find a pair of tasks such that one of them is twice (or even ten times) as fast as the other.

I completely disagree. The FizzBuzz syndrome is very real: when I do developer screenings, the majority of candidates simply can't program even tiny problems. There's no way they're faster than anybody on my team at any development-related task. And yet, they all have long resumes and a lot of experience. All those people are working someplace. And I've watched these 1/10 developers at work: they copy/paste lots of code, program through trial and error, and spend most of their time in the debugger. Eventually, stuff gets done, but excruciatingly slowly.

Once you reach a certain level of competence, then I think what you're saying is true. But there's a huge number of developers who don't reach that level.

On the other hand, I agree with you about this article. Some sort of insight on where this dev spent his time might have been interesting. Whenever I read things this vague, articles based on vague impressions rather than any hard metrics, I'm left wondering which side was actually incompetent here. Maybe this was one of the 1/10 CTOs, completely incapable of effectively communicating with people who don't fit his favored personality type. And if the feature the dev was working on was replaced with something else that only took 30 minutes to implement, maybe the original design was simply unworkable.


The FizzBuzz syndrome is real, all right. But someone failing FizzBuzz is not an indicator that they will always be incapable of development. It just means they are very junior and require training. Frighteningly for the West, Indian and Chinese companies appear to grok this and actually develop their employees. US and UK companies throw their hands up in empty self-satisfaction that their work is so intellectually difficult that people up to the challenge are impossible to find. The reality is probably that the corporate business model is flawed, meaning they cannot afford the seniority of developer they need.


I worked with a developer for a year and a half, who has 11 years of experience, who cannot code fizz buzz and cannot accomplish basic tasks. I have to hold his hand through even the smallest changes or bug fixes. These people exist.

He just got a new job at a large company with a significant pay-raise. The industry is filled with people who are, to put it bluntly, incompetent.

edit: that last line is a little harsh. I don't mean "filled" as in entirely or even mostly. I am just trying to say they aren't uncommon.


By the time you're evaluating someone with FizzBuzz, they're trying to convince you they have training and have developed from "very junior" (to wit untrained). FizzBuzz is the kind of test which should be a pass/fail final exam question for a 6-week Beginning Programming course, not something any degreed entry-level candidate would have trouble with.

Can they be trained? sure. But when applying for jobs, they're expected to already have training up to basic competency for the position. A restaurant shouldn't have to teach a prospective cook how to make a grilled cheese sandwich, an auto repair shop shouldn't have to teach a prospective mechanic how to change a tire, and a software house shouldn't have to teach a prospective programmer how to write FizzBuzz.

If such training is in fact needed, then the higher education "industry" is broken to the point that businesses should just get kids straight out of high school and take 'em from there.


Tangent:

The FizzBuzz question should really be "write me 3 versions of FizzBuzz, explain why they're different, and under what conditions would you use each over the others."

The fact that the FizzBuzz issue is a matter of whether the candidate can write it at all - and that so many interviewers don't view a "nope" answer as a full-stop red flag - indicates a baffling state of industry affairs.
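
To make that concrete, here is a quick sketch in Python of three common FizzBuzz variants (the code and function names are mine, purely illustrative, not something from the thread):

    # Variant 1: plain if/elif chain. Easiest to read; the combined case must
    # be checked first or "FizzBuzz" is never printed.
    def fizzbuzz_chain(n):
        for i in range(1, n + 1):
            if i % 15 == 0:
                print("FizzBuzz")
            elif i % 3 == 0:
                print("Fizz")
            elif i % 5 == 0:
                print("Buzz")
            else:
                print(i)

    # Variant 2: build the output incrementally. Each rule is stated once,
    # so nothing is duplicated and adding a new rule is a one-line change.
    def fizzbuzz_concat(n):
        for i in range(1, n + 1):
            out = ""
            if i % 3 == 0:
                out += "Fizz"
            if i % 5 == 0:
                out += "Buzz"
            print(out or i)

    # Variant 3: table-driven. The rules live in data, which is handy if they
    # ever need to come from configuration instead of code.
    def fizzbuzz_table(n, rules=((3, "Fizz"), (5, "Buzz"))):
        for i in range(1, n + 1):
            out = "".join(word for d, word in rules if i % d == 0)
            print(out or i)

Explaining the trade-offs between these (readability vs. duplication vs. configurability) is exactly the kind of discussion the parent is asking for.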


The problem is that these people are supposed to be programmers, i.e. people who have been trained already. Yes, it is true that companies do not want to train people - and that is a problem in itself. But if someone tells me "I am a professional programmer" and I give him FizzBuzz and he cannot solve it, that is not a training problem. It is a hiring problem: how could such a person ever hold a position as a programmer (without "trainee"/"apprentice" behind their job title)?


How senior would you need to be to write FizzBuzz? Will someone ever grok it if they didn't after Teach Yourself Xlang in 24 Hours?


I doubt it's a matter of seniority; it's a matter of criteria. The way I see it, FizzBuzz is useful only to weed out perhaps the lower 5% of the developer population, and perhaps the lower 20% of the general literate population?

As an anecdote, I asked my wife, who's not a developer, she is an executive at a large beverage company, to solve FizzBuzz and to explain to me how she would do it and she did it successfully. She didn't use the word "for" or "while", but she explained that she figured there must be something that allows you to iterate over the same algorithm several times (paraphrasing).

Personally, I think FizzBuzz is useless unless you use it as part of a submission form before candidates send in their resumes, sort of like a slightly harder captcha. There is no place for a FizzBuzz once the company is already engaging with the candidate; if they are not able to solve it, there will be earlier signs that they lack common sense and criteria.


> FizzBuzz is useful only to weed out perhaps the lower 5% of the developers population

Problem is, that 5% represents way more than 5% of the pool of job applicants, since it's those folks that continuously get rejected from jobs and keep applying. Any company hiring developers needs a decent FizzBuzz filter to sort out this riff-raff.

> There is no place for a FizzBuzz once the company is already engaging

In a technical and sharp company, yeah. In a company where the resumes go through HR and nontechnical managers, such that the first point of any technical evaluation is the interview itself, that interviewer is going to need a FizzBuzz. And this is distressingly common at companies that hire some software developers but whose primary domain is not technology, like medicine or shipping or education.


Fizzbuzz is an analytical problem, not really a programming problem. A kid with no knowledge of a particular programming language could create an algorithm for it with a little guidance.

"Are there any patterns in this that we could use to make this shorter?" , "You're printing 'fizz' more than once. Is that necessary?", etc.


For well-defined programming tasks, I could agree that there's a standard "professional" baseline, and that other developers could be, at most, twice as good/fast. (And of course, there are the 1/10 developers too, and even the "negative" developers who do more harm than good.)

But the 10x factor is real. It comes into play, I think, more in architecture roles, or on incredibly key algorithms, etc. It's a whole different beast from "proficient developers, who are experienced with their stack and problem domain."

These are the developers who have the intelligence, experience, and organizational understanding to make the kinds of decisions that keep a team or product running smoothly, as opposed to getting mired in spaghetti code, rewrites, refactoring, etc. And the larger and more complex projects are, the harder this is, the more valuable it is, and the more of an art than a science it becomes.

The idea of 10x developers isn't that a single developer performs simple tasks 10x better/faster, but rather that they bring such insight into complex tasks, that they can wind up achieving 10x as much, possibly for the whole team, because they make other developers more productive as well. Or, they see a well-defined task, and see a different, better way of doing it, that only takes 1/10th the time. But the 10x is the gain they bring to the company -- and sometimes it is far greater than 10x.


this is really well stated.

i've noticed the exact same thing. most programmers at the baseline level of proficiency take about the same amount of time to perform any particular, well-specified task. it's the stuff that isn't well specified where the huge differences come into effect.


I find this a challenging read. If the guy isn't cutting it in the position he is in, then it's reasonable to ask him to move on, but I like to believe that everyone can improve, just as I believe not everyone wants to improve.

In my career I've seen developers who were 'average' become way above average after their boss left. This was due to the weird psychological box their manager had managed to get them pinned into.

A friend of mine who is now a VC told me about dogs which were put into a box where anything they did caused the floor to shock them. Eventually they just lay on the floor quivering. This sounds horrible and cruel (and it is) and sadly I've seen managers do the same things to their people. Every time they try to do something they get yelled at, and never with any guidance just a "don't do that again!" sort of shock from the floor. Eventually they can't do anything.

It is hard to rehabilitate those people but it is possible. It takes a bit of patience to get their confidence back up that they can in fact be excellent contributors. But boy is it painful. Both for them and their new manager.

The bottom line for me is that people work at different speeds and different levels. When their tasks are well matched to their strengths they do well. I once characterized two folks I knew as a 'bubble sort' kinda guy and an 'insertion sort' kinda guy. Strangely, the bubble sort guy could write code really, really quickly, and that was good because he took a long time to arrive at a solid solution. The insertion sort kinda guy worked more slowly and methodically but still got to the solution in about the same amount of time. If you looked at their commit histories you might think the insertion sort kinda guy was a 1/10th developer, but if you looked at the milestone delivery rate you'd see he was just as productive as the bubble sort kinda guy. Not a particularly deep insight that people are different, I know.


A friend of mine who is now a VC told me about dogs which were put into a box where anything they did caused the floor to shock them. Eventually they just lay on the floor quivering. This sounds horrible and cruel (and it is) and sadly I've seen managers do the same things to their people. Every time they try to do something they get yelled at, and never with any guidance just a "don't do that again!" sort of shock from the floor. Eventually they can't do anything.

This was a study done by Seligman et al and is known as a psychological phenomenon: https://en.wikipedia.org/wiki/Learned_helplessness

Scarily, even mentioning this brings flashbacks of when I was part of an entire team that was ruined by it. The pattern that emerged was that a particular senior architect who had the ear of management was perceived as the 10X contributor, even though the misdirection and hostility from him reduced the rest of the team to 1/10 productivity. Most of the other developers were entirely capable but realized the complete futility of their efforts in the face of this kind of management.


Well, if the author kind of hasn't seen it, I guess I should ignore this stuff:

Boehm, Barry W., and Philip N. Papaccio. 1988. "Understanding and Controlling Software Costs." IEEE Transactions on Software Engineering SE-14, no. 10 (October): 1462-77.

Boehm, Barry. 1981. Software Engineering Economics. Boston, Mass.: Addison Wesley.

Boehm, Barry, et al. 2000. Software Cost Estimation with Cocomo II. Boston, Mass.: Addison Wesley.

Boehm, Barry W., T. E. Gray, and T. Seewaldt. 1984. "Prototyping Versus Specifying: A Multiproject Experiment." IEEE Transactions on Software Engineering SE-10, no. 3 (May): 290-303. Also in Jones 1986b.

Card, David N. 1987. "A Software Technology Evaluation Program." Information and Software Technology 29, no. 6 (July/August): 291-300.

Curtis, Bill. 1981. "Substantiating Programmer Variability." Proceedings of the IEEE 69, no. 7: 846.

Curtis, Bill, et al. 1986. "Software Psychology: The Need for an Interdisciplinary Program." Proceedings of the IEEE 74, no. 8: 1092-1106.

DeMarco, Tom, and Timothy Lister. 1985. "Programmer Performance and the Effects of the Workplace." Proceedings of the 8th International Conference on Software Engineering. Washington, D.C.: IEEE Computer Society Press, 268-72.

DeMarco, Tom, and Timothy Lister. 1999. Peopleware: Productive Projects and Teams, 2d Ed. New York: Dorset House.

Mills, Harlan D. 1983. Software Productivity. Boston, Mass.: Little, Brown.

Sackman, H., W.J. Erikson, and E. E. Grant. 1968. "Exploratory Experimental Studies Comparing Online and Offline Programming Performance." Communications of the ACM 11, no. 1 (January): 3-11.

Sheil, B. A. 1981. “The Psychological Study of Programming,” Computing Surveys, Vol. 13. No. 1, March 1981.

Valett, J., and F. E. McGarry. 1989. "A Summary of Software Measurement Experiences in the Software Engineering Laboratory." Journal of Systems and Software 9, no. 2 (February): 137-48.

(References from http://www.construx.com/10x_Software_Development/Origins_of_...)


Have you read those references? Or are you just quoting them blindly?

Laurent Bossavit has read them, and he reports that they are, for the most part, reporting on each other and on anecdotes. There is no actual, scientific evidence for the "10x programmer." He writes up his search for the evidence in his book "The Leprechauns of Software Development." It's a fascinating look at how a supposedly-rational group of people has taken a myth and blown it up into a well-known "fact."


Have you read those references? Or are you just quoting Laurent Bossavit blindly?


I've read and understood the reference I've provided (Bossavit), and I've also read and understood the reference justin_vanw provided (McConnell), so no, I'm not quoting Bossavit blindly.

After reading both, I found Bossavit much more convincing. In addition to meticulously following the chains of references back to the original research, Bossavit also examines how science is done and what is required in order for a claim to become established scientific fact. His conclusion is that the evidence for the 10x claim is "quite weak."

Here's my summary of his material from the last time this came up ( https://news.ycombinator.com/item?id=4118034 ). There's much more to Bossavit's research than my quotes here, so if you disagree, please read and rebut his work directly, not just these quotes.

"How strong is the support conferred to the 10x claim by the best-reputed list of references, for a reader persistent enough to follow the chain of citations back to primary sources?

"Based on our close reading of the “10x files”, we can now answer: quite weak.

"Not a single one of the references is to a replication, in the scientific sense of the term, of the original exploratory experiment.

"The empirical data is in general quite old, most if not all of it predating widespread use of the Internet - which we can safely expect to have wrought major changes in programming practice.

"None of the studies address the question of construct validity, that is, how meaningful it is to speak of an individual programmer’s productivity, and if it is meaningful, whether the experimental measurements line up adequately with that meaning." [1]

[1] The Leprechauns of Software Engineering explores what science is and how we distinguish between fact and folklore in software engineering. It specifically explores the 10x claim, and determines that it's folklore. http://leanpub.com/leprechauns


> There's much more to Bossavit's research than my quotes here, so if you disagree, please read and rebut his work directly, not just these quotes.

Please provide a copy of Bossavit's research.


I was using "research" in the colloquial sense, not in the "research paper" sense. Bossavit researched the 10x claims (thoroughly and meticulously) and wrote up his findings in the book I mentioned. You can find it here: http://leanpub.com/leprechauns

It's an impressive work of scholarship; many of the citations he followed were out of print, expensive, or referring to unnamed parts of 300-page books. Luckily for us, Bossavit's write-up is engaging and accessible.

It isn't free, but it's trivially inexpensive (US$5-10) if you care about this subject. Far cheaper than the source material.


> It isn't free

And isn't available in any of the linked libraries available to me.

And I already had a long email discussion with Laurent back in 2011, about his comments on Steve McConnell's blog.

If I may make a suggestion, when the source material is likely to be unavailable, it might be better to quote the reasons given rather than the conclusions -- then we can try to follow the reasoning rather than having to take someone's conclusions at face value.


I don't understand what you're getting at here. Laurent's book (written after McConnell's blog) is available at http://leanpub.com/leprechauns . If you're unable or unwilling to figure that out, I suspect you're not really trying.

You're engaging in an asymmetric attack here--posting trivial statements that avoid the substance of my comments, but require me to do a lot of work to satisfy you. I've replied to your comments in good faith. Now the onus is on you.


Unwilling to pay the $5.

> You're engaging in an asymmetric attack here <

I wasn't engaging in any kind-of attack, just making a suggestion.

You said -- "if you disagree, please read and rebut his work directly, not just these quotes" -- so, as I haven't read the book, I haven't tried to rebut those quotes.


> so no, I'm not quoting Bossavit blindly

Only as far as McConnell goes; as for the other references, aren't you taking Bossavit's claims on trust?


Steve McConnell himself doesn't dispute Bossavit's claims about the earlier studies (for the most part), he just disagrees about the significance of those claims. See McConnell's rebuttal in which he acknowledges the methodological weaknesses of the citations: http://www.construx.com/10x_Software_Development/Origins_of_... .


What specifically are you pointing to within that blog post and comment stream? (Let's make sure I look at exactly what you mean me to see.)


Sure - I'm pointing to the section titled "A Deeper Dive into the Research Supporting '10x'".

I'm accepting one of Bossavit's claims on trust: he says that of McConnell's eight citations in support of the 10x claim, only two were original research based on empirical studies under controlled conditions (and of those two one only concerned debugging).

I'm willing to trust that because it looks like McConnell mostly agrees. For example, regarding the 1986 Curtis paper, McConnell writes "Bossavit states that, “the 1986 Curtis article does not report on an empirical study.” I never stated that Curtis 1986 was an 'empirical study.'" Regarding DeMarco and Lister, he says "Editorial insinuations aside, that is a correct description of what DeMarco and Lister reported, both in the paper I cited and in Peopleware. Their 1985 study had some of the methodological limitations Sheil’s discussed in 1981".

They disagree about what counts as evidence supporting the 10x claim, but apart from the Card 1987 citation (about which Bossavit was wrong), McConnell confirms Bossavit's claims about every citation.


> Regarding DeMarco and Lister...

After the words you quote, McConnell goes on to say -- "Having said that, their study supports the 10x claim in spades and is not subject to many of the more common methodological weaknesses present in other software engineering studies."

After the words you quote, McConnell goes on to say -- "Bossavit should have looked at the paper I cited, not the book. The paper shows a 5.6x difference between the best and worst programmers—among the programmers who finished the assignment. About 10% of the programmers weren’t able to complete the assignment at all."

I don't see how that can be described as McConnell mostly agrees?


I was originally responding to the challenge "aren't you taking Bossavit's claims on trust?"

I'm differentiating between Bossavit's claims about the citations from Bossavit's analysis and arguments. I have to take his claims about the articles and books on trust, because I haven't read them. On the other hand, I don't have to trust his analysis and arguments - I can evaluate those on my own.

In the case of DeMarco and Lister, for example, Bossavit writes "the only 'studies' reported on therein are the programming contests organized by the authors, which took place under loosely controlled conditions (participants were to tackle the exercises at their workplace and concurrently with their work as professional programmers)"

That's the only claim that I have to trust here. Everything else is analysis. McConnell agrees with that description of the citation, but disagrees that it invalidates DeMarco and Lister's research. He thinks that a programming contest that took place under loosely controlled conditions is a legitimate research study.


To check my understanding of your response, let me reduce it to caricature:

B claims the article is not printed on blue paper, and we have to take that claim on trust because we haven't seen it.

M agrees with that description of the article, but disagrees that not printed on blue paper invalidates its status as evidence.

We can decide for ourselves whether we think not printed on blue paper invalidates its status as evidence.


That sounds fair.


So presumably you'd characterise --

"I am claiming that for whatever reasons he is here dressing up, in the trappings of scientific discourse, what is in fact an unsupported assertion meshing well with his favored opinion. McConnell is abusing the mechanism of scientific citation to lend authority to a claim which derives it only from a couple studies which can be at best described as “exploratory” (and at worst, maybe, as “discredited”)."

-- as Bossavit's "analysis and arguments" which you can evaluate for yourself.


I've just read through the McConnell post and the comment stream, and your comment that "McConnell confirms Bossavit's claims about every citation" baffles me :-)

"Curtis 1986. ... Bossavit says the paper “offers no support for the ‘10x’ claim.” But the first paragraph in section II.A. of the paper (p. 1093) summarizes 4 studies with the overall gist of the studies being that there are very large differences in productivity among programmers. The specific numbers cited are 28:1 and 23:1 differences."

I don't see how that can be described as McConnell confirming Bossavit's claims?


Again, McConnell isn't disagreeing with Bossavit's claims about the Curtis 1986 citation. He concurs! "I never stated that Curtis 1986 was an 'empirical study.' Curtis 1986 is a broad paper that touches on, among other things, differences in programmer productivity. "

The difference is that McConnell thinks that summarizing four studies "offers support for the 10x claim" and Bossavit does not. That's a matter of analysis & evidentiary standards.


What are you talking about? You should read those references. Just download peopleware for fuck's sake, rather than regurgitating bullshit. There is strong evidence that the most productive programmers are 10x as productive as the least productive programmers.


I've read Peopleware. Your mention of it is another example of the uncritical citation of references that Bossavit talks about in his book.

Peopleware isn't a scientific study. It's a collection of essays and opinions by two authors I have a great deal of respect for. The closest it gets to science is an un-sourced discussion of DeMarco and Lister's "Coding War Games." In that discussion, it specifically says that differences in productivity (which were less than 10x) were likely due to workplace factors, not individual productivity.

People constantly say that there's "strong evidence" supporting the 10x productivity, but no one ever quotes a scientific study that provides this evidence. At best, people list references, often without reading or understanding them--as you did with Peopleware.

There could well be a 10x difference in productivity, but gut feel and testosterone-fueled claims aren't the same as science.

(It's no surprise that there's no science backing the claim. In order to show evidence for a 10x difference in programmer productivity, you'd first have to define "programmer productivity" and come up with an unbiased measure of it. And that's surprisingly difficult, as some of the discussions in this thread have shown.)


>Your reference of it is another example of the uncritical waving of references that Bossavit talks about in his book.

He says, while uncritically waving a single reference that has no backing evidence. You seriously don't see your hypocrisy here?

>Peopleware isn't a scientific study

Duh? That is why it contains citations, to the studies the authors conducted. You sure did a poor job of reading it if you are actually that ignorant.

>In that discussion, it specifically says that differences in productivity (which were less than 10x) were likely due to workplace factors, not individual productivity.

No, they were 10x. That is precisely what it says. And why do you suggest workplace factors being part of the difference makes the difference not exist? I don't even understand how you can say something that absurd and not realize it.

>There could well be a 10x difference in productivity, but gut feel and testosterone-fueled claims aren't the same as science.

Ok, I'll just take your ignorance-fueled gut feel claim as proof that all the scientific evidence doesn't exist.


Whoa there. You are being way too rude and aggressive about this. Please help keep HN civil; that's much more important than any argument.

Peopleware is a classic, or at least many software professionals regard it that way, but it is a classic of folklore rather than science. Here is an example that I asked one of the authors about years ago: https://news.ycombinator.com/edit?id=1995716.

The research literature itself on these matters is extremely flimsy. There is not even any empirical evidence for the stuff we all agree on, such as that version control is good (http://www.neverworkintheory.org/?p=451) let alone on the controversial points such as productivity. Incidentally, how do you measure productivity?

I'd be happy to be wrong about this, though. Perhaps you could cite a study or two that really does hold up to scrutiny?


Perhaps if you don't like the dozens of studies, you could offer a critical response to them rather than ignoring their existence no matter how often they are pointed to?


There is no proof offered for the claim that there are no 10x developers, beyond the author claiming "they've not seen it in practice".

I have seen them with my own eyes. They are, in fact, able to do things 10x faster than others. There are not a huge number of them, but they exist. At Google, folks like Jeff Dean and Sanjay Ghemawat are easily 10x more productive than "the baseline proficient engineer". Any proficient engineer who has worked with them can tell you the same.

Again, I won't claim there are many of these kinds of folks, but they definitely exist.

(This also doesn't mean 1/10th developers don't exist, and aren't common, but ...)


For those who work inside Google, it's well worth it to look at Jeff & Sanjay's commit history and code review dashboard. They aren't actually all that much more productive in terms of code written than a decent SWE3 who knows his codebase.

The reason they have a reputation as rockstars is that they can apply this productivity to things that really matter; they're able to pick out the really important parts of the problem and then focus their efforts there, so that the end result ends up being much more impactful than what the SWE3 wrote. The SWE3 may spend his time writing a bunch of unit tests that catch bugs that wouldn't really have happened anyway, or migrating from one system to another that isn't really a large improvement, or going down an architectural dead end that'll just have to be rewritten later. Jeff or Sanjay (or any of the other folks operating at that level) will spend their time running a proposed API by clients to ensure it meets their needs, or measuring the performance of subsystems so they fully understand their building blocks, or mentally simulating the operation of the system before building it so they rapidly test out alternatives. They don't actually write more code than a junior developer (oftentimes, they write less), but the code they do write gives them more information, which makes them ensure that they write the right code.

I feel like this point needs to be stressed a whole lot more than it is, as there's a whole mythology that's grown up around 10x developers that's not all that helpful. In particular, people need to realize that these developers rapidly become 1x developers (or worse) if you don't let them make their own architectural choices - the reason they're excellent in the first place is because they know how to determine if certain work is going to be useless and avoid doing it in the first place. If you dictate that they do it anyway, they're going to be just as slow as any other developer.


You've really hit the nail on the head here, and my response to the article was along the same lines:

Solving the right problem in the right way is the key to programmer productivity. Every programmer in existence who has a baseline of critical thinking has written the wrong code before, and then reflected on it and improved it. You can do this at any level of abstraction, from the details of bit flipping all the way up to whole systems architectures. The very best programmers are the ones who are able to think about the gestalt most effectively. This is a tremendously difficult task given our limited conscious brain capacity, and I constantly feel myself bumping up against the edges of it when I work on a big project.

One example of a programmer I really look up to is Yehuda Katz, because of his work on Bundler. Rubygems had an awful lot of pain points for large projects, and worse, it had a culture of borderline denial about the existence of these pain points. Yehuda and Carl Lerche worked relentlessly to come up with a solution that addressed 99% of the recurring problems across a wide variety of scenarios. Then they powered through, over a year or more, to hammer it into a reliable solution built independently and only loosely coupled with Rubygems. Sure it took months and months and untold hours of hard work, but in the end they effected a sea change in Ruby dependency management which would have been impossible without the ability to see all sides of a very multi-faceted problem. To me that goes beyond 10x, because you could have 1000 good engineers and still not be able to design Bundler.


What I learn from this is, the key to being a 10x developer is to work on problems that have 10x impact, not necessarily to write 10x more code.


That's the right lesson. The difference is actually even more stark than that: if you extend it out to picking the right project in the right company in the right industry, there're positions where you can be easily 1000x more productive. Project selection probably determines more of your productivity than almost any day-to-day coding decision.


I'm sure Larry and Sergey would agree.

The what-to-build skill trumps the how-to-build-it skill every time!


And in fact, writing scads of dumb code is often the sign of a poor developer. It often follows from the "never turn back" mentality, that leads them to follow an initial poor choice of solution through to the bitter end.


> The reason they have a reputation as rockstars is that they can apply this productivity to things that really matter; they're able to pick out the really important parts

That is the same thing as saying that they are, in fact, more productive.

Productivity is improved by one's ability to synthesize technical concepts and experience; to market (internally, amongst peers) one's ideas; to communicate; to convince; to adapt; to select; and to balance all of these tensions.

> They don't actually write more code

The "10x developer" meme was never a claim on LOC. At least I hope not.

> the code they do write gives them more information, which makes them ensure that they write the right code.

That is productivity optimization.


You've just described a two-order-of-magnitude spread between an incompetent programmer and a "rockstar" (I hate that term), with the average being a 1x programmer. I tend to believe that's true because I've seen a few 10x programmers (and I think I've had days when I have been one), but I've definitely seen 1/10x programmers. This sounds about right.


Eh, the thing of it is, everyone can have 10x days, as a developer's output variability is high. The trick is to hit that consistently, and to do that you basically have to have 100x days at least once in a while.


I can believe 10x. Though, when people start throwing around numbers like, "A great developer is literally 1,000x more productive than an average developer!" I raise an eyebrow.


Depending on what you mean by the average developer, 1000x is plausible. If you think in terms of reliability, it's the difference between a four nines engineer and a one nine. That guy who pushes untested code that brings the site down for a day vs. the woman who has to roll back her code once in five years because she made a fencepost error is an example of how such variances in developer "productivity" can be measured by an organization, especially one that does not consider itself a technology company.
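
A quick back-of-the-envelope check on that ratio (my arithmetic, not part of the comment): availability "nines" translate directly into allowed downtime, and the gap between one nine and four nines is indeed a factor of about 1000.

    # Allowed downtime per year at "one nine" (90%) vs. "four nines" (99.99%).
    minutes_per_year = 365 * 24 * 60                # 525,600
    one_nine   = (1 - 0.90)   * minutes_per_year    # ~52,560 minutes down/year
    four_nines = (1 - 0.9999) * minutes_per_year    # ~52.6 minutes down/year
    print(one_nine / four_nines)                    # ~1000x more downtime allowed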

Think of floor personnel in stores. There are some who start off several steps behind because they have a lot of sick days, come in right at the start of their shift (so end up starting "late"), and take dozens of smoke breaks throughout the day without clocking out. They may chisel a minute or two extra from every break even when the store is busy. On top of this, they may drag their way through their workday. You may need 3-5 of this person to do the job of one, then you need someone to manage them.

They're going to be significantly less productive than the scarily chipper go-getter who wants to be store director, gets in early to start work on time, skips breaks or takes them only when things are slow, is very upbeat and engages customers, and offers to take extra shifts to help out. You can leave this person alone in the store and they'll keep things moving along.

Measuring productivity is a difficult science. In some environments, it's screwing up less than other people. In some, it's a simple measure of something silly, such as function points. Scientific Management has seeded a lot of bad ideas in managers' minds about measuring the value of individual contributors.


I'm skeptical that any engineer can independently be responsible for four nines. HA is a big-picture deal. One engineer can break it, but one engineer can't make it.

Put differently, a good HA organization does not need 1000x engineers- a good system ensures even garden variety engineers will deliver.

There are lenses through which you can look and say "Engineer A was 1000x as productive as Engineer B", but those lenses are things like leveraged work, which is not what people are thinking of when they talk about "rockstar programmers".

When you say someone is 1000x as productive, that means that they sit down and do 3 years of work in 8 hours. (Unless the average developer contributes negative net productivity)


"When you say someone is 1000x as productive, that means that they sit down and do 3 years of work in 8 hours. (Unless the average developer contributes negative net productivity)"

Productivity is value to the organization. If it is an ecommerce entity, Mr. one nine cost the company 1000x in sales vs. Ms. four nines.

Your skepticism is noted. As I've said, sometimes productivity is measured as not screwing up.

Would you claim a developer who replaced 1,000 lines of messy, poorly written code with a 10-line implementation of a more efficient algorithm is less productive than the person who wrote the 1,000 line mess?


sometimes productivity is measured as not screwing up

At which point I guess we get down to what is an average developer. Does the average developer make such big screw-ups that simply not screwing up is 100x increase in performance?

I guess when I think "average", I'm not thinking about what is actually the measured average, but more like "acceptable competency". Never destroys shit, but also never advances the project a month with something clever. The sort of standard grade you would hope your rank-and-file would be made of.

What does lines of code really mean? If the messy code has a lower bug rate and is easier to debug, which is entirely possible if the 10-line solution is painfully elegant, it is the more productive of the two. Remember that old quote about how it takes twice the cleverness to debug code as it does to write it...


The average developer, in my mind, is a person right in the middle of the group of people who can maintain a position as a "programmer" or "developer" or any equivalent terms.

You've made a number of assumptions that I was trying to squash.

Most developers do not work on "projects". They're not doing startups or even working for software companies. They're anyone from the guy maintaining a FoxPro database for a dry cleaning chain to a jedi ninja who poops better, tighter, inventive code than the rest of us can dream of.

How do you measure someone's productivity in an environment where they write or patch code based on their manager coming around and asking them to make tiny changes or write reports? As an analogy, how would you measure the productivity of the "hero" of Office Space? He turns two-character dates into four-character dates. He's a programmer, but he's not actually "producing" anything. He's keeping the world from ending in 2000! His manager would probably count his productivity as how many lines/files/whatever he updates in a given period of time.

The lines of code metric was an attempt to illustrate that productivity is a difficult task to measure without a goal-oriented context. You see that measuring productivity is not just "lines of code," but you also missed that the developer spent time refactoring 1,000 lines of code. Is that productive? Measured on that day, the work done was zero, or negative, and productivity can only be measured in the medium to long term. Yet I hope we would both agree that it was a productive effort.


In regards to 1,000 LOC vs. 10 LOC, I still maintain that I'd have to see it. Most of the time you'll be right, but that refactoring is not necessarily productive as a rule.


"a good system ensures even garden variety engineers will deliver."

I assume this points to the idea that test procedures can replace individual expertise? My experience is that testing systems get bypassed by "garden variety" engineers (mostly with PM/Manager support), usually in exactly the cases where you wouldn't want them to do that.

Even when this is not the case, tests, no matter the coverage, don't cover everything. The 100% test coverage demands that are in vogue these days make sure tests are usually written specifically to minimize interactions between pieces of code, which is of course exactly what a good engineer would test for.

Of course, with tests testing interaction, there are pieces of code (the ones that worry me) that get tested 1000x (in one case literally iterating over all possible calls and verifying constraints afterwards, almost fuzzing), and there are pieces of code that just don't get attention (usually an if out_of_range check at the beginning of a function, since I would just test for the same range, with potentially the same mistake in both the number used and the number tested; but other things happen too. If the consequence of the function screwing up is a slight UI aberration ... well, I barely test UI code at all, really. MVC for the app, MC for the tests). Also there are pieces of code that I know are potentially not thread safe, and so, just to make sure big screwups get caught, I fire off the same test 1000x in parallel. If the function works entirely off the stack and return values, no such test is done.
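
For what it's worth, a minimal sketch of that "fire off the same test 1000x in parallel" idea in Python; the function under test and all names here are hypothetical, just to show the shape of such a check:

    import threading

    def stress(fn, runs=1000):
        """Run fn concurrently many times and collect any exceptions."""
        errors = []

        def worker():
            try:
                fn()                     # a real test would also assert on results
            except Exception as exc:
                errors.append(exc)

        threads = [threading.Thread(target=worker) for _ in range(runs)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        assert not errors, f"{len(errors)} of {runs} runs failed, e.g. {errors[0]!r}"

    # stress(my_not_obviously_thread_safe_function)   # hypothetical usage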


Good points.

Most requirements do not include error or exception handling requirements, so when developers write their test cases (in the very few cases when they do so), they usually do not write negative test cases.

Obviously, NFRs need to cover those, but in many organizations, they're seen as being "in the way of getting work done," just like comprehensive automated tests.


If you ask me to design a programming language, I'm a 0 and van Rossum and Kernighan and Ritchie are some finite number, so they are infinity-x me.

There are levels where people hit a wall. Some hit it at hello.c, some hit it at writing professional business logic with proper data structures, some hit it at assembly language or programming language design or operating systems.

Anyone in over their head at any level is a 1/10 or a 0.

The superstars of language design, if they had to dive into Java business logic, might be at 2x the productivity of a competent pro...or they might shoot themselves.

But if you create a Python or Linux...what's your productivity compared to that Java programmer? In terms of the market value of those ecosystems, increased efficiency by the developers and users in those ecosystems, it's not 10x, it's gigantic, millions of times the value-add.

The weird thing is, people don't get paid much more when they move from one level to another, unless they start the next Google or get shares/royalties...it's more on a log scale.


Give yourself some credit. If you had to design a programming language, you'd end up with something. You'd have to learn a lot of stuff along the way. Maybe the result wouldn't be awesome, but it would be something, not just 0 ;-)


NO!

You're not a 0 at designing a programming language.

If you look at modern C++, you may think that it's impossible to design that language. TBH, I'd be incredibly surprised if the current standard is implemented on all of the major compiler platforms before the next standard is released.

Remember one of the most important rules in programming and problem solving in general.

If you can't see how to solve a problem, break it into n smaller problems. Repeat as needed until you have a problem you can see how to solve and solve that problem.

You don't need to know lex/yacc, ANTLR, LLVM, recursive descent, LALR(1), or any of that crap to design a language that will be useful for you. They'll give you leverage and have incredible value, but I assure you that you have the ability, today, to design and implement a programming language if you can program at all.

It may not be a general purpose language, but look at languages with minimal syntax, such as forth or a very simplified lisp/scheme. Start by recognizing strings, categorize those strings into your language keywords and program values, and make the computer do something based on the combination. Start with a program that can do something like "print hello" and work from there. You can even add comments by looking for a "#" and ignoring everything from there to the end of the line. You're on your way from there.
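
Just to make that starting point concrete, here is a toy sketch in Python of the kind of "recognize strings, handle print hello, skip # comments" interpreter being described (my illustration, not anything the commenter wrote):

    def run(source):
        """Toy interpreter: 'print <words>' statements, '#' comments, blank lines."""
        for lineno, line in enumerate(source.splitlines(), start=1):
            line = line.split("#", 1)[0].strip()      # drop comments and whitespace
            if not line:
                continue
            keyword, _, rest = line.partition(" ")
            if keyword == "print":
                print(rest)
            else:
                raise SyntaxError(f"line {lineno}: unknown keyword {keyword!r}")

    run("""
    # my first language
    print hello
    print hello world    # trailing comments work too
    """)

From there, adding variables or arithmetic is just more keywords and a little state.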

Do not hold yourself back by thinking any problem that other people can solve is intractable. The main advantage for more experienced developers is that they can think in abstractions and assemble those abstractions into a solution.

Sorry for the emotional intensity. Sorry also for ignoring the rest of your comment, as it's relevant.

But I care more about your mindset than the argument. You're better than a 0 even if it's just because you understand that you do not yet know something.


well... let's just say my 1985 PLT (programming languages and translators) class project didn't run after numerous all-nighters, and I got the grade that made the top 75% of the curve possible LOL. It's possible that another couple of weeks of all-nighters would have yielded something useful, or if I went back now I would get it...in a sense, any program with a few command line options defines a 'language'...still, pretty close to 0 in terms of a product that could be put in production or that someone would willingly pay for. Appreciate your comment though, nobody should consider themselves a zero at anything...with a just-do-it attitude you will always eventually create something useful...what one man can do another can do.


There are a small few very gifted people who are prolific coders, and there are a small few very terrible coders who manage to get a job but can't produce decent work.

But FAR more often than spotting either category "in the wild", in my own experience, I have watched poor management turn a productive developer into an unproductive one, or watched circumstance and luck turn someone who's not naturally great into a central and productive team member.

I would say, blanket generalization, that at any company that has more than 5-10 developers, team dynamics and management are 10x more important than whether individual programmers are rock stars or duds.


This. People forget the impact of good management, supportive teamwork, and morale. I've worked in a place with 5-10x devs (at least compared to the rest of the team), but when the workplace is hostile, people don't eat together, etc., you won't be able to retain people for too long.


In my experience there are both. There are certainly a fair number of 1/10ers that manage to progress through seemingly normal career paths as programmers despite not being very good. I've worked with some, and most of you who are also developers probably have as well. But there are also for-real 10xers on top of the baseline. John Carmack, Jeff Dean, Fabrice Bellard, et al. If you define hyper-productive programmers like those guys as your baseline then like 99.9999% of working programmers need to quit in shame.


I think you're spot on.

There are some _really_ bad programmers out there. Many are stuck in IT departments in non-technical companies, pounding out RPG, COBOL, BASIC, and other "maintenance" work that consists of writing reports. They take weeks doing it. It's a 9-5 job that they don't have to think about when they go home.

I sat in a kickoff meeting with a client's development staff. The staff had an average of eleven years of employment in the department. They were starting to define an SDLC for their organization and the topic of code reviews came up. One of the major points of contention for the day was the use of GOSUB over GOTO. (The pro-GOTO people were talking about a bug that smashed the stack, but didn't know it.)

It's safe to say these people are not reading "Hacker News", even in its current form, at 10pm. They read ESPN.com at work.

Even within this sort of environment, individual effectiveness follows the 1/10 to 10x distribution. There are developers who change one line of code, with a clear specification, and take entire facilities out of production for days. Others get in at 7am, leave at 7pm, and work on ripping out as much code as possible to create reusable modules of...BASIC. Most plod along in the middle.


> But there are also for-real 10xers on top of the baseline. John Carmack, Jeff Dean, Fabrice Bellard, et al.

Those are the famous ones. Also, a fair portion of Members of Technical Staff (and higher) at Google are 10x.


I'm rather tired of this "developer productivity" rhetoric. It places software developers on a scale that allows micro-managers to perceive people as cogs in a machine. And I don't think that is fair, given that we are all capable people. Each one of us will experience times when we are very productive and deeply inspired, while at other times we will struggle to complete trivial and meaningless tasks. To put ourselves into such absolute figures is disingenuous.


Of course there are developers that are vastly more productive and effective than others, this is true of every industry, not just developing. Consider two different developers. One designs and implements a highly useful feature which gives their product a key competitive edge in the market and does so in a way which is elegant, efficient, and robust. The other implements a poorly thought out feature that is a borderline bug even if implemented to spec, takes a long time implementing it, and eventually produces code that is hard to understand and full of defects, sucking up tons of developer time and slowing down development velocity as it soaks up resources every time it needs to be fixed.

The difference is not merely one of a 10x productivity imbalance, it's a huge effectiveness difference on the scale of thousands or even millions of times.

Looking for "10x productivity" developers is a side-show, it's treating software like factory work. Look for developers who can work more effectively. Developers who you can rely on to deliver a product that is absolutely better along every axis than the competition. Developers who won't just pump lines of code into the repo but who will actively work to improve the quality of the code base over time and increase the total development velocity of the company. Developers who will take the time to look at the internal processes and tools that the company uses and seek to improve them when necessary. Developers that will serve as strong mentors to other developers on the team, and improve their quality as well. And too developers who will be looked up to and admired and will serve as a reason for why other developers enjoy working at the company.


Developers is one thing, but how about sysadmin work? I think I've seen it there, too. I looked at the tickets worked by a group of people from June 1 to December 15 of a year far in the past. The counts looked like this:

119 348 353 409 515 559 572 634 722 779 830 1004 1029 1169 1345 1487 2096

That's a total of 13970 tickets closed by 17 techs. If you split it evenly, that's 821 each, or about 5.8%. The lowest tech only handled 0.85% of the load. The highest tech handled 15%! If you use the extremes, that's a 17x difference. It also means the top tech did almost 2.5x the "fair share" load of 5.8%.

Now, to be clear, tickets are not fungible. One ticket may be simple and another might take days to finish. Still, when there's that much of a difference, odds are the person at the top is doing all of them regardless of the complexity. That's the only way to find enough things to do.
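
For anyone who wants to check the percentages, here is the arithmetic spelled out (a sketch using the numbers from the comment above):

    counts = [119, 348, 353, 409, 515, 559, 572, 634, 722, 779, 830,
              1004, 1029, 1169, 1345, 1487, 2096]
    total = sum(counts)                    # 13,970 tickets across 17 techs
    fair_share = total / len(counts)       # ~821.8 tickets each, ~5.9% of the load
    print(min(counts) / total)             # ~0.0085 -> the lowest tech: 0.85%
    print(max(counts) / total)             # ~0.15   -> the highest tech: 15%
    print(max(counts) / min(counts))       # ~17.6   -> ~17x spread top to bottom
    print(max(counts) / fair_share)        # ~2.55   -> ~2.5x the "fair share"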


My wife worked with a guy who closed a ticket about closing tickets. If you're measured purely on the basis of number of tickets closed, you can game the system. Replaced users mouse, ticket closed. Or, upgraded network software on 300 machines, ticket closed. If someone is doing twice as many tickets as the next guy, I'd assume he is doing a lot of BS work, not that he's twice as fast.


Well, yeah, that's why I said tickets aren't fungible.

Still, when you have someone hopping on them as quickly as they're opened while someone else just grabs one every hour or two, something is wrong. What if I told you the "one every hour or two" person was doing the dumbest, simplest ones, while the top tech was doing the hardest ones (in addition to everything else)? It happened.


What was the distribution of remuneration?


I never had that kind of data, unfortunately.


Don't forget: open 300 tickets, one for each machine.


And also be sure to be "that guy" who insists that every single request be made into a ticket, instead of handling simple walk-ups informally.


All the articles about "10x" developers, "1/10" developers, or "super" developers have not taught me anything about how to improve my skills, or a team's skills. Even the articles that promote the "10,000 hours" or "Just F'ing Do It" mantra feel like proselytizing without practical, or proven, methodologies.


Articles are not intended to teach you, they are intended to attract attention to their authors. If you want to learn about productivity in software development, read peopleware.


I'm reminded of a paper [1] that "discovered a test which divides programming sheep from non-programming goats." The claim is that there are two distinct populations of people (those who can program and those who inherently cannot). Each population has its own variance, e.g. as the OP mentions, the best programmer in each population might be 30-40% better than a lower-tier programmer in the same population. But the average programmer in the "can program" population is a magnitude better than the average programmer in the "can't program" population. This seems to closely match what the OP is describing.

[1] http://www.eis.mdx.ac.uk/research/PhDArea/saeed/


This is why I like the news over here. Thanks. :)


I think the author was just plain wrong in his decision assigning that transition task to that developer:

   """ They were just in the middle of a transition from a homebrewed framework into a popular open-source one, which I was intimately familiar with. 

The author should have been transitioning the application to the new PHP framework himself. He is intimately familiar with it (and only it?), after all. And that other guy probably should have been assigned some other task, that he could be efficient at. Or let go.

Instead, it looks like this other guy was assigned a task at which he would be very inefficient, and then micromanaged by the newly hired CTO. So no surprise that it ended in that huge waste of time.


"x10 developers" are, in fact, proficient developers, who are experienced with their stack and problem domain.

Well, I'd say this is only true for your typical corporate entity, where their products are basically just web applications that represent a state machine using some sort of database store for persistence (which is... basically everything). By the time the company has achieved some size, there's enough process and bureaucracy that projects mostly consist of what the OP described -- migrating from one framework to another, or some other incremental enhancement that's hardly pushing any sort of business or technology threshold. At previous employers that fit this description, I could have easily been 10x more productive if I didn't have to endure four-hour sprint planning meetings or have my code approved by an understaffed QA/DBA/sysadmin team before release, all for completely routine development changes that required no serious problem-solving.

This doesn't mean I'm a 10x developer. It probably means I'm a 1x developer, so a 'true' 10x developer would have been 100x more productive without the aforementioned bureaucratic crap. At these companies that would dogmatically follow Agile and had two week sprints, I might spend four hours, across two days, actually developing -- with the rest of the days filled with various meetings, interviews, waiting for QA/DBA/sysadmins, fucking around, etc. I could easily see how someone could spend only 15 minutes on what took me four hours, and his solutions would likely be more elegant. It doesn't matter -- the limiting factor isn't the development time, or even the development quality. In that environment, there is literally no difference in productive gains between a proficient developer and a 10x developer.

Given a "10x developer" is literally defined by his productivity/quality, if you think they're effectively overhyped/nonexistent, ask yourself this: are you in an environment where a 10x developer could actually demonstrate 10x productivity? At most companies, the answer is no, because a 10x engineer is doing in hours what takes people days, and nobody notices. But in the right environment, the 10x engineer will do in weeks what would take proficient engineers months, or do in months what would take years. They're the people that Steve Yegge calls "Done and Gets Things Smart"[0] or that Rands calls "Free Electrons"[1].

And if you work at a company that can't think in terms of months because it's always concerned about quarterly earnings, then it has no use for a 10x developer, and chances are you couldn't tell if one was even there.

[0] http://steve-yegge.blogspot.com/2008/06/done-and-gets-things...

[1] http://www.randsinrepose.com/archives/2005/03/20/free_electr...


Agile is the current-day waterfall model. I mean in spirit, not in actual procedures.

>>I could have easily been 10x more productive if I didn't have to endure four hour sprint planning meetings

I completely understand this feeling. The problem is plain and simple. There are teams that win because of heroics; they achieve something big. Those management types who lack the chops to be heroes simply try to turn heroics into a process.

So what they basically do is watch a team win. Instead of realizing it's the interesting/challenging nature of the work, combined with things like fewer distractions, strong deadlines and well-aligned monetary incentives, that counts, they instead look for a common set of patterns they can reproduce. The net result is they end up with a really boring process.

If you think that to be successful you have to do steps X, Y and Z, you are doing it wrong. X, Y and Z are catalysts, enablers, or at most methods to keep you sane while you pursue a higher purpose.

My latest irritation is manager types getting too obsessed with unit testing; it reminds me of XML and the way it was abused.


My latest irritation is manager types getting too obsessed with unit testing; it reminds me of XML and the way it was abused.

It's a bit of a tangent, but I've also observed this. So much so that I've started calling it the "cucumber complex".

Once infected, the team not only writes excessive unit tests to validate things like "a call to method 'foo' does indeed call the method 'foo'", but also wraps these unit tests in quite elaborate parsers in order to "express" them in pseudo-English.
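To make that concrete, here's a minimal, entirely hypothetical Python sketch of the kind of test I mean: it mocks the collaborator and then asserts only that the call it just stubbed out was made, so it can never fail for an interesting reason.

    # Hypothetical illustration of the anti-pattern: the "test" only verifies
    # the wiring it just mocked, so it pins the implementation, not behaviour.
    from unittest import TestCase, main
    from unittest.mock import MagicMock

    class Service(object):
        def __init__(self, client):
            self.client = client

        def sync(self):
            return self.client.fetch()  # the only behaviour: delegate

    class TestService(TestCase):
        def test_sync_calls_fetch(self):
            client = MagicMock()
            Service(client).sync()
            # Asserts that the line of code we just wrote exists.
            client.fetch.assert_called_once_with()

    if __name__ == "__main__":
        main()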

The hilarious (cucumber-specific) aspect is how they usually start by writing their tests in plain RSpec or Test::Unit and then go at implementing the parser to make it "tell a story".


I wouldn't be so quick to write off Cucumber et al.

A few benefits:

- Sometimes business users will contribute and use these scripts (admittedly in only some teams and less often than supporters would claim);

- They help with analysis and design as you are forced to question exactly what behaviour you would like to achieve;

- They keep the developer focused on the feature rather than, for instance, building some big fully featured class or library which won't be used;

- They encourage outside in testing so you get good but pragmatic testing for features at each layer of the application;

- They act as free living documentation of the system;

- They are more lightweight than they initially appear.

I would rather have lots of BDD-style tests than the silly, tightly coupled unit tests loaded with expectations that you describe earlier in your post.


The most important bit, I think, is that a Cucumber test suite is documentation that cannot go out of date, because if it does, your test breaks and you have to fix it, thereby updating the documentation aspect of it.


> Agile is the current-day waterfall model. I mean in spirit, not in actual procedures.

I think the problem is people sticking to a particular process, thinking it's the definition of "Agile", rather than responding to change and the individuals in the team.

As our team grew to the point where planning sessions were taking several hours, we switched to just-in-time planning. A couple of developers could give a lightweight estimate of any piece of work at any point. Planning became just a 30-minute discussion with stakeholders to prioritise work, and we'd dig into implementation details as a team when we actually started working on something.

This would obviously not work for everyone. My point is that we responded to the problem and adapted our process to change in the team and environment. It's the 4th point on the agile manifesto http://agilemanifesto.org/

Retrospectives should give an opportunity for teams to do this if done right.


Somebody should let them know that unit tests are for us, not them.


Someone should tell everybody that it is not us or them, but we.

Unit tests help us but not them? How can that be true if they pay your salary? If tests make you more efficient, the team becomes more efficient too (we/them, as you please). In the end, even the customer is affected by good unit tests, or any other improvement you make. Bottom line: stop thinking us vs. them, think we.

p.s. Not a manager, I am a programmer like you.


>> Agile is the current-day waterfall model. I mean in spirit, not in actual procedures.

>> My latest irritation is manager types getting too obsessed with unit testing; it reminds me of XML and the way it was abused.

Whenever I read opinions like this all I hear is "Wah wah wah. I'm old and I don't like learning new things. What's wrong with the way I've always worked? Wah wah wah, managers."


The problem with unit tests and a certain type of manager is that they see the unit test coverage as something that must be managed to be as close as possible to 100%, when this is arguably a waste of effort.


Agile has been around for many, many years. I think the Agile Manifesto is a teenager by now. There are a lot of people well into their careers who have "always worked" the so-called Agile way.


Sure, but I don't see why an effort to study how successful teams work and distil it into a methodology needs to be derided by other programmers.

Most managers I've worked with just wanted some transparency and predictability to the process. Waterfall failed to provide that and now the industry is experimenting with Agile.

I find it distasteful when developers grumble to each other about having to perform basic tasks such as unit testing (TDD, BDD or whatever). I've worked with far too many 1/10x developers to know some kind of process is required to keep them on the straight and narrow.

There is no magic bullet, but I think it's time we stop snorting in derision at the industry's attempts to reduce the unpredictability of building (and maintaining!) complex software.


" ask yourself this: are you in an environment where a 10x developer could actually demonstrate 10x productivity?"

I'd buy whatever you were selling if you started off with that line.


Yeah that struck me too

By extension, that environment is also one where a 1.5x developer is going to repay their salary


> are you in an environment where a 10x developer could actually demonstrate 10x productivity? At most companies, the answer is no, because a 10x engineer is doing in hours what takes people days, and nobody notices.

At most companies a 10x developer will be noticed by other developers. 10 of them will quietly agree to gang up on that 10x developer and backstab away because their own 10 jobs are threatened by that 1 person. Maybe it's just an unspoken agreement between developers all acting in their own interest.

If a 10x or 5x or 2x developer decides not to do things too quickly in a new job so as to "maintain good relations with other staff", they will still know. You won't find too many >1x developers around most businesses because they've all left to do something else, like a PhD program in the OP's case.


I think that any model that bifurcates coders based on skill is absurd; but this description was uniquely outrageous. It assumes that skill is binary, and that there is never any room to improve beyond being a competent developer. If the maximum "level" a developer can attain is 1x, then there is no differentiation between competent and amazing coders.

In all honesty, these scales and the terminology which makes reference to them are absurd. There are average developers, there are mediocre developers, and there are excellent developers. People operate within a massive range of skill levels and have both talents and shortcomings. Trying to express ability as a binary determination made against an arbitrary baseline simply doesn't work.


I think it's important to note that if we didn't have ten fingers and hadn't created a decimal numbering system, we probably wouldn't be using the 10x factor.


Indeed. I settle for nothing less than the 0x10x programmers.


Perhaps, but then our 6x or whatever factor would still mean essentially the same thing.


Think of how much better programmers would be if they were being measured in base20, though. "There are no 20x programmers, but there are 1/20 programmers!"


That would make me automatically twice as good as I am now!

Now I just need to find a surgeon with questionable morals...


Yeah, that changes everything.


You can't give 10x or 1/10 labels without context. If you ask a smart developer to code in a brand new language, he'll look like a 1/10 developer for a while; and an average developer can look like a 10x developer if he's working with technology that he's been using for the past 10 years.


Because nobody wants to be called or to call himself a 1/10 developer! It's goddamn demeaning!

And we're all at different points on the learning curves for different technologies, so we are all "1/10 developers" for some techs! But if you get the "1/10 developer" label for a technology you end up working with despite never having touched it before (so you're in "learn on the job" mode), some will mindlessly generalize this and you'll end up with a "1/10" label on your head. You never want that, especially if you're also freelancing or doing independent consulting and your "fame" has financial consequences. So you just label the pro/experienced/uberproductive guys as 10x and leave the others to learn and improve; it has the same "positive" effect that having a "like" button and no "dislike" button has on social networking sites. (Now, having "learn on the job" guys working at 1/10 productivity is horrible for the customer, but this is how it works and how you get paid for learning, instead of paying for learning by going to a top-notch university and/or crawling through badly paid internships. As wise men said, "paying customers get shafted every now and then", but this is how we keep learning and keep our jobs fun :))


If you are a consultant, it is implied that you are at least proficient in the technology/tools you are using, and learning on the job in that case is quite frankly immoral, especially if you are not upfront with your client about your suitability for the job.


How do I tell whether I'm a 1/10x developer or not? It often took me 3 months just to finish an ASP.NET web application, as I'm unable to cope with the frequent requirement changes.


I was wondering just the same thing. I can build features and ship code, but I'm definitely slower than my co-worker (who has been on the project for over 1 year, while I'm just in my first 3 months).

How can we measure where we fit on the scale?


A lot of the productivity difference is probably due to overall familiarity with the code base. A year from now, when you have to fix a bug or add a feature, you will probably know where to go and have an idea of what needs to be done, rather than having to go hunting for it or asking another developer. If a project is started from scratch, I imagine a lot of the difference in productivity is due to familiarity with the tools, frameworks, APIs, etc. used.


Make an estimate when you start a task. If it takes you 10x the time you probably are.


Good idea, numbers always tell the truth.


I guess I don't get it. This Ph.D candidate was recommended by the previous CTO and was the one spearheading the framework transition...and then was fired after 2 weeks? But junior devs at the previous firm were given months to get up to speed?


Sometimes it's not about the person, it's about the team and environment. I've been both on the competent and not-so-competent sides, and the environment I'm working in made a ton of difference (note: it is a feedback loop, so mistakes and lack of delivery cause your environment to get tougher, and vice-versa).

Perhaps the guy felt he was going to get a raw deal and so was spending the few weeks looking for another job? Who knows. Lots of potential variables that might have turned an otherwise stellar coder into a quivering mess. The plural of anecdote != data.


The fact that someone was recommended by a chain of people doesn't mean they're good, or even competent. Perhaps the old CTOs weren't competent to judge or didn't have enough experience to have a proper baseline.

It's quite possible that I could judge someone at some task (say cutting and laying intricate tiles) and think they're quite good. They may seem fast to me because they're doing something that would take me forever. But if you brought in a real professional, they might laugh and say the worker is going too slow.

The author said that the coder in question didn't show any improvement after 2 weeks of coaching. The developers at the previous firm he mentioned were slowly improving, that's why he kept them. From my reading I got the impression that if the PhD candidate had improved, he would still be there.


I just had that happen at my company.

New Dev hired by the CEO without input from anyone else. She was not bad, but terribly sloooooooooooooooow. She indeed took 10x longer to do common tasks compared to other devs.

It's not that she wasn't smart or that she was reluctant to learn new things; it's just that her way of working through problems was incredibly long-winded for some reason.

It's not something you can easily retrain someone to fix; it's a fundamental problem, and that, combined with other issues, made us let her go.


Not sure what to make of this post.

I'm a 10x guy when it comes to Java/Python/Perl. And I'm also a 1/5 guy when it comes to something like executing a large C project [which I'm currently working on in my spare time at night, totally apart from my day job]. Actually, I was a 1/10 guy a while back. I'm now at 1/5. I give it another 6 months before I'm the 1x guy, and it will take some years before I hit the 10x mark when it comes to C.

However, this 1/10, 1x or 10x metric is bull crap. There are a lot of people who work slowly and steadily and then come up to speed. There are a lot of people who learn by making mistakes, and there are people who take time to figure things out and go at their own speed.

My way of measuring is to check how committed the person is: are they deeply into it, are they serious about it, how much effort do they put in? Do they persist in their effort despite problems? If they do, you have a winner. It doesn't matter if they can't understand the C syntax yet or take a while.

It's worthwhile to give such a candidate some time, because sooner or later they turn out to be awesome.


Something seems a bit off about that anecdote to me--exaggerated perhaps?

There's a "highly recommended", "smart", Ph.D candidate developer and he can't do in 6 weeks what the author could do do in 30 minutes, even with the author holding his hand? That's not a 10x difference, that's a 500x difference.


Yeah, I've worked with quite a few people like this. Usually they are very smart, read programming books, blogs, HN, etc., and interview very well. The problem seems to be motivation: as the author pointed out, they face a problem and don't have the willpower to keep throwing possible solutions at it, or the motivation to write those 500 lines of CRUD and simple business logic.


A lot of this comes down to the company itself.

Let's just put it out there. If you are working for a consulting company (let's just name the big ones - Accenture, IBM Global Services, Fujitsu Consulting, etc..) it is NOT in the best interests of the company for their developers to be a "10x developer" or even a "1x developer".

Usually these companies have set an expectation of how fast development will go. If it's big government projects, icebergs might move at a faster pace than development.

Working as an employee or even on contract on some of these projects, you learn pretty quick what the "pace" needs to be. These companies don't make their money by doing the speediest development.


Let's say we redefine the baseline developer to be what's currently called 10x. What if the demand for programming far exceeds the supply of 10x devs? Fact: that isn't actually a hypothetical; it's the current state of the world.

You know what? At the end of the day, it IS harder to teach someone to fish than to fish for them. This is a known fact. This is why developing talent is an investment. You don't just get to reap the rewards from the get go. You invest and hope for a high yield. The person you let go sounded like they had great potential (wrote decent and working code, but just a little slow).

Stop being so damn arrogant.


2 weeks vs 30 minutes is "a little slow"?

This sounds a bit mercenary, but if the ROI on your investment is not high, then you abandon the investment and find another one.

It sounds like this developer was not returning a good enough ROI (and believe me I hate using that term with regards to people) on the educational investment being made into him, maybe they needed him to get up to speed quickly and it just wasn't happening.


Not when you are talking about human resources. There is a very large up front investment in order to train people up.


I'd in no way claim I'm a 10x developer. Hell I question whether I am a 1x developer, but it's not really as simple as that.

First of all, the notion that 10x doesn't exist is sorta crap. This may come across as pretty shocking, but I don't dedicate every waking moment to development. There is an expectation from many people that being a developer is a 24/7 job, and a lot of kids buy into that. If someone spends 10x my effort because they honestly feel their career is more important than the rest of their life, then I wouldn't be surprised if they were considerably more productive.

I like to think effort correlates to intelligence and skill, but I guess it isn't that simple either. Many people learn certain things, while I choose to focus on others. In the end we may emerge somewhere in the middle, but everyone has their strengths and weaknesses. Thus whether you're 10x more productive than me depends on the domain of knowledge we're using.

Folks judge productivity differently. Sadly this often comes from a management perspective, which relates directly to the speed at which the thing they wanted first appears usable to them. That's fairly logical from their point of view, but we all know it's folly. Personally I live in a world of Duracell bunnies coding up a mess without any thought of security or maintainability. More often than not management tend to believe they're more productive, but in reality they just make everyone's life harder in the long run.

Finally, it comes down to motivation. I'm working on a CRUD app for day rates; I'm in no rush. That's not to say I draw it out longer than it should take, but I work my hours at a reasonable pace and then I go home. Give me an interesting problem, and maybe a vested interest in completing it, and I'll work harder and work on it at home. Suddenly I may be that 10x developer you were talking about after all, though I'm probably not going to spend my entire life like that.

Personally I think it's pretty silly to propagate the myth that people are useless if they aren't as quick as others. Sure, the scale is huge, but the real problem is that people aren't rewarded based on their value, but on crappy metrics like "market rates". Next up: even if management were willing to reward based on value, it's actually very difficult for them to do so in the vast majority of cases.


I think the problem is that the author has normalized his development standard to proficient coders in the tech industry. I used to do enterprise software in the Midwest, and there are legions of 1/10th developers out there. It's easy to move your baseline far away from the average when you are in the echo chamber of high-tech startups. Compared to the true average developer, there are definitely 10x developers.


I think the author's point was we should start calling a spade a spade.

Stop calling the 1/10th developers the normal baseline and admit that even though they may be the majority of developers, that doesn't mean we should accept that as the status quo.

I imagine this phenomenon is common in most professions. There are tons of accountants and arborists out there. I wonder if they have the same ratio of 'able' to 'efficient', or if some professions are better or worse.


I once had a professor who remarked that if you have a large enough population, the distribution of their skills will form a normal distribution. The difference is that, in some fields, the gap between a genius and a dud is much smaller than in others.


I'm not a 10x programmer. Probably never will be. I do know of programmers who not only are slower at shipping, but are very bad at designing coherent programs and data structures. That's my main observation. It's not that I'm X times better, it's that they are X times slower and make poor design choices. Why? My experience has shown that this tends to happen with people who stop keeping up to date and stop learning.

Let's take my current job, for example. I'm working with a Python program designed to interact with 2 other programs. It is a middleman. The input comes in as text files, and the output goes out as text from a SQL query. The two programmers who built the system I'm working with (one from IBM, another from NASA, believe it or not) had never heard of XML or JSON. Every data structure inside the system is handled by reading and slicing around the contents of the text files. Slow, error-prone, and tedious. My first step was to clean this up. I now have a small script that takes the input data text file and turns it into JSON. Since this data comes from an outside vendor, I don't get to pick the format (though I'm developing an API to handle this). After cleaning the data and saving it as JSON, I then simply load the JSON file into memory and go through the program flow without issues. I reduced the program to about 25% of its original size. Plus it's way, way faster. I also removed a bunch of database queries that simply did nothing. I was even able to add logging to the program.
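The cleanup step itself is tiny. A minimal sketch of the idea (the pipe delimiter and field names here are hypothetical, not the vendor's actual format):

    # Minimal sketch: turn a delimited text feed into JSON once, so the rest
    # of the program can work with plain dicts instead of slicing raw lines.
    # The delimiter and field names are made up for illustration.
    import json

    FIELDS = ["record_id", "customer", "amount"]

    def text_to_json(in_path, out_path, delimiter="|"):
        records = []
        with open(in_path) as src:
            for line in src:
                line = line.strip()
                if not line:
                    continue
                records.append(dict(zip(FIELDS, line.split(delimiter))))
        with open(out_path, "w") as dst:
            json.dump(records, dst, indent=2)
        return records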

Anyhow, the crazy part about it is that the actual code was worse. For those who know Python, using global variables for everything is considered pretty bad form. Here is an example that mimics the code base I inherited:

    #they called lists arrays
    an_array = []
    def foo(array):
        global bar, hello, hi
        if bar <> 1:
            print "something"
            array[0] = hi

Yeah. But wait, there's more. There is a web app written in (what else) flat PHP. It handles very critical information, and is plagued with:

    $foo = $_GET['foo'];
    sql = "INSERT INTO TABLE.NAME VALUE($foo)"; //not actual SQL 
They never clean, or sanitize the data. They don't even check if the GET array is empty.
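For what it's worth, the standard fix for that pattern is a parameterized query plus a basic presence check. Sketched here with Python's standard-library sqlite3, since that's the main language in this codebase (PDO prepared statements are the equivalent on the PHP side); the table and column names are hypothetical:

    # Sketch of the parameterized-query fix: the driver keeps user input out
    # of the SQL text, and empty input is rejected up front.
    import sqlite3

    def insert_foo(conn, foo_value):
        if not foo_value:
            raise ValueError("missing 'foo' parameter")
        conn.execute("INSERT INTO items (foo) VALUES (?)", (foo_value,))
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (foo TEXT)")
    insert_foo(conn, "user-supplied value")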

It is obvious these two programmers stopped learning. They simply fell into what I call a learning slump, where all new information simply goes out the moment it goes in.

    #Like so
    def new_data(data):
        return data

    #Instead of
    def new_data(data):
        #process data
        #insert data into database
        pass
        
Still, I don't want to call myself better, because most people here are better than me. Yet this leaves me with quite a strange view of everything. I'm not a great programmer, but I'm not as bad as those guys who worked at some big-name places. Where does that leave me? Who knows.


I feel I am in the same boat as you. I believe myself to be a very average (possibly less so) developer.

Even believing that, I cannot understand the amount of crap I have been forced to rewrite because it simply does not work.

Just recently an expensive contractor was brought in to write a series of simple CRUD screens using .NET MVC and KendoUI. It appeared to work, but if you even scratched the surface you quickly discovered it was almost totally broken. Further down, it consisted of one big class as a service layer, raw SQL rather than LINQ, and a jumble of interdependent hell. I spat the dummy after 5 days of fixing bugs (trying to avoid changing too much), scrubbed each page one after the other, and ended up with something modular which I could actually support. It took me 2 weeks to scrub (80% of the codebase was touched or rewritten) what took this contractor 3 months to write.

I am with you. If that's normal (it's not the first time this has happened to me), then what sort of dev am I if I know I am not a rockstar?


I don't mean this offensively, but this has a bit of a whiff of NIH syndrome.

Rewriting and refactoring existing code is a lot easier and quicker than writing it the first time, even if it's someone else's. Unless you saw a massive drop in TLOC you were probably just rewriting it into your own style. Looking back on my own past, I am guilty of doing this myself more than once.

And raw SQL vs the Entity Framework is not a reason to judge code. There are a lot of warts on that thing still, though I do use it in moderation. LINQ and EF are two completely different concepts, btw, which you seem to have muddled. You can use LINQ or the LINQ extension methods with any IEnumerable, not just with the EF ORM.


No that's fine. From the above it does sound like it.

Normally I would agree, but I really couldn't fix bugs in this code without major refactoring. It really was that bad. I wouldn't have minded so much if we hadn't supplied the guy with a sample which showed how he should lay it out as well.

I may have gained a boost from his code (I did literally just scrub the codebase and reuse a lot of it, though I had to change a heck of a lot too), but I'm certain I didn't gain 3 months from using what was delivered.

For the SQL portion, we explicitly told him to use the repository pattern with Entity Framework, a requirement he totally ignored. The moment I saw raw SQL in our models I really cracked it.

Of course the above could have been fixed with a decent peer review process, but I could write chapters about what happened there. Suffice it to say I saw the code once, made many requests for things to fix, and was never invited to review it again.


I thought about reasons for re-writing for 2 days. But the system is just not reliable or safe. I'm not re-writing it in one sitting, just replacing the buggy/unsafe code with good code until the system is stabilized. Then, and only then will I re-write. Though there are parts of the system that would work very well in Lisp. (:


Absolutely. The part that really bugs me is when I take on big challenges. My biggest challenge so far has been building Nuuton. It took me months of thinking and reasoning about the data flow and structures. On top of that, I had to pick which language fit the problem, rather than which problem fit the language. My original choice was Lisp, because it allowed me to create DSLs, something that would have really helped in building each individual sub-system. But at the same time, Lisp is not as simple to pick up as it should be. It has libraries, and an amazing amount of material ready to be researched. Yet it does not seem (to me) as complete, in terms of libraries and community, as Python, which was my second choice. Python has all the libraries I needed. Sure, I had to play by Python's rules, but libraries and frameworks like Django really do save development time.

Then, I had to research the options available within Python that would allow for the architecture I had designed. Every sub-system is simply a black box that reads/returns data in a standard format. From the crawlers, to the data store, to the actual web app, everything is simply an API. Ultimately, I chose Django REST framework, because it had a lot of features built into it: things like authentication, a permissions system, throttling, and so on. Those were things I did not have to write. A simple set of models/views and I was off. With Lisp, I would have had to write all that, and I would have ended up with a code base that would not have been as good.

Nuuton is huge, in terms of sub-systems. But small at the same time. Everything is divided neatly, and is very simple to follow. The queue sub-system that handles requests/responses from the crawlers is very simple to follow. Plus you can actually see how the data is collected, cleaned, formatted, analyzed, stored, served, and modified.

But I'm not a rockstar, a cowboy, or even great. I'm average. But worse is that I have this inherent thought that I will never get any better, even though years ago I was writing Visual Basic apps (you know the type). This is due to how much I struggle. Every day I have to research some class, or some method. Can it do that? What parameters does it take? Does it fit the design? And sometimes I just write the damn thing and end up throwing it out because I realize it is too complicated. I also understand that I will always struggle. But I will never quit. Do rockstars struggle? Do the people who graduate from MIT and go on to work at Google struggle? I don't know. Maybe if they do, they don't struggle as much as me. Guess that's the difference: average struggles a lot, rockstars not so much. Yet I know that's bullshit. Who am I to question my own insecurities?


> Every data structure inside the system is handled by reading and slicing around the contents of the text files.

This is quite common for mainframe systems, only because many mainframe applications are written in COBOL and that is the file format COBOL prefers.


Yes, you are right. I had not thought about that. Maybe the guy used to do mainframe programming. I've played with COBOL before, but the pattern never popped into mind.


>Yet, I don't want to call myself better, because most people here are better than me. Yet this leaves me with quite a strange view of everything. I'm not a great programmer, but I'm not as bad as those guys who worked at some big name place. Where does that leave me? Who knows.

http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

If you think you're bad, you're probably way above average :p


You are probably right. I just hate saying I'm better than that guy or that girl, because programming is so complex that I might be great at doing some things, while others might be great at doing other things. Does that mean I'm better? No. It means I have to improve and learn what others do better than me. There is no way to talk about this without sounding weird. (:


If you don't have your existing coding team involved in the interview process for new people, then you are looking for trouble. The net cause is either your own poor processes and resource allocation (ie. bad hire) or weird internal procedures/standards (overwhelmingly difficult for newcomers to grapple with).

Anyway, '10x your tier of productivity'-style elitism is pointless. People will always either accept new colleagues or get shirty ... there is no objective metric for the infinite range of potential human foibles.

That said, removing someone too early could easily prove a loss ... often people who think differently are a 100x asset. If you can influence an important resource allocation decision with interesting information, then saving both a month's time and the potential alternative productivity for a bunch of fellow developers could easily pay for a hire's first few months.

Software's complex, mmkay. Humans are even worse. Chill, we are all flawed creatures :)

</first-bottle-of-red>


If there are 1/10 developers, there are, in fact 10x developers.

Guys, math and stuff.

In all seriousness, there are a lot of things being done in the productivity and quality spaces to try to mine data and measure things. I recently joined tenXer (.com), which is trying to drive both productivity measurements and quality indicators.

We by no means have any answers, but I think things have evolved a lot in the recent years to where bringing real measurements is becoming a possibility.

I think the important thing here is to realize that measuring people is not solely for someone to manage you differently. You can't succeed in something if you don't have a reasonable success metric, and data makes it far easier to capture those.


    [ ] What separates the 10x devs from the 1x? (what _is_ it? Environment, skill, ...)
    [X] 10x productivity myths (why you shouldn't believe it)
    [ ] 10x productivity truths (but, but, it IS true, see?)
    [ ] Be the 10x developer (aspiring to be one)
    [ ] Startup perspective (should startups try to acquire 10x devs?)
    [ ] Why the 10x notion pisses me off (why it shouldn't matter)
    [ ] If 10x productivity why not 10x salary? (or similar)
    [X] Some meta discussion on the metrics of "productivity" (solve for x)
    [ ] Technologies/Languages/Tools/Methodologies that facilitate/impede 10x?
    [ ] 10x in other professions?


From my recruiting experience, and from asking around among others who run programming screens during recruiting, about 30% of programmers can't program even simple problems (FizzBuzz, string reverse).
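For anyone who hasn't seen it, FizzBuzz really is about this much code (one standard formulation, sketched in Python):

    # FizzBuzz: print 1..100, with "Fizz" for multiples of 3, "Buzz" for
    # multiples of 5, and "FizzBuzz" for multiples of both.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)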


"but the difference between developers who are proficient at what they do will never be a x10 multiplier - it will be closer to a variation of 30-40% in productivity"

If you look at individual productivity you're right, but the effect of the mythical 10x developer is not just on their own work but on the work of others in the project. You could in some cases get 10x at the project level. It's the difference between working smarter versus surface-level efficiency such as better time management.


Are we to suppose that Fabrice Bellard, Steve Wozniak, and John Carmack are merely proficient developers, who are experienced with their stack and problem domain?


Well, the latter two of them seem to have created their domains from scratch, back in the day. Now that they are done with the trailblazing and lots of other smart people have crowded in, who knows where they stand in comparison to the right tail of the bell curve.


Isn't it obvious that people who use dual monitors, take time to set up projects and one-step builds, think before coding, and think about their work will be more productive than those who don't? What I feel is that productivity isn't something magical that a programmer has; it is driven by how obsessed the programmer is with increasing it (too much obsession won't be great, though).


What you're describing (not counting dual monitors, which is a complete non-sequitur IMO) is just basic competence. When people are talking about 10x, it is more a matter of knowing what to do and how to do it.

As a simple example, consider implementing a shuffle from scratch. The inexperienced programmer will muddle about with loops and random statements, a few false starts, and probably after a morning or so will end up with something that is slower than necessary and perhaps not quite as random as it should be.

The experienced programmer will say "Fisher–Yates shuffle!" and have a top-in-class implementation ten minutes later.
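For reference, a minimal Fisher–Yates sketch in Python (in practice random.shuffle already implements it, so the ten-minute version is mostly knowing that it exists):

    import random

    def fisher_yates(items):
        # Walk from the end; swap each position with a uniformly random
        # position at or before it, giving every permutation equal probability.
        for i in range(len(items) - 1, 0, -1):
            j = random.randint(0, i)  # inclusive: 0 <= j <= i
            items[i], items[j] = items[j], items[i]
        return items

    deck = fisher_yates(list(range(52)))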

The 10x programmers I've known were experienced like that in seemingly everything. I'm pretty sure I'm an above average programmer, yet I could routinely save myself hours of work by consulting with them for five minutes...


This is interesting, but I feel the author did not get to the meat of the issue yet. He suggests relabeling the 10x to 1x and the 1x to 1/10x, which to me just leads to a discussion about whether the glass is half empty or half full.

I think the interesting question is: why is 1x generally regarded as the default, while 10xers are regarded as special?


"You can tell when someone 'gets' it and when someone doesn't."

Can someone explain this in more detail please?


Irrespective of language (verbose vs. non-verbose), I have found that the rate at which I code on a normal 8-hour day is 500 LOC of medium-complexity code, including testing effort - and it has not changed for a long time.


Why can't both be true? If you actually had some objective measure of programmer skill, wouldn't you expect there to be some kind of bell curve of skilled programmers, with 10x and 0.1x people somewhere at the tails?


I'm having a hard time buying this theory, especially given the article's anecdotal evidence. If someone spent 2 weeks trying to program something that took the author 30 minutes, that is over a 100x difference.


I think there are 10x developers because the vast majority are 1x.


10x at what? What is 1x?

Programming is a wide field. In some tasks, I'm faster. In others, very slow. Give me the opportunity to do only the things I'm good at (and pay me!) and happy I will be. But... I need to do a lot of stuff I'm not good at, or that is unrelated to programming per se (I'm a solo startup), and sometimes it's like time has stopped...


I'm of the same view.

I think there is some fuzzy logic going on here: If the majority are 1/10 developers (as the op asserts) and thus form the baseline x, aren't the 1/1 developers by definition 10x developers?


Haha, exactly... It doesn't matter what number you set as the baseline. We would say the baseline is an average (mode, perhaps) of all developers' productivity. There is a percentage of them that are 10x more productive.

It's much worse than that in my experience. Some people won't be able to jump over certain hurdles no matter how hard they try. I'm like that for making a website look good (pre-Bootstrap). I'd eff around for an entire week swapping colors and changing pixel sizes.


That of course depends on what you call a "developer" and what your standard for a "1x developer" is.

I know people who are full-time developers who I would class as less than 1x


So, Zipf rather than Bell .. ?


It doesn't take much time wasted on Twitter, HN, or chatting up co-workers of the opposite sex to turn into a 1/10x developer


So are you saying if you are a man, that talking to a woman makes you a 1/10x developer? Or am I missing something?


Hah, I don't think the parent post is literal. The point being that it's easy to be a 10x developer if you have 10x the focus.


Let's just be glad the poster wasn't at PyCon.


What's the difference so long as it remains true that some developers are 10 times better than other developers?


if progress in software dev is limited by bottleneck problems that stop progress or slow it to a crawl, then large multipliers in productivity are possible and limited by the scale of the problem...


Is a 10x developer the same as a "10 star" developer?


So what do I have to do to shift from 1/10 to 1x?



