I've noticed that people within the same company are usually comparable in productivity at a micro level, but can be extremely different in productivity at a macro level. If you give someone a very specific feature task to work on, it usually doesn't matter who you assign it to. But if you give someone a very open-ended task, one that requires making high-level architectural/design decisions, you will get wildly different results which will help or hinder the entire team for years and years. This is where 10x programmers truly exist. They may not be 10x in their own personal productivity. Rather, they unlock 10x productivity for the entire team through the foundations that they build.
I agree with you in the general sense, but want to point out that programmer productivity is far more than writing code. I think you’ve lumped quite a bit under “productivity for the entire team”.
Debugging code and reading others' code are the two big ones to point out; they are usually just as important as writing good code.
I have found that you can compare programmers on these tasks at a micro level and see drastically different results.
Top programmers tend to not only write code with good foundations, but also have an uncanny sense for the root of seemingly obscure issues, as well as the ability to understand other code almost on sight.
Debugging is the single biggest challenge in my experience for junior devs. It is also the best way to learn.
When they call me up frustrated that they've wasted hours trying to find a bug, whether it be as innocuous as a typo, as subtle as a type error, or as painful as a quirk in the framework, I will always consider time spent debugging to be worthwhile, as this is when you pull apart the guts of the code, stretch your understanding of it, and learn to isolate the flow of data within a system. Then you can put it back together in a better way.
I've always kinda liked debugging, and sometimes love it. I wonder if there is a general correlation, or even a predictive property, between taking to debugging right away and programming achievement. (However little you know how to program, you still can and will need to debug, right from the start of learning.)
I'm in a team right now which I believe has a 10x programmer. He picked a simple threading model to prevent tons of wasted time on deadlocks and other threading issues. It's also easier to reason about and get new people on-boarded. When he reviews code, he finds bugs that prevent days of debugging down the road and suggests simpler architectures that make the code easier to understand and change. He has had this effect on ~20 people over ~5 years. I would not be surprised if he saved us ~1 year of dev time collectively.
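The comment doesn't say which threading model was chosen; a common "simple threading model" is to give one worker thread exclusive ownership of all mutable state and have everything else communicate through a queue, which makes deadlocks structurally impossible. A minimal sketch in Python (illustrative only, not the team's actual design):

```python
import queue
import threading

def worker(tasks: queue.Queue, results: list) -> None:
    """Single thread that owns all mutable state."""
    counter = 0  # never touched by any other thread, so no locks needed
    while True:
        item = tasks.get()
        if item is None:  # sentinel value signals shutdown
            break
        counter += item
        results.append(counter)

tasks = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

for n in (1, 2, 3):
    tasks.put(n)   # producers only enqueue; they never share state
tasks.put(None)    # request shutdown
t.join()

print(results)  # running sums: [1, 3, 6]
```

Because only the worker mutates `counter`, there is no lock ordering to get wrong, which is exactly the property that rules out deadlocks.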
Simplicity is the silver bullet. The longer I work as a developer the more value I see in it. The single biggest mistake I see good but inexperienced developers make is building overly complex & unnecessarily abstracted solutions.
Exactly, and to elaborate on your point, take a problem that seems complex and difficult to break down, and find the simple pieces that can be tied together to solve the complex problem.
To me, that's programming in a nutshell. Or at least what it should be.
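To make the contrast concrete, a deliberately exaggerated sketch (all names invented for illustration): both versions below do the same job, but the first buries it under indirection that every future reader must unwind.

```python
# Over-abstracted: a factory and a strategy class for one line of work.
class GreeterFactory:
    def create(self, style: str):
        return {"plain": PlainGreeter}[style]()

class PlainGreeter:
    def greet(self, name: str) -> str:
        return f"Hello, {name}"

# Simple: the same behavior, no layers to dig through.
def greet(name: str) -> str:
    return f"Hello, {name}"

assert GreeterFactory().create("plain").greet("Ada") == greet("Ada")
```

The abstraction only earns its keep once a second, genuinely different strategy exists; until then it is pure cost.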
To me, part of the 10x mindset is seeing most things as a business with an objective. Sometimes that objective gets lost in the name of process. You start accepting projects even when they are not adding value to the business, but they are done because someone wants them. This leaks from the top down. Part of that 10x is avoiding work that is useless.
Implementing unnecessary features also complicates the code. There is a network effect with complicated code -- the more complicated it is, the more complicated you need to make it to add functionality. It's really incredible how fast you can slam the brakes on a project simply by not questioning whether you should be implementing something.
On the other hand, feature-poor software is not necessarily simple. Simplicity is hard to achieve. It requires a lot of thought and usually a fair amount of iteration. "Don't touch that code because it doesn't have a good ROI" is also a surprisingly good way to slam the brakes on a project.
Maintaining velocity (or even increasing it) requires a delicate balance of avoiding work that will harm you and encouraging work that will help you. To accomplish this, there needs to be a two-way dialog between stakeholders and developers.
> a delicate balance of avoiding work that will harm you and encouraging work that will help you
Yes this swings both ways. I worked at a startup where every feature improvement was shouted down as a waste of opportunity cost.
They found a decent local maximum, but their growth stalled and they degraded into a consultingware company. The more ambitious devs bailed because there was no room for growth.
> I worked at a startup where every feature improvement was shouted down as a waste of opportunity cost.
I'm curious - what were the devs doing with their time, if feature improvements were constantly being vetoed? Were they being kept busy with other tasks which you consider lower-priority, or were they just sitting idle?
>I'm curious - what were the devs doing with their time, if feature improvements were constantly being vetoed?
Yeah, it was a really weird culture. There was definitely some quiet time where I would propose a product improvement but still get no traction. E.g. the product website was really embarrassingly 90s; they would let us A/B test customer deployments but not their own site.
There were also a lot of repetitive content-scraping tasks that should have been improved and that resulted in excessive pager duty (hard to explain, but think fragile regexes that parsed customer HTML). That was the last straw for me; I'll do pager duty, but not every night just because the PHB is a fool.
They mostly burned our time on trivial consultingware requests instead of improving the core product. I eventually started sneaking core enhancements in by padding out the customer work and just not telling anyone.
This is the same PHB who threatened to fire me because I had to rush my partner to hospital with a concussion. Wonderful human being that.
Write code to have an impressive features list in the product, even if half of them have 3 bugs per line of code.
It's not a strategy I approve of, but it can be effective from a commercial standpoint, especially if the half that doesn't work is the half barely used in real life.
It was especially true in the past, with on-premise software where clients were in a kind of lock-in with the software provider. Today, as we move more and more towards SaaS, this approach is far riskier because your client can easily switch to another service provider.
I will be sarcastic, ironic and voluntarily provocative: he is an idiot. If he were a real 10x developer, he would have chosen a complex code architecture only he understood; that way he would have been the only one able to implement and maintain features inside the code base, making all the other developers 0.1x, and consequently he would have become a 10x developer himself.
More seriously, this kind of situation can happen without ill intent, simply have a fast developer with a strong personality and he will become the lead for everything and the only one able to understand the code base.
Also, the velocity of an individual developer can be overvalued and mistaken for 10x; it's certainly easier and faster to write spaghetti code than a well-designed/architected code base, but doing so can have dramatic effects down the line, or even immediately, for the other developers on the team. Yet it can be a quality in a critical situation (like "our startup will shut down if feature X is not implemented in 2 days").
It can be hard to differentiate between a developer like the one I describe and the real 10x developer you describe. The applications we design can be quite difficult to implement, depending on the complexity of the domain-specific logic the application deals with. Sometimes it's impossible to distinguish between complexity caused by the application architecture (generally avoidable) and complexity inherited from the overall domain logic (nearly unavoidable).
And lastly, it's never, ever as black and white as I describe.
That being said, your colleague seems like a really good developer, able to see the big picture and steer a code base in the right direction. I hope for your sake that he will be part of your team for a very long time.
That was after he picked the simple threading model. Had he picked a baroque threading model it might well have been 10x the effort for the entire project.
The original software engineering study about the performance of some programmers being higher (28 times, as per the paper) was in 1968 - https://dl.acm.org/citation.cfm?id=362858. After that, there really haven't been any real studies showing that some programmers are an order of magnitude better than others. The reason why the original study is no longer relevant: it was done at a time when programming meant loading punch cards. I really want to see an evidence-based study of the 10x programmer rather than something anecdotal.
Here's something I churn out every time a 10x thread starts again; it has some references in it:
The 2nd edition of Peopleware summarises it: the 10x programmer is not a myth, but it's comparing the best to the worst, NOT the best to the median. It's also not about programming specifically; it's simply a common distribution across many metrics of performance.
The rule of thumb Peopleware states is that you can rely on the best outperforming the worst by a factor of 10, and you can rely on the best outperforming the median by a factor of 2.5. This of course indicates that a median developer, middle of the pack, is a 4x developer. Obviously, this is a statistical rule, and if you've got a tiny sample size or some kind of singular outlier or other such; well, we're all adults and we understand how statistics and distributions work.
Peopleware uses Boehm (1981), Sackman (1968), Augustine (1979) and Lawrence (1981) as its sources. [ "Peopleware", DeMarco and Lister, 1987, p45 ]
Furthermore, what does 10x mean? What is being measured? Features, quality, lines of code, dollars, bugs, time, "the mission"? If the thing being measured is some function d(dev1, dev2, ...), then there are interplays going on. Not every 10x dev can 10x every team.
Not an evidence-based paper, but via https://en.wikipedia.org/wiki/Lotka%27s_law, a 10x programmer may occur at a rate of about 1/100 of the talent pool. A 28xer would mean about 1/784.
Now all this is only an extrapolation of the above power law, which was originally meant to describe papers published within a time frame. And to muddy things further, it's really hard to compare people across teams and even companies. And finally, it's all relative within a company.
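For the curious, the quoted rates follow directly from taking Lotka's inverse-square law at face value (frequency proportional to 1/n², a big extrapolation as noted above):

```python
def lotka_fraction(n: float) -> float:
    """Fraction of the pool performing at n times the baseline, per Lotka's law."""
    return 1.0 / n ** 2

print(lotka_fraction(10))             # 0.01, i.e. about 1 in 100
print(round(1 / lotka_fraction(28)))  # 784, i.e. about 1 in 784
```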
From other posts, it feels like people tend to overestimate others' ability. I'm sure all those anecdotes are true, but the evaluation of said individual as a 10xer may be overstated without a sensible metric.
> it feels like people tend to overestimate others' ability. I'm sure all those anecdotes are true but the evaluation of said individual as a 10xer may be overstated without a sensible metric
It's incredibly hard to estimate anyone's relative abilities in knowledge fields - and this includes yourself (probably worst of all - estimating your own ability is remarkably wrong much of the time).
How much of someone else's ability came down to an "aha!" moment? How much was because of what they've done before? How much was because of what they heard someone say about the problem that no one else caught? How much was from seeing three other teammates' efforts, noticing they were reproducing work, and cutting some of their workload? How much was because they kept everyone else excited and motivated to finish the project and do well?
Some things are easy to compare - times to run a 5K, how many biscuits you can roll in an hour, how many bricks you can carry at once. Most things aren't that simple.
Analyzing the relative ability multiplier for any given contributor can really only be done after the fact (often well after), if at all.
This article is more about generalists vs specialists, which is mostly orthogonal to 10x. The best point the author makes is "use the right tool for the job", but even there, making good platform decisions is merely table stakes. Also, it's not as sensitive as the author would like to believe. To take his example, using Sinatra instead of Rails is simply not that big a deal unless you are specifically memory-constrained.
10x itself comes from the long-term observation that some programmers are just dramatically more productive than most. It doesn't come from one trick or technique, rather it's a combination of strong modeling, internalizing the high-level goals, and being able to mentally move through the layers of abstraction very fluidly. Where the 10x comes from is building elegant models that bypass whole rafts of problems and unnecessary code that lesser programmers would create. It's more about the code they don't write than the choice of tools.
I get the idea of generating less technical debt. That's important. And completely separate from the fragmentation of titles/jobs within CS. Part of 10x productivity is being great at a wide variety of tools, not just excellent at one. And yes, part of it is employing good code practice and data models and such.
You're missing my point. I'm not talking about "good code practice and data models and such". I'm talking about how you digest and model the problem as a whole, that is where the 10xer shows his quality.
Consider git vs svn for example. Linus was familiar with Subversion and its predecessors (CVS, RCS) before he created git. He knew that that model of change control was inherently flawed, and that he could create a better base model. According to Linus, he built the git core in 10 days. Even though svn had years of a head start and tens of thousands of hours put into it, it only took a few years for git to rapidly supplant svn. The reason? Because the problem was modeled better. After 7 years of using Subversion, I was still getting bitten by weird bugs, had a terminal fear of branching, and general uncertainty about how the internals worked. Within 1 month of using git, I had a better mental model of how it worked than I ever did of svn. The ecosystem around git now, especially considering how user-hostile the porcelain CLI is, is a testament to that original core Linus created in a remarkably short amount of time.
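For readers who haven't looked inside git: the "better base model" is, at its core, a content-addressed store in which every object is filed under the hash of its own bytes. A toy sketch of that idea (hugely simplified; real git adds typed headers, compression, and packfiles):

```python
import hashlib

store = {}  # object id -> raw bytes, like .git/objects

def put(data: bytes) -> str:
    """File content under the hash of the content itself."""
    oid = hashlib.sha1(data).hexdigest()
    store[oid] = data  # idempotent: identical content gets one entry
    return oid

blob = put(b"hello world\n")                         # a file's contents
tree = put(f"100644 hello.txt {blob}".encode())      # a directory listing
commit = put(f"tree {tree}\nmsg: initial".encode())  # a snapshot pointer

# Storing the same file again creates nothing new:
assert put(b"hello world\n") == blob
print(len(store))  # 3 distinct objects
```

Because commits just point at immutable trees, branching and merging reduce to comparing hashes, which is much of what makes the model easy to reason about.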
Another example in the Ruby world is how Yehuda Katz and Carl Lerche created Bundler. Rubygems already existed, of course, but there was no way to freeze dependencies, and as the Ruby ecosystem exploded in the mid-2000s we rapidly entered our own version of the famed "DLL hell". These guys came in and, over a relatively short period of time, hammered out a solution for freezing gem dependencies, layered over Rubygems (whose antagonism they often faced), addressing a huge range of use cases, and got it adopted as a de facto standard. To this day, Python has made 3 or 4 attempts, none of them as good as Bundler. NPM, which started after Bundler was already out, was inferior, and has only been catching up in the last couple of years with Yarn.
These are the kinds of things 10x developers do: they build systems that pay huge dividends through their elegance, things which others can build on. Neither of these examples has anything to do with using a "wide variety of tools". Learning multiple tools is just something that happens organically over your career in most cases, but it's far from necessary. You could specialize entirely in C++ for your entire career, and while there are lots of problems it wouldn't be a fit for, that doesn't preclude being a 10xer.
My understanding is git's origin was in reproducing and extending the functionality of BitKeeper after a crisis with the licensing. The apparent story of git being an exercise of Linus the 10x programmer digesting and re-imagining SVN in 10 days seems ahistorical. I think a good deal of the git vs SVN '10x special sauce' was done by the company behind BitKeeper, a tool already used for kernel source control. Linus built the core in 10 days because the problem was clear and the DVCS solution was already in use via BitKeeper.
Yes, I'm aware of that, my intent was not to suggest that Linus invented the DVCS from whole cloth (DARCS preceded git and hg by several years). The point is that moving to an open source VCS had been a topic of discussion for years leading up to the Bitkeeper crisis, and Linus had been very vocal about the unsuitability of the mainstream options, going so far as to say email and tarballs would be preferable.
The fact that Bitkeeper precipitated git does not really diminish the accomplishment in my eyes. Remember, Bitkeeper was closed source, and the licensing crisis was precipitated by reverse engineering of Bitkeeper in the broader kernel developer community. Linus specifically set out to build a replacement from first principles and he built an incredible foundation in a very short amount of time. If it was a straight clone that would be one thing, but in reality git is its own thing, and fairly par for the course in terms of a top engineer leveraging past experience to build something better.
It's good to point out that Linus appears to have nailed the core data structures for a successful and performant DVCS in those 10 days. The git UI was fairly obtuse for the longest time, yet that seems to have mattered less in this case than that the core foundations were solid, as your comment points out.
It takes well-planned effort to remain as obstinately obtuse as the git UI does. ;)
Joking aside, I only use a subset of git functionality, which generally suffices. The part that keeps me happy is that the tools for detecting file renames and diffing continue to improve and have become very usable (despite the diff options being quite arcane at times). There was an article a while back discussing how git's straightforward approach of storing the original data blocks allowed the continued improvement of the diff and rename-tracking tools, in contrast to others like Mercurial or Bazaar, which tried more sophisticated delta (?) techniques upfront. Wish I could find that article again, but it would support the parent's premise that choosing the right models and frameworks makes a 10x programmer.
Being a 10xer means being fast at finding the right abstraction within a reasonable time frame, which in turn eases the time constraints that come with execution.
You're right, abstraction is more important at the API level, to enable other developers to grok the code faster. But the problem is also better solved at the right level of thought: reusing libraries where useful and dropping closer to the metal when perf requires it.
Linus was a user of BitKeeper, a proprietary distributed VCS that already implemented that improved model. Linus did not conjure the Git model in 10 days.
Less technical debt? I always thought that you get to 10x by jostling with your coworkers for the highest-impact work, and then plowing through as fast as you can, generating as much technical debt as necessary for your 1x colleagues to clean up. At least that's how I've seen it work.
The specific examples in the post seem to boil down to architecture, which is ultimately a managerial decision. Either the PM knows enough, and is respected enough, to dictate how the project will look, or they cede to the expertise of their team.
To be bloody minded, what happens if 10X says: "Why use RoR for an API-only app with no database when you could go with Sinatra? Or even serverless?"
And all the <10X devs on the team say "But we know RoR."
Then it's a cost benefit analysis of whether retraining the team to adopt a 'better' tool is worth it. Sometimes it's blindingly obvious that you shouldn't be inventing a wheel when there's a wonderful library that'll do it for you, most of the time it's not so simple.
Then part of the skill of being a "10x" developer is avoiding working with incompetent PMs and devs. There is no way to achieve outlier successes in an environment like that.
What if simple success would do? I think that delivering a high-quality end result within budget and schedule is much, much more important than being an übercompetent 10x developer who only works alone.
"True 10x" (tm) is about optimizing at a higher level. Whether it's the choice of tools or the approach to solving the problem or redefining the problem it's about taking a path that is significantly more optimal than the other paths.
If you think you can tell within less than 1-3 years, then it's probably not the real thing. I.e. just spewing out code faster than someone else, or using some technology that appears to work, isn't really the test. There are those who do things very quickly, then re-do them, re-do them, fix them, all very quickly. A lot of action, not much gets done. There are things that are done slowly, without a lot of noise, but that get you to a completely different place. Very rarely is this purely a question of tooling, language, editor etc.
That's not to say that being able to move quickly doesn't have value for proving a concept or prototyping some idea, where maybe the stability or scalability or maintainability don't matter. But when you're building stuff for the long term, those are 10x factors.
Agree with a lot of this. I don't think that knowing a diverse set of options is the only factor, but I would argue it's a very important one. Even disregarding the new tools coming out, a developer is faced with the daunting task of learning massive sets of tools already created.
10x is having the experience and ability to accurately provide a 20/80 (or some range) estimate.
10x is producing code of such a high quality that time spent in QA doing break/fix dev is negligible.
10x is thinking up front about the data model and technical design before writing any code.
10x is maximizing simplicity (no unnecessary abstractions, etc.)
10x is having a deep understanding of the environment (OS, language, supporting libraries, etc.)
The difference between 1x and 10x is decreased time spent downstream in the development process, reworking code that doesn't work, maintaining it after, etc.
I see the 10x thing happen with math. I spend a long time figuring out an algorithmic way of getting a pretty close approximation for some mathematical result, but it only works for small numbers and takes a long time. Then someone gives me the closed form solution and it takes one line of code, is arbitrarily close to the answer, and has constant runtime for any size number I care about. That is more like infinity-X programming, and makes math appear like a magical incantation that controls this mysterious world. It can almost convert a man to Platonism.
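The commenter's problem isn't specified, but the classic classroom version of this experience is summing 1..n: an O(n) loop versus Gauss's one-line closed form, which is exact and constant-time for any n.

```python
def sum_loop(n: int) -> int:
    """O(n): add the numbers one by one."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n: int) -> int:
    """O(1): Gauss's formula n(n+1)/2 -- the 'magical incantation'."""
    return n * (n + 1) // 2

n = 10_000
assert sum_loop(n) == sum_closed_form(n) == 50_005_000
```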
One often overlooked aspect of being a good developer is surrounding yourself with people who either are better and/or have different experience.
I came into my current company as a dev lead to create a modern dev shop from scratch.
There were some areas where I had no practical experience and didn't have the breadth of knowledge I needed and of course no one else in the company did either.
My first major green field project would have turned out a lot worse if I didn't have a network of former coworkers I could ask recommendations from.
>My first major green field project would have turned out a lot worse if I didn't have a network of former coworkers I could ask recommendations from.
That's in line with research by people like Alex Pentland who have identified face-to-face networking as the most important factor that distinguishes high performers from everybody else.
Extremely successful people often have a strong network of peers who they can rapidly bounce ideas off and iterate through new solutions.
The discussion on top performers often seems to have a narcissistic tone and focuses too much on individual aptitude.
As a modest (probably 3x, since I work hard) programmer-turned-founder who has worked with a handful of 10x programmers over the past couple decades, as well as a huge number of 3x programmers, 1x programmers, and 0.1x (i.e. net-negative) programmers:
No.
It is just not true that any 1x programmer can be a 10x programmer.
General intelligence, working memory and the ability to focus for extended periods of time can be slightly improved but cannot be improved by an order of magnitude.
It is a noble fiction to pretend otherwise. But it's still a lie.
In a large organization, it arguably helps to make the 1x programmers into 1.5x (or 2x or 3x) programmers, and the 2x programmers into 3x ones. But it does not make a 1x programmer into a 10x one. Whether that's possible has probably already been decided before you met the person.
Another HUGE factor in a programmer's productivity is simply their level of interest and personal motivation for this particular piece of work. If a programmer finds it interesting then they will likely be much better at it than if they are less interested or, worse, actively hostile towards the work, which happens a lot.
The last five years have been pretty kind to me - senior developer at two companies, delivering a number of successful projects, building up development teams, delivering talks and open-source tools, and leaving each company in a much better position than when I joined.
From my CV, you'd probably think that I was a solid developer, but in reality I've felt like I'm a passable developer that simply learned how to use one framework really well. I was a senior .NET developer, but I felt like an imposter when people would talk about Linux, or many of the tools I see on HN every day.
So, I left a cushy job and jumped into the deep end. It's fairly obvious that I'm not a 10x developer, because I've found it REALLY hard! I can go from C# to reading a bit of Ruby, but if you were to throw me onto a project and say "fix that bug", it'd definitely take me twice as long as a competent Ruby dev, and I'd probably add more bugs than I fixed.
Hopefully things will improve, and within a year or so I'll feel comfortable enough in a range of languages and frameworks to feel less like an imposter. While it'd be nice to be a 10x developer, I'd be much happier just to feel less like an imposter.
10x probably implies good soft skills too. Being able to deftly shortcut established official roadblocks might contribute just as much as technical prowess. Using influence to get best practices established, leveraging good 3rd party work, etc.
I've seen 10x'ers that mostly used existing code and political/influence moves to go faster than everyone else. Smart people for sure, but not just code slingers.
I agree with the ideas that focusing on specific tools is dangerous, and that developers should get comfortable understanding the work other jobs do. But I found the argument by the 10x dev unconvincing; it was too buzzwordy. The dev is talking about real tech, but they didn't argue that those solutions were better. The main argument I saw for not using RoR was that other options existed.
Hi, I wrote the article. I agree that I may have been too light on the details about why different technologies should be chosen. Tbh though that is a bit tangential to the point. My point is that it benefits the developer to branch out and learn a diverse set of things, regardless of the details.
Do you need to learn a diverse set of things, or just be aware of a diverse set of things?
For instance, we just moved to AWS. I am not going to spend hours upon hours learning in depth all of AWS's services. I did watch a few videos and subscribe to the AWS podcast so I could be aware of the different services to know what they have. When there is a business need to implement X, I'll know what's out there and then do a proof of concept to see if what AWS has fits our needs.
Best practices transcend technology. We've had "consultants" come into the company where I'm the dev lead and question my choices of technologies. I ask them two questions: what's the business case, and which industry-standard best practices are we not following?
That's a fair point that I agree with. My comment was more about the need for real analysis when making large architectural decisions. Experimenting with new tech in small projects is good, and fun. But I've worked at places that made major investments in new tech, sometimes based on small prototypes, and then ran into unexpected issues that were very frustrating once we were already committed.
A "10X" programmer is someone like Magnus Carlsen in the world of chess. They have a knack for it which can only be explained by a convergence of nature, nurture, and sheer genius. They live and breathe programming in their professional life and their personal life. They have seemingly endless stamina. They literally solve problems in 1/10th of the time that an average programmer takes to solve them.
If they had Elo ratings for programming (like they do for chess), then a "10X" programmer would have a grandmaster rating of 2500+.
You know a "10X" programmer when you see one. Until you see one and experience working with one in real life, you simply won't understand. You will attempt to rationalize the concept by examining all of the programmers you know, and assuming that the best one must be a "10X" one. You may or may not be right.
Interestingly enough, 10X programmers almost never get paid 10 times what the average programmer makes. From what I've seen, they are lucky to get paid even twice what an average programmer makes.
Ok, so how does one actively develop the ability to always choose the right tool for the task? The closest thing I've found is browsing HN and clicking on titles that reference technologies I've heard about in passing, but it hardly seems like the most efficient way.
I think if you're genuinely interested and also pursue projects outside of work as a hobby, then you're always searching for how to solve the problems you're having outside of work, trying a range of things. You go back to work on Monday, or the next morning, with battle-tested experience using new things.
It's hard to say. Lately I've been getting into the principles of GNU and FLOSS. Simply browsing the massive amount of open source out there has been important for my growth.
I would say 10x engineer is someone who doesn't need a team.
Who simply doesn't need the security of incidental monopolies like Google or Facebook to deliver a great product used by lots of happy customers.
How many devs would pass this test? Hardly that many.
Engineers at Google simply have smart people they can leverage, easy access to experts and specialized contractors, and if all else fails, no need to worry: there is an ad-tech monopoly in place that keeps minting money.
You simply cannot discuss "Engineer" without also talking about the constraints involved.
Engineers at these companies are worse engineers, since the most important constraints are already taken care of by their respective monopolies.
I knew a 10xer and he just seemed to know everything inside and out so he made sensible design decisions. He also seemed to be always working at home. He was a nice guy, too.
Reserving 10x.guru has been one of the funniest domain-related antics I’ve done. Can tell recruiters they can just ask their computers ”whois 10x.guru” for recruitment tips
Have you seen one with 1/10th the salary? Or one with any combined ten-times multiplier in between (like 5x and 1/2, or 3.16x and 1/3.16 [sqrt(10)])? According to Glassdoor and other salary surveys, it is not a discrete 1:10 duality.
Also software developer salaries are still quite similar to real estate prices. What matters is location, location and location.
But let's face it, not many people can master so many tools, so 10x programmers are going to be rare. There are so many tools in JS that we eventually got tired: JavaScript fatigue.
Yet another JavaScript tool is not going to make a significant difference in your productivity.
Knowing something completely different like R, TensorFlow, or Erlang, and the specific scenarios where those are exactly the right tools for the problem you are facing, can completely change the kinds of problems you can solve in a short amount of time.
Let's say we have a restaurant. I can do all these things to be more "efficient":
- Restrict cleaning, washing, and hygiene largely to the surfaces seen by customers.
- Reuse food from leftovers.
- Use expired or different ingredients.
In the restaurant industry, if you do this and get caught, you get sanctioned. In software there's no such thing. In software you can do the equivalent of these things on purpose, and many times it's even encouraged.
Neglecting requirements in order to distort the perception of how much progress has been made, or the product value, is anti-consumer behavior.
I am not interested in anti-consumer behavior but rather doing my job and giving back to society. That means shipping code that meets the expectations of what's being sold.