Hacker News
The Developer Coefficient: a $300B opportunity for businesses (stripe.com)
177 points by dakna on Sept 6, 2018 | 89 comments



The real question (and I think this is why they commissioned the study) is "how much code should they have not written in the first place?" Stripe's answer is "well, definitely not the payment processing portion."

The report basically leads one to conclude that, with well-developed APIs, the world would be more productive if we outsourced common business logic (like payment processing) to well-encapsulated, well-designed services. Why? So you can spend more time building new stuff instead of fixing stuff that many other people are also fixing in their own implementations.

Basically, I see this as "code is a liability, so use a SaaS; we aren't biased in our framing of it".


It's correct though.

90% of code is just reinventing the wheel, over and over again. And not just specialised modules like payments. It's insane that even small companies have "data experts" and "UI/UX designers". Best practice is actually quite narrow and less important than we think; better to have 30 great designers working on the best way to allow users to interact with devices than 30,000 wasting time. Likewise, it's better to have a bunch of smart people working on abstracting away data storage than everyone digging through AWS docs.

The truth is, WE have not been able to agree on standards of software development that would see the complexity abstracted so we can write applications faster. And because of this, cloud platforms and SaaS providers are filling the gap and increasing in power.

Every time I watch the mother of all demos [1] I feel depressed. It's as if builders were given the power to create new construction materials from scratch, for free, and then spent the next 50 years arguing over whose material is better rather than actually building cool stuff.

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY


90% of code is just reinventing the wheel...

I disagree. More than 99% of code doesn't require (re)inventing anything at all. Actually, if we stopped trying to be too clever, it would be much easier to clean and refactor code.

Edit, to be more specific: mechanical, repetitive, naive code is maybe boring to write, but easier to read and understand.


It is re-inventing the wheel, but in another business context. I have yet to witness "re-usable" business logic, anywhere.


Maybe somewhat off-topic: How much cool stuff is there even left to build in the software world?


The more you build, the more you realize how many more things haven't been built yet and you might want to build after the current one(s)... ad infinitum.


I don't know what's cool to you, but a general artificial intelligence would be cool to me. An operating system + userland without remotely exploitable holes would be pretty great. Working quantum computers would be cool too, but that's more of a hardware problem.


At least in the gamedev area, there's an infinite number of games to build (the same way there's an infinite number of novels to write and songs to compose).


Infinitely and beyond


Dependencies are also a liability. There's no free lunch.


Whether or not the source is biased, the advice is sound. NIH (not-invented-here) can and does kill projects that would otherwise be viable and profitable, not to mention distracting from finding product-market fit.


Your last sentence was straightforward, correct? (No sarcasm in the 'bias' section.)

I ask because that fits with my reading of both the report and the rest of your comment, but a few replies seem to interpret it as cynical.


Yes and no. I do think Stripe is biased, but that doesn't prevent what they're saying from being true. I think it's mostly true.

At the very least, the report quantifies to some extent the cost of code.

I'd love to see a more in-depth study that deals with integrating with services vs libraries vs written in house - each of those has costs, though probably not equally.


>"Access to developers is a bigger constraint than access to capital"

As software continues to eat the world, I am reminded of Uncle Bob's blog on a Hippocratic Oath for software developers and the gravity behind the consequences of what we produce.

http://blog.cleancoder.com/uncle-bob/2015/11/27/OathDiscussi...

It's only a matter of time before the discussion changes from "Developer Coefficient" to a much more dire tone. To quote Bob Martin,

"With that great power ought to come great responsibility. And, indeed, society will hold us responsible when our actions result in disaster. And yet nothing binds us together as a profession. We share no ethics. We share no discipline. We share no standards. We are viewed, by our employers, as laborers. We are tools for others to command and use. We have no profession."


Ethics is fine, and most of us probably believe that we are doing a proper job at any point in time, but looking back in 10 years and saying that code you wrote when you were 18, three companies ago, is having a bad effect now is unlikely to elicit any practical response other than making insurance and legal companies richer.

Unfortunately, it is harder to measure how "good" things should be in software engineering compared to the need for a bridge to stay up or a doctor to not make somebody sick.

I could see governments introducing laws that say every company that writes software needs certain credentials, etc., but as we already know, ISO registrations or PCI certifications do not actually mean the people who do the work know what they are doing well enough.


There is the ACM Code of Ethics that some of us subscribe to. Though I understand if many people have never heard of such a thing.

I've personally found it very difficult to discuss ethics and professionalism. Many SWEs either don't care or see it as an opportunity to try to inject their personal ethics. Neither is helpful in formulating a professional ethical system.


Do many understand what a professional ethical system is? I'd say I only have a fuzzy idea, having not worked in a "real" profession.


Well, on top of everything that's being said here, one thing that strikes me is that this "calculation" omits that technical debt has value.

I consider myself a pretty OK developer, in the sense that most of my time is spent re-writing systems that have been written poorly by employees. So instead of writing shiny new software, my valuable time is spent replacing something existing with something else. Zero value? Not quite!

The first-written system has two objectives: 1. get something out quickly to validate the business model; 2. refine requirements following business needs.

Point 1 implies that, for the original system, the sooner the better, definitely at the expense of quality.

Point 2 implies that the system will experience incremental engineering, often leading to messy code.

But the market will be validated and the requirements refined. That definitely has a cost, happily paid in technical debt.

That's actually why it's called technical debt!


Point 2 implies that the system will experience incremental engineering, often leading to messy code.

Yes, often. But not necessarily. The worst problems I've found were caused by bad abstractions. Naive code is seldom the source of troubles.


This report is looking at interesting issues, in a mostly pointless way (ask people what they think). The only part of it that was of any real use is the part where you can compare C-level answers to developer answers to the same question, not because either one is likely to be accurate, but because the delta between the two is of interest.

Otherwise, I would say most of what they're looking at (what is bad code, how productive are developers, etc.) is so hard to measure that the opinions of people on them are nearly useless. There may be a company or two out there that has a good handle on the productivity of their developers, the level of technical debt in their current codebase, etc. But surely most of them do not, and thus an average of the opinions of many people about something they have no clear handle on how to measure is worse than useless.


I'm glad they did the study, but be careful about confusing these numbers with the truth. There's obviously an incentive for C-level executives to exaggerate (when asked) how much of a labour shortage there is compared to reality. (If they say there's a bigger shortage than there is, people might flood the market and wages will go down.)


True! It also allows C-levels to imply that the problems are someone else's fault (lack of resources) rather than poor management. I struggle as a C-level to recruit good people, but ultimately it is my job to make things work regardless of the constraints I have.


Page 8 of the report:

> Which of the following technology trends, if any, are having the greatest impact on your company in 10 years?

> Blockchain: 0% developers, 20% C-Level

Few C-level execs answering this positively understand what blockchain is; if they say blockchain will impact them in the next 10 years, a lot of that comes from the BTC hype. This is also reflected in 0% of developers thinking it will have an impact.


Here's what was in the report:

Blockchain: - developers, 20% C-Level

I didn't interpret "-" as "0%". To me it seems more like they didn't get data for this particular trend. The same thing appears in the next line for ML.

ML: 20% developers, - C-Level

Does it say anywhere that "-" means "0%"? Surely some C-Level execs think ML will have a big impact on their company.


The C-level executives in the company I work for are big on both ML and Blockchain, to provide at least one anecdote.


Blockchain may not solve a practical problem better than the alternatives, but it does solve a political problem: it gives a C-level exec a way to make project decisions in a complex, high-risk field while minimizing career risk ("Nobody was ever fired for choosing Blockchain").


Yeah, it's an overly complicated distributed doubly linked list.


Typically, they include a cryptographic hash of the previous block, which I would think would imply singly linked. (Since otherwise, you need to predict the output of a cryptographic hash function, which is by definition not possible.)

And the big ones (e.g., Bitcoin) are more properly trees, though mostly all branches except one will be ignored.
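To make the hash-linking concrete, here's a minimal, purely illustrative Python sketch (not any real chain's data model; all names are made up): each block commits to the hash of an already-existing predecessor, which is why the links can only point backwards.

    import hashlib
    import json

    def block_hash(block):
        # Deterministically hash a block's contents.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    # A block can only reference a block that already exists, since it needs that
    # block's hash; a "forward" link would require predicting a future hash.
    genesis = {"prev_hash": None, "data": "genesis"}
    block1 = {"prev_hash": block_hash(genesis), "data": "tx batch 1"}
    block2 = {"prev_hash": block_hash(block1), "data": "tx batch 2"}

    # So traversal only works backwards, newest to oldest.
    by_hash = {block_hash(b): b for b in (genesis, block1, block2)}
    cursor = block2
    while cursor is not None:
        print(cursor["data"])
        cursor = by_hash.get(cursor["prev_hash"])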


This is a valuable lens to look at the engineering talent market through. But did they really characterize 'time spent refactoring' as 'lost productivity'?


That would mean they believe if you "wrote it correctly the first time" you wouldn't need to refactor. Therefore you would have productivity to produce new features.

This is obviously shortsighted for reasons most developers already understand.


> That would mean they believe if you "wrote it correctly the first time" you wouldn't need to refactor.

Great, then we can save the money spent on version control as well!

I actually had someone suggest this to me as I was installing a VCS. ("Why don't you just write it correctly the first time?")


I tell business people they wouldn't need MS Word but could get by with a typewriter if they wrote it correctly the first time.


I version control and unit test in my mind palace before chiseling my code onto marble slabs. Lasts forever, encourages parsimony, and gives me an excuse to stare at the wall pretending to work 39 hours per week. I call it Bruno coding.


Much technical debt is caused by poor specifications, or fluctuating ones as a company flails around seeking product-market fit. Sometimes the best way to leverage developers is to improve other parts of the organization.


Unclear requirements are certainly a major cause of inefficiency and technical debt, but I would imagine that over-engineering is at least a close second, and that under-engineering, poor choice of tools and poor choice of processes probably fill out the top 5.

I believe that what is missing more than anything in our industry is the veterans who have gained knowledge and wisdom through both the breadth and the depth of their past roles and can now do better than the previous generation. In real engineering fields or other scientific disciplines like medicine, there is a culture of learning from experience, and the people doing the most challenging jobs often have decades of it. In software development, we laughably call someone with 5 years of experience split between 3 different jobs "senior", and a lot of developers in their 30s are already looking for an escape hatch before ageism halts their career development.

So another good way to leverage developers might be to improve their working environment so they don't all quit just as they're starting to figure out what they're doing.


Veterans are fine but engineering moves on, as does medicine, the legal profession etc. What worked well before might work well this time, but it might not. Also, how do I know that a veteran's opinion is still valuable/value for money/technology appropriate etc.

The real issue is that there is not one correct way to write software, and although we understand programming better than ever, we are still not great at quantifying and describing the values or requirements that drive our choice of process, or even our selection of the right supplier to write it for us.

We had a bad experience from a supplier, not because they were rubbish or lacked skillsets or ethics but because they were not a good fit for the type of product we were building.


Veterans are fine but engineering moves on, as does medicine, the legal profession etc.

Sure. But they all move slowly and deliberately, mostly through careful evolution of good practices as new evidence and reasoned analysis become available. We don't throw out everything we've ever known about how to build reliable bridges every six months because someone thinks that suspending a paper bridge 200 sheets thick from orbiting satellites instead would be cool.

True game-changing developments do happen in software development, but they are quite rare. Most progress in industries like web development is illusory, and it's only a successful illusion because the people who have been around for a while and seen 99% of it before in other contexts have left.


Exactly... maybe just maybe because design/reqs changed? It can be as simple as... the company name changed, so maybe we need to refactor some classes.


I actually don't understand why it is shortsighted.

It's an extreme point of view, sure, but in a perfect world this would be the case every time you build something.

On the opposite end of the spectrum you have to rewrite the entire codebase every time you interact with it to make it fit.

Reality is somewhere between those.


>in a perfect world this would be the case every time you build something.

Yet the same people who hate devs spending time on refactors also laugh when you give them a six month estimate for feature X. I think "perfect" is "good enough" in this context. "Good enough" is always preferable to "non existent yet theoretically perfect".


LOL @ that pie chart. It seems to be lifted directly out of The Mythical Man-Month.

Can we please find a way to automate these C-level executives next?


Impossible. These are special people. The most important people in a company. They are so important that when they fail they get rich. They can't lose.

Us mere mortals could never understand them, let alone code them.


So the USD 300 000 000 000 figure is summed up on page 3 like so:

            18 000 000      Devs worldwide
    *           51 000      USD GDP impact per dev
       ---------------
       918 000 000 000      USD GDP impact of all devs
    *                 .316  Fraction of dev effort wasted, per the survey (31.6%)
       ---------------
       290 088 000 000      USD lost for all devs globally
So the report suggests (I think) that while companies lament about a lack of devs, they're wasting a significant proportion of their existing devs' efforts. (Other commenters here question whether that's "too much" waste.)
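The same arithmetic as a quick sanity-check script, for anyone who wants to verify it; the figures are just the ones quoted above from the report's page 3, nothing more:

    devs = 18_000_000            # developers worldwide
    gdp_per_dev = 51_000         # USD of GDP impact attributed to each dev
    waste_fraction = 0.316       # share of developer effort reported as wasted

    lost = devs * gdp_per_dev * waste_fraction
    print(f"${lost:,.0f} lost per year")  # ~$290,088,000,000, rounded up to the headline $300B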

I'll wager that we're seeing the result of management practices that don't properly leverage the ability of computers to bring competitive advantage to organizations. Instead, we see organizations struggle with the need to totally rethink their approach given the powerful tool that computing brings. Furthermore, my bet is that most organizations won't successfully transition, and that a new breed will take over.


I’d be really interested to see the correlation of project management practices to tech debt.


I read that and thought of all the other folks that do work to help put software in production and how they were all ignored in favor of developers.


I didn’t get the impression they were ignored. Just that developers were harder to hire.


> Developers spend over 17 hours every week dealing with maintenance issues like debugging and refactoring, and about a quarter of that time is spent fixing bad code. That’s nearly $300B in lost productivity every year.

Wow, there's a $300B untapped market for developers who write perfectly maintainable code on the first try and produce no tech debt?


It's like saying there's a trillion dollars of untapped market if a novel writer's draft were as good as their final product.

Yes, just like anything, there is inefficiency, waste, and even some corruption, but that is the case even for building bridges or laying three-lane roads.

Maintenance is actually the most non-trivial part of any software, especially successful software, and to discount it reflects the lack of sophistication on the part of the actors who reside in C-suites.


> It's like saying there's a trillion dollars of untapped market if a novel writer's draft were as good as their final product.

I mean, if you did say that, you'd have a point. It would be smaller than claimed, but there really was an untapped market opportunity.

You can go to Amazon right now and buy cheap ebooks from self-published authors who obviously haven't had any editing done. The quality of writing in these is so poor that I've stopped being willing to buy self-published books, but the books do make sales. Completely eliminating the overhead costs of editing (because you don't do it) and publishing (because that's now cheap enough to be free) has opened up a market for low-price, low-quality stories.


>The quality of writing in these is so poor that I've stopped being willing to buy self-published...

Which is the entire point. Unlike books, software systems must continue to function and grow for years. You read a bad book and you toss it aside. You can sell bad software (which will never be fixed), but there's not really a market for it.


I'm sorry, but you are almost certain to outlive your code, even the good stuff. Even a modest book on the bestseller lists will probably see more use than any program you will ever write.


You're missing the point. Books don't require maintenance, but the vast majority of software does. It's a poor analogy.


What is your basis for this bizarre and personal statement? Code often does take on a life of its own when it's not expected. And it seems trivially obvious that a bestseller will see lots of use...


You could rephrase around movies instead of books, and then you would have happened upon the CGI market.


They're calling out debugging and refactoring? I guess to avoid debugging, you need programmers who can diagnose the problem with no evidence or data.

Not refactoring is like being a file clerk who saves time by never putting anything back.


The motivated student will likely want to read about Ignatius J. Reilly's innovative filing system [0].

0. https://en.wikipedia.org/wiki/A_Confederacy_of_Dunces


Thank you for bringing up my FAVORITE book, I had a hearty chuckle remembering that passage.


This is why code analysis tooling exists and is a profitable market.

Sarcasm aside, the report does draw a distinction between bad code, (regular) refactoring, and tech debt. I read "bad code" as the mistakes a junior might make but any reasonably experienced engineer shouldn't anymore.


Remarkably, the tools for static analysis, testing, and refactoring are still pretty rudimentary. Coming from a data/SaaS background, I'm shocked more analytics haven't been built on the troves of data available via GitHub and other open source projects.

That being said, most software today is written without the benefit of quality code analysis or regular linting. There's likely a distribution problem to be solved as well.


Truth, but even that is something that you need to let happen, then fix in code reviews, etc. You can't turn a junior developer into an experienced one without letting them code and make mistakes, which then must be fixed (hopefully before getting pushed to production).


Certainly that's true today. I wonder how feasible it would be in the future to leverage deep learning and pattern recognition AI technologies to build a code editor plug-in that would act something like an experienced developer always watching over the junior developer's shoulder and pointing out common mistakes. We have a very limited form of that already in static analysis tools but I feel like it ought to be possible to take the concept much further?


There's deep learning based program synthesis.


The average developer can do much more today than they could 10, 20, or 30 years ago. That's partly because crucial pieces of infrastructure and functionality are no longer subject to the vicissitudes of new code, but also because people are better at writing code than they were then (even if it doesn't feel that way). We collectively understand how to write software better, and some of the hardest bits are impressively robust now (like Linux and the infrastructure of the Internet).

If $300B is burned to maintain and refactor new code, that's $300B of value to capture with better tools, infrastructure, and theory. Now, if that were actually to happen, there would be an even larger sum of money burned to maintain and refactor new code, because we just all got $300B more efficient and that means more effort is spent applying software to new fields, industries, and markets which will in turn require brand new code to go through the wringer again.


> also because people are better at writing code than they were then (even if it doesn't feel that way)

Any reference or data to back up this assertion? My intuition would say otherwise, if for no other reason than that the many layers of abstraction that exist today allow a person to become productive writing code with significantly less knowledge and in a shorter time than in the past. When you combine this with a general shortage of workers in the industry, you have a huge influx of individuals less educated in the field and with less experience. With all that considered, I find it hard to think that we're any better at writing software; in fact, the abstractions that exist today are all about letting us be worse at writing code while still being productive.


> people are better at writing code than they were then (even if it doesn't feel that way).

This is far too broad.

We have better tools (compilers, static analysis, linters, etc.) to find some of the errors we make earlier in the lifecycle. These give us the ability to rapidly address categories of error that were once avoided primarily by an arduous, slow process of scrutinizing hand-written programs. Coupled with more abstract programming languages and greater computing resources, what we have today is the ability for a single developer to write incredibly complex software quickly, with bugs whose complexity increases non-linearly.


I'd be shocked if most tech debt isn't a side effect of sprints.


Can we say artificial deadlines are the root of most technical debt coming out of experienced developers who otherwise would not create significant technical debt? Most of the time I think a simple list of priorities with some milestone goals would be far superior to sprints.


I found a bug worth $300M per year in my last company. I guess that leaves 999 bugs to be found in the rest of the world.


I hope you asked for a raise :)


Actually, I didn't stay long and got a raise somewhere else. Long story short, the company was very bad and I ran away quickly.

Sadly, I never managed to get the company or anyone to publish the fix. Last I checked, the bug was still present; it has survived at least one full redesign/rewrite. The losses are well over a billion dollars by now. :D


Interested to hear how you’re estimating these amounts?


A small percentage of users were unable to log in or place any order. That's a lot of missed sales for a $10B site.


I can see a number of bad managers running with this to mean "we shouldn't spend time debugging, refactoring, or fixing bad code"

I actually expected a real potential market, silly me.


Yeah, look at all the engineering resources that can be saved by not fixing bad code.


If I only lost 3.8 hr/week for "bad code"...


Don't you know? React Emmet snippets go for $100k+ on the black market...


Haha the report includes the rhetorical question:

> How much of a priority is it for upper management to increase the productivity of its developers?

> High / medium priority 96%


This study doesn't show a $300B opportunity, it shows a $300B optics issue. What it highlights is that there's roughly a $300B gap between the recognized costs of technical projects and the fully burdened cost of technical projects.

Every single business function suffers from process inefficiency, rework needs, upkeep/maintenance needs, incremental changes/adaptations over time, and quality differences between deliverables/execution from senior vs. junior staff. The only difference between those functions and IT is that:

1. IT (and developers in particular) are incredibly expensive resources. Even junior employees here can cost more than seasoned professionals in other departments. This amplifies the scrutiny it receives.

2. IT work tends to get "projectized" more than other business units. This puts a higher level of visibility into time usage, allowing more granular accounting of where people are spending time.

3. Developers (as opposed to most business users[1]) have more exposure to "the right way"[2] to do things. This allows a level of self-awareness of process debt that leads to dissatisfaction and grumbling whenever they have to do things that wouldn't have been required if the process debt hadn't existed. Which translates into telling management that they could have done more if they had done things "the right way"[2], which management then interprets as lost productivity. But doing things differently would have necessitated a different cost outlay and timeframe to the original project, so in reality it's just shifting costs back to the project itself (or diffusing it into operational support) rather than post-project maintenance costs.

--

Process debt accumulates everywhere. Everyone knows it's harder to get things done in an enterprise company than a small, scrappy startup. That's not just because the technical systems are newer and unencumbered by legacy systems. It's because the entire business is newer and unencumbered by legacy processes, calcified skillsets and policies, or what "normal" is. You can assimilate best practices and newer approaches to things because there's no internal inertia yet on anything, which includes but is not exclusive to the technical components.

You can shift around those $300B in costs, and it's mostly an optics issue. Mitigating technical debt isn't free, and has no silver bullet. The cost is just shifted to the front end instead of the back end, in the form of operational support resources, process changes, automation, and hiring better skilled employees. It could well have ancillary benefits in the form of more execution agility over time, but from a cost perspective that $300B is just going to shift elsewhere, not go away.

Developers do themselves a disservice by derisively speaking about technical debt to managers, and pointing out that if they could have only done things differently, that technical debt wouldn't exist. It paints the perspective that the technical debt is the result of individual developers' decisions/actions, rather than a symptom of operating within the existing operational and organizational processes. If your company runs projects with a high level of rework needs and maintenance afterward, that's the result of organizational issues beyond a single developer and should be viewed as an expected outcome that rolls up into the cost of executing projects in the existing environment. It's a separate conversation about whether to invest in process and operational improvements to reduce those types of project costs.

[1] I've set up entire drip marketing programs for companies for ~$5k before, with minimal ongoing costs for maintaining it and passing through email platform costs. I've also seen instances where companies have paid $40-50k to have an agency create a single "thanks for signing up" email, and pay $40-50k to create and send a single quarterly newsletter email to less than 10k recipients. And they didn't consider that a waste, because after accounting for their accumulated internal process debt, a three month lead time and $40-50k in costs was a competitively bid rate to execute the work for them.

[2] I put it in quotes because "the right way" is usually not, in fact, an objectively correct way to do things, but rather an idealized best practice or a follow-the-bandwagon approach, adopted without any critical thought or evaluation of whether it truly makes sense for the environment in question.


Maybe "studies" such as this can be used as leverage for more time in the "design" phase of things? Design phase could arguably encapsulate choices related to tooling, appropriate staffing decisions, etc.

Knowing the rough costs of rework and bug hunting, I would hope that "studies", like the above, are taken to add more weight to the activities in software engineering, not just the programming/coding aspects.


> How many hours per week do you estimate developers at your company waste on maintenance (i.e. dealing with bad code / errors, debugging, refactoring, modifying)? 17.3 hours

How is this tracked? It looks like it's just a survey of opinions. From my data, I see the median time spent in the IDE at 11 hours, so 17 hours spent on maintenance must include a lot of non-coding tasks.


"Talented Developers working on the right things"

In my case, I hardly do any software engineering. I spend a lot of time in meetings and doing non-technical work for an army of managers, PMs, and other leaders who have 100% of their time available to spend furthering their careers.


Studies conducted or sponsored by a business with vested interests really aren't worth the paper they're printed on.

Yes, the underlying principle of improved efficiency having benefits may be true - but we knew that without needing this report.


I don't think that's quite true. Every conclusion to any question could potentially benefit some business; that doesn't make it untrue. You just have to question the motivation of the report: is it unbiased? Do the conclusions follow? How wide is the scope of the report? Etc.

It doesn't seem unreasonable that most people should outsource work to a supplier who can specialise in something but it does not, of course, answer the questions about the skills required to manage the outsourcing in some cases. For things like payments, most people outsource anyway and the domain is relatively narrow so there aren't too many gotchas for most people.


Which companies participated in the survey? Is it possible to know?


What does SDS refer to in the answer to "Which of the following technology trends, if any, are having the greatest impact on your company today?"


Refactoring is not a waste of time or lost effort; it's a vital part of the process.


The way this report is framed made me angry.

Stripe, first of all, your report misrepresents software development reality.

You present software maintenance as a massive waste of time and money. But that’s just not true. Maintenance is how software stays relevant.

Laws and regulations change, technology platforms change, features rise and fall in importance, the users themselves change over time. There are endless justifications for ongoing software maintenance.

For as long as software serves a useful purpose, it will need maintenance.

Yet you present maintenance as somehow bad, when in reality maintenance is the natural consequence and an essential aspect of software development, and it has always been so.

You’re also pandering to an ignorant attitude that believes software can be written free from bugs and perfectly formed from the outset. Thus all it takes, thinks the ignorant executive, is for my coders to be better than they are.

You highlight the cost of “bad code”.

You don’t provide a definition of bad code, so here’s mine:

Bad code is code that is hard to understand and/or hard to modify without breaking things and introducing errors.

If you know software developers, you know we are, on the whole, vocal and assertive about the avoidance and elimination of bad code. Look at what we talk about at conferences, in blog posts, and on Twitter.

Writing and learning about good code is a consuming passion for many developers. Search for “clean code” to see countless examples. How do we avoid writing bad code in the first place? That is a billion-dollar question (or $85bn as you say).

Everyone without exception agrees we should not write bad code, yet bad code persists.

Bad code is a function of ambiguous requirements, unreasonable deadlines, lack of training and support, lack of a proper testing regime, lack of appropriate project sponsorship, internal politics, lack of funding, and so on.

Yes, some engineers will at times be lazy, thoughtless, short-sighted. Just like their managers. Just like their manager's manager.

But in the round, bad code exists primarily because of human, social, and political problems we all share.

And then consider how we eliminate bad code. We do that with maintenance of the code, Stripe; this is the very thing your report damns as waste.

To maintain software, we refactor, we add tests, we discuss and debate, we tease apart and reconstruct. To the software developer, maintenance is normal and expected. It’s part of the job. You build it, then you support and maintain it.

The reason I'm so hot about this is that I know (from watching it happen again and again) how executive-level managers will interpret your report. They will think it's because their programmers are lazy, feckless, indolent, and narcissistic.

It's cognitively and politically much easier to blame bad code on an engineer's attitude than on the ecosystem in which they work. And these executive-level managers will vent their frustration on those same engineers and look for quick wins like off-shoring.

"If we’re going to suffer from bad code, at least let’s get it cheaply."

Or the exec might crank up their attitude of command-and-control to bring those apparently miscreant coders to order. Misery for the coder, and never works out well.

Stripe, your report does not help us get better. You’re throwing fuel on the fire.

(I originally posted most of this on Twitter; I wanted it here to increase the chance of someone relevant at Stripe seeing it.)


The usual manager weasel words:

"leveraging developer talent" : exploit labor without competitive pay

"Access to talent" : noncompetitive pay

"Access to software engineers" : noncompetitive pay

"Access to capital" : noncompetitive pay

"Immigration requirements" : please let us import more labor, but still keep them indentured to us


> "Immigration requirements" : please let us import more labor, but still keep them indentured to us

Not really. A points-based system, or even something that just means H-1Bs are instantly given green cards, would work for me, and I'm not even CEO.

In fact, the current annoyance is this bloody work stoppage going on regarding H-1B premium processing. It's a nightmare.



