Software Development Waste (neverworkintheory.org)
215 points by pabs3 on Aug 30, 2021 | 127 comments



Maybe this is a mix of a couple of items on the list, but when I read about the stacks in use nowadays, I wonder whether they belong in the waste bucket.

For example, there is a team I know of that is building a web application that serves maybe a couple of thousand people (if they ever acquire all of them as customers). They have spent all year building a humongous infrastructure with all the modern acronyms, and have nothing to show the clients but a bunch of K8s this and Docker that.


Meanwhile, the two interns just finished the same web app using PHP, MySQL, and jQuery, because they didn't know any better.


This is an industry-wide issue. I worked on a much larger project: in the old-school version, an SRE team of 4 managed all the infra; in the "Cloud Native" version, a team of 7 DevOps engineers spent 1.5 years building out a CI/CD setup that is still somewhat iffy.


I've experienced similar projects. Big microservices, multiple engineering teams, a full-time SRE team, etc., but it was basically a CRUD system; all the heavy lifting was done by an old mainframe black-box system, hidden away behind yet another old Java application.

It was a lot of engineers flexing. And of course, the same engineers that started it had moved on to greener pastures well before the system went live and was tested by real life and real customers.

I think I've finally grown enough of a pair to never tolerate something like that. Prove that you need microservices first. Prove that you - prove that the organization - can manage the overhead and complexity.


I feel much less like an impostor when I read this. Thanks, HN!

Is it even worth adopting CI/CD for my small craft, or am I fine sticking with push, ssh, pull, [vi config.json]? What do they actually achieve with it?

Edit: incorrect idiom


Think of it in terms of principles. Why do we do continuous delivery in the most abstract sense? To get real value out of, and production feedback from, your code as quickly as possible. Both of these always help you write higher-quality code.

The ideal is deploying every single commit. What could prevent you from doing that? Primarily two things:

1. You risk screwing the process up, incurring costs in the form of damage to customers.

2. You don't like how much work it is to deploy every single commit; the transaction cost, even when everything goes right, is too high for your personal tastes.

Continuous delivery automation tooling helps with these two issues. If they are not problems (in other words, you do things right every time and you don't mind doing it manually for every commit), then you don't have the problems the tooling would solve, so there's no reason to use it.

Continuous integration is different from continuous delivery, and more about coding patterns than automation.


You are fine with what you have. In general, fix your pains when you feel them. Not before.


If you find yourself adding errors to your code that would have been caught by some basic testing that you usually do (or want to do) but forgot or were too lazy to do, then add some tests using whatever the typical framework is for the language you are on. Maybe do some simple GitHub Actions CI - it's pretty easy and a decent skill to have regardless.

Having "unit" tests is a good habit to get into no matter what, really, but that doesn't mean you need a bit CI/CD pipeline. CD isn't really needed until the creation of your application is complex, or you have actual customers/users that depend on it.


Do push, ssh, and pull receive any parameters that may cause your program to misbehave if you get them wrong? If yes, then it's worth automating; if no, then no.

Anyway, would you call putting the entire interaction in a shell script "ci/cd"?
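Concretely, "the entire interaction in a shell script" could be as small as this (host, path, and restart command are made up):

    #!/usr/bin/env sh
    # deploy.sh -- the push/ssh/pull routine frozen into one command
    set -eu                          # stop at the first failed step
    git push origin main
    ssh deploy@app.example.com '
      cd /srv/app &&
      git pull --ff-only &&
      sudo systemctl restart app     # instead of hand-editing config.json over ssh
    '

If none of those steps take a parameter you can get wrong, the script buys you little; if they do, this humble script arguably already is the "CD" half of CI/CD.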


If yes, then you have an acknowledged risk. That doesn't immediately say you should automate it - depends on the cost and _risk introduced_ by the automation.


There is an incentive for developers to build an over-engineered solution for their CV; it feels like a principal-agent problem now.

There is then a feedback loop, as companies in turn have to require developers whose CVs feature the technology.


> Building the wrong feature or product

A bit of background: a company I worked at had one primary product with a web interface where the customers - the businesses who used the product - did all their functions. Think AWS console, but not so massive.

There was another, minor product brought in via acquisition, which had its own web interface.

A third group came into existence when my employer got acquired, which created a new web interface for its product, which did mostly the same things as the primary web interface but for a different set of customers.

These all had separate logins, so customers had to know multiple credentials. There were also several large customers of the primary product who had multiple logins for reasons which defy my full understanding to this day.

The project I was on was to bring in a system that would eliminate separate logins and provide a single sign-on system for them all. Good goal, very doable.

The VP in charge of the new group and a product manager got it into their heads that what we really needed was a single landing-page portal, a piece of software that didn't exist and wasn't necessary for the sign-on. The team wasted at least three if not six months chasing that landing-page goal before the VP decided her product needed to go live, and, oh by the way, there was no longer time to do the single sign-on before her product went to production.

Ultimately the distraction killed any chance for the company to actually implement the sign-on that the customers were clamoring for, and to my knowledge they are still limping along with every customer having at least three different logins, one for each system.


Allen Ward has a different set of wastes he finds in most types of development work. I find it interesting to compare and contrast the two.

I think Ward hits more fundamental points (e.g. this page suggests the waste lies in not consulting a domain expert, whereas Ward suggests the waste lies in the hand-off it takes to consult a domain expert in the first place, rather than have the development team become domain experts. Or when this page talks about building the wrong product, Ward talks about wishful thinking and being gut-driven rather than data-driven.)

Here are some of the wastes I remember making notes on from Ward's book. If you want more details on any, I'm happy to answer questions.

- Scatter: disruption in communication, communication barriers, changing priorities, distractions.

- Hand-off: essentially when there's a common goal, but different people are in charge of knowledge, responsibility, action, and feedback. Important information disappears into black holes during hand-offs.

- Useless information: reports, updates, analysis, etc. that are created not because they help the customer, but because they make someone feel good.

- Waiting: over-utilised resources cause queues which lengthen feedback, decrease urgency, increase variability of performance, etc.

- Wishful thinking: making gut-driven rather than data-driven decisions, planning further out than you can reliably predict, reducing uncertainty too soon.

- Discarded knowledge: spending time and effort building knowledge, but then filing it away and never reusing it.



Interesting. When I started my career as a software developer, I was all about gut-driven design/development: I did what felt good and discarded excess. After some years of professional experience I became a data-driven person. I felt as if I had the true wisdom the team needed; I discarded any comment that was not backed up by data. This stage didn't last long; it just didn't feel right (oh, the irony). Now I'm all about gut-driven development again. It's difficult to explain to other engineers with less experience why I'm like this. In fact, I believe one must become a data-driven developer before one can actually discard the approach.


Well, it's not a dichotomy.

There are some areas where humans can develop good intuition on their own. For example, I believe that with experience, you'll learn to distinguish a good variable name from a bad one. You can go with your gut when naming variables.

There are some areas where human intuition will pretty much always be very wrong. Figuring out what the customer wants is one of them. IIRC, good developers guess right about what the customer wants about 30% of the time. This is abysmal. This is where you need to let data drive your decisions.

There's lots of psychological research on this. Meehl has his classic studies on clinical versus actuarial reasoning. Kahneman has had his adversarial collaborations with that other guy who promotes intuitive decision making.

The key is to continuously test your abilities, know your limits, and not get overconfident when your gut tells you you're right.


> Kahneman has had his adversarial collaborations with that other guy who promotes intuitive decision making.

Gerd Gigerenzer, I believe. I'm with Gigerenzer, as his research tends to replicate a little better ;)


There is a dark organizational pattern that I find wasteful: when sales has complete power over engineering, sales will routinely ask for features/projects at a breakneck pace for _potential_ customers instead of having a product-based mindset for real customers. This creates a lot of death-march situations that waste human potential, end careers, and do not sell a whole lot in the end.


It's a sign of poor engineering department management, though; their most important job is to protect their employees and to say 'no' to one of the stakeholders. Because sales is just one voice of many - the actual users are another, engineering another, data and metrics another, etc.

As a manager, you need to manage resources and they are limited. You can't tell your staff to do overtime if it's not an actual emergency that affects paying customers, that's not sustainable.


I agree, but I think it can be formulated in a better way. I.e., it's not about saying "no"; it's about having a meaningful conversation about priority to reach a shared understanding with all stakeholders of why we are doing the things we are doing. Saying "no" can be a result of this process, but it doesn't have to be. Also, saying "no" without proper clarification can give you a reputation of being hard to work with, which may result in people going around you, which reduces transparency and increases politics.


Yes. But this role doesn't always exist. For whoever is in power, creating suitable counter-powers may not feel like a pressing issue.


I've seen that pattern, too, but I would generalize it a bit. When a whole bunch of individual contributors from outside teams have too much direct pull on the product development team's priorities, prioritization suffers and the development team becomes overworked. This happens when it's sales, but also support and consulting roles, or any of a large variety of internal stakeholders. They're all just different flavors of the same problem: too many cooks in the kitchen spoils the soup.

There's really only one solution I've seen that works: don't allow everyone to be a cook. Designate exactly one person to be in charge of making these sorts of decisions.


I don't work for companies where sales runs engineering. It's usually a horrible environment of code monkeys patching shit together so it barely works, with no rest at all in between tasks, because sales is clueless about what an engineer does.


Also, sales gets paid on commission, so their motivation is extra $$$. Most engineers do not get a piece of that pie, which in this case makes them suckers.


At $PREVIOUS_JOB, we in the development organization would routinely get pinged by people in the sales org with requests like "Hey, I need information on X so I can close this deal!", usually near the end of the day, often on a Friday, and always needed RIGHT NOW.

I started by countering, "Since I'm going to be working late on your project, what % of your commission are you offering?"


One similar thing that I've seen happen in game development is having a big part of (or the whole) team working exclusively on a demo or trailer for marketing purposes, for weeks / months.


I've seen documentaries where they did that, crunching to make a demo or presentation work for E3. The worst ones are pre-rendered videos of things they would like to do but, it turns out, can't deliver - famous examples are Cyberpunk and No Man's Sky.


Which is then "sold" by the project lead/sales/PR at some conference or show to great success, unaware that they are the ones trading project time for publicity. Deadlines, of course, remain fixed.


I've become unenamored of the "Measured Human" lifestyle.

This is where everything we do is metered, and our lives become "data-driven."

It does bring results, but I feel there are significant costs, usually difficult to measure.

One of the things that I will do, as I write software, is evaluate the code I'm working on, for reusability.

Often, to make something reusable, I need to avoid optimizing it for a specific application, and "generalize" it. I'll add specializations into an "adapter" or "façade" layer, over the module.

Most of my little SPM modules were developed this way. If I think it deserves it, I'll stop work on my principal project, and break out the subsystem into its own project and repo. Once I have it standalone, tested, and ready, I will re-integrate it, as an external SPM module.

In more than one instance, I have removed the code from the principal project, because it was not something that would contribute to the main goal.

But I now have this neat little SPM module, ready to be used in other projects. Here's an example of a project that never made it into any of the final results, but that I think is pretty neat[0].

Some of my modules are basically my "baseline." These are used in everything I do.

Many would consider my workstyle "wasteful," but it works for me.

[0] https://github.com/RiftValleySoftware/RVS_Spinner


> It does bring results, but I feel there are significant costs; usually difficult to measure.

This is my big gripe with the "data-driven cargo cult". It completely disregards the qualitative, and focuses solely on things that are easy to measure. The problem is that some of the most important properties are hard to measure. Optimizing on data feels good, but refusing to make decisions on critical parts of your business that are hard to measure will lead to suboptimal outcomes.


What gets measured gets done, which is awkward when the important things are hard to measure and don't get measured.

"data-informed" is preferable to data-driven, but requires more thought unfortunately.


I have seen the Lean/Six Sigma people trying to apply manufacturing principles to software, and it's the main reason I left my previous employer. For things to be measured in a way that can be compared, they need to be standardized. In manufacturing, you have a standardized product that you can improve upon; all sorts of things are repeated exactly the same way. In software, we avoid repeating ourselves. There is no process that can be measured in a similar manner.


Well...

You're right, of course. Development by definition requires novelty. If we just built the same thing over and over, we wouldn't be developing. We'd be manufacturing.

But development still does entail repetition on some level. We pull features through the implementation process, over and over again. We review code, over and over again. We hold retrospectives, over and over again. We do status updates, over and over again. We make supposedly backwards-compatible changes, over and over again. We deprecate features, over and over again.

If you look, you can find all sorts of things that repeat.

These things can actually benefit from being standardised. Standardisation does not mean preventing any future changes; it means doing things consistently, and systematically evaluating and implementing proposed improvements.

But why bother? You can't improve anything unless you can do it consistently. You avoid spending creative energy on repetitive processes, and can use it where it matters: on the actual development. You can learn faster what does and doesn't work, because everyone does the same thing on the same schedule.


A big difference between software and hardware is that Development, Manufacturing, and Distribution are pretty much all the same thing.

One person can create, duplicate, and distribute earth-shattering functionality. They can then, iteratively, improve or alter the product, almost in "live" fashion.

Manufactured goods have a lot of constraints that come from rather immutable structures; like the laws of physics. There's usually a number of parallel engineering efforts that converge, and that's before the product even rolls off the assembly line.

Once there, it still needs to be placed in position to be accessed by the end users.

Each of these steps is quite different from the others, and requires a confluence of disciplines, skills, and experience. Many of them can be subjected to measurable process. Maybe only the initial engineering step could be fairly compared to software development. Even then, prototyping introduces considerable structure.


Software throughput may not be quite so standardized, and that might prevent you from applying every nitty-gritty improvement that someone found in a manufacturing-oriented LSS implementation. But there are still things you can meaningfully quantify.

The thing that Kanban software development chooses to focus on, lead time, would be my top candidate. That seems universal. I'm guessing most others would be team-specific things that come out of a DMAIC analysis. If you've got a problem with too much rework due to team miscommunication, the specifics of how you measure that are going to depend a lot on how your team defines rework in the first place.
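Lead time is also cheap to measure. A toy sketch with GNU date (the dates are made up):

    # lead time for one work item: from "work started" to "delivered"
    start=$(date -d 2021-08-16 +%s)     # work started
    finish=$(date -d 2021-08-30 +%s)    # delivered to production
    echo "$(( (finish - start) / 86400 )) days"   # -> 14 days

Track that per item and you have a distribution to improve, without standardizing anything about how the work itself is done.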


> I have seen the Lean/Six sigma people trying to apply manufacturing principles to software

That's great. It tells you which stock to short.


There is a constant tension in product development between delivering value and creating reusable knowledge.

Creating reusable knowledge is not in itself wasteful, but neither will it in itself deliver value. (The reason we do it is because it makes value delivery more efficient in the future.)

Many people today will emphasise delivering value and sacrifice reusable knowledge. They will call it "wasteful" to create reusable knowledge, but it's not.

The trick is finding the right balance, of course.


I think we need to develop a notion of two-sided value: there is value in delivering additional profit/revenue, but there's also value in reducing costs. Creating reusable knowledge is an example of reducing (expected) cost.


I was just pontif- er...talking about this, yesterday.

I am not a fan of the "MVP" concept, as it is defined in the software startup industry. I feel that it delivers time-delay disasters.

Mostly because of the deliberate and premeditated willingness to ablate Quality in a drive for deliverables.

We get bad stuff, fast. All too often, that bad stuff forms the nucleus of the product, for the rest of its lifetime. Basically, a foundation of sand.

It's hard to argue against the "dollars and cents" rationale for it. People make money, this way; lots of money.

As long as money and profit are the main measure of success (and it can be argued that they are the only ones that matter), we will have this drive to deprecate Quality, in favor of profit.

Quality is expensive. Real expensive. I've talked about this before.

But I've worked for hardware companies for most of my life, and it's fairly rare that hardware companies can get away with it, often because of laws and regulations. Also, tooling up a product line is a major commitment; you really don't want to cheap out on that. Distribution channels are also some heavy-duty overhead.

For whatever reason, software shops believe that we are exempt from the restrictions that have governed all engineering throughout history.

In a way, this is true. Software is not hardware, and requires a drastically different approach. I always had a devil of a time convincing hardware people of this.

On the other hand, I don't like the idea that we are also exempt from the requirement to deliver a baseline level of Quality.

Personal gripe. I'm an old curmudgeon.


This is an all too common misunderstanding of what an MVP is.

A bad MVP is a bunch of features thrown together badly in the hopes that parts of it will match market demand and that the market is willing to look past the quality problems. What follows after moderate success with this approach is a long journey to try to build back in some of the quality that was avoided at the start. The end is predictable: sprawling, mediocre software. With some luck it makes money. Often it does not.

A good MVP delivers a single, deliberately chosen feature to a well-researched segment of the market. This single feature has excellent quality characteristics and blows any competition out of the water. On that success follows continued incremental, high-speed development, always with quality as a primary driver.

But it takes discipline. In order to economically deliver quality, you must do the research, and you must be willing to deliver in really small, very well chosen increments.

And that is hard. That takes excellent developers. It's easier to just throw something together and hope to get lucky.


Well, I'm not sure that I "misunderstand" "MVP," because, after the dust has settled, and the smoke has cleared, a significant percentage of what is out there, and carries the label "MVP," is duct-tape-and-baling-wire-last-checkin-five-seconds-before-ship dross.

I know what I consider "MVP" to be, ideally, and that pretty much jibes with your description.

It's just that I don't think most folks actually do it. It's like "Agile," in that respect.

As someone that practices high-Quality development, I am regularly laughed at and told that I'm "wasting my time," doing Quality coding.

Words come to mean whatever their most common usage implies, so I think that both of these wonderful words now mean something other than what they used to mean.

https://www.youtube.com/watch?v=G2y8Sx4B2Sk


To echo the peer comment, I personally don't think an MVP should be poor quality; rather, it should have fewer features. For example, it may present the end user a good experience, with well-tested code. On the other hand, it may connect to a database without a cluster setup, or setting up a new enterprise account may require someone to run some SQL. Features are missing, but the features that exist are executed well.


Correct. The Ward approach to product development accounts for this. He suggests that the organisation structure should reflect that tension. Here's how:

Functional departments and their managers are responsible for creating reusable knowledge. A project leader (chief engineer, in Toyota speak) is responsible for delivering value in a particular project.

The project leader accomplishes their goals by help from the functional departments. However, the project leader has no formal authority over the engineers in the functional departments.

In other words, there will be a constant negotiation between the project leader who argues for something to be designed for a project, and the functional department which must be convinced it's worth spending time to do that thing.

The way a Toyota executive put it was "A great car is made from lots of conflict".

I'm not doing this approach justice in my description. I suggest just reading Lean Product and Process Development by Allen Ward. Lots of good ideas in it.


The organisation structure of a car maker can be seen in a car dashboard. There is a light for the tyre pressure gauge because there is a business function for "tyres".

Changing that is hard. Tesla has a screen where software decides what lights to flash - and dollars to doughnuts it's because there weren't dozens of decades-old business departments fighting to keep control.

And the example you have above - a product manager persuading functional teams - is a separation-of-powers problem; think of the US government as a surprisingly long-lived example.

One would need a lot of top-management support to keep that from falling into one of two more stable systems: either the functional team leader tells the PMs what they are going to ask for, or the PMs get hold of budgetary control.

Either way the conflict is resolved.


It is the first-order effects we are able to measure. It is often the second- or higher-order effects that are impactful: quality, product-market fit, etc.


The study was based on patient qualitative spadework, not arbitrary metrics.


Understood, but I wasn’t talking about “arbitrary metrics.” I was talking about the concept of data-driven, feedback-generating optimization, and metrics are just a part of that.

There’s also the concept of “triaging” for “waste.”

One man’s garbage is another man’s treasure.

I have learned not to summarily dismiss a lot of practices and artifacts as “waste.”


This still feels dismissive. The categories are derived from close cooperation with experienced professionals who were reflecting on their practice and experiences.

Put another way: they weren't being data-driven optimizers. They were exercising judgement. That seems to be what you're arguing for.


Well, it looks like this is one of those places where we probably won't be seeing eye-to-eye. I wasn't being dismissive of the topic, and regret that my post made it seem so. I was just doing what we always do, here, riffing on a theme.

Not an issue.

Have a great day!


I'll do my best, I hope yours goes well.


As somebody living in "Agile", I hate retrospectives. But if our retrospectives actually went through a checklist of the things in the third column of the table, they could become really useful. Just a thought.


https://en.wikipedia.org/wiki/Lean_software_development

It's frustrating to me, because it seems to be underutilized in the industry, but lean software development (as discussed by Mary Poppendieck) has been about exactly this idea for around 20 years. She mapped the ideas from Lean onto software development equivalents and laid out what constitutes "waste" in this field.

These are exactly the kinds of things people should have been looking at in retrospectives, but the practice has become deemphasized and is instead a shortened ritual with no useful effect.


I looked at the paper, and they actually compare it to Lean and expand upon it, exactly as you say.


It is one hour to be used as a break, to vent about all the things that went wrong and that will hardly change, due to reasons.


Why do you hate them? I find them very useful. Everybody gets to voice their frustrations and what works, which we can learn from and improve upon.

What if our team has a concern that isn't mentioned in the checklist?


The recent retrospectives I had were used against me in my yearly evaluation where it was pointed out that I "complain too much" and that it "looks bad". That's about it for my involvement in this process.


Gotta love how they bank all these complaints for withdrawal at your annual bonus review instead of addressing them with you quickly and privately like an adult.


I once forgot something and several weeks later it ended up on my quarterly review. No reminder that it needed to be done (and it still needed doing) except for that one quarterly complaint.


You're attributing your hate all wrong if you blame the retrospectives for this, rather than bad management.


Eh, they are probably just focusing on what they can change. Changing managers is a hassle with a good chance of getting it wrong. I can clam up, say nothing, and work on other work during retro very easily.


That is normal management; dismissing it as "bad management" makes it sound as if it isn't widespread and as if we shouldn't base our practices around working with such managers. Managers who don't penalize you for being negative are rare, and even those who say they don't typically do it anyway.


You are really making me appreciate that my line management has never had any involvement in "scrum ceremonies" including retrospectives.


I have yet to see any of these methods applied in real working conditions. Even the best intentions about "following processes" tend to lead nowhere, or the processes just become abused through misapplication.

What works is when people share goals and collaborate.


While I really enjoy Agile and Lean development, my most recent employer had an extremely toxic culture where retrospectives were basically akin to China's Hundred Flowers period. If you voiced frustration, you provided an O(1) way for them to find whom to bully into quitting next.

My current employer just lets me do literally whatever I want. A service is basically owned by a developer. And so far my customers absolutely love my work.


Not the OP, but on most projects the "improve upon" part is mostly out of scope for the team, so it ends up being only about venting frustrations.


> which we can learn from and improve upon.

Not OP, but on my past two teams and my present team, this part has never happened. It was/is just a box-ticking exercise.


What I don't like about our version is that we only focus on a small portion of all the things in the list (from the OP), mainly the Agile procedural issues.

I think the purpose of having such a checklist is not to limit the discussion of other concerns, but rather to serve as a reminder of all the different things that can be a problem and should be discussed.


Maybe I've just been unlucky, but in my experience retrospectives are virtually identical every time: nothing happens about the frustrations, so they become a complaint session instead of something positive.


What can you do to change that?


The only thing you can do when your manager doesn't do this job is to say you will personally take on the extra responsibility, for no extra pay and no reduction in other tasks, and give most of the credit to your manager. Then, if you are lucky, all of this extra work might lead to a promotion far down the line - or it will go to the person who was more positive during meetings, making the manager like him more, and all your work fixing things will have been wasted.


Realistically, the only thing you can do in this situation is to switch teams and/or company if the problem comes from above.


I know the feeling. The goal is supposed to be to improve the process, but too often the time is spent going through the motions rather than on meaningful discussion.

In my experience, the value from retros is a function of the skill of the facilitator.

Perhaps you could be a force for change on your team, by speaking bluntly about not spending enough time on that third column. Others might be receptive to it!


Just chill out and enjoy the process. Waste happens; that's life. Americans earn huge $100k+ salaries, yet they are all on Prozac!!!


These two from the article:

> Extraneous cognitive load

> Psychological distress

Are types of stress that introduce "waste" (or rather inefficiency). The root cause? Ignoring the human factor.

Programmers, designers and other tech workers are not robots. We can't be. That's the job of the software we build.

Our well-being, happiness, and engagement matter. Ultimately we are craftspeople: we want to work with good people and take care of our tools and products, and we need time and resources to do it right. We want to be proud of what we build, or at least confident in how we build it.

Too many programmers are anxious and worried because they are not given enough time and agency. Many lack connection to the users of their products and services, so they don't see the point.

So I cannot fully agree with what you're saying. Sure, inefficiencies happen, waste is unavoidable. But the issues above are concerning.

When we do things with care and consideration, give them both enough breathing room and focused attention, then we make better things and are happier for it. Mix that with a healthy dose of pragmatism and we also avoid doing the wrong or too many things.


This is the real answer. Stop being so attached to your code. You're getting paid a good wage to write stupid code that doesn't need to exist? Just voice your concerns, and then, if those who "know better" (tech leads, VPs, directors, founders) make you write it anyway? Laugh and enjoy!


It's rarely this simple. If your team ends up wasting time thanks to poor decisions made by leadership, there is always a chance you will personally face issues because of it (this could even be losing your cushy job).


Until you're expected to work nights and weekends to meet the deadline (that doesn't really "exist" either).


No joke, taking this approach to work improved my mental health immensely. I do all the due diligence I can, but at the end of the day I have to recognise that I'm not in complete control, which is kind of liberating, actually.


You can identify problems and think of solutions without letting it frustrate you every minute of the day.


FYI, you can read about those issues in more detail on one of the authors’ (Todd Sedano) website: http://sedano.org/software-development-wastes/index


I think not investing in (or knowing about) good tooling is another sink, mainly setup and teardown.

For example, a program I recently learned about was "lerna". If I had known about it a few months ago, it would have been less painful to manage all of our libraries. Another good example is deploying to serverless backends; without the Serverless framework, this was otherwise painful. And once we find out about these tools, there's a ramp-up cost to integrate them into the existing workflows.
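For the unfamiliar, here's roughly what lerna buys you in a multi-library repo (the scoped package name is hypothetical):

    npx lerna init                            # scaffold lerna.json and packages/
    npx lerna run build                       # run each package's "build" script, respecting dependency order
    npx lerna run test --scope=@myorg/utils   # target one (hypothetical) package

The win is less any single command than not having to remember a separate ritual per library.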


I just thought about this, and it seems to be a paradox. I can't shake the feeling (since I started my career working with methods of the past) that programmers in the past were more productive than programmers today, despite all the talk about wonderful new tools and libraries.

Let's say I can make a product in a billion different ways. That means I have to make around, say, 30 decisions (log2 of that) about how to go about it. Now let's say I am more limited and can only make it in a million different ways; that means only 20 decisions have to be made. More decisions mean more overhead - everybody on the team needs to learn them, accept them, etc.

So in the past, people had a limited variety of tools, and the ways they could build SW products were more limited, but they were actually more productive because a lot of the decisions didn't have to be made. That's the paradox: we think that more specialized tooling leads to higher productivity, but in creative professions, where the job is to make decisions and physical labor is not the main constraint, it is the opposite.
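The arithmetic, for anyone who wants to check it:

    echo 'l(10^9)/l(2)' | bc -l   # log2 of a billion ~ 29.9, i.e. ~30 binary decisions
    echo 'l(10^6)/l(2)' | bc -l   # log2 of a million ~ 19.9, i.e. ~20

So shrinking the design space by a factor of a thousand removes roughly ten binary decisions, each of which the whole team would otherwise have to learn and agree on.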


> programmers in the past were more productive

I'm not sure how we'd be able to define productivity in a meaningful way to compare.

I feel like programmers create more value for business today than ever before, even if they're not necessarily writing the code creating that value.


Do we? Having a Cobol or Basic programmer write a simple 500-line goto mess for an administrative task was probably way more useful for businesses than today's "take it or leave it" approach to high-level application programming. Good luck asking for a specific ad hoc feature in those monster code bases as a lowly system user.

The bar for "creating value" is way too high today, and the simplicity is gone in favour of expensive complexity, often for no good reason.

Excel (from the user perspective) is probably the last man standing of the simple way.


In what world did programmers stop writing hacky 500-line messes to automate random tasks?

> The bar for "creating value" is way too high today

It's actually really low, and usually just starts with doing work.


> I feel like programmers create more value for business today than ever before, even if they're not necessarily writing the code creating that value.

This is true, but I don't see what tooling or productivity have to do with it. It's just that there is an ever-increasing demand for new programs.


I'm not sure what your concept of productivity is, but rather than indicating something about productivity, I think the increasing number of decisions to make actually reflects the higher levels of complexity inherent in software development.

All the simple programs have already been written, which means we can only go forward writing programs that are ever more complex, to deliver the same amount of value to our stakeholders. And this increased complexity requires slightly different skills and activities than those we learned starting our careers. In order not to break down from this complexity, we need much more powerful abstractions, and ways to combine them, than before - more complex solutions are usually also more brittle.

I think it's rather a feeling of loss of control than of loss of productivity, caused by the accelerating pace of innovation. We're all becoming more and more involved with architecting, configuring, installing, managing, and operating rather than building (or rather, building software is changing shape), and the 'building' part is where developers naturally locate 'productivity'.


Sure, you're right, but doing a website in the 80s was... nonsensical, so not needing a LESS-like preprocessor for CSS was possible.

Or profilers were not as useful when the programs were printed on punch cards.

The thing is, we're orders of magnitude more productive if you look at what results we must achieve: if IBM had wanted to publish news on a worldwide network in the past, they would have needed thousands of people and millions of dollars just to set up the infrastructure, let alone convince people to buy terminals. Now an intern can do it in a few days because of all the successive layers of tools and software.

What you describe doesn't need to happen if you are more dictatorial: allow 2 languages for production (one UI, one logic) and 1 more for back office (for statistics and reporting), and force everything into them.

Do the same for libraries, and don't do it by committee; just say that's the stack - you can do Go or Groovy or Rust elsewhere - and be done.

Then people start discussing how to solve actual problems rather than fighting the war of the stack at every meeting.


It feels more like there's a 20 year cycle - enough time for old hands to retire or move up into management so the new batch can repeat the same mistakes but with different platforms and technologies without learning what went wrong before. So 15-20 years ago people were resume-padding their way to building a small internal dashboard web app with J2EE and EJBs, now the current lot of resume-padders are doing the same with k8s and GraphQL and serverless and whatnot.


Tbh, I truly think the GAFAMs are trying to be the only entities that control the "fundamentals" of computer science.

This includes:

- locking down systems to stop people from trying to understand how it all works

- providing and actively promoting high-level tooling and platforms that allow other businesses to do their jobs without needing to understand how a computer works

- hiring at any cost any engineer with the technical capacity to create real alternative technology (profiting from his knowledge is just a good side effect of removing his free time)

I know of few companies (maybe zero?) that don't have a critical dependency on a GAFAM product, be it a framework, a platform, libraries, tools ...

I'm truly fascinated (and worried) by how you can successfully run a software company with programmers who don't even understand (or remember, because most of us were taught) how a computer actually works. And by no means am I criticizing those people; I'm totally in this group. It amazes me that some of my best, most productive coworkers have never once tried, or had the curiosity, to install some Linux distro. I'm not talking about loving it and keeping it, just having the curiosity.


I feel it's very unjust that you got downvoted so much. Whether the "lift-off" away from low-level programming is intentional or not, the effects are definitely pronounced.


I think the problem is that we're trying to fit everything into software frameworks rather than just occasionally redoing things in a simple way. Part of what I hate about Java programming is exactly the over-dependence on shitty use of Spring idiosyncrasies everywhere.


I think this falls under "extraneous cognitive load" but I could be wrong.


Consultants, here's your checklist for looking like a genius when you walk into a new org to "fix" a delivery team.


> Team churn

This is somehow nearly universally seen as a bad thing (except at Amazon) yet companies seem completely fine with it surging.


Team churn can be a good thing. When you have to constantly explain to newbies "the way we do things here" you're incentivized to keep "the way we do things here" as simple as possible. Otherwise, it can become a big list of institutional quirks and unwritten rules that serve no purpose. Lean and mean is the way.


This is actually a really complete table of issues created by engineering that harm the organization overall...


Lots of them are not created by engineering, some of them usually come from general/product/project management (wrong feature, mismanaging, rework, distress).


I think "waste" is the wrong term. If you learn from your mistake then it's not waste it's experience. It often happened to me that useless parts of one project were useful for another one. Sometimes even if the previous project was a complete failure


Notably, almost all of these relate to personnel, knowledge sharing and management. While we operate day-to-day as individuals (and see the world from our individual vantage point), the success of large-scale, long-term projects rides on team dynamics.


As a proportion of list items, sure. And in most organizations that's probably roughly proportional. But I don't think it should be dismissed that cognitive overhead and overly complex solutions are time killers.

My team is wonderful, even astoundingly so, on most of the communication points. My legacy codebase is so challenging that I probably waste 90% of my time.

This isn’t a typical situation! But I want to make sure it isn’t dismissed in a rush to wholly classify this as people problems.


Something will always be your biggest time sink. If you've got comms right then the next biggest thing will be instead.

Also, it's hard to know whether something is a "waste of time" or simply "work". I've dealt with legacy code that's been built over many years and while it's been complex it's often necessarily complex in places. Working with that isn't a waste of time. It's just the job.


I want to delete the comment you’re replying to, because I find this whole interaction unhelpful. But I’m not allowed to delete my own comments.


Either we work for the same company, or it’s more common than you think :)


I think it’s unlikely we work for the same company. I’d recognize your handle :)


The biggest wastes of my career were a couple of multi-site projects, each several years long, trying to reboot the world; as expected, they all failed due to politics and trying to fix the wrong things.

At least we all got better CVs as part of it.


> At least we all got better CVs as part of it.

I am on a team that basically just imploded after a year of chaos. I mostly got promised a better CV for work we never got to do (and never will at this point).


I would like to hear some chaos stories.


Here is the most recent one:

They fired all the product managers. All of them. At once. We call them Venture Leads, but they were effectively product managers. On Monday I am finalizing a feature for one of them and on Tuesday I have nobody to show it to. I don't even remember if I ended up pushing it.

They apparently intend to resume work on all the projects, but at a later date when they hire new product managers/venture leads.

This was not their first time firing product managers either. They fired the one for the first project I worked on.

For more, read Glassdoor.

https://www.glassdoor.ca/Reviews/AltaML-Reviews-E2534656.htm...

Why did I join this company? A year ago they had 5 stars and were highly recommended, including by the person who referred me to the company. Who was one of the product managers.


A common story was ever-shifting deadlines. We use internal budgeting for projects, which is fair enough. But nobody could ever seem to figure out how things were priced internally, which led to deadlines either falling from the sky for two days later, or becoming completely irrelevant after you had already hacked something together to meet them.

And it was all down to the difficulty of understanding how the internal billing worked.


You are not alone; it's not an easy thing.


Love Pivotal, but their software development methodology is so different from the rest of the industry. I'm not sure how much the findings of this paper generalize, since they're from a relatively non-standard dev process.


I'm an ex-Pivot. I've seen many of these identified wastes in other organisations. Pivotal was not immune to these wastes; it just had some resistance.


As a LEAN architect myself, the main problem I encounter every day is that following the principles requires actual thinking. Like:

- why do we do this in the first place? What _is_ the customer value (and what, therefore, is waste)?

- is this complicated reporting process actually needed?

- which strategic knowledge is missing so that the delivery teams can decide things on the spot?

And so on… The bigger the company, the harder it is to overcome the human issues. At least it helps that the LEAN focus is always to deliver customer value fast, with high quality, and still cheap (by eliminating everything else).


I have the same intuition.

In the end, it seems a lot of people just want well-defined rules (that work to a reasonable extent) to follow, so they can use their thinking for solving the actual problem at hand (programming).

It's not unreasonable, especially if someone is hired to be just a "coder"; there might be some hiring/expectations issues hidden in there, too...


The waste I've become most sensitized to is excessive ceremony - things that you have to keep doing but don't actually contribute much to real velocity. Bad CI/CD and review processes are good examples. Even though I'm generally a huge proponent of testing and even formal methods, I've seen so much time wasted on types of testing that consume lots of developer time (not to mention machine time) but have provably never surfaced any real bugs and couldn't reasonably be expected to.


I think one relevant discussion is here - https://news.ycombinator.com/item?id=25569148

I think as a good manager one should be able to define essential and accidental complexity.


Couldn't help sharing this here:

https://youtu.be/_NeJ3Kg6OUo Silicon valley: product and engineering


Really, thank you for sharing this post, which provides a summary of the paper. It got me excited about the paper itself, which unfortunately I can't access. So thanks.


While I agree that the results of publicly funded research should be available as open access, IEEE does not (yet) universally provide authors with that option. However, a quick Google Scholar search would have led you to online copies [1].

[1] https://scholar.google.com/scholar?&q=Software%20Development...


You can read the paper here: https://www.researchgate.net/publication/313360479_Software_...

You do not need an account, you can just download the PDF.


One of the authors also had a list of articles about this on his website, which I think might be similar to the paper content: http://sedano.org/software-development-wastes/index


Looks like it is available on Sci-Hub, or you could probably email the authors and get a copy.


You may want to read the book Lean Software Development: An Agile Toolkit, it describes waste in the same manner and offers principles to help reduce it.


What is a good example of capricious thrashing?

Is that just alternating between tasks without ever finishing anything (normal thrashing), but more of the time?


A good explanation of a couple of these terms can be found here: http://sedano.org/software-development-wastes/mismanaging-th...

"This waste is also related a tension: intransigence versus capricious trashing. Responding to change quickly is a core tenet of agile development and often thought of as the opposite of refusing to change. However, responding to change is more like a middle ground between intransigence (unreasonably refusing to change) and thrashing (changing features too often, especially arbitrarily alternating between equally good alternatives)."


And the root cause - in all cases - is "unreasonable/pointless deadlines".



