1. Never give an estimate in the same conversation in which it was requested, unless you're quite sure the answer is "less than an hour". Especially if they are going to turn around and quote your timeframe to a customer. The reason for this rule is:
2. The more you think about an estimate, the higher it will go. "Oh yeah and we'll also have to..." is 100x more common than "I found a shortcut that doesn't compromise on..."
3. Even if they know it's an estimate and not a deadline, they might not know the difference between "40 hours of dedicated effort" and "One week's worth of business hours". Making that distinction is just as important as the estimate/deadline distinction.
I'm lucky to work where we don't spend much effort on estimates and time tracking, both of which just make things take longer and cause productivity-reducing stress.
The distinction between labor time and calendar time is an important one! 40 hours of dedicated effort is great, but most people parse that as "1 week" when in reality the available people may only be able to get 10 or 20 hours in a week due to other demands, or it's 38 hours of dedicated time that gets done in 1 week but the 2 hours of security and legal reviews take a month of waiting to finally get completed.
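The labor-versus-calendar arithmetic can be made concrete with a small sketch (the function and all numbers here are my own illustration, not from the thread):

```python
def calendar_weeks(effort_hours, hours_per_week_available, serial_wait_weeks=0):
    """Estimate calendar duration: dedicated effort spread over the hours
    actually available each week, plus serial waiting (e.g. security and
    legal reviews) that cannot be parallelized away."""
    return effort_hours / hours_per_week_available + serial_wait_weeks

# "40 hours of dedicated effort" is one calendar week only at full availability:
print(calendar_weeks(40, 40))                        # 1.0
# ...but four weeks if the team can only dedicate 10 hours per week:
print(calendar_weeks(40, 10))                        # 4.0
# ...or five weeks when the reviews add a month of waiting:
print(calendar_weeks(38, 38, serial_wait_weeks=4))   # 5.0
```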
> The more you think about an estimate, the higher it will go
This is excellent advice. At Apsis, we have a collection of "red flags" for any project that we call out specifically as likely sources of increased complexity and therefore a higher estimate. Some of these are:
* integrations with external APIs
* existing tech from the client with no provided documentation
> 3. Even if they know it's an estimate and not a deadline, they might not know the difference between "40 hours of dedicated effort" and "One week's worth of business hours". Making that distinction is just as important as the estimate/deadline distinction.
This is the motivation for "story points", to avoid saying "hour" for the former.
(And secondarily, let the ratio of the two be discovered by analyzing previous work, in a stable team setting.)
When I switched from engineering to management, I spent a long time thinking about software estimates, read about many different approaches, and argued with many other managers (I’ll try to write up my own blog post for the next time the topic comes up on here).
The framing in this blog post is better than some (it at least discusses uncertainty) but I believe it’s still basically the wrong framework.
People (managers) like to assume that estimates are a random variable, and software projects are samples from project space. Estimates can be produced for a project by analogizing it to similar projects. This is not true at all. You cannot reason about software projects by analogy.
Some assertions (that I will explore in said promised blog post):
- Software development is chaotic: arbitrarily small changes in the input (project requirements, who’s implementing it, stakeholders, team, etc) may lead to arbitrarily large changes in time-to-completion (including “this project is now impossible”). In short, any project may explode at any moment prior to completion.
- The risk of a project is particularly sensitive to the engineer implementing it, and depends on that person’s specific knowledge. Have they used these APIs or this database before? Any part of a project that the engineer hasn’t used before represents potentially unbounded technical risk.
- The way to manage this risk is to include buffer in the project timeline, and, critically, to use that buffer not for development tasks that were harder than expected, but for re-planning and re-negotiating the deliverables with stakeholders, to un-explode the project. Agile builds this re-evaluation into the project lifecycle, which is the main (maybe only?) thing it does really right.
I agree that the points you raise are important to realise and affect delivery duration significantly.
I, however, disagree with that approach to managing delivery: if you tie the expected value to the deliverables, you empower and enable engineering teams to smartly cut scope and deliver the biggest chunk of the value in a shorter amount of time.
It's usually only described as "cutting scope", but doing that in a way that still preserves most of the original value requires real creativity and smarts on the engineering team (they can best tell what's feasible in the remaining time, once they have tackled all the unknown unknowns you bring up). If you move this to your "renegotiation" step, it becomes too slow and hurts delivery as well.
I basically agree (I think) in that I don’t think there needs to be a separate “cutting scope” phase—one should do it as soon as they realize they need to. And I certainly agree that it’s incumbent on engineering to invent alternatives and propose them when we realize that we need to cut scope (for the reasons you mentioned).
FWIW, though, engineers shouldn’t decide what to cut (i.e. choose alternatives) in a vacuum, in my experience. I’ve been in plenty of meetings with product/sales/support where they say “it’s better for the whole project to slip than to release it without this one detail” or “we would give up what seems like basic usability to get one particular piece of polish”
Yes, totally agreed that engineers should not "decide" in a vacuum.
But if they understand what value was supposed to be brought, have a decent product/customer focus, and are creative enough, they can propose good alternatives and not stall the delivery.
I've also experienced things you mention, but it was always in orgs where everything, including any small feature, was treated as a "big bet": unsupported by metrics (no matter how imperfect), but instead wishful thinking that it will bring meaningful improvement. As such, you can't come up with anything that's an equally good improvement with less effort because there is no baseline to compare against.
> I've also experienced things you mention, but it was always in orgs where everything, including any small feature, was treated as a "big bet": unsupported by metrics (no matter how imperfect), but instead wishful thinking that it will bring meaningful improvement. As such, you can't come up with anything that's an equally good improvement with less effort because there is no baseline to compare against.
Interesting, that’s exactly the situation I was in, but I never connected the lack of metrics to these kinds of requests. TIL.
I feel like I have lot to say about how this manifested. Product direction was very heavily guided by existing customers (because support could say “we have these three customers asking for X”), somewhat guided by closing deals (because sales could say “we have a $$$ deal that the customer says will close if we deliver Y”) and hardly guided at all by the broader market, because product’s suggestions could only ever be supported by speculation and vibes. But we were B2B, so I don’t even know what good metrics would’ve looked like—it’s not like we had billions of users
> The way to manage this risk is to include buffer in the project timeline, and, critically, to use that buffer not for development tasks that were harder than expected, but for re-planning and re-negotiating the deliverables with stakeholders, to un-explode the project. Agile builds this re-evaluation into the project lifecycle, which is the main (maybe only?) thing it does really right.
This is the only right answer. Unfortunately, engineering leadership is too detached from understanding how software works. They think of projects like contractor painting jobs. So easy to estimate - sq. ft * number of people * number of hours.
But software is not like that. Software is constantly changing every hour, every day. As a result, any estimate is changing every hour, every day.
The best thing is for leadership to start acknowledging the realm they are dealing with. If they can't they should step down.
But realistically, engineers should always give 400% padding with estimates. The root cause of this padding is poor leadership. It is not the engineering team's problem that detached management doesn't understand software.
It's been several years since my colleague put this post together, but I think for us it's proven to be a very constructive framework for talking to clients.
It's probably worth calling out that we are, specifically, outside contractors providing estimates for clients --- usually clients at the start of their software development journey. I think a lot of our framework holds for developers working on in-house engineering teams, but there are bound to be some differences.
That said, I'll be curious to read your post whenever it goes up. Namely because I don't know what the conclusion of your comment is. Whether it's as a vendor or as an engineer on staff, estimates are a hard requirement of software development. Most stakeholders are not developers, and you can only educate the recipients of your estimates so much on the nuances of what is and isn't hard to accommodate. I don't really disagree with any of your assertions --- but isn't the re-evaluation process simply... more estimation? Isn't accounting for project risks --- like which developer is going to pick up the task, what requirements are likely or unlikely to change --- part of producing a quality estimate?
Communicating expectations is hard, and to me, one of the defining lines between junior and senior developers is the ability to clearly account for expected risks, identify plausible but unlikely risks, and incorporate mitigation into the plan of attack.
I agree, and in general I am very pro-estimates (they switch your team from cooperative scheduling to preemptive scheduling, and they’re an important tool for managing the chaos of development), but the statement “we expect this project to take eight weeks, but it could take 24 weeks in the worst case” is one I would be very loathe to make.
Granted, I’m coming from a B2B startup rather than a consultancy, so I was working with a mixture of internal (product, sales, support) and external (customers) stakeholders. The perspective I would try to give people, though, was this:
If we discover a bomb partway through the project (oops, groups are limited to 100 members in this auth system, so we will need to find a new system and migrate or else not support groups) then 8 weeks will no longer be enough, but maybe neither is 24. The right thing for us to do in that situation, IMO, is go back to the stakeholders and align on a new plan. Maybe groups will come in v2, or some groups will work and some won’t, or maybe we do the migration and add it to the estimate, or whatever. One shouldn’t use the 16 buffer weeks between 8 and 24 to sneak in a backend migration (which itself might be a 16-week project, or might not), but to pause and re-align on a plan everyone likes.
When I was giving estimates, I would try to frame them as “we’ll spend eight weeks trying to accomplish this list of things. If we discover an issue that prevents us from finishing the list for any reason, we’ll come back to you as early as possible with some alternative proposals and re-assess”. Sometimes people felt that this meant we just weren’t willing to commit to our estimates, and this idea of “software development is chaotic” is what I would say, to explain that the issue wasn’t a lack of motivation, but an inescapable knowledge risk that needed to be managed.
Totally agree, and I think your example is a good one for demonstrating the importance of communicating an estimate as a range so that stakeholders aren’t caught off guard when something blows up, and also of distinguishing between the estimate and a commitment.
The process for what happens when something does blow up (we both know something will) is a good thing to establish, and as you say, re-communicate instead of trying to sneak in a fix.
I find your framing interesting. But most importantly, even if nothing changes in the requirements or the team, the mere new information that is learned while implementing the project is often enough to change the estimate considerably.
Yes! In my last role, I aggressively pushed the concept of “technical risk” to try and get people used to this idea.
Sometimes the computers don’t work the way you thought they did at the beginning. This is a normal and ubiquitous form of risk that just needs to be managed like any other type of risk, with prototypes, multiple revisions of the implementation (each introducing additional risk), occasionally re-scoping, etc.
People outside engineering sometimes don’t like it because the risk is human in origin—it comes from engineers not knowing things, and our job is to know things—but until there’s an omniscient engineer, this risk will continue to exist.
Oftentimes I've heard people who are not and have never been software engineers ask "When we want to build a bridge, engineers can tell how long it will take and how expensive it will be, how come with software engineers it's never the case?" (often implied: you software engineers have it easy, slackers! -- I believe we put this blame on ourselves with our immature culture, but that's another discussion)
What I usually answer is that, to begin with, if they had actually dealt with big construction projects like bridges, they would not idealize them so much. And secondly, that this is not the right analogy. Bridge construction is a much more stable science; it had no "complexity explosion". A better analogy would be geophysical prospecting: the theory it's based on is sound and mature, but the unknowns dominate everything in predicting the outcome.
If you haven’t seen it (and need something to show the “what about bridges” crew), Hillel Wayne’s “crossover project” series of posts[^1] on this is brilliant. I like to show people this quote from it:
> One person talked about how frustrating it is to start work on a bridge foundation, only to find that this particular soil freezes in a weird way that makes it liquefy too much in an earthquake. Back to the drawing board.
That's too much of a carrot IMHO: I would have appreciated it more if you dove deeper into one of those points instead of promising a blog post on a dozen.
Edit: thanks for editing the comment to make it clearer what are your points. This makes my comment somewhat useless :)
> When you're communicating an estimate the most likely mistake is that the other party considers it to be a commitment.
The root cause is typically that you're communicating it to someone who doesn't want an estimate. In that case, communicating the estimate with the commitment is often a mistake unless you're willing to negotiate the commitment, because that's effectively what it invites.
An estimate is really a probability distribution, that often resembles a Gaussian distribution and thus can be captured with two numbers: the average and the variance. Combining multiple estimates together would involve some math around these two numbers. But the surprising thing is that managers in the software industry don't really understand probabilities, and to make it understandable those managers want to reduce the model to just one number, and when this model fails to capture the probabilistic nature of the underlying process, they attempt to overrule the math with authority.
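Under that Gaussian assumption, combining independent task estimates is simple: means add, and variances (not standard deviations) add. A minimal sketch, with invented task numbers:

```python
import math

tasks = [
    # (mean days, standard deviation in days) -- illustrative numbers
    (5.0, 1.0),   # familiar backend work: narrow uncertainty
    (3.0, 2.0),   # unfamiliar API integration: wide uncertainty
    (2.0, 0.5),   # UI polish
]

mean_total = sum(m for m, _ in tasks)
sd_total = math.sqrt(sum(s * s for _, s in tasks))  # variances add, then sqrt

print(f"expected total: {mean_total:.1f} days")                    # 10.0 days
print(f"~95% upper bound: {mean_total + 2 * sd_total:.1f} days")   # 14.6 days
```

Reducing this to "10 days" throws away the 2.3-day standard deviation, which is exactly the one-number failure mode described above.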
The best estimates I’ve found are “This will take: days, weeks, months, years”. No numeric values allowed. Yes, you can’t do (inherently faulty) math on these estimates to arrive at aggregate metrics: this is a feature. However, it still allows you to meaningfully schedule work.
If pushed, I get my developers to give estimates in jumps of ~5x.
Their options are:

* 2 days
* 2 weeks
* 2 months
* 10 months
Then I triple the estimates before sharing with the business.
We don't estimate individual tickets/bugs at all, just overarching projects.
I also ask the business to estimate the commercial/user impact of the projects too, and we track and report the reality against their estimate, to hold a bit of a mirror up to them and as a way of pushing back on doing pointless work. Those estimates we use similar orders of magnitude for - £1k, £10k, £100k, £1m, £10m.
Fermi estimations like these really help avoid protracted negotiations and the lack of precision is a feature that makes clear they are estimated.
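The bucketing scheme above can be sketched as follows; the bucket values in working days, the log-scale snapping, and the function names are my own assumptions, not the commenter's process:

```python
import math

BUCKETS_DAYS = [2, 10, 44, 220]  # ~2 days, 2 weeks, 2 months, 10 months

def snap_to_bucket(raw_days):
    """Snap a raw estimate to the nearest allowed bucket on a log scale
    (appropriate since the buckets jump by roughly 5x)."""
    return min(BUCKETS_DAYS, key=lambda b: abs(math.log(b) - math.log(raw_days)))

def shared_with_business(raw_days):
    """The figure the business actually sees: tripled for buffer."""
    return 3 * snap_to_bucket(raw_days)

print(snap_to_bucket(7))          # 10 -- a "2 weeks" project
print(shared_with_business(7))    # 30 working days quoted outward
```

The coarse buckets do the same job as the "days, weeks, months, years" scheme: they make it impossible to mistake the number for a precise commitment.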
What is an estimate for? It is really an estimate of cost, where the big-O term is mostly developer cost, and that gives a handle on the next part: priority.
Both of these then add up to an illusion of control.
Software is quickly becoming the largest moving part of many many organisations - and trying to control it is likely the wrong approach
Software is a form of capital just like a robot on the Tesla assembly line. That robot and the software being estimated in the article are to do a specific job in a specific environment- an investment that will enable more production than without.
But all capital rots - or rather depreciates. Taking a nice view you can have 10% on maintenance and repairs (but realistically more than that).
Don’t try to estimate these - don’t ask your handyman for a Gantt chart for a hundred tiny fixes - just get them done.
I think you're identifying something adjacent but materially different: an attempt to estimate and wrangle ongoing maintenance, rather than an estimate for an initial development effort.
It's worth calling out that we (Apsis) are a software development agency; our estimates are for our clients, usually for greenfield work, and usually are part of the process of bidding for projects. A very different situation from an engineering team in year 5 of production deployment.
That said, even as a member of an internal engineering team, you'll need to coordinate work with the other areas of the business: you can't maneuver a large organization with no calendar for when new features, new products, etc., will become available. Estimates are an inherent part of engineering interacting with the rest of an organization, and I think this framework is still applicable.
So I think the idea of “you cannot maneuver a large org without a calendar” goes back to Tom DeMarco and risk management as the soul of project management.
If you run a company and ask me for an estimate on when software will be ready because the rest of the company depends on it, then what you are doing is making a huge bet on the software team delivering to time and quality (and cost!). Now that’s just hoping.
There is plenty to do to fix it (reduce scope, improve team dynamics and psychological safety, dual delivery teams etc). But in the end it’s a bet, a guess.
Now perhaps there are better ways. I would suggest we look twice at using calendars and deadlines as ways of achieving co-ordination - software is waaay better at co-ordination than people and calendars are.
Adjust the business model, early releases for special clients - these can be non-software approaches to risk mitigation, but we find the tech solutions more fun.
> If you run a company and ask me for an estimate on when software will be ready because the rest of the company depends on it
This is only one framing. The rest of the company might not depend on it, but they also have work to do that might be best done in parallel instead of serially.
A really basic example: I’m the CEO, and I’m speaking at some industry event. I want to know if feature X will be ready so I can announce it at the event, or whether we should prepare something else. Engineering is going to need to back into an estimate so that the rest of the org can prepare for a calendar date — marketing, sales, whoever is writing the presentation, whoever is printing the materials for our booth. This is a totally reasonable ask of an engineering org without it being as high stakes as “this is a massive bet on our ability to estimate software timelines”
Other parts of the org also have deliverables, and ensuring those deliverables can be coordinated to land at roughly predictable times is the job of management. It seems unfathomably crazy to imply that software is some magical discipline that is immune from having to set expectations.
If the CEO is announcing it at a conference, it’s a top line feature that will / should move the needle for the company - ie this is a sizeable bet for the company. Otherwise no-one is listening to the CEO or she is talking at the wrong conference.
If it’s a commitment of that size then “here’s a date, rely on it” is terrible management. Yes of course other things need to be co-ordinated
My argument is three fold
Let’s imagine the company offers an app to record activity on a building site we are adding a photo feature where materials are scanned situ by CV - improving compliance, reducing fraud. It’s worth millions.
1. Co-ordinate with software. What levels of reliability do we want, what milestones will we hit and then turn on certain other activities in the company. So set up tests or milestones.
2. What’s the idea that software is some distinct part of a company? It’s everything, everywhere. If your marketing team and accounts team and logistics are not up to their eyeballs in software, something’s wrong.
3. The last point - management has a job. I think we over-play what management can and cannot do. I believe management is being disaggregated by software - and far too often the co-ordination job is a political compromise, not a clear directive.
Edit: I have played with deleting this but it’s a half formed thought - but I am writing a book where the whole software / management idea is being expanded. Maybe it’s only going to stay half formed - but I hope not
Generally good advice; however, I think the actual example phrases could still be clarified, e.g. by explaining why clear estimates in software engineering are not as common as in other fields — note the person you are talking to might have a completely different experience in their field of work. And then you give your best guess and a clear explanation of your level of confidence, and which factors could lead to shorter or longer project durations.
A more interesting question is how you follow up on your estimates? How can you improve if you don't track and reflect on the actual result on what was estimated?
Fortunately for us, we have no choice! We (Apsis Labs) are a dev agency for hire: if our estimates are wrong, we lose our clients, and then we don't have money and then we all die of starvation.
Being good at estimating our work, and then communicating those estimates and expectations to our clients, is how we've stayed fed for 12 years.
Combining software time/cost estimates with value estimates makes it even more fun, because most software is low value (with some chance of being extremely valuable). Combining these into an ROI probability space makes everybody sad, but the expected value is still positive even if the most likely outcome is software that wasn't really worth paying for!
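A toy Monte Carlo makes the point concrete (the cost and the two value distributions below are entirely invented for illustration): most simulated projects lose money, yet the expected net value is positive because of the rare big wins.

```python
import random

random.seed(0)
COST = 100_000  # fixed development cost (illustrative)

def project_value():
    # 90% of the time: modest value; 10% of the time: a big win.
    if random.random() < 0.9:
        return random.uniform(0, 80_000)
    return random.uniform(500_000, 2_000_000)

outcomes = [project_value() - COST for _ in range(100_000)]
mean_net = sum(outcomes) / len(outcomes)
loss_rate = sum(o < 0 for o in outcomes) / len(outcomes)

print(f"mean net value: {mean_net:,.0f}")          # positive on average
print(f"fraction losing money: {loss_rate:.0%}")   # yet most projects lose
```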
Having run a software team with good estimates, I’m disappointed in some of the commentary of, “just make up a long or obscure estimate.” I don’t expect a considerable amount of unknowns to come up in a project unless you have changing requirements or bad planning.
You can’t do anything about changing requirements so that can be forgiven. Poor planning cannot though.
Unreasonable stakeholders will absolutely hold you accountable to not meeting a schedule even when they're constantly changing requirements. And they can and will throw you under the bus for it.
You are right though, that a decent team that does decent planning should be fairly accurate much of the time. And there are project management techniques to handle the remaining uncertainty.
In practice, if you try to change the estimate, you'll be shot down and asked to work harder. The people changing the requirements assume that software is infinitely malleable and any change is doable with a small amount of effort. After all the devs are smart so they'll figure it out, like they always do. To these people, asking for a change in estimate is indistinguishable from complaining or being lazy.
I think these comments come from a very naive, engineering-centric perspective. Good estimates aren't short estimates, nor are they estimates which keep management off your back. Good estimates are accurate estimates because they allow the rest of the organization to properly plan around engineering's deliverables, to properly staff and resource the engineering team, and to manage customer and client expectations.
Arbitrary big estimates that are lies constructed to give ourselves buffer time are lazy and lead to mistrust between engineering and the rest of the organization.
Even worse in our case (devs for hire), they lead to lost clients and lost revenue. Clear communication around capabilities, expectations, and deliverables has been absolutely essential to our company staying afloat for the past 12 years.
It usually centers on whether changing requirements are due to poor planning or not.
I don't necessarily think it is, and I believe it's easily fixed by keeping the reason (expected value) for anything close to any requirements being passed down to engineering teams.
This article describes some challenges and pitfalls of communicating estimates but doesn’t offer much actual advice.
In my experience if you’re talking about risk distributions or hand wringing over the difference between an estimate and a commitment, you’re already on the defensive and not doing your reputation any favors.
The better approach is to zoom out and understand the business or product goals, including constraints felt by your stakeholders, and communicate in their terms. Speak confidently to what you can do, with appropriate footnotes for limitations and risks. Even in a low trust environment you don’t succeed by focusing on the negative (though you might want to keep a paper trail), you succeed by showing what you can achieve.