“Laws” of software estimation for complex work (2021) (mdalmijn.com)
265 points by fagnerbrack on Dec 20, 2022 | 146 comments



In software engineering you generally have a complex dependency graph to complete something that’s largely unknowable and each edge is a random variable with a high variability skewing right. If you take the joint probability that any specific and precise date is in fact accurate, it’s effectively zero and has a long tail in the “later” direction.
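A toy illustration of that claim, with invented numbers (a lognormal chosen purely as a right-skewed stand-in):

    import random

    # A chain of 10 dependent steps, each right-skewed
    # (median ~2 days per step, long tail to the right).
    totals = sorted(
        sum(random.lognormvariate(0.7, 0.6) for _ in range(10))
        for _ in range(100_000)
    )

    median = totals[len(totals) // 2]
    p90 = totals[int(0.9 * len(totals))]
    near_median = sum(abs(t - median) < 0.05 for t in totals) / len(totals)

    print(f"median {median:.1f} days, P90 {p90:.1f} days (tail skews late)")
    print(f"odds of landing within ~0.05 days of the median: {near_median:.1%}")

Even a generous window around the most likely date captures a fraction of a percent of outcomes, and the P90 sits well to the right of the median.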

However management often confuses precision for accuracy. In fact, not just management - but humans do. A precisely articulated anything, no matter how much it’s caveated, is taken as a fact. It’s precise. You can measure it. To a human that feels accurate. We feel disappointed when it’s not accurate. Even when we were warned.

This then gets into the discussion of how to estimate better. If you hold the construction above to be true, that’s futile for any effort of real complexity.

The way I lead software is I keep a tight view of “could we do better now” and “are we doing the right next thing,” and if we learn we made a mistake we suck it up, fix it if we can, and then keep rolling forward. I give out precise estimates when asked and don’t tell anyone on the team. When we fail to meet those estimates I tell my story of precision and accuracy and get back to work. At some point things work well enough that the next step isn’t worth it given how long it’s taken and we are done. Seems to work well, yields good products, executes fast, and as long as I have a thick skin for being grilled on my inaccurate yet precise estimates, it’s less stressful than the alternatives I’ve seen.


Recently I had the notion of making a variant of MS Project where the Gantt charts can have range estimates, conditionals, and general probabilistic features.

Then simply use Monte Carlo to run a few hundred thousand iterations to determine not just the estimate range (with probability curves!), but also things such as "which step is more critical to timely delivery?" and "who can't go on holidays during critical periods, and what are those periods?"

Etc...
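A minimal sketch of that idea with a hypothetical four-task plan (a real tool would add calendars, resources, and conditionals):

    import random
    from collections import Counter

    # Hypothetical plan: task -> (deps, (optimistic, most likely, pessimistic) days)
    TASKS = {
        "design":   ([], (3, 5, 10)),
        "backend":  (["design"], (5, 8, 20)),
        "frontend": (["design"], (4, 6, 15)),
        "ship":     (["backend", "frontend"], (1, 2, 5)),
    }
    ORDER = ["design", "backend", "frontend", "ship"]  # topological order

    def run_once():
        draw = {t: random.triangular(lo, hi, mode)  # args are (low, high, mode)
                for t, (_, (lo, mode, hi)) in TASKS.items()}
        finish, blame = {}, {}
        for task in ORDER:
            deps, _ = TASKS[task]
            finish[task] = max((finish[d] for d in deps), default=0.0) + draw[task]
            blame[task] = max(deps, key=finish.get) if deps else None
        path, t = [], "ship"  # walk back along the slowest dependency chain
        while t:
            path.append(t)
            t = blame[t]
        return finish["ship"], path

    durations, critical = [], Counter()
    for _ in range(50_000):
        total, path = run_once()
        durations.append(total)
        critical.update(path)

    durations.sort()
    print(f"P50 {durations[len(durations) // 2]:.1f} days, "
          f"P90 {durations[int(0.9 * len(durations))]:.1f} days")
    for task, n in critical.most_common():
        print(f"{task}: on the critical path in {n / 50_000:.0%} of runs")

The criticality percentages answer "which step is more critical to timely delivery?" directly: whoever owns the task that dominates the critical path is the one who can't go on holiday.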

Turns out that there are about a dozen such tools already, and people do use them for estimating complex projects. Think ITER and the like.

Fundamentally, it all goes back to what you said: You simply move forward every day and stop when it's "good enough". All the estimation in the world won't change the path you take. At most it'll decide if you embark on the journey, or not at all.


“Fundamentally, it all goes back to what you said: You simply move forward every day and stop when it's "good enough". All the estimation in the world won't change the path you take. At most it'll decide if you embark on the journey, or not at all.”

I often wish management would worry more about productivity vs estimates. In my company the only thing that comes from project management is the desire for better estimates. Nobody seems to really care about improving processes and staffing.


> I often wish management would worry more about productivity vs estimates.

A thousand times this. I just tried to explain to a manager overseeing developers that the minimum time for the "inner loop" of edit-build-debug is super important to productivity.

He just didn't get it, and was fine with adding multi-minute delays to that loop on his team just to avoid speaking to another manager once.


> Turns out that there are about a dozen such tools already, and people do use them for estimating complex projects. Think ITER and the like.

Can you link some? Some years ago, I spent many hours searching for exactly that, started a bunch of threads on social media, and in the end found nothing useful at all - except for one lead I accidentally discovered near the end: GERT, as in "Graphical Evaluation and Review Technique", or "let's take PERT and add proper directed graphs, and notions of conditionals and probabilities".

https://en.wikipedia.org/wiki/Graphical_Evaluation_and_Revie...

Unfortunately, I stumbled on that at the very tail end of my search, and had no time to dig deeper - but I assume if the software industry knew or cared about this, I'd have seen it mentioned earlier in any of the dozens of "project management" tools and startups I've looked at.


I created one of these tools when I interned at Crystal Ball (before it was acquired by Oracle). On the one hand, it really was as slick as you'd hope - like, since we used Project for the calculation engine, and Project integrates with Outlook's calendar, you can see that if Alice is the only one that can complete a task, that task can't be done while she's on vacation, etc.

On the other hand, I never had a chance to see it used for real (honestly, I'm not sure it ever was), but I'm not sure it would help. The results are hugely sensitive to the distributions, and the whole point is that even the distributions are difficult to estimate - you just don't know what you don't know.


> Fundamentally, it all goes back to what you said: You simply move forward every day and stop when it's "good enough".

To me this is a basic requirement of a truly agile approach, but management always turns it into "waterfall with weekly meetings".


Yeah, I was developing in the ThoughtWorks crowd in 1990s SF, and agile was a wonderful insight into how best to create high-velocity, high-quality software with high morale. I went into quant trading for a long time, which by its nature is agile, so I missed out on the process adaptation of agile. The problem came when folks added story points and burn-down charts - this invited the process people in the door to make things more predictable, which invited the management in to consume that desired predictability. My belief is software, for whatever reason, suffers from an uncertainty paradox: the act of measuring development impedes and slows development.


This is such a great précis of estimation, which I try to avoid at all costs but which people insist on me making. And like you I keep them from my team.

The only real problem is if, regardless of having a thick skin, you work for people who have little experience in the field and value planning over actual delivery. In my experience you are then well screwed.


I agree. The only solution that I have found for this problem is to take advantage of their ignorance and tell them things like "we need 2 weeks to do a hardening Sprint to protect from zero days" then you can do whatever you need to do those weeks and also upgrade a few packages.


I was explaining the project estimation process to my 8 year old (whip smart) daughter and told her about the "Estimate then add 10%" heuristic and she immediately recognized the flaw and said, yeah then add 10% to that, then 10% to that, etc...

The reality is, estimates are a business function not an engineering function.

Once you realize that, then you change your approach to: Never take on cold start projects that are business critical

Do this and you'll never have to estimate work that isn't already in some form of progress!


“Once you realize that, then you change your approach to: Never take on cold start projects that are business critical”

Unfortunately these are my favorite projects even if it’s a pain to tell management that there is no way to give them meaningful estimates. They pretty much have to agree to invest some money and then see what comes out of it.


> there is no way to give them meaningful estimates

Wrong. There is, but it would offend everyone because it ignores all the details. See my post about Thinking Fast and Slow.


Time to teach her about convergent series :-)


Seems like she independently discovered Hofstadter's Law: https://en.wikipedia.org/wiki/Hofstadter%27s_law


> However management often confuses precision for accuracy. In fact, not just management - but humans do. A precisely articulated anything, no matter how much it’s caveated, is taken as a fact. It’s precise. You can measure it. To a human that feels accurate. We feel disappointed when it’s not accurate. Even when we were warned.

Most likely, the pseudocertainty effect.

https://en.wikipedia.org/wiki/Pseudocertainty_effect


> However management often confuses precision for accuracy

One of the best tools for estimation, I've discovered, is not an estimation technique at all, but a communication technique:

Build error bars into your estimates when you deliver them. My preferred way is to give 30/60/90 estimates: I'm 30% confident it'll be done in 20 hours, 60% confident it'll be done in 45 hours, and 90% confident it'll be done in 60 hours. Deliver estimates thusly to management and they can determine how safe they want to be. If they say "great, we'll plan on 45 hours," remind them that there's nearly even odds you don't hit that. If they complain that the range is too wide ("how can it be 20-60 hours?! I need more confidence!") then you can use that as justification for exercises which increase confidence (prototyping, more detailed planning, research around any unknown systems, etc).

The exact format doesn't matter - maybe you prefer to give ranges or standard deviations or what-have-you - the important thing is to make it impossible for someone receiving your estimate to ignore the uncertainty in the numbers you give.
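If you keep a log of estimated-vs-actual hours, you can even derive the 30/60/90 multipliers from history rather than gut feel. A sketch, with an invented history:

    import statistics

    # Invented history: actual_hours / estimated_hours for past tasks.
    overrun_ratios = [0.8, 1.0, 1.1, 1.3, 1.5, 1.6, 2.0, 2.2, 2.5, 3.0]

    def confidence_estimate(gut_hours, confidence):
        """Scale a gut estimate by the historical overrun at that percentile."""
        cuts = statistics.quantiles(overrun_ratios, n=100)  # 99 percentile cut points
        return gut_hours * cuts[int(confidence * 100) - 1]

    for c in (0.3, 0.6, 0.9):
        print(f"{c:.0%} confident: done within {confidence_estimate(20, c):.0f} hours")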


The US Navy developed the program evaluation and review technique (PERT) way back in 1958 specifically to deal with that issue. It isn't perfect, but it handles complex dependency chains and skewed variability just fine.

However, PERT hasn't been widely used in software. For most programs the improvement in estimation accuracy doesn't justify the management overhead. Better to do incremental delivery and reduce scope as needed to fit the schedule.
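For reference, the heart of PERT is a three-point estimate per task; the standard formulas are:

    def pert(optimistic, most_likely, pessimistic):
        """Standard PERT beta approximation for one task's duration."""
        expected = (optimistic + 4 * most_likely + pessimistic) / 6
        std_dev = (pessimistic - optimistic) / 6
        return expected, std_dev

    e, s = pert(3, 5, 14)  # e.g. a task estimated at 3/5/14 days
    print(f"expected {e:.1f} days, sigma {s:.1f}")  # expected 6.2 days, sigma 1.8

The pessimistic tail pulls the expected value above the most likely value, which matches the right-skew observation upthread.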


>However management often confuses precision for accuracy. In fact, not just management - but humans do.

And further down the path of madness, you have humans who confuse _lack_ of precision for accuracy, e.g. astrologers who prefer to base their theories and predictions on the movement of a single planet because taking other ones into account "creates confusion".


I agree with what you said, but I am confused. How are you able to give precise estimates and what are you estimating? Milestones?


The CTO and CPO weren't estimating the time to build the feature, they were estimating the number they needed to win the business. Successfully - job done by the C-suite.

The job of the author wasn't to build the feature in the X weeks, it was to manage the customer while the project completed in whatever time it took. He was successful too, even though he doesn't seem to realise that was his primary value-add.

The customer didn't want or expect an X week delivery of specific scope, they wanted to sign on vague requirements that they could change as the project went along while keeping the vendor under pressure to deliver whatever thing they eventually found out that they wanted.

All of the hand-wringing in the article about estimation misses the point entirely.


They didn't seem to communicate to the author the critical bit of information then (i.e. "we have no intent of actually building this to these requirements in 12 weeks"). I get that there may be a business charade going on where impossible requirements and deadlines are juggled, but the key for the CTO here is not to simply pass that down to the team delivering, no matter how tempting that may be. His job is not only to get the deal, it's also to protect a functioning team.

The result of failing to do so is a team fighting uphill to deliver an impossible goal. So long as the CTO's actual plan is delivered to me, I have no problem with it. But too often the project evolves into negative pressure and "crunch time" to meet the impossible goal. Differences between actual goals and "paper goals" are blurred. If that's what's going to happen, I'd want no part of it. In fact, if the goals are sufficiently unrealistic I'd tell the CTO to clearly communicate with the customer what's realistically achievable in 12 weeks, or find another developer (Yes this has been a successful approach for 20 years and counting).


I doubt a CTO would want to document that they are lying to customers on estimates.


Yeah, instead they just say "software estimates are totally impossible to make! We can't help it!".

It is easy to make relatively accurate estimates if you want them, but there is very little demand for accurate estimates; managers want overly optimistic estimates because those are easier to sell.


Estimation is doable statistically. You can estimate 100 small things with reasonable accuracy, if you work in familiar terrain. That's why scrum and similar tend to work reasonably well for seeing if you are going to hit some iteration deadline etc.

BUT you can't estimate a huge software project up front, because you can't break it into 100 small things up front, and it's usually not familiar terrain before it has started. You might be forced to give an estimate up front in order to get a deal, but it's all business theater.
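A toy demonstration of why the statistics work out, assuming independent per-task overruns with made-up parameters: the relative error of the total shrinks roughly with the square root of the task count.

    import random

    def mean_relative_error(n_tasks, runs=10_000):
        # Each task is estimated at 5 days; actual time is a right-skewed multiple.
        # The mean of lognormal(0, 0.5) is exp(0.125) ~ 1.13, so the honest
        # estimate for the total is 5 * n_tasks * 1.13.
        estimate = 5 * n_tasks * 1.13
        errors = []
        for _ in range(runs):
            actual = sum(5 * random.lognormvariate(0, 0.5) for _ in range(n_tasks))
            errors.append(abs(actual - estimate) / estimate)
        return sum(errors) / runs

    for n in (1, 10, 100):
        print(f"{n:3d} tasks: mean relative error {mean_relative_error(n):.0%}")

One task alone is often off by 40% or more; a hundred of them together land within a few percent - provided the errors really are independent and you're in familiar terrain.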


Then I'd be fine with clear communication among all senior executives, so that HIS boss in turn doesn't believe the team is going to deliver the paper goal, and later hold the CTO to it. If there are two different sets of requirements (paper requirements and actual requirements) then it's pretty important that the whole C-suite agrees what the team will be held to delivering. Otherwise you are going to see the CEO breathing down the neck of the CTO once the project fails to meet the paper goals. Next thing you know, it's "pizza night" for the team.


It would be interesting to see how the CTO and CPO dealt with the blowout in delivery time. As they pretty much gave a made-up commitment to the customer, did they also create fiction when reporting to the CEO or the board in how their team was doing at meeting delivery revenue and containing costs?


Yes, all very true; and in some cases you may extend the client role motivation down further into their own organization to fund the project in the first place through a fractal of self-similar patterns. Looked at from deep enough in the client organization - the contractor CTO/CPO and the Client's own PMO are all just 'that project' and have the same commitment pressures.


Very well stated. My conclusion is the same: It may not have been comfortable or unambiguous every step of the way but basically all stakeholders win in this story.


Coming to “agreement” over a fiction isn’t okay. No amount of rationalization makes it “business”. More to the point, it is unnecessary. I hold there is harm here.

A different skillset is needed to negotiate from truth (“we need to confer with our building teams to estimate the full project timeline”) and still be persuasive. It’s not more difficult, and these skills are not rare. They don’t get used in some environments (not many, certainly not most) because of culture, and at the ultimate expense of the leaders that set it.


Maybe the company won the deal because the CTO and CPO said "yep - we can do that in 12 weeks" whereas the competitor said "we need to confer with our building teams to estimate the full project timeline" and promptly lost the deal...


And maybe the company will lose the next deal, or the one after that, because the customer won’t give a reference; or because they took too many shortcuts and hosed the product; or they have so much technical debt the next project fails.

That’s the problem with this kind of kick-the-can-down-the-road thinking. At some point you get to the end of the road, at which point your competitors can come and kick you.


They'll win the next one as the cowboys' teams are tied up for a year, and all the ones after that as the cowboys lose their good employees.


Honestly, I didn't even pay attention to the list. The important part of the story is that the execs 'delivered' on a promise, made the deal, and kept the customer happy.


This is a massively insightful comment, thanks. It also describes a world I never want to live in.


> a world I never want to live in

I have bad news for you, OP...


> The CTO and CPO weren't estimating the time to build the feature, they were estimating the number they needed to win the business.

There is a school of estimating called "Guess the number in the boss's mind".

A salesman's "estimate" is a number for him to discuss with customers; and it might bear on the price he decides to charge. But it's unrelated to the amount of time it takes to do the work.


You also have to take into account that often a certain level of power play is going on: those developers need to be shown they are much lower in the hierarchy than the sales people.


Totally agree.


> The job of the author wasn't to build the feature in the X weeks, it was to manage the customer while the project completed in whatever time it took. He was successful too, even though he doesn't seem to realise that was his primary value-add.

This seems unlikely, given the story as written.

Edit based on subsequent comment: nowhere is it even hinted at that the C[ET]O was giving them this as their real job.

> All of the hand-wringing in the article about estimation misses the point entirely.

There's no hand-wringing.


Your comment amounts to “you’re wrong!”, which is really hard to engage with.


> What’s even more interesting is that even though the wrong estimate wasn’t my fault, it definitely was my problem now.

As a 43yo developer, my advice is to push back on this. Make it clear from the first moment that it's their problem, not yours.

You can do this in multiple ways, like saying "You seem to have gotten yourself into trouble. Maybe next time, come to talk to us first so you don't make this huge mistake again". Don't give in to their smooth management talk, keep your ground.

They will try to get you to solve the problem, but that's impossible. Let them solve their own mistake. If it's impossible to solve, well, it's a hard lesson for them.

At least in Europe, there is a huge shortage of developers. That means your employer needs you more than you need your employer (plenty of other employers out there). So stand your ground and make sure the problem stays on their end, not yours. And keep reminding them that they have to come to you for proper estimates.

You can already do this as a single developer. "This needs to be finished by Monday, can you make sure it is done?", "I'll do my best" (never promise!)

When they do come to you for estimates, make sure you know the difference between estimates and deadlines, and assume they don't know the difference. Make sure nobody confuses your estimates for deadlines. Better yet, never let them see your estimates, and only give the deadlines.


I completely agree with you, except that I'm currently going through a situation.

The project manager has made a plan that a feature would be done by the end of the year. Two data scientists have been the only people working on it for the whole quarter. They said it's not going to be ready and they haven't even gotten it working on their local machines. The project manager turns to me and I concur with the data scientists. We had less than two weeks left when we talked, since everyone is taking time off for the holiday. I said I'm not going to work on it until it's ready and he's making out like I'm insubordinate and responsible. I honestly think I'm going to be fired.

It could be a good thing. Maybe I can work somewhere where this sort of thing isn't tolerated. I think a great many workplaces are like this though. Pushing back, even a bit, can paint a target.


You thought you were in a room with a nice friendly dog; all of a sudden the dog starts pacing and staring at you menacingly. You've found yourself in a bad situation for sure, but you should not plan on tolerating a scary dog for any extended period of time. In that situation you can exit the room, get mauled, or fight back and kill/incapacitate the dog. The grandparent's comment applies only when there is an existing power balance in the engineer's favor; if you don't put that power balance into place then their advice need not apply. Fuck your manager; if you do wind up fired, let it roll off your back - there was nothing you could do.


My advice would be to use that feeling of "I might get fired" to get you started looking for a job.

If you are fired, you'll have started the process early.

If you aren't fired, you'll have started looking for a less-abusive (and probably more lucrative) job somewhere else.

It's absolutely win-win for you.


I don't think being fired could ever be construed as a pure win, but it has accelerated my plans of looking elsewhere.


Yeah, getting fired is not a win. I meant that using that feeling to look for a job is win-win, no matter what happens at your current company.


I’m not sure of your organizational structure, and if I had to guess it probably is designed to take power away from engineers, but if you’re able to talk to a manager about your near-hostile PM it might be a good time to do so, so that someone has your back. Obvious caveat that this assumes your entire management chain isn’t complete garbage, which is a big assumption considering the story you provided.


Unfortunately, you're right and I don't really have anyone to turn to within the organization.

My manager was wrangled into this by the project manager. I explained the situation and his response was "I agree with you, but you have to do whatever the project manager says".

While that isn't word for word, I'm barely paraphrasing.


In most situations, that is when it’s time to look for a new job if you value your mental health. Nobody needs to work for a bully PM.


I'd probably worry less about getting fired. If the manager is truly concerned about how quickly the product will be delivered then – unless they are truly idiotic – they will realize that firing the engineers on the project only works against their purpose.


In this case, I think the manager cares less about the success of the project and more about their own success.

Cynically, he made me the scapegoat for his unreasonable project plan and execution.

Less cynically, he knows he could find another engineer to pick it up next quarter. I'm not that critical.


Sometimes someone wants to push a mistake they made under the rug and in doing so they will throw your efforts under it as well. I'll solve your problem, but I'm doing it loud enough that your boss will be aware of the mistake as well. (unless you have gone to bat for me previously, generic manager with the 2.5% annual raise is going to be deaf with how loudly I solve his problem)


I have (finally!) reached the point in my career where I am comfortable with responding negatively to unreasonable deadlines.

None of this, "we'll do our best" crap. The deadline was unreasonable, I will let them know it was unreasonable, the work will take as long as it takes, and if they want a good estimate they need to start with the engineers. And it's their responsibility to deal with an angry customer.

The real benefit of years of software dev experience is having enough fuck you money to not be afraid of telling your boss when they're being a shithead.


> As a 43yo developer, my advice is to push back on this. Make it clear from the first moment that it's their problem, not yours.

Yeah, you can both be positive about the fact that you will get to work on the project and deliver it, and make sure that they understand up front that the dates will almost certainly slip. And try to dodge anything that sounds like a promise for a deadline.

The reality is that they probably don't care anywhere near as much as the tone of this article is making it sound. If they can get the deal signed and get something delivered in twice the time that was agreed to, even if it isn't fully baked and is more of a beta release, they'll be happy.

And if they're not, well, then they've set you up for failure, and the worst thing they can do is fire you. As long as you understand that it is their issue, that should relieve you of your burdens. Usually, though, if you come off as a "straight shooter" who is self-confident, then they'll remember the delivery and not the slipped schedule and bugs.

What you don't want to be doing is projecting as much anxiety over deadlines and estimates as the article here does. Even if it sounds ominous that there are contractual deadlines, in reality usually neither side of the deal wants to blow up the contract if the schedule slips by 3 months (or whatever) as long it eventually gets delivered.


This is a great list. My thoughts on two of the points:

> The biggest value in estimating isn’t the estimate but to check if there is common understanding.

Wholeheartedly agree. The amount of extra detail I've captured from the team, with wildly varying estimates on a particular work item, is astounding.

It forces the team to articulate assumptions and delineate between in-scope and out-of-scope details.

This works especially well when stakeholders are in the same session, asking why a button cannot be made blue in under 5 minutes.

> Breaking all the work down to the smallest details to arrive at a better estimate means you will deliver the project later than if you hadn’t done that.

Not sure. Perhaps breaking down tasks has a negligible effect on estimation accuracy. Maybe this is true for simple environments without complex dependencies across teams.

But breaking down tasks (1) provides some certainty that you know a thing is achievable, (2) gives you a strong idea of what can be started immediately and what can be run in parallel, and (3) reveals hidden dependencies on other teams. This in turn leads to a better estimation.

I have observed many failures when management assigns a developer a vaguely specified task (contrived example: "rewrite persistence layer to talk to new database") that should really have been broken down into smaller tasks.


> It forces the team to articulate assumptions and delineate between in-scope and out-of-scope details.

The more general rule is to force people to put numbers on things.

- Not "will this be expensive?" but "how many dollars do you think this would cost?"

- Not "is this a big project?" but "how many calendar days will this take?"

- Not "will we need a small amount of paint only?" but "how many litres of paint should we buy?"

People very easily come to false agreement over vague terms and goals, but when the numbers come out it's revealed how different their opinions really are, and that's when the useful discussion about assumptions and theory start.


> But breaking down tasks provides (1) some certainty that you know a thing is achievable, (2) gives you a strong idea of what can be started immediately, and what can be run in parallel, and (3) hidden dependencies on other teams. This in turn leads to a better estimation.

I very much agree. I think the key in the OP article is "to the smallest details." In order to truly know how to build something, you need to build it in the first place. Worse, you'll end up with a load of outdated documentation that will either be abandoned, or take up more developer time to update when it inevitably conflicts with what's produced.

I tend to stop planning once I hit some "unknown horizon," meaning that any further planning is more or less built on well-intentioned speculation, or is bikeshedding at best. Past that, and I think you end up creating more problems for yourself.


Many years consulting for large custom systems for Fortune 500s etc: executives have budgets and they want to know how much they’re spending, what they’re getting, and when. They don’t want to hear about the complexities of software estimation.

I manage the problem with waterfall:

Step 1. Produce absolutely everything you know about your requirements in your own words and formats. Whatever you have, send it in. Don’t worry about structure (assuming no RFP).

Step 2. I use that as a scope of work for a contract covering the Requirements Analysis. I can basically estimate how long the RA will take based on those client inputs, which costs me almost nothing (maybe 2-3 meetings) to collect. Client pays for the RA.

Step 3. Develop the RA. I now have a concrete scope of the v1 product, a draft of the persistence model, key UI, etc, and can make a somewhat reliable delivery date.

Step 4. The RA is used as the basis of the dev contract.

Agile is nice when you’re building a technical product in house, or I guess when you meet execs willing to agree to “we’ll see what the scope and cost is as we go”, but I haven’t met them.


The Startup way: make a list of features you want to build for your next "release" / deadline. Make that really minimal: what really matters to your users? Tell people you're doing the next "version" by that date, but don't go too far into specifics if you can. E.g. you can say you'll have feature x where there are variants of that feature x1/x2/x3 of increasing difficulty.

If you need to, build prototypes of the features to get a feel for how difficult they may be. Then you've got a set of features you _think_ you can get done by the deadline.

Write down the spec where folks can see it. The deadline should be say < 8 weeks away. No scope creep in that time. Stick to what you have.

Then get to work. One rule: you will ship at that deadline. You can actually ship individual features beforehand if you do CD etc. You can use feature flags to hide the features for most users if you want to have them feel like a "version" has been released. But that deadline is set in stone and you do not move it (you can allow say a week slack to be reasonable, but probably keep that to yourself if you're a manager and only use it if you absolutely have to e.g. a key dev gets sick in the final week etc.)

As the deadline approaches, things will go wrong. Keep the deadline. Cut the spec. Firstly, cut to simpler versions of the features. Second, start dropping features.

Ship. People will be annoyed you haven't shipped feature Y that got dropped. Gauge the response. Now for your next "release", Y or Z is top of the list.

After the deadline has passed, give folks a period to recover as you decide what's next, cut them some slack and then go again.

This is the only answer I have to estimation.


> Then get to work. One rule: you will ship at that deadline.

I think the happiest moment of the past 12 months for me was starting a job as a VP Eng and asking the CEO what they wanted most from the Engineering process. It was basically what you are saying. Pick dates and ship on those dates. Cut things if necessary, but regular shipment of improvements is the goal.


There are two ways to approach the problem of inaccurate estimation:

1. Set a date and trim scope to hit it.

2. Set a scope with an understanding that the ETA is truly estimated and subject to change.

Either way, stakeholders and the team have to fully understand and buy in on the approach.

When the approach isn't clear or when you promise both scope and delivery date, it's a highway to the danger zone.


Interesting, but is the “date flexible” approach really workable? How does the rest of the business work if the deadline is May, then August, then the following year? At least with the fixed date method you know that a bunch of stuff will be shipped at that date, and you can always have an adjustable plan for eg marketing.


#1, estimates have no impact on the time it will take to complete, is not generally true. Underestimating tends to significantly increase the amount of time a project will take, since the team gets pulled in to constant fire drills, rescoping, etc to try to figure out why they are behind schedule and how to get back on schedule. Overestimating tends towards the project taking longer because less pressure allows scope creep (ie Parkinson's Law).


I think #1 should be read "whatever work ends up being done takes the time it takes, not influenced by the estimate" which is true. What you're saying is "the estimate influences what work ends up being done" which is also true.

In other words, there's no conflict there, just conditioning on different things.


I think GP's point is a bit different: that different estimates will result in a different type and a different amount of work being done for the same deliverable. This is well known when you've overestimated - cf. the Parkinson's law GP mentions - but they also have a good point about underestimates, which is something I hadn't thought of this way.

Consider an imaginary deliverable that should objectively (from God's point of view) take about 2 months to deliver, and different estimates being made and committed to in alternate realities:

- 6 months - delivered near the end of those 6 months, possibly to above average quality, with surplus time organically divided between improvements elsewhere in the project, other tasks, research, slacking off, and general waste.

- 2 months - delivered exactly on time, average quality, team fully focused on task.

- 1.5 months - delivered exactly on time, average quality, team fully focused on task, if a little tired.

- 1 month - delivered exactly on time, below-average quality, the team tired, demoralized by the anxiety and stress of meeting an unreasonable deadline.

There's indeed little difference in the work being done if the estimate is in the range of ~1.5 to 3 months. Above that, Parkinson's law will noticeably kick in. Below it, corners will be cut, people will burn out, and this unsustainable approach will start as soon as people realize the deadline is unrealistic - which may be at the very beginning.


One of the most common mismatches between estimation and execution that I have encountered I phrase as, "You never anticipate the unanticipated". During estimation people are generally optimistic and estimate assuming that they will not encounter any problems or issues along the way. That is rarely the case, and you can never assign an estimate of effort to them in advance since you don't know what those problems will be or how long they will take to resolve. You can assign a "buffer" of effort but this is just plucking numbers out of the air. I know that this is pretty obvious and basic stuff, but I'm amazed at how many people look at me like I'm crazy in planning meetings when I try to temper expectations when presented with overly optimistic schedules.


> During estimation people are generally optimistic and estimate assuming that they will not encounter any problems or issues along the way.

This is especially prevalent when people don't anticipate the overhead of the technical aspects needed to finish some business functionality.

And yet, when my estimates are large people look at me odd and try to peer pressure me into not being an outlier.

In the end, however, there's certainly a sort of multiplier for working with legacy code or overcomplicated systems.


> but this is just plucking numbers out of the air.

It doesn't have to be. You can learn to give estimations of the form "there's a 90% chance this will be done before date X", and then when you look back at your last 50 projects, roughly 45 of them indeed were done before your estimated 90th percentile point.

Granted, the process will still look like plucking numbers out of the air, but the result will be more meaningful for planning than numbers actually plucked out of the air.
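Checking your calibration afterwards is mechanical. A sketch (the log here is fabricated):

    def hit_rate(history, percentile):
        """Fraction of projects finished before their stated percentile date.
        A calibrated estimator's P90s come in on time ~90% of the time."""
        hits = [on_time for p, on_time in history if p == percentile]
        return sum(hits) / len(hits)

    # Fabricated log of (stated percentile, finished before that date?)
    log = [(0.9, True)] * 45 + [(0.9, False)] * 5
    print(f"P90 hit rate: {hit_rate(log, 0.9):.0%}")  # 90% -> well calibrated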


There’s also a great deal outside your control for making accurate estimates that is never accounted for: coworkers who don’t review your PRs, flakey build systems, slow build systems, coworkers who do review your PRs but build an enormous punch list of things that may or may not be in scope for the work, complex merge conflict resolution that requires understanding concurrent work, linters that fail your PR, etc.


Yes, a closely related issue is domain knowledge (or lack thereof). Just when you think you’ve finished the job, some non negotiable domain rule comes along and ruins the rest of your month.


This is the classic reference book on software estimation; and yes, it still applies in 2022…

Barry Boehm: https://en.m.wikipedia.org/wiki/Barry_Boehm

“Software Engineering Economics” https://archive.org/details/softwareengineer0000boeh


The reason software estimation is hard is that estimation is hard. We all imagine that our problems are unique and other disciplines don't have them. Software estimation is so special that no one outside of CS can possibly contribute to it.

In looking for software reasons why our estimates are bad, we ignore the glaring fact that estimates of any kind are often bad.

But they are. The reasons are psychological biases, and they're brilliantly explained by Daniel Kahneman in Thinking Fast and Slow. In the chapter The Outside View he explains the difference between the Inside view and the Outside view. In the outside view, you know nothing about the case except the class that it belongs to. In the inside view you know, or think you know, a lot about this particular project.

A short quote that summarizes it (his problem domain is a group writing a new textbook):

========================

The spectacular accuracy of the outside-view forecast in our problem was surely a fluke and should not count as evidence for the validity of the outside view. The argument for the outside-view should be made on general grounds: if the reference class is properly chosen, the outside-view will give an indication of where the ballpark is, and it may suggest, as it did in our case, that the inside-view forecasts are not even close to it.

===========================

Somehow, I'm reminded of the blindness that amateur pilots have about fatal crashes. One actually said to me, "I've looked at all those crashes, and I'm confident I wouldn't have made those mistakes." So for software disasters, we tend to shudder and say, "Hopefully nothing that terrible will happen to us."


> We all imagine that our problems are unique and other disciplines don't have them. Software estimation is so special that no one outside of CS can possibly contribute to it.

Software development is special in that, by virtue of it involving automation, each estimation is largely unprecedented.

If you are estimating how long it might take to build a house, you have a wealth of precedent on which to base your estimate to ensure you are at least in the ballpark.

Perhaps the house will be founded on a sinkhole and you will have to start over or there may be a lumber shortage when obtaining the supplies for the house, but you at least have a reference for a similar process that went well in the past to know what your own endeavor ought to resemble.

If you are estimating how long it might take to automate a task (presumably a task that has never before been automated, hence the need to automate it in the first place) you are, relatively speaking, blazing a new trail. It seems possible to me that "accurately estimate the time required to complete this software" is the halting problem promoted to a management position.


The corollary here is also that, if you do enough trailblazing that you start recognizing patterns that make your estimates much more accurate, it means you likely know enough to automate away the predictable portions of the process - making future deliveries faster, but again not possible to estimate accurately.

And, if those predictable parts aren't very specific to your business niche, others will discover them too, and someone will eventually automate them away in form of a library/framework, service, methodology, or something else you'll end up adopting.

Ultimately, having such a close feedback loop with automation, by virtue of being done in a virtual medium, makes the software process anti-inductive wrt. estimates. Like with the stock market, exploiting some regularity effectively makes it go away.


That's the "everything we do is a one-off" fallacy. It isn't.

I believe there is a small set of questions you could ask before starting, and that would give you the ballpark estimate. Or, "the outside-view" if you like.


This is a good point. I've had much better luck estimating using the outside view, as Kahneman calls it (or willful ignorance, as Weisberg calls it).

I think the problem is people in IT (including developers) don't have much statistical training. Taking the outside view and believing it requires faith in statistical principles.

The people I meet in our field want to hear about nice narratives and specific examples, not reference-class generalisations and probability distributions.


Well said. It's not that they can't provide estimates, it's that they don't want to.

If you read RunSet's answer above: he really believes every project is unique, and you can't estimate anything about it. That's the attitude we have here: "I'm doing something that's never been done before!"


> estimation is hard

... and pointless. Imagine a sort of strawman (but realistic) candid conversation:

"Why do you need to know how long this is going to take to deliver?"

"Because we only have a limited budget"

"Do you know how much money you have?"

"Yes"

"So if I estimate that it's going to take longer than you have budgeted...?"


Some good stuff in there, but I'd heard most of it in earlier publications. Reading Steve McConnell's "Software Estimation" covers a bunch of it, Cone of Uncertainty etc etc.

Here's where I've seen estimation become accurate:

1) The people doing the work are estimating the work, and KNOW the software base they are estimating for.

2) Technology being used is not shifting considerably for the piece being estimated.

3) Processes being used to go from requirements elicitation to acceptance are not shifting dramatically for the new piece of work.

You have to have probably 2 of these 3 to have any chance of reasonably accurate estimates. I've seen this work on a fairly large (1 MLOC C++) sonar system development. After a couple of 'late' releases, where those 3 premises were not true, estimation became better, and after 3 or 4 releases, teams were getting pretty accurate, such that customer trust went through the roof.

If you don't have 2 or 3 of those ticked off, you'd better add in a bunch of padding, or get some risk $$ from the C-suite signed off.


A lot of people in this thread seem to have a gripe with

> Breaking all the work down to the smallest details to arrive at a better estimate means you will deliver the project later than if you hadn’t done that.

I think some of those comments signal that they misunderstand the point. There are two reasons to decompose a system before starting to build it:

- To quickly eliminate solutions that are almost guaranteed not to work, and

- To find consistency boundaries allowing you to structure work efficiently.

These two things speed up development, they don't slow it down. What they have in common is that you don't need a very detailed decomposition to leverage the benefits. Usually decomposing into 5 components or fewer will get you there.

What the point in the article talks about is decomposing into the smallest details trying to produce a detailed design containing finely grained subcomponents ahead of time. That, indeed, will take more time and may not even generate a better result, as the article says.


I wouldn't disagree that decomposing a system in this way before implementation is a net positive, however I think different stakeholders view plans like these very differently. If you discover partway through development that the library you planned to use for a feature will not work, and as a result have to revisit your plan, some stakeholders see that as a failure or as delivering late, because to them the original plan was an iron-clad guarantee, whereas to developers that's just an expected part of the process where not everything can be known ahead of time.


Right, tracking success by the classic metric "percent of plan completed". That's a separate problem, I would think!


The root problem here seems to be that the CTO had no clue how long that feature would take to build; that seems like a criminal blunder for a Chief Technology Officer, who of all people should be able to estimate technology work!

This seems like the pattern in general though - successful tech companies (though most “tech” companies today are not really that) only succeed if the person at the helm has the ability to tell tech from bullshit. Or you need to have a truly trustworthy confidant whose opinions you can fully rely on (which honestly seems rare).

Without that ability you’re left either guessing or depending on the judgements of people beneath you, who almost never have a fully aligned incentive to be honest, whether deliberately or just instinctively. If anything, their incentives are often anticorrelated.

I also think this is the issue with most pharma biotechs. GSK is (was?) run by a lipstick maker. What do you think their ability will be to tell if an IND is gonna actually work? You end up having to trust the CSO, who might have more interest in making sure their drug gets to market than in its eventual success.


Some might say "Does it matter how long it takes, if 12 weeks is the bid needed to land the deal?"

To which my answer would be: yes. Just take down the sign and find something else to do if the best you can do to operate your business is a charade of moving goalposts and keeping a team in permanent crunch time.


"Program evaluation and review technique"

https://en.wikipedia.org/wiki/Program_evaluation_and_review_...

Built by smart people... for smart people. =)


The issue under discussion here seems to be whether you can even truly evaluate software work chunks accurately enough for it to matter.


With enough experience it usually breaks down into the following temporal classifications:

1. 2 hours for trivial features

2. 2 days for normal features

3. 2 weeks for performant features

4. 2 months for known problems

5. 2 years for unknown problems

6. 2 decades for hard/currently-impossible problems

7. never (*note this is the default for people who can't tell the difference)

Part of PERT is placing redundancies with statistical upper bounds on estimated complexity, i.e. if the team or technology is infeasible, it is garbage collected.

Many HR process people mistakenly assume every staff member is interchangeable/replaceable, as all tasks fall below class 4 in their mind.

Cheers =)


Amazon believes all software developers are fungible.


Agreed, the recruiter representing the hiring process at Amazon was/is very disrespectful indeed.

Some like being unencumbered legally... so much so... a few more grand for the tax man make zero difference where you work.

Amazon survives only as long as companies keep falling for the loss leader IT service models. Cloud-compatible is far wiser than a vendor lock-in that would make Oracle blush.

Best of luck =)


I would add: "It is natural for projects to appear to progress quickly at the beginning of the development cycle, since developers will seek out assignments that they understand. At some point there will be assignments that are more challenging, that are outside the primary expertise of the developers, or that take coordination with other teams. It is important for the long-term success of the project that developers not play games related to this, and specifically not to cherry-pick easier assignments to make themselves look good".


> No matter what you do, your estimates will never be accurate.

> When estimating, a different estimate usually means there is a conflicting understanding of what needs to happen.

If all estimates are inaccurate (they are), then two estimates for the same job would be expected to differ, regardless of any conflicting understandings.

The author's view seems to be that estimating is generally best not done. I had hoped there might be some tips on how to make better estimates, or how to use estimates better. Everyone hates estimating, because of the risk of being hoist by your own estimate; but sometimes estimates have to be made, and then it's best made by you, rather than by your boss or sales-dude.

A long time ago I was taught Function Point Analysis. The job is broken down into "functions", which means something like deliverables: screens, reports, inputs, processes etc. Each function scores some number of points; you just add up the points. The number of days per point depends on the coder's velocity and the estimator's bias; given a history of estimates, the estimator's bias can be calculated, and the coder's history yields his velocity. There is also scope for fudge-factors, to account for e.g. complex processing.

In addition to an estimate in days, this process also yields intelligence about your coders and your estimators.

[Edit] Part of the goal of FPA was to facilitate making estimates based on just a functional spec, with no detailed design.
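A back-of-envelope version of that arithmetic, with invented scores and factors (real FPA uses standardized weights per function type):

    # Invented function scores; real FPA weights inputs, outputs, files, etc.
    functions = {"login screen": 4, "report export": 7, "import process": 10}

    bias_correction = 1.15  # this estimator's points historically run ~15% low
    days_per_point = 1.4    # this coder's velocity, from past deliveries
    complexity_fudge = 1.2  # e.g. unusually complex processing

    raw_points = sum(functions.values())
    days = raw_points * bias_correction * days_per_point * complexity_fudge
    print(f"{raw_points} points -> {days:.0f} days")  # 21 points -> 41 days

The point is less the final number than the byproducts: a history of per-estimator bias and per-coder velocity that improves the next estimate.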


Hear! Hear! Good that someone has written this all out. I like that there is no solution/system offered. You don’t need points.

That said, an estimation style I like is asking “1h, 1d, 1w, 1m, or 1y?” to get a rough sizing. If the thing isn’t worth double or triple the estimate (oh no, it took 2 days … I said 1, dammit!) then just don’t do it!

Then stick that in a well buffered 1yr plan marked as “aspirational roadmap, not reality”


Something I learned from a principal engineer: estimates from an SDE 1, multiply by at least 3x; from an SDE 2 or SDE 3, multiply by at least 2x. And then add two to four weeks of padding. And then pray.


Have you tried applying that technique in an environment where you're expected to estimate every little bug fix? (some of which would often take less time to do than to agree on an estimate for!)


Yes, rule still applies. But I work in an area where nothing is little. It’s not just the code change, I have to think about all the changes that come along with the code fix.


If you really want to hit your goal, multiply by π, not by 3!


There are some really great insights in here, particularly #7. We've used similar heuristics to identify if we are thinking roughly the same thing, i.e. if someone's ballpark is wildly different from the norm it's great to understand why.

It could be that there is extra knowledge/context or that it's unfamiliar. Both are valuable to understand.

However I think the author is missing the point of estimates: to align multiple streams of business activity. Sure, in the example they gave it seems like there are few dependencies, but in the vast majority of situations there are.

The launch month determines things like marketing activity, budgeting, hiring, internal/external training and so on.

It's also useful for program directors/C-suite to identify critical paths across all streams and react accordingly.

Estimates are a necessary evil, but should never be treated as a commitment


This is good information but most of it is not actionable for the people building (I suppose it’s more a guide for the people who “threw it over the wall” this time).

The one that is actionable seems like it could be interpreted wrong to me…

“Breaking all the work down to the smallest details to arrive at a better estimate means you will deliver the project later than if you hadn’t done that.”

That may be true for estimates, but I seem to be most successful working in tiny pieces.

That said, I believe you can hit a date or you can hit a feature set, but you cannot do both (a slight modification on the saying that also adds money to the mix). Throwing additional money at a problem doesn’t seem to scale very well, in my experience.

It’s almost always best to hit a date and release what you’ve got. Hitting a feature set can drag and drag - in this article’s case, for more than a year.


Breaking huge tasks up is obviously necessary to make any real progress, regardless of whether you're trying to estimate anything - even if it was my job as an individual to do the whole huge task myself, I'd want it in digestible chunks that I could feel I was making progress with, but more realistically it's going to be split between several developers anyway. Determining how to do that decomposition into subtasks that are genuinely independent (and ideally can be worked on in parallel) is obviously a key challenge of large-scale software development, and there are almost never "right" answers as to the best way of doing so. Interestingly, if I've noticed anything, it's that once you split a huge task up into little ones and give each of those an estimate, the total is often far bigger than you might have considered for the original task. And it is more likely to be closer to the mark.


Although it wasn't explicitly prescribed, one action that was taken was quickly delivering some truth to the stakeholders. The customer was informed the original schedule wasn't possible and given an alternative (more realistic) timeline.

I imagine many teams won't have that access or flexibility, so delivering truth may mean first going back internally and giving the company a schedule (not just saying the original schedule won't work, which in his story only elicited "make it work").


I don’t agree that trying to estimate things is bad as the author suggests. If you are perceptive and want to get better at it, then there’s a decent chance you will. Estimates also help us timebox work. They can help us give an objective view of when we should start exploring functionality tradeoffs. This article sounds like a lot of whining about not wanting to do something because it’s difficult.

I will say I think that holding people to estimates is not fair because it doesn’t necessarily correlate with productivity/value added. When asked for estimates my general rule is: take a conservative estimate and double it.


Let's call it what it is: Guessing. Guessing can be incredibly useful, and I've never seen an engineer or PM refuse to _guess_ (or "project"), because that implies they won't be held to it.


> trying to estimate things is bad

I've never, not once, in 30 years of developing software professionally, seen a feature request that was well defined enough to estimate to any level of accuracy. The definitions are always so vague that the answer could range from "this is already done" to "this is not possible". The hard part, and what takes the most time, is always working out exactly what they want - and that's what they want (and expect) an estimate of. "Estimate how long it's going to take you to figure out what exactly it is that I'm asking for."


But then there’s Hofstadter’s law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

https://en.m.wikipedia.org/wiki/Hofstadter%27s_law


One day I'll have to attempt the 'wanna bet?' estimation process, i.e. if I get an estimate from engineers, I'll ask them how much of their own money they'll bet on it being true - and, crucially, take the bet. This should really turn the game theory switch on in some folks who are usually resigned to assuming that any form of estimation is useless.


> I'll ask them how much their own money they'll bet

In a way, they always try that - when the estimates (that they were bullied into agreeing to, not that they came up with themselves) inevitably turn out to be too low, the developers are expected to start working for free (nights and weekends) to make them accurate.


Round 1: pad all answers. It's going to take a week to make that button blue.

Your move?


You ask multiple engineers and take the offer from someone who bet a large amount of money with a low estimate of time.


All estimates get padded anyway. The estimator adds 10%, because he doesn't want to lowball himself. Then the boss adds another 10%, because all developers are optimists.

Then the sales dude just pulls a number out of his fundamental orifice.


This fits with my experience; I would include the following:

What do the developers have to learn? New technology? New business domain?

Are there new system interconnections? How many?

How involved is the 'customer'? Who is the 'customer'? For B2B or enterprise software, do you have access to people who will actually be using your software?

===

A question I've been thinking about: Code, config, build and release pipelines, tests, documentation, etc are all the work product of software development, and those can be quantified (kind of). But what is the unit of work for software developers? I think it's the "try" - as in I had to try 100 different things to get this stupid thing to build. Those 100 things may have been in 2 logical work paths or 10 or 30. One path may have had 2 tries or 50. A path may have split into 2 or 3 or 10 other sub-paths.


The most important takeaway here is that the customer is willing to negotiate timing and scope. The 12wk plan was actually 1yr and the author is still around to write this.

If your colleagues claim the sky is falling because a software development timeline was wrong, they're not cut out for software development.


Article not what you were hoping for? You probably wanted: https://en.m.wikipedia.org/wiki/List_of_eponymous_laws

Samples:

Hofstadter's law: It always takes longer than you expect, even when you take into account Hofstadter's law.

Murphy’s law: Anything that can go wrong will go wrong.

Cheops’ law: Nothing ever gets built on schedule or within budget.

Parkinson’s law: Work expands so as to fill the time available for its completion.

Brooks's law: Adding manpower to a late software project makes it later.

Segal’s law: A man with a watch knows what time it is. A man with two watches is never sure.

Vierordt's law: Retrospectively, "short" intervals of time tend to be overestimated, and "long" intervals of time tend to be underestimated.


The fundamental problem with this is that almost all managers interface with programmers through Jira tasks, Gantt charts, Scrum sprints, etc. The bad ones do so even one level down, and two levels down those artifacts are almost exclusively the only thing visible about the devs.


> If you want buy-in, and better estimates, let the people that execute the work come up with the estimates. Then if the estimates are wrong, which they invariably will be, the team can only point their fingers at themselves.

Weird advice. But if you truly want better estimates, just ask an experienced outsider:

> "A similar finding is that experienced outsiders, who know less of the details, but who have relevant memory to draw upon, are often much less optimistic and much more accurate than the actual planners and implementers."[0]

[0] - https://www.lesswrong.com/posts/CPm5LTwHrvBJCa9h5/planning-f...


My experience on large projects aligns with the overall sentiment of this article. Except

> 10. Breaking all the work down to the smallest details to arrive at a better estimate means you will deliver the project later than if you hadn’t done that.

For very large systems, demystifying what's ahead can be a huge time saver. If you march toward a near-term milestone without knowing what's around the corner, you run the risk of having to go backwards after the unknown unknowns surface.

The smallest of details may not be necessary but enough visibility to know you're headed down the right path is critical. None of this has anything to do with estimates.


I think the author is describing analysis paralysis. They're right in the sense that the most complicated work (in my experience) is often demystified with a bit of engineering over whiteboarding and what-iffing.


Software estimation doesn't work for non-trivial tasks (and trivial ones should be automated, making them non-trivial).

It's like asking a mathematician to "estimate" how long that theorem is going to take to get proven.


If it's not proved, then it isn't a theorem, it's a conjecture.


I think that was his point.


By taking this approach, the individual showed a willingness to think outside the box and take initiative without involving his superiors. It demonstrates resourcefulness, determination, and the capability to make decisions independently. He took the risk of facing consequences later if his superiors didn't approve of his actions, but it was clearly a calculated risk: he had his clients' support and permission to do whatever was necessary for the project's success.


Does anyone else on HN practice the multiply by 3 formula for estimates? I don't recall where I've come across it online first, but it has worked very reliably for me in the last 3+ years.


Depends on who I'm talking to. An estimate is not actually a number, it's a probability distribution. Few people want to hear that; they want one number to summarize it. But not everyone wants the same number.

My team wants to know the mean; my manager needs to know where the cumulative distribution reaches 90%; and marketing needs to know where the cumulative distribution reaches 99%.

In my experience, a developer's educated guess hits very close to the median completion time. Software task completion tends to be lognormally distributed, so to get the appropriate scaling factors, multiply that educated guess by 1.6, 5, or 10 respectively.
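
A quick sanity check of those multipliers, as a sketch: assume the educated guess lands on the median of a lognormal with a log-space sigma of 1 (the sigma is my assumption; nothing above fixes it). Python's stdlib is enough:

    import math
    from statistics import NormalDist

    sigma = 1.0                      # assumed log-space spread; not from the comment
    z = NormalDist()                 # standard normal, for quantiles

    mean_mult = math.exp(sigma ** 2 / 2)           # lognormal mean / median
    p90_mult = math.exp(sigma * z.inv_cdf(0.90))   # 90th percentile / median
    p99_mult = math.exp(sigma * z.inv_cdf(0.99))   # 99th percentile / median

    print(f"mean x{mean_mult:.2f}, p90 x{p90_mult:.2f}, p99 x{p99_mult:.2f}")
    # mean x1.65, p90 x3.60, p99 x10.24

With sigma = 1 the mean and 99% factors come out near the quoted 1.6 and 10; the 90% factor is more sensitive to the assumed spread, so treat the 5 as a fatter-tailed rule of thumb.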


One of my university professors advised "multiply your best-case-scenario estimate by Pi". It's still absolutely plucking numbers out of the air, but I've found it to be pretty reasonable over the last decade or so...


Using pi instead of 3 has the obvious advantage that your estimates are much more precise.


Absolutely. I find it far more accurate and when it's not, it's better to underpromise and overdeliver than to overpromise and underdeliver.


I use 4 - double and double again.

But yes, I subscribe to the approach of make your best guess, then multiply by 4.


My method is usually to ask a few people to make a quick guess, take the average, and multiply by 5. Works like a charm, but it's unfortunately not "scientific" enough, so people do story points and task breakdowns, which then also take 5 times as long as estimated before they are really finished and buttoned up.


The factor of 5 seems to be a function of how bad the engineering team is at estimating - I would guess you have to come up with this factor for each team based on past experience.


It’s also a function of the difficulty and unpredictability of the task and the overhead other functions like legal approval and mandatory documentation are causing.


> It is our expectations on timelines that create a perception of late delivery.

Important point! I like to say it is not the delivery that is late, it is the deadline that is wrong. It doesn’t change reality if the customer expected the delivery earlier, but it frames the cause better.


Gah, these articles and comment sections should come with a trigger warning for armchair speculation. I swear it's as frustrating as talking to flat earthers.

Estimation is actually well understood in academic project management. There is (Nobel Prize winning!) research about what, actually, are the problems inherent to estimation, and how to produce specific and accurate estimates despite them. This academic field is almost 50 years old, and no one who complains about estimation in blog posts or comments is aware of it.

Stop navel gazing and actually go READ about the subject. I know it's hard for us engineers to take in anything that doesn't come from StackExchange, but please try, BEFORE you write about your shitty experience with estimates and generalize to the entire problem space.

Here's what the research says:

- Humans are ALL bad at time estimation. Even the ones who consider themselves good at it, estimating tasks with which they are very familiar, "only" underestimate by 30% at best.

- Humans are pretty good at estimating non-time attributes of work, even those with a direct correlation to time, like effort, complexity, or "cups of coffee."

- If you estimate something with a time correlation (e.g. complexity) in a consistent way and measure the average throughput over time, you can very precisely and accurately estimate time to completion (a minimal simulation of this follows the list). This is the Law of Large Numbers, which is how casinos stay profitable while dealing with much more randomness than exists in software projects. It also makes your estimates include unexpected complexity, personal issues, illness, Windows updates, etc. It's a statistical law.

- The accuracy of average time estimates is proportional to the time left on the project; it runs opposite to the uncertainty of distant features. I.e. this method does not predict how much you can build in a week; for that, you're better off with your relatively intimate knowledge of the feature at that point in time and a gut check. Rather, it predicts how much you will build over 12 weeks, with extraordinary accuracy.

- Estimates are more understandable when presented with a confidence interval, e.g. "the work as we understand it today will take 8 weeks, with 95% confidence."
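
A minimal sketch of that throughput method (my own illustration, not from the research; all numbers invented): bootstrap-resample past sprint throughput until a backlog of consistently-estimated points is exhausted.

    import random

    history = [21, 13, 34, 18, 25, 16, 29, 22]  # illustrative: points done per past sprint
    remaining = 160                              # illustrative: backlog in the same units
    TRIALS = 100_000

    def sprints_to_finish():
        # Replay randomly resampled past sprints until the backlog is gone.
        done, sprints = 0, 0
        while done < remaining:
            done += random.choice(history)
            sprints += 1
        return sprints

    runs = sorted(sprints_to_finish() for _ in range(TRIALS))
    print(f"median: {runs[TRIALS // 2]} sprints; "
          f"95% confidence: within {runs[int(TRIALS * 0.95)]} sprints")

Because past throughput already contains the illness, the Windows updates, and the surprise complexity, the resampled totals do too.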

What I HAVEN'T seen in the research, but which is undoubtedly true, is that most teams violate these fundamentals and then complain that estimates are useless.

Asking your team to estimate in time units IS useless. Adding up those time estimates to create a long term plan is doubly useless. Cracking the whip on them when their estimates prove inaccurate is triply useless. And complaining about it on the Internet because you've never read any of the grown up work on the subject... well that's Hacker News.


You are incredibly overstating the efficacy of that "knowledge". It is true that we have a body of research showing that using past task performance as a guide for future estimation is better than most methods. It still has huge error bars, and much of that research wasn't on software development. Software has a rather unique property of only requiring a specific task to be done once, ever. Software development has more in common with the planning stage of other fields than it does with the actual execution of tasks in those fields.

The "Law of Large Numbers" burns people constantly, and those that rely on it fail to understand the self-similar scaling of work and the long tail of distributions.

This "grown up work" is old work that has been shown to be poorly applicable to software development, although I agree it is better than "break thing down into tasks and then use your ego to estimate time for each" which is completely useless. But that is a low bar!

Probably the best I've seen (which was built using some of the research you quote) is Three-Point Estimation (https://en.wikipedia.org/wiki/Three-point_estimation). That isn't particularly great either, but it is an OK mechanism for persuasion!
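
For reference, a sketch of the standard PERT arithmetic behind three-point estimation (these are the textbook formulas, not anything specific to the linked page; the example numbers are invented):

    def pert(optimistic, likely, pessimistic):
        # Classic PERT weighting: the most likely value gets 4x weight.
        expected = (optimistic + 4 * likely + pessimistic) / 6
        stdev = (pessimistic - optimistic) / 6  # spread of the implied beta distribution
        return expected, stdev

    e, s = pert(2, 5, 15)  # e.g. 2 days best case, 5 likely, 15 worst case
    print(f"expect {e:.1f} days, +/- {s:.1f}")  # expect 6.2 days, +/- 2.2

The asymmetric inputs are what make it persuasive: the pessimistic tail drags the expected value well above the "most likely" guess.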


> You are incredibly overstating the efficacy of that "knowledge"

He's also agreeing with almost all of the comments on here, right after saying that all the comments are wrong.


Do you have any books that you would recommend on the subject? I'm all for people being informed but I have looked for information myself sporadically over the last 10 years and it's a very sparse landscape from my point of view.


Start with Kahneman and Tversky's work on the Planning Fallacy [1], and follow the rabbit hole to Reference Class Forecasting [2]. I don't know about popular science books on project management, though, sorry. There's some good, readable material about Agile estimation as a group of practices, but you have to avoid the dogmatic stuff from so many of the specific implementations (I'm looking at you, scrum fetishists). Any of the Agile Manifesto authors are a good bet, since they've been around to watch various implementations come up. Martin Fowler, Uncle Bob, etc. Whatever you think about their code advice, their estimation advice is worth listening to.

[1] https://en.m.wikipedia.org/wiki/Planning_fallacy

[2] https://en.m.wikipedia.org/wiki/Reference_class_forecasting

Relevant papers:

Buehler, Roger; Dale Griffin; Michael Ross (1994). "Exploring the "planning fallacy": Why people underestimate their task completion times". Journal of Personality and Social Psychology. 67 (3)

Kahneman, Daniel; Tversky, Amos (1982). "Intuitive prediction: Biases and corrective procedures". In Kahneman, Daniel; Slovic, Paul; Tversky, Amos (eds.). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press.

Kahneman, Daniel; Tversky, Amos (1979). "Prospect Theory: An Analysis of Decision under Risk" (PDF). Econometrica. 47 (2)

Flyvbjerg, Bent (2006). "From Nobel Prize to Project Management: Getting Risks Right". Project Management Journal. 37 (3)

Hope this helps!


Software estimates are unavoidable. I find this approach the best: rapidlyeatimate.com. It works well to bypass anchoring and biasing by initially focusing on 'divergent thinking'. It's as accurate as any I've come across.


That seems to make the mistake of summing up the lower bounds to create a lower bound for the total time (and correspondingly summing the upper bounds to create an upper bound for the total time).

That yields a very unrealistic (as in not-relevant-for-reality) range for the total time. Am I misunderstanding something?
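
If it does sum them, that is exactly the problem: for roughly independent tasks the standard deviations add in quadrature, not linearly, so summing per-task bounds badly overstates the spread of the total. A toy illustration (all numbers invented):

    import math

    n = 20                    # 20 independent tasks
    mean, sigma = 4.0, 1.0    # per-task mean and standard deviation (days)
    low, high = mean - 2 * sigma, mean + 2 * sigma  # per-task ~95% range: 2..6

    naive = (n * low, n * high)         # (40, 120): sums the bounds
    total_sigma = math.sqrt(n) * sigma  # ~4.5, not 20: sigmas add in quadrature
    stat = (n * mean - 2 * total_sigma, n * mean + 2 * total_sigma)
    print(naive, stat)                  # (40, 120) vs roughly (71, 89)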


We used to jokingly say that whatever the estimate is, to get the actual time you double the number and increase the unit.

So if the estimate is 2 weeks, it's gonna take 4 months.

Surprisingly, the latter is a more accurate estimate most of the time.


Estimation time. How many years and millions in budget does it take to write, secure, monitor and maintain a chat API that can be used by apps with hundreds of millions of users? I know the answer to this one :)


If you have the right team maybe 10M (to pay for the SaaS you bootstrapped off) and 1 year. If you have the wrong team then God help you.


I don't like estimates, but I do like reasonable deadlines.


This article feels like it comes from someone that has never worked with a high performing team. So many excuses as to why people cannot estimate.


If you think building a house and writing complex software are in any way similar, you simply do not work on complex software, nor manage teams in this sector.


This feels naive: "Imposing estimates on others is a recipe for disaster". Estimates and deadlines are useful. They align everyone on what's required to be successful, and they enable you to collaborate with other teams. If you can't estimate whether something will take 3 months or 12, that's a problem.


When estimates come from the person/team doing the work, I might agree with you.

The crux of "imposing estimates" is that said estimate is coming from a third party that has nothing to do with the work. Most often, it's someone who's trying to land a contract by promising a new feature by some deadline they just pulled out of thin air, knowing full well that actually delivering it is going to be someone else's problem.

They got their commission, and no one's going to chew out the sales guy who landed a deal just because a few programmers whine about having to put in a few extra (unpaid, if salaried) hours.



