Speaking as a senior engineer myself, I disagree because time estimation in software is not a useful activity. The only thing it helps is political manipulation in a layer of management above you. Engineering work takes as long as it takes, and there are often unknown blockers or surprise aspects of a problem that make it far more difficult than expected, so frequently that the initial estimate has no value to anyone, not even in a rough sense like, "will it take 2 hours or 30 hours?" In my experience, a task estimated at 2 hours is just as likely to take 30 hours as to actually take 2 hours, and there is no systematic way to know which case you're in.
Regular check-ins to catch blocking issues early are much more important, so teams should not waste time making estimates or tracking velocity. It's pure junk. Instead, start working and meet often to detect blocking issues as you go.
> I disagree because time estimation in software is not a useful activity.
Anyone who can say this has the privilege of being fully insulated from revenue generation. But at some level of the company, resourcing decisions are made, and they depend on understanding the costs and benefits of various tasks. If you are deciding between having your engineering team build an Android app and having it add an API integration, you need to understand how much revenue each will bring you and how much each will cost, and that’s where estimation comes into play.
Man-months are very non-mythical when it comes time to write paychecks.
What you've said is true in the same way that, if you could just estimate the winning lottery numbers for me, we could increase revenue drastically.
Even though it would be incredibly useful to have, it is effectively impossible to give an accurate value other than for trivial or massively constrained problems.
Even insulated from revenue generation, things have to get done and you need to let people know when that will happen. Estimation is difficult, yes, but it's vital for working in teams of more than one person.
> “Anyone who can say this has the privilege of being fully insulated from revenue generation.”
You are incorrect. I work in a directly client-facing capacity and often have phone calls to help our actual clients and their product managers, and I have many internal stakeholders for my team’s work that are sales-facing and client-facing. Most of the quarterly planning meetings I am required to give input into are directly focused on revenue generation.
Because software velocity estimates have no correlation with the actual delivery timeline, yet they will be used for political bikeshedding by people who don’t know the technical details, it is exactly in revenue-critical situations that you want to drop the pretense of estimation and admit the truth: you have to simply measure by doing and frequently report blockers.
Usually it is sociological, and has little to do with good or bad architecture. The more surprising blockers tend to happen when someone on another team can act as a gatekeeper to a resource you need, like permission to make a change, and uses this blocking for some political purpose.
This can cause dead-simple engineering tasks to take weeks, during which you never know how much longer you’ll need to wait. And depending on the political capital of the party blocking you, you may not even be allowed to publicly explain that they are blocking you, so you are forced to absorb the negative externalities of their choice to block you.
Estimation is NOT commitment. Committing to estimates paves your road with good intentions (and leads to hell). Don't commit to estimates!!
Estimation is EXTREMELY valuable to help the business get a sense of engineering capacity. You can't tell the business, "we can't launch a Facebook competitor in a week of effort." No shit, Sherlock. So what can engineering work on next? What can the business ask engineering to prioritize that's chopped up into a small enough piece that it can work its way through development and into production in a more or less predictable manner, while still being large enough to have demonstrable business value?
Orgs that can't produce reliable software estimates suffer from one of the following: unreliable infrastructure (can't deliver new builds if your build system is down), insurmountable technical debt (can't reliably and quickly roll out requested changes if you don't have automated integration testing), bus factor 1 (look who decided this would be a great week to be in a car accident! /sarcasm), or a lack of senior technical leadership involved in planning / chopping up tasks.
I hate to break it to people, but those are all pretty much fixable. You can have highly-available tooling. You can have competent technical leadership that balances technical debt and creates clear task work for engineers. You can have team leadership that prioritizes getting information out of team member heads and into source code or wiki (when appropriate), and cross-training. The fact that most organizations fail at best practice does not mean that best practice is inaccessible.
Estimates rarely correlate with engineering capacity. Even so, they _are_ treated as commitments any time it’s politically convenient for someone to treat them that way, regardless of how publicly you might have qualified your estimate as not a commitment.
Also, these types of estimates are basically misleading without some form of error bars, yet nobody ever incorporates them. Capacity is not a single number but a whole distribution of possible numbers, for which the mean might not even be a relevant value (for example, if the distribution has several sharp modes that depend on discrete external events).
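To make the error-bars point concrete, here's a minimal Python sketch of the multimodal case (all probabilities and hours are invented for illustration): a task that usually takes ~2 hours but sometimes hits an external blocker and takes ~30. The mean lands in a region where outcomes essentially never fall.

```python
import random

# Hypothetical bimodal task duration, purely illustrative:
# ~2h on the smooth path, ~30h when an external blocker appears.
def sample_duration():
    if random.random() < 0.3:       # assume a 30% chance of hitting a blocker
        return random.gauss(30, 4)  # blocked: roughly 30 hours
    return random.gauss(2, 0.5)     # smooth path: roughly 2 hours

samples = [sample_duration() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"mean: {mean:.1f}h")  # about 10.4h

# Almost no individual outcome lands anywhere near that mean.
near_mean = sum(1 for s in samples if 8 <= s <= 13) / len(samples)
print(f"outcomes in the 8-13h band: {near_mean:.2%}")  # essentially 0%
```

A single reported number collapses exactly this structure: the "expected" ~10 hours is a duration that almost never actually happens.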
I agree that estimating time of individual tasks is unhelpful, which is why story points as abstract units of complexity are, in my experience, more useful measures.
That said, I've always been able to provide fairly good estimates for how long work will take, something that I got a reputation for doing well.
In my experience, that's not really true; I find the value of time estimation comes from actually scoping the work again and probing for traps or pieces of complexity that weren't obvious in the first estimation passes. It's not true that software takes as long as it takes. Software projects expand to fill the amount of time allotted. Holding yourself to deadlines creates room for compromises and creativity.
> Holding yourself to deadlines creates room for compromises and creativity.
In other words, when it turns out the estimate was too optimistic, one can either:
- allocate more resources to the problem,
- make compromises about quality of work,
- miss the deadline, or
- adjust scope.
The first solution is rarely possible (or doesn't help, because new hires take time to be productive), and the second has unwanted side effects, so the last two are the best options... Pick your poison, I guess.
- (1) More resources don't always make things go faster.
- (2) Compromising does not always mean compromising the quality of the work. You can always cut scope, as you mention in (4), and ship fewer features at a higher level of quality. Although what I was getting at was more that, when given time, we have a tendency to go about refactoring and rearchitecting things that don't necessarily need it, or introducing unnecessary / premature abstractions and optimizations. Deadlines tend to curb the instinct to do that.
- (3) You can miss deadlines, but that's not a clean win either. It hurts morale, allows for feature creep (oh, you're not shipping next week? Well boy, do I have some extra things you could do with your newfound time), and hurts relationships with teams all across the company that depend on what you're setting out to deliver.
- (4) Adjusting scope can make sense, so long as you're very good at figuring out what doesn't need to go out. Not every team is.
tl;dr: Shipping the right 70% of the feature set at 100% quality without premature / unnecessary optimizations and abstractions can be pushed along by aggressive timelines. This must always be balanced with sustainability.
The root problem seems to be that committing to getting something done is somehow tied into honesty and dependability. I feel like, in software engineering at least, we should be careful not to make that association right away, simply because the nature of the work is kinda unpredictable, especially when working with new/unfamiliar systems.
It looks like a few people have reached somewhat similar conclusions and created frameworks around this idea (Agile, Extreme Programming, TDD, etc.) to formalize the processes. But it perhaps makes sense to realize that they are just that: processes and heuristics trying to make a hard problem (delivering software predictably on schedule) more manageable.
Software devs can become very good at estimation with practice.
Where 2 hours really is 2 hours—and even a week really is a week.
I know this from experience but it’s pretty basic to see that it’s true. A senior engineer (5+ years, say) is rarely encountering fundamentally novel problems. Most everything we do we’ve done before in some form.
To get good at estimation, simply track how long things take. Over a (short) period of time you will see that the work is very predictable and that you can be very precise with your estimates.
If you buy the claim that ‘engineering work takes as long as it takes,’ of course you won’t put in the effort to get good at estimation, and of course your lack of the skill will seem to reinforce your belief.
Don’t do that. Estimation is a highly valuable and acquirable skill. Acquire it!
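Here's roughly what I mean by tracking, as a minimal sketch (the task categories and hours below are invented for illustration): log estimate vs. actual per kind of task, then scale future gut estimates by your historical ratio.

```python
# Hypothetical estimate log: (task kind, estimated hours, actual hours).
# All data invented for illustration.
log = [
    ("api-endpoint", 4, 6),
    ("api-endpoint", 3, 5),
    ("ui-component", 8, 9),
    ("ui-component", 5, 6),
]

def overrun_factor(kind):
    # Historical actual/estimated ratio for this kind of task.
    rows = [(est, act) for k, est, act in log if k == kind]
    return sum(act for _, act in rows) / sum(est for est, _ in rows)

# Scale a raw gut estimate by the historical overrun for similar work.
raw = 4
print(f"calibrated: {raw * overrun_factor('api-endpoint'):.1f}h")  # ~6.3h
```

Whether this converges on precise estimates is exactly what the replies below dispute, but the tracking loop itself costs almost nothing to run.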
This goes against most of the rigorous studies in the industry, which have consistently found, for going on 50 years now, that no estimation technique has a high degree of accuracy.
The only systems that do achieve high accuracy tend to throw out a big chunk of the work to get there (for instance, some systems don’t estimate how long a proof of concept will take and only estimate the work after that point).
I’d love to hear what your mechanism is for estimation that is repeatedly accurate, and how to implement it at scale, because otherwise this remains an open problem in the industry.
The SEI at Carnegie Mellon publishes copious resources on software estimation success and practice.
What references are you referring to?
I more or less gave the pattern: track your hours. I’ve been around the block, from large FANG companies to small startups, across many varied tech stacks. Estimating an API design and implementation in Java vs. Python vs. ... Estimating the implementation of a UI library in React vs. some other server-side MVC framework... We are not inventing new bleeding-edge academic paradigms; our work is estimatable.
If someone wants to pay me to teach the skill, sure...
But I can guarantee you that you can estimate software efforts with accuracy.
The thing I've seen with estimation practice, from the last time I researched it, is that it works best when you can calibrate. That's something an established shop can do by extrapolating from its previous work, but it is also often as simple as "this other team took 7 months to do a similar thing, therefore we will also take 7 months." An estimate like that is usually only wrong by days-to-weeks, since it encompasses all phases, eliminating the fudge-factor, unknown-unknowns, and wishful-thinking aspects.
When it takes much longer, it's almost always due to design issues, or political issues that create design issues. When the design is well understood (and prototyping is hugely important to finishing design ASAP), the implementation goes smoothly. When stakeholders take turns stirring the pot to "make their mark", it goes haywire very quickly.
Note that QUELCE is an ongoing research methodology without a lot of data available about its effectiveness. But it says something that this is still a very active research area in 2018. If it were a solved problem, I wouldn't expect Monte Carlo distributions to be valuable.
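For anyone who hasn't seen the technique: the generic Monte Carlo idea (not QUELCE itself, just the approach it builds on) is to sample each task's duration from a range instead of adding up point estimates, then report percentiles rather than a single number. A minimal sketch, with task names and ranges invented for illustration:

```python
import random

# Hypothetical project: (low, likely, high) hour estimates per task,
# all numbers invented for illustration.
tasks = {
    "design":    (8, 16, 40),
    "implement": (20, 40, 120),
    "test":      (10, 20, 60),
}

def simulate_total():
    # Triangular distributions capture "usually X, occasionally much worse".
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())

totals = sorted(simulate_total() for _ in range(10_000))
for pct in (50, 80, 95):
    print(f"P{pct}: {totals[len(totals) * pct // 100]:.0f}h")
```

Handing a stakeholder "80% chance we're done within N hours" at least surfaces the error bars discussed upthread, instead of burying them in one number.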
Very rarely is estimation simply about the work required.
Estimates are normally off because something unexpected happens: the staging environment is down, the build server is broken, an external dependency takes longer than expected, or you're on a new, unfamiliar legacy code base where estimates from previous work no longer apply. Those unexpected problems vary widely in how long they take to resolve.
Yeah, if you estimate on a familiar code base, when everything goes right it's easy. But very rarely is that the case.
Exactly! Development tasks where the within-task technical issues are the determining factor in how long they take are so fleetingly rare that estimation is not useful as a general practice.
In fact I might even _define_ “senior engineer” as someone who has been around long enough to know this and has come up with some way to pacify managers with meaningless estimates in a way that protects the team so it can actually still do work.
After 15 years I still struggle to estimate how many times the client will change the requirements within an ~8 week project, let alone how long it will actually take.
At one of my previous companies, during a death-march-like project to get a 1.0 out, it was widely known that the offshore team was outright lying about completing their work in the estimated hours (scrum). They routinely worked overtime and sometimes weekends, and yet management pretended everything was hunky-dory and on schedule. The non-offshore team looked bad unless we too worked long hours. A mess on both sides. We eventually shipped the release, 9 months late.
Estimates themselves are relatively useless. But the process of estimating is extremely useful, in my experience.
If a junior engineer gives some hand-wavy estimate of a project, I always ask them to sit down for 30 minutes and break the project down into bite-sized concrete tasks. "Bite-sized" means they need to estimate the tasks at some level. This usually uncovers some unexpected ambiguity or open questions which we can work to resolve. At that point we can also search for long poles, parallelizable work, unnecessary work, conceptual misunderstandings, etc., and it makes it easier for other engineers to swarm if the project starts slipping too much.
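As a toy illustration of what that 30-minute breakdown might produce (task names and hours are invented, purely illustrative), even a trivial script over the resulting list surfaces the long pole and the still-unsized ambiguities:

```python
# Hypothetical breakdown of a "hand-wavy two-week" project, invented for illustration.
# None marks an open question that can't be sized until it's resolved.
tasks = {
    "schema migration": 6,   # hours
    "backfill script": 10,
    "API endpoint": 8,
    "frontend form": 12,
    "open question: auth model?": None,
}

sized = {k: v for k, v in tasks.items() if v is not None}
print("total sized work:", sum(sized.values()), "hours")
print("long pole:", max(sized, key=sized.get))
print("still unsized:", [k for k, v in tasks.items() if v is None])
```

The numbers themselves matter far less than the act of producing the list: the `None` entries are exactly the ambiguity the exercise is meant to uncover.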
I find the opposite. Take the hand-wavy estimate at face value and just move on, updating it as you do the task and discover the reasons the estimate was bad.
The other failure mode, where the team spends time discussing every ticket, wasting N people’s time when only 1 or 2 of them have the expertise to debate the estimate, is way worse. It wastes more time, makes people act petty about who is doing more fictitious Fibonacci units of work, leads to bikeshedding over meaningless things like whether a ticket is a 3 or a 5, and can make work scoping way more antagonistic than it needs to be.
Agreed. The other thing that gets me is that estimates are often expected to be on-the-spot decisions. If you have done a similar task before, then that's fair enough, but no one has ever told me to go off for a day or two and investigate the difficult / unknown parts to see what the options are.
Really? We have spike tickets all the time to go and explore a problem and see what edge cases we can find, explore possible solutions, and stuff like that. We usually scope spikes to a day or so. But we've had a couple where we were given almost a full week. The minimum I've spent was an afternoon and that was mostly because I actually found a lot of info a lot faster than expected.
Spike tickets are one of the funniest ways I’ve seen this handled. All they can really tell you is whether, in some short initial investigation, there is a known blocker, usually on the technical implementation side.
But the problems that make estimation useless are the ones that only surface after detailed digging, the kind that takes time and cross-team communication not realistically possible within a time-boxed spike: you stumble onto things that were not known and could not have been known within a short spike timeframe.
Spike tickets are just an Agile bureaucracy thing to paper over the fact that estimation is intrinsically problematic, yet baked into fundamental aspects of one-size-fits-all methodologies like Agile.
Essentially, for a spike ticket to be helpful in the common case, it would have to say, “actually go and complete the whole task you’re trying to estimate, then come back and tell us how long it really took.”
That hasn't been my experience at all (major bay area tech co). PMs and execs who I work with are generally totally okay with "I'll get back to you by [date] with scoping and ETA for that request." We even have a name for it: we call it an "ETA for an ETA". It's much better to give an ETA-for-an-ETA and then come back with a real ETA once you know the scope, rather than just guessing an ETA that turns out to be totally wrong. Sorry if that hasn't been your experience; being asked to pull ETAs out of your ass without doing due diligence sounds like it would be demoralizing.
What kind of task are we talking about here? A bug report opened by the QA team or the development of a new feature?
I agree that hunting bugs is very difficult to time-estimate correctly: it could be a simple overflow bug that takes only a dozen lines to fix, or it could be an architectural problem, not spotted before, that needs serious evaluation before taking action.
But a development task should be predictable to estimate to some extent. An architectural analysis should reveal the parts of the system that need to be modified, and if it takes more than 2 weeks, maybe the problem should be partitioned into smaller problems that are easier to estimate.
That’s a bit of a cop-out though. First, you are throwing the part of the work that can take highly variable time (the analysis) out of your estimation accuracy. Second, no rigorous studies have shown that breaking tasks down into smaller chunks for estimation purposes changes the accuracy rate for the broader actual task.
Anecdotally, what happens when you do the breaking-down-into-smaller-tasks thing is that you are either just putting off giving the broader estimate (in systems that only estimate the currently workable tasks), or you are likely missing small tasks from your broader goal that will impact your estimate later, leading to standard estimate overruns.
I’m talking about development tasks. They are never predictable in a useful way, and the reasons why estimation is not useful (usually) have nothing to do with the technical details. It is about sociological blockers on resources, IT blockers, and unknown legacy-code issues.
It’s so rare to have tasks without these blockers that estimation is generally unhelpful, whereas measurement (beginning work and alerting people to blockers) is much more useful.
>> Speaking as a senior engineer myself, I disagree because time estimation in software is not a useful activity. The only thing it helps is political manipulation in a layer of management above you.
This is truly an amazing statement to make, especially from a senior engineer. Without a clear estimate, and without communicating that estimate to those who rely on your output, how can the others plan their activities, and how can you plan yours without knowing how long the others will take?
I don’t see why it’s an amazing statement since it’s been a very common perspective since at least The Mythical Man-Month decades ago.
If other people make plans based off of junk (read: any) estimates, it just amplifies the problems.
If you’re at least honest that the estimates are meaningless, everyone can acknowledge it and come up with different solutions, especially regarding speeding up the process to get started and make checking in about blockers more meaningful and consistent.
I am curious and not trying to be a dick. If you don't have any time estimates for your tasks, how do you tell your stakeholders what your delivery date is going to be? Or do you leave it open-ended and deliver when it is finished?
> time estimation in software is not a useful activity
Only so long as you're developing your software in complete isolation from anyone else.
In terms of possibility, if you're not doing moonshot R&D then you should be able to give a reasonably bounded estimate (which occasionally will be wrong, but c'est la vie) of how long it will take to fix an issue or implement a feature. If you can't do that then IMO you aren't a senior engineer.
I agree with this also. We have to make estimates as to how long things will take. If it's not a simple READ endpoint or a piece of code that I've previously worked on (bug fix, minor feature add), I don't feel I can give an accurate estimate at all. I'm mid-to-senior (3 years exp) and fairly new (2-3 months) at my job, so the institutional knowledge of the code base and specific systems just isn't there yet.
I tend to go with small (a couple of days or less), medium (a week or two), and complex (needs to be broken down into smaller chunks)...and the ever popular, no clue. :)