I've been developing software for over 20 years, and I still can't estimate how long something will take me when I've never done it before. This uncertainty needs to become something more than a stick to beat developers about the head and shoulders with. Most of the time the PMs understand this, but there have been many projects where they just don't get it. I have suffered great anxiety from being forced to give estimates when the truth is I have no clue: it depends on how easy the work turns out to be and how many unforeseen issues I encounter. It got so bad that once my husband asked me how long it would be before I was done cooking something, and I practically had a meltdown. That's when I knew it was time to leave that team. Can we stop pretending we can forecast the unknown?
Even bad estimates are better than no estimates. If you are having meltdowns, your reputation is being tied too closely to your ability to give estimates.
You must never turn estimates into a promise, always remind people they are estimates.
Want to give fast estimates? Here’s how:
1) First, determine the scale of the task: is it a year, month, week, or day kind of task?
2) Then, it’s just 3 of those units. The smallest task takes 3 days. One day to completely fuck up, one day to figure out why, one day to get right. The longest takes 3 years. One year to fuck it all up, one year to learn why, one year to finish it.
I suggest never giving estimates in units smaller than a day; they just become noise. If a task is smaller than day-scale, just say the task is too small to provide any meaningful estimate but won't take more than a day.
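Spelled out, the rule above is mechanical enough to sketch in a few lines (the function name and wording are mine, purely illustrative):

```python
def quick_estimate(scale: str) -> str:
    """The three-unit rule: one unit to get it wrong, one to figure
    out why, and one to get it right."""
    if scale not in ("day", "week", "month", "year"):
        # Below day-scale, estimates are just noise: cap it at a day.
        return "too small to estimate meaningfully, but under a day"
    return f"3 {scale}s"

print(quick_estimate("week"))   # -> 3 weeks
print(quick_estimate("hour"))   # -> too small to estimate meaningfully, but under a day
```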
> Even bad estimates are better than no estimates.
No estimate is clearly better. Here's a common story I've seen across multiple companies.
1. Marketing management asks Engineering management how long it takes to do feature X so they know when to launch the online ad campaign.
2. Engineering management then asks potentially good coder how long it will take. Coder replies with a time and "it's just an estimate."
3. Engineering management reports the coder's estimate to Marketing, leaving off the most important caveat, and Marketing treats it as gospel truth.
4. Coder takes longer than expected because of some bad technical cruft that some other engineer put in because he was potentially rushed or just plain inept.
5. Marketing is pissed because they now have to withdraw the ad campaign, and starts blaming engineering.
6. Under increased scrutiny, Engineering gets a bad reputation, and engineering management throws the coder under the bus in front of Marketing and other managers.
7. This shows up on the coder's annual review, and the coder leaves.
8. Engineering hires a replacement, who will have a 3-6 month learning cycle and may well write worse code than the person who just left.
EDIT: The point is that if there's no estimate, management has to deal with the uncertainty that the coder experiences. Hope for the best, plan for the worst.
What is plan for the worst in a scenario with literally zero estimate?
“It may take 0 to 3 years” ?
“We literally have no way of knowing?”
“Not even a ballpark?”
“No.”
This is what you’ve proposed with no estimate, and this seems extremely unhelpful towards the goal of helping all groups at least have some idea when certain “next steps” can be accomplished.
I work as a sound mixer for film, and estimating how long something will take is always hard, but I never really got a bad reception when I just said: I can't tell you unless I see the thing.
Hell, if you ask a mechanic to fix your car, they will also have to check the thing first before deciding how long it is going to take.
This is the professional thing to do: gauge the situation, take as much time as you need to figure out the scale of the thing, and then give a pessimistic guess with a disclaimer that things can easily get out of hand, through nobody's fault, if unforeseen problems arise.
Right, but there's a big difference between "Don't estimate until you've done your due diligence" and "Don't estimate".
It's perfectly reasonable to say "This is a big project, it'll take me a week to know where we stand"- there, you've provided an estimate of the scoping task and promised an estimate in the future as well.
My advice would be to live in the real world. We don't know how long it is going to take. Just like if you go to turn on your car and it doesn't start. Maybe it was a minor issue and the next time you turn the key it will start. Maybe there was a short circuit and the car is totaled. A passenger asking you when you are going to get moving is no help.
If your car won’t start, it will take more than a second, but less than a year to fix. That is a bad estimate, but better than no estimate for a space alien that doesn’t know what a car is.
The only tool the space alien's PM has is a deadline. "Do X by this date or else." Because he has no hope for understanding the true problem domain before the project deadline -- just like a space alien can't be expected to learn english in 4 days.
A PM with a deep understanding of the software process, can ask insightful questions, identify and possibly mitigate many of the issues beforehand. So when it gets to the software lackeys, many of the resource/architectural issues may have been solved.
> If your car won’t start, it will take more than a second, but less than a year to fix. That is a bad estimate
It's actually a better estimate than most software estimates, because it isn't just an expected time but a range that results will usually fall within. It would be better still if it included an explicit degree of confidence that the actual result would be in the range, and if it was centered around the average time it takes for events in that class.
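That degree of confidence can even be checked against the past. As a toy sketch (numbers invented), compare a stated range against a history of outcomes for the same class of event:

```python
def empirical_confidence(history, low, high):
    """Fraction of past outcomes that fell inside the stated range:
    a rough measure of how much confidence the range deserves."""
    inside = sum(low <= x <= high for x in history)
    return inside / len(history)

# Invented repair times, in days, for the "car won't start" class.
past_fixes = [0.1, 0.5, 1, 2, 3, 7, 14, 30]
print(empirical_confidence(past_fixes, 0.1, 14))  # -> 0.875
```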
In these cases it’s often a matter of ‘give me a week and I’ll tell you’.
Then again, a similar number of times, the only information you get is ‘we need a chat program, can you please estimate how long that will take?’. Which will leave it forever impossible to estimate.
Oh come on. Any decent project manager understands the difference between an estimate and a deadline and plans and communicates accordingly. It's not rocket science. Stuff gets shipped on time all the time.
Any decent project manager who wants to keep their job will acquiesce to people further up in the org chart who want deadlines, not estimates, and who consider the difference between the two to be as fuzzy as it needs to be.
A deadline is a requirement. It is not a PM’s job to reject requirements, though it is their job to help the customer understand and manage resolution of situations where the given requirements are in conflict, such as when a deadline is insufficient to achieve the functional requirements.
And any decent engineering team will accept that a project has deadlines and raise progressively more serious risks with appropriate explanations up the chain as the confidence that completion will occur before the deadline declines below near-certainty.
Deadlines can be legitimate project requirements as much as functional requirements are. Identifying practical conflicts between requirements is part of what an engineering team does. Managing resolution of those once they are identified so that the customer gets the best result achievable is what a PM does.
1. Inability to estimate effort. Admittedly an academic issue; estimation should be taught to all engineers in college.
2. Inability of management to deal with delays due to bad estimation. This might be caused by a "bozo explosion", say, (where inept managers hire more inept people underneath them.)
Edited to add:
3. Why do we keep assuming that management must be infallible? In any dysfunctional organization, it might take just one bad manager high up in the chain to cause pain for the entire organization beneath him.
I cannot believe how much I love the term "bozo explosion". I see this situation all the time as a consultant, and now I have a fantastic term for it. Thanks for sharing!
This just happened recently. Maybe you'll appreciate it.
15 years ago or so (before the term "technical debt" got widespread usage), management kept asking why we were working on so many bugs. The solution was to label them as "enhancements".
Just recently within the past year, I saw the exact same thing happen again.
They did? In what way? When I look at the online ecosystem for buying things in the EU, Amazon is consistently the one that can't promise I will get what I order, can't promise when it will come and can't make it easy to give them money. The only thing they do right is that they're theoretically cheaper, but then because you might get a product that doesn't work (and it's very hard to solve that issue to a standard that you'd expect as someone in the EU) the cost saving benefit goes right out the window.
I want to work where you do, where friction is negligible, cows are perfectly spherical, wind resistance is never a factor, all functions are continuous and differentiable across their entire domain, and all project managers are decent.
I'm not saying the story is implausible, but I dispute the idea that estimating software development efforts is an intractable problem that is better left unattempted.
Estimating things is important and valuable, but as soon as you get large corporate structures and multiple levels of people communicating involved it becomes a really bad idea. That's why it shouldn't be attempted in those kinds of situations.
Doesn’t agile presume you have some deliverables every week? At the very least they’d have ‘something’. It might not be fit for purpose, but that should have been fairly visible a ways before they blew through the whole budget.
Unless, you know, there are other systemic issues in the company.
I'm sorry, but if you're spending $500k or more based upon one engineer's "estimate" (for which you're paying $10k/month) then something is more fucked up in that company than what appears on the surface.
But I agree with you, if the engineering management can't budget and prioritize work to get done, that's a larger issue.
> but if you're spending $500k or more based upon one engineer's "estimate" (for which you're paying $10k/month)
The $500k was just a random number I came up with. Budgets wildly vary based on whether they get allocated weekly/quarterly/yearly/etc. And also it's never "one engineer's estimate", it's usually a project manager who works with N number of engineers to come up with an estimate.
But keep in mind that if a developer is in a sprint, he might start adding tickets to the epic because of technical/organizational issues. Suddenly the epic might look 3x more work than was originally planned. Note this is not theoretical, as it's happened 3 times to our team already this year.
But meanwhile in the example I gave, the developer gets held accountable still for not making the correct estimate. Management just passes it up the chain rather than trying to increase the confidence around the estimate.
It's even worse in my organization. Funding is based on projects so the amount of funding you receive is based on exactly how long your estimate is (if they choose to fund your project). Often projects are estimated based on "here's how much money they are willing to spend, so we'll just figure out the number of man hours that amounts to and give that as an estimate."
Because campaigns have to be planned ahead too - anywhere between a few weeks and a year or so, depending on the size of the project and the company, and often timed to match big trade events.
And there's always the possibility that if you announce too late competitors will eat your lunch.
One way to handle this is to spend some time on a formal estimate. Two to four weeks of R&D to scope a project can help narrow estimates to something approaching realism. You'll still be wrong, but you're less likely to be hopelessly wrong.
Asking someone for an instant opinion is madness. That's not an informed estimate, it's just a guess, and usually worthless.
This. Most developers either don't want to or can't understand that there are real and valid reasons for wanting a predictable software production schedule.
That said, I've only ever seen one software project consistently meet production deadlines. Is there benefit to committing to the original schedule with Partner X if there's no way you can deliver on schedule? Or is it one of those things where Partner X has committed itself and they have no real choice but to work with the sliding deadlines?
In the example I'm thinking of the partner wasn't really bothered that the schedule slipped, but the ideal marketing window came and went before the product officially launched, which certainly affected sales. They were really excited that all the planned features made it to market though.
I'd say that some aspects of the capability maturity model make sense, even for engineering groups that practice agile day-to-day.
> the ideal marketing window came and went before the product officially launched, which certainly affected sales.
Definitely. I used to see that all the time in the games industry. Getting the right launch window is critical because most of a game's sales happen in the first few days of launch. And yet, games are famous for shipping months, even years late. They usually seem to make it work. I guess you'd be particularly hosed if you couldn't afford the burn rate for the extra X months it would take to get to a decent launch window.
Well, standup is also terrible. In addition, there's the general agile assumption that developers are capable of consistently producing X hours of work per day. Maybe it's just me, but I'm a bit more burst-y with my work habits than that.
I wouldn’t say standup is terrible, but one where you go round the room giving updates is almost useless. It’s not just you; most developers can do 8 hours of work in an hour if they aren’t interrupted with meetings.
Whoever asked you for that estimate will do that for you…
> always remind people they are estimates.
…even if you remind them. I mean, the reasonable ones won't, but many people aren't that reasonable.
I was in an interview last week where they told me the teams were strongly committed to the features they planned for each sprint. I interpreted that to mean that their estimates are promises.
> Want to give fast estimates? Here’s how:
My, this looks like it could actually work. I'm going to try it right away.
It’s funny, because even with capital-A Agile / Scrum, in 2011 the term "commitment" got changed to "forecast" [1]. But many companies that are dogmatic about Agile still use estimates as commitments anyway. That makes technical debt really hard to avoid... One solution is to create technical-debt tickets in the backlog so people become aware of it.
Absolutely not! An estimate should not be a random number but should be constrained with available data however small it might be. If you feel that you don't have enough to form a "guesstimate" do not give me a number but first work on finding the data which will enable you to form a proper estimate.
Once you give an estimate, no matter how many times you explain that it is a "guesstimate", people tend to lock on to the given number. It then becomes a real battle trying to explain the hurdles (and there are always some unknowns) while revising the original estimate. Soon mutual distrust develops between the implementation engineers (stressful and detrimental to actual execution) and the management, leading to everybody losing faith in estimates. Agile/Scrum have exacerbated the problem with their short time windows and sprints. In one team that I was on, people just gave up and started quoting 2 weeks for any and every feature, trivial or non-trivial, and the whole exercise became meaningless.
PS: The book "How to Measure Anything: Finding the Value of Intangibles in Business" is worth reading to get some ideas on how one might do proper estimation.
> You must never turn estimates into a promise, always remind people they are estimates.
The person giving the estimate isn't the one who does this. Other people turn them into promises, because that's what they were actually asking for when they asked for "an estimate".
Giving some kind of uselessly vague estimate isn't particularly useful from an engineering perspective and everyone else has been trained by scrum the last decade to treat them as promises. So don't do the thing that you know will have a bad effect.
> The person giving the estimate isn't the one who does this. Other people turn them into promises,
Why is that your problem? If someone tried to hold me to such an estimate, I would simply say "I never promised this, in fact I explicitly didn't promise this. I'm sorry X lied to your face about this." Don't own other people's failures.
> Even bad estimates are better than no estimates.
I disagree. The only bad estimates that are OK are the ones that err on the high side. I have found that the biz side may not like the longer estimate, but they much prefer that over missing a date.
Right. If they asked for a 95% confidence number, they would get something they could depend on more wholeheartedly. But that date will be too far in the future to be comfortable.
So they look for more of a natural median. And then are shocked when dates are missed more than half of the time (!).
So if you want honesty, ask for estimate ranges or 95% sure estimates. Then pareto your plan to hell and back so everyone understands the risk. Then reestimate periodically as you home in on completion.
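One way to produce that 95%-sure number is a plain nearest-rank percentile over how long similar past tasks actually took. A sketch, with invented history:

```python
import math
import statistics

def p95_estimate(history):
    """A ~95%-sure estimate: the duration that 95% of similar past
    tasks finished within (nearest-rank percentile)."""
    ordered = sorted(history)
    k = math.ceil(0.95 * len(ordered)) - 1
    return ordered[k]

similar_tasks_days = [2, 3, 3, 4, 5, 5, 6, 8, 12, 21]  # invented
print(statistics.median(similar_tasks_days))  # the "natural median": 5.0
print(p95_estimate(similar_tasks_days))       # the 95%-sure number: 21
```

The gap between the median and the 95th percentile is exactly the discomfort described above: the honest number is much further out than the one people want to hear.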
There are other ways. The best of humanity have a wide range of methods they use to get shit done at a level most of us dream of.
So I have a hard time taking a straitjacket process like you suggest as some sort of panacea. It’s essentially another manifesto. What’s the expiration date on this one?
Everyone goes on about the value of writing less code, and here we are incentivizing manufacturing line processes for crafting more code?
Note the authors of things like the Phoenix Project: rich Silicon Valley types. We’re just discussing how to be better assembly line workers for tech aristocracy.
Figure out how to build your business in a way that works for the business. Google didn’t get big because it has perfect process; it can be, and often is, a mess. It got big by solving its problems and monetizing the solutions (every company will need to search digital files, send email, edit docs, etc.).
Figure yourself out, compare with others. There is no one-size-fits-all way of thinking about problems, and even the biggest winners don’t have all the customers there are. Why isn’t Gmail the only email solution?
I do something similar, except I don't assign a number. My back-of-the-envelope estimates are: hours, days, weeks, months.
This gives people enough information as to whether or not a change is worth it. If the difference between, say, 2 and 5 days makes or breaks the feature it's probably not worth exploring in the first place.
>You must never turn estimates into a promise, always remind people they are estimates.
Herein lies the problem. Many companies that do "Agile" fail to realise that estimates are just guesses; they're not accurate. And yet companies keep taking these estimates and holding developers to them. That's the real problem.
It's all well and good to say "remind people they're only estimates", but many of us who have been in this industry longer than a minute know that, nine times out of ten, estimates are taken as promises, and that's when we get crunch and burnout, as developers are forced to achieve the impossible with excessive hours.
Deadlines need to drop dead. Code quality suffers when you put a timeframe on it.
Good lord! Buzzword Bingo and Obfuscation at its finest!
Nothing personal, but you have just confirmed to me why I loathe the Agile/Scrum/[fad] processes so much. Take easily understood, commonsense ideas/terms, invent fancy names for them, convert heuristics to axioms, and sell a business around it.
Bad estimates are worse than no estimates. But if you are doing work-complexity estimation and measuring velocity (which you need to do to evaluate internal process changes, manage workload, and for lots of other purposes), you are incidentally gathering the info you need for excellent estimates: not necessarily hyperprecise, but excellent because they explicitly quantify uncertainty as well as mean expected delivery time.
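For instance, here's a minimal sketch (velocities invented, names mine) of turning measured velocity into a delivery estimate that carries its uncertainty along:

```python
import statistics

def delivery_range(remaining_points, sprint_velocities):
    """Sprints-to-done projected from measured velocity, as a range
    (one standard deviation each way) rather than a single number."""
    mean_v = statistics.mean(sprint_velocities)
    sd_v = statistics.stdev(sprint_velocities)
    optimistic = remaining_points / (mean_v + sd_v)
    pessimistic = remaining_points / max(mean_v - sd_v, 1e-9)
    return optimistic, pessimistic

lo, hi = delivery_range(120, [21, 18, 25, 16, 22])
print(f"roughly {lo:.1f} to {hi:.1f} sprints")  # -> roughly 5.0 to 7.1 sprints
```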
> Can we stop pretending we can forecast the unknown
Within reason. I've worked in orgs where there is no estimate at all, and that brings a different set of problems (unbounded projects, and no work getting done because of the complete lack of pressure).
Now you're totally right: software engineers rarely do the same thing (or even similar things) twice, so estimating is somewhere between "very hard" and "impossible".
"Scoping" however is still needed.
If you ask me "How long will it take to add a button to this page", that's a very ambiguous question.
I can solve the problem in a few ways:
- Tack on a plain html button element
- Add a button from an existing styleguide/library/design system.
- Build our own button component instead of reusing one.
- Build a full blown button editor that allows a non-software engineer to use a WYSIWYG editor that will create their own custom button and insert it on the page without code.
Now obviously there's a range between these, going from a few seconds (plus deploy), to months or years of work.
The project/product/whatever managers and the tech leads/engineers need to work together to agree on scope. An arbitrary deadline can be picked, and then engineers do whatever they can up to that deadline. Or we can agree on minimum functionality, with an unbounded deadline, that MUST be created regardless of how long it takes. Realistically, successful projects will be a mix of both, along with various forms of padding and revisiting estimates along the way to account for unknowns and mistakes.
Estimating is either easy or impossible. You either have done it repeatedly and have data to show how long it took, or you have no fucking clue.
Focus on the most important features of each module first, then the most important of the next, and the next, always making progress towards something useful. That is the only way to go.
When you have a consistent team and some tracking data to compare task size to hours required to complete, then you can start doing estimates for future work, but you never have to ask the dev team to estimate other than setting some relative size between tasks and trying to stick to the sizing system. I prefer Fibonacci story points, but it can be anything that has a number.
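As a toy illustration (tracking data invented), the conversion from relative size to hours can be learned from the team's own history rather than asked of the developers:

```python
from collections import defaultdict
import statistics

def hours_per_size(tracked):
    """Average actual hours per story-point size, learned from
    (points, actual_hours) pairs recorded for completed tasks."""
    by_size = defaultdict(list)
    for points, hours in tracked:
        by_size[points].append(hours)
    return {p: statistics.mean(h) for p, h in sorted(by_size.items())}

history = [(1, 3), (1, 5), (2, 7), (3, 14), (3, 10), (5, 30)]  # invented
print(hours_per_size(history))  # -> {1: 4, 2: 7, 3: 12, 5: 30}
```

The devs only ever supply the relative sizes; the hours fall out of the tracking data.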
I feel like you should be able to provide an estimate even if it's something you have not completely done before. One should spend some time to gauge how much of this new thing is really "new" and what parts should be easy to figure out. Then, try to look at resources about those unknown parts, and that should allow you to provide a rough estimate.
And when roadblocks come up, just communicate early; and if the PM/boss doesn't understand, there is not much you can do but look for a better gig.
Great! Please let us know when cancer will be cured.
In other words, "I don't know" is sometimes a complete answer. I don't know. Period.
This is supposedly one of the main points of this religion (agile). Some things are unknowable, so you move ahead a bit and reassess. If you are scrumming, that is usually 2-3 weeks (don't get me started on that arbitrary limit). But you try to move forward for a bit, learn something, and then that might:
1) let you know enough to know how to estimate
2) give you some information on what not to try next (I tried a, b, and c, and they all failed)
3) give you guidance on what to work on next (d seems promising, I can flesh it out more and see if it still seems like a promising avenue).
You proceed, and eventually get your hands around the problem, or the person writing the checks decides this is not an economical search (because that is what this is: search on a multidimensional surface) and changes/deletes the requirement.
That's what agile is supposed to be, with a nod to the fact that yes, we can estimate writing a single database query or something fairly well, so in some cases estimates can be useful at this scale.
The bane of my existence is the endless pressure for estimates. I'm doing research; no one has done this stuff before. It is truly unknowable. If it was known it would be in a paper somewhere, and I would merely be implementing that paper. So I get told "break it down into smaller chunks", as if my 30 years of success didn't teach me how to break down problems. Thanks PM that has never coded or produced anything intellectually novel before! I'm surely being dumb and/or obstinate!
I got that written in my previous performance review (that I don't know how to plan and break down problems) because I flatly refuse to play this game. You get punished for attempting hard things. It's nonsense. "I don't know" cannot be changed by insisting on an estimate.
Insisting on the estimate in that situation is basically a special case of the old trope of using something meaningless but easy-to-measure as a proxy for something important but hard-to-measure. It's hard to know how long something will take, and easy to ask someone how long they think it will take (or if it's you, to pull an estimate out of your butt).
An absurd comparison given most software engineering tasks have been done before, they're simply difficult to estimate for some given team without expertise in doing some particular task.
> I feel like you should be able to provide an estimate even if it's something you have not completely done before. One should spend some time to gauge how much of this new thing is really "new" and what parts should be easy to figure out. Then, try to look at resources about those unknown parts, and that should allow you to provide a rough estimate.
This isn't personal, just a comment on a way of thinking that is not uncommon: each of those sentences is insane and not based in reality. You may "feel" these things are possible, but that's about as far as it goes. "Just look up all the unknown stuff." OK.
I wish people were more comfortable saying "I don't know but I can do some research and get back to you later." Often times that's where the conversation about time estimates ends because they don't ask you again unless it's actually important. And if they do ask you, now you have a better answer (assuming you actually looked into the problem you're trying to solve).
The other problem with time estimates that I find is that even though I might know for sure that a certain feature will take only a day or two, I can't tell when it'll be done because I usually have other things to work on as well.
Well here's the thing. Often I'm asked for my estimate in the same meeting I first see the story. I have no way of researching before giving an estimate.
> ... and then if PM/boss don't understand there is not much you can do but probably look for a better gig.
> Often I'm asked for my estimate in the same meeting I first see the story
That's definitely awful. Personally when I was leading agile teams, for anything non-trivial we'd start by creating an issue to scope the individual parts of the project, frequently allocating several days to do so.
And some stuff is still mostly unscopable. That's where Fermi estimation and the whole "how many chips can you fit in the Empire State Building" type of deal comes into play. You have no way to really know how long something will take, but we need to allocate resources SOMEHOW, even if it's completely off. An imperfect guesstimate is better than none at all.
The catch is that everyone involved has to know that it's an imperfect (and potentially completely wrong) guesstimate, and it has to be revisited regularly as new information comes in. Everyone also has to be OK with restructuring the project (or even cancelling it, in extreme cases!) if we learn it's completely wrong.
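A minimal sketch of what such a guesstimate can look like (factors and numbers entirely invented): give every rough factor its own low/high bounds and multiply, so the compounded uncertainty stays visible to everyone:

```python
from math import prod

def fermi_range(factors):
    """Multiply per-factor (low, guess, high) triples; the bounds
    compound, making the total spread explicit."""
    lows, guesses, highs = zip(*factors)
    return prod(lows), prod(guesses), prod(highs)

# Invented factors: number of screens, days per screen, rework multiplier.
factors = [(5, 8, 12), (1, 2, 4), (1.2, 1.5, 2.0)]
lo, mid, hi = fermi_range(factors)
print(f"{lo:.0f} to {hi:.0f} days, best guess {mid:.0f}")  # -> 6 to 96 days, best guess 24
```

A 6-to-96-day spread looks absurd, which is the point: it's honest about how wrong the total might be, and it narrows as factors get pinned down on revisits.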
We recently discovered at work that an assumption/decision made nearly 2 years ago turned out to be totally false. The project that relied on that false assumption was about 10% in (the assumption was made long ago, but actual work started recently). We had to sit together and ask ourselves whether it was worth pushing through the remaining 90% (and likely regretting it in a year), or better to scrap the 10% and start over now that we know what we're doing. We have to be careful not to pull off a Vista/WinFS resource black hole though!
I used to think that too, but is it really? If estimates are always wrong, then is there really much of a difference? Besides, the lead often has business related information that informs his decision of "this feature must take no longer than X amount of time or we're going to have problems with Y" where Y is often political and not technical.
I remember when Agile development used to involve chickens and pigs. If you have no idea what I'm talking about, then you're not doing Agile as it was written. Almost nobody is today.
I don't agree with you: you're presenting exactly the position that people here are trying to explain the problems with (and often hate about their jobs).
But downvoting you is just as counterproductive as burning the devil's advocate and you have my upvote. I hope people will get a hold of their emotions.
I've just spent a month on a script I thought I could write in a weekend. I didn't realize several of the difficulties inherent in the data, I didn't realize I'd have to up my game with regard to techniques for shuffling data, and I didn't know I'd run into so many quirks in the programming language.
I didn't know what I didn't know. That's why I guessed wrong.
In everything you haven't done before, you don't know what you don't know, and only a small portion of those unknowns could have been surfaced by spending some hours researching the work up front. The rest didn't pop up on my radar until I had to solve them :(
But that knowledge isn't without value. Knowing that it is doing stuff you haven't done before that is risky (timewise) means you can underscore when you have a high-risk assignment.
I'm not a coder, so maybe the domain is different in a way I don't understand, but I agree with you 100%.
Refueling a nuclear aircraft carrier has projections start to finish, half a decade long. There are countless pre- and co-requisites with interrelated projects, not counting mundane issues like material and manpower.
I simply do not accept that it is impossible to project a timeline for software. If someone stops you in the hall and says "hey you, how long will this take?", that is not a reasonable question, so sure, you can't give a reasonable projection. But if you're a professional coder and you show up to a planning meeting and your answer is "I don't know, I have no idea, and it's not possible to provide an answer even on an order of magnitude", that's just ridiculous.
> Refueling a nuclear aircraft carrier has projections start to finish, half a decade long.
The difference is that those projections weren't spit out on the fly by an engineer after a 15-minute pitch of some manager's great new idea. Those plans took many months of effort to put together by a team of specialists planning out the various details of the project. Software does have an equivalent of this; it's called waterfall development. The reason we don't do it is that, unlike aircraft carrier design, if you take six months to create a project plan, your requirements will likely have already substantially changed.
I'm not a nuclear aircraft carrier refueller, so maybe the domain is different in a way I don't understand, but it's ridiculous to need a five year plan to just refuel something, people refuel cars in five minutes everyday.
I simply do not accept that it is impossible to refuel in a week or so, assuming it's a couple orders of magnitude more complex than refueling a car.
However, your comment didn't help me understand. It doesn't help because even if I embark on something I have no idea about, even in total ignorance I can _bound the problem_.
I don't understand how a professional coder can approach a problem and have no idea - you have to DO the problem, so what is your approach? Just start coding, and somewhere between 5 days and 5 years you stop?
Planning meetings don't happen in a vacuum, so what kind of problem can you research but have literally no guess about its solution? Bear in mind, we're not talking edge cases (research grants or whatever), but coders hired to do a job.
This is the nub of the problem when it comes to software. I understand your skepticism but software development is really very different from other activities. It is "mind-stuff" (and thus quite unstructured) which needs to be expressed in very precise language to solve an [almost always ill-defined/constantly redefined] problem. The inherent complexity involved is huge due to the number of degrees of freedom and malleability involved.
I can do no better than point you to the article "The Humble Programmer" (http://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EW...) written by one of the pioneers of Computer Science to really understand the issues that make Software Development such a complex and difficult activity. And since then (the article is from the '70s) we have made matters worse by orders of magnitude.
The spell for floating an object is Wingardium Leviosa. It usually takes an 11-year-old a few days to become proficient at it.
Please give an estimate for a modification of this spell that will make the object lift, complete two vertical circles (720 degrees) and then go back to the starting point.
(Before you ask: yes, we're doing magic. We're combining words in specific ways to create significant effects in the world.)
Software development is filled with fractals. To do A, you break it down into A1, A2, and A3. To do A1, you break it down into A1.1 and A1.2. To do A1.1, you break it down into A1.1.1 and A1.1.2.
In even a small project, that means that your moment-to-moment work might be something like "do task A4.1.2.3.5.3.2.1.5.4". Not literally, but conceptually, that's what's happening. Every task involves a bunch of other tasks, which involve a bunch of smaller tasks, ad infinitum.
This is probably similar to your refueling of a nuclear aircraft carrier. Big tasks involve little tasks, which involve smaller tasks. I assume. I've never refueled a nuclear aircraft carrier. But I have developed a lot of software.
Every software project is working out a plan that has never been done before. Because when it is done, the result is software that is infinitely reproducible, so there is no need to do it again. It's as if somebody needed to figure out how to refuel a nuclear aircraft carrier once. And then after that, everybody who wanted to refuel just cut-and-pasted a fully-fueled carrier.
In many ways, writing software is like figuring out that plan. It's not following the plan, it's creating the plan. And somebody who knows how to do A can figure out that involves A1, A2, and A3. And they can probably figure out that A1 involves A1.1 and A1.2. But they can't predict all the way to A2.3.1.5.2.4.2.2.5.1.1.1.5.2. (It's been tried. It didn't go well. Google "Software Crisis.") Too many issues don't appear until you try to solve them for real.
And those little edge tasks way down at the bottom? They might take five minutes. Or they might turn into another nested set of tasks that takes five hours. Just today, I was working with a team trying to solve a simple problem: print the URL of their current web page. This should have been no problem. The tool we were using to serve web pages told us the current URL. But it only told us the URL's path (the part after the domain name). We also needed the scheme ("https:"), the domain name ("news.ycombinator.com"), and the port (":80").
And that wasn't something our tool expected us to want [1]. So a five minute task turned into a half-day marathon of reading documentation, trying things, and reading more documentation. It took us half the day to figure out how to do something that should have taken us five minutes, and we had assumed it would only take five minutes when we estimated the larger task two weeks ago.
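For what it's worth, once you finally have the pieces, reassembling the absolute URL is the five-minute part; the half-day was getting the tool to hand them over. A minimal Python sketch - the values and the default-port handling are illustrative, not our actual framework code:

```python
from urllib.parse import urlunsplit

# Illustrative values standing in for whatever the framework exposes;
# the tool in our story only handed us the path.
scheme = "https"
host = "news.ycombinator.com"
port = 443
path = "/item"

# Drop the port from the netloc when it's the default for the scheme.
DEFAULT_PORTS = {"http": 80, "https": 443}
netloc = host if DEFAULT_PORTS.get(scheme) == port else f"{host}:{port}"

url = urlunsplit((scheme, netloc, path, "", ""))
print(url)  # https://news.ycombinator.com/item
```

The sketch is trivial precisely because all the work is in obtaining `scheme`, `host`, and `port` in the first place.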
Coders who know professional estimating techniques approach this problem by using Monte Carlo simulations that provide a probabilistic range of dates. The high-confidence numbers resulting from these simulations are usually way too far in the future to satisfy stakeholders, because the simulations have a long tail. (More can go wrong than can go right.) Professionals have found it's often easier to refuse to provide estimates than to fight over high-confidence estimates or educate stakeholders in interpreting probabilistic date ranges. Not estimating saves lots of time, too.
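To make that concrete, here's a toy Monte Carlo in Python. Everything in it is invented - five tasks nominally estimated at two days each, with a lognormal skew - but it shows why the high-confidence number lands so far out: more can go wrong than can go right.

```python
import random

random.seed(0)  # reproducible toy run

# Five tasks, each nominally "2 days". Real durations are right-skewed:
# a task can finish a little early, but it can run arbitrarily late.
NOMINAL = [2.0] * 5   # invented estimates
TRIALS = 10_000

totals = sorted(
    sum(est * random.lognormvariate(0, 0.6) for est in NOMINAL)
    for _ in range(TRIALS)
)

print(f"naive sum of estimates: {sum(NOMINAL):.1f} days")
for pct in (50, 85, 95):
    print(f"{pct}% confidence: {totals[TRIALS * pct // 100]:.1f} days")
```

The 95% number is the one you could actually commit to, and it is always well beyond the naive sum - which is exactly the number stakeholders don't want to hear.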
I hope this long-winded explanation is the help you were looking for.
[1] For the nitpickers in the audience, I'm obviously leaving out a huge amount of detail about how our REST API was actually interacting with its framework. But that's the gist--we were trying to find a clean way to translate our current absolute URL to another absolute URL. Even now, I'm sure we were missing something obvious.
This is a much better explanation of the problem with estimates than my analogy :)
Especially this part:
Professionals have found it's often easier to refuse to provide estimates than to fight over high-confidence estimates or educate stakeholders in interpreting probabilistic date ranges. Not estimating saves lots of time, too.
Very true; to expand on this metaphor, past experience has led me to ask "does this thing actually need fuel, or is it in fact a bicycle?"; aka: please give me the full context - what are you actually trying to achieve, and why?
> If you're a professional coder and you show up to a planning meeting and your answer is "I don't know, I have no idea, and it's not possible to provide an answer even on an order of magnitude", that's just ridiculous.
True, but it's because that question is often ridiculous.
If it's technology I've never worked with before, in a domain I've never worked, for which the test data doesn't exist yet, it's perfectly reasonable to be outside an order of magnitude.
E.g., we mostly do Web app development with React and Django, sometimes going into more complicated data processing and visualization. In a recent planning meeting we were asked about recognizing waterways from aerial photographs to improve local government's GIS datasets. Ummm...
> we mostly do Web app development with React and Django, sometimes going into more complicated data processing and visualization. In a recent planning meeting we were asked about recognizing waterways from aerial photographs to improve local government's GIS datasets. Ummm
To be fair, that's something that happens all the time in "normal engineering". While the field itself is much better understood (mostly because, aside from record-beating skyscrapers, it's usually within the same problem space), not everyone knows everything.
For a more down-to-earth example, if I ask my carpenter to scope out a bathroom remodel, the first thing he's going to do is call a plumber to take a look. The carpenter's not going to be able to give an answer on the plumbing themselves, but they sure as hell can say "I'm going to ask someone and get back to you on it".
As a software engineer, I have a network of connections I can reach out to if I'm asked about a problem completely out of my space, and you'd be hard pressed to ask about something that no one in my team knows or has a connection to someone who knows. Definitely won't get the answer today or even tomorrow, but we can get SOME information.
Refuelling a nuclear aircraft carrier may be a complex operation but it's one for which the plan was already developed quite some time ago.
To compare to software development is missing the point. Building a PC, installing Windows and all the apps you need might take half a day if you're experienced, and it'll take half a day each and every time: it's labour and you can't reduce the time taken to zero. That's like refuelling an aircraft carrier: executing a plan.
But software development isn't like manual labour where you execute a pre-determined plan. It's more like researching how to build the aircraft carrier. Once the plan (the software) is built, installing and executing it is trivial and takes very predictable amounts of time, which means software developers are almost always doing "research" even if it doesn't seem that way.
How long did the first nuclear aircraft carrier take to design? Well, literally the first sentence of the Design section of the wiki article for USS Enterprise says:
Enterprise was intended as the first of a class of six carriers, but massive increases in construction costs led to the remaining vessels being cancelled
So I guess ship designers suck at estimating about as much as software developers do.
One way software is different is that you're always doing things you've never done before. Because if you've done it before, you can just reuse that code.
This is probably not how the refueling nuclear aircraft carriers projects work.
That said, it's usually not quite as completely unknowable as your last sentence. Then again, it frequently is.
The real tension is about trust - when people think engineers are lying about their estimates.
"We have this problem, you see, where cells sometimes start to multiply aggressively instead of doing their job. That's a bug, the customers are complaining. How long will it take to fix it? What do you mean you don't know - not even a ballpark estimate?"
You refuse to accept that there's a possibility of something in a domain you have no understanding of... That sounds like a very illogical refutation in itself
Could you expound on this? I’m not saying I disagree, I’m just not sure which part of what he says disqualifies software engineering from being “engineering”
Start with the HUGE failure rate in software development. There is no provable reliability and/or costing, nor is there any form of standardization beyond RFCs and "best practices." One might even doubt that it's even possible for there to be standards like you see in capital-E Engineering due to the unique and complex nature of general computing systems.
To sum up, software development requires too much trust, way more than would be acceptable in an aircraft carrier or airplane or nuclear bomb. Or refrigerator.
Software engineering is clearly engineering. The main difference is simply that in the software world, engineers often report directly to people who aren't engineers. This basically never happens in other fields - e.g. all buildings, tunnels, railways etc are built by dedicated engineering firms founded and run by more engineers. The exception in software is of course the tech industry, which routinely pulls off engineering marvels.
Projects usually go wrong, or are "late" (relative to estimates engineers didn't want to give in the first place), when they're being closely controlled by people who are not engineers. The recent article on Berlin's new airport being a case in point, where the politicians tried to double its size after it started being constructed and the entire project collapsed in a heap.
Now imagine that happening all the time, every day. That's the enterprise software world.
I'm the same way but only about 15 years experience. I have a friend recently retired who has done a wide variety of software development since the era of punch cards (at least 40 years development) and he agrees with me entirely.
I'm not sure how people can provide any reasonable estimates, especially these days when technology is shifting even faster under your feet unless it's a clone of something you've already done in a specific set of technologies that haven't changed.
For me, I simply make a very conservative guess and use a 2x or 2.5x multiplier to be safe. I'm usually far ahead of schedule, but there have been occasions I was happy I added a 2.5x multiplier in. Everyone is typically happy... the fact is, my estimates are garbage.
This is what I do! I disagree on 'garbage', though, I would say that you sound like you estimate very well, and that factor of 2.5 uncertainty is reasonable given all the unknowns in this line of work.
You go to the mechanic and tell them your car won't start - nothing happens when you turn the key. You ask them how long it's going to take to fix and how much it will cost.
They don't assume it's the starter and tell you it will be $400 for parts and labor. They tell you it will be $150 diagnostic fee and the diagnostic will take two hours. Then they call you and tell you the cost to fix and time it will take.
For whatever reason, software engineers don't have the luxury of doing a diagnostic. We are made to guess up front and assume it's the starter, when we really have no idea what rat's nest is under the hood until we look.
> software engineers don't have the luxury of doing a diagnostic
These are called "spikes" in the agile community (no idea why), and "prototypes" elsewhere.
If you feel that the error bars are too wide on your estimate, you should build the minimum prototype required to reduce the uncertainty to tolerable levels.
I like to schedule these at least a sprint prior to kicking off the main task, so that you can benefit from the improved estimate accuracy when actually scheduling the thing.
The analogy I've been using lately is that it's like estimating how long it will take to pack your kitchen when you are moving, except sometimes you open a cupboard door and there's another kitchen inside.
> For whatever reason, software engineers don't have the luxury of doing a diagnostic.
That's not necessarily true. I've worked with very good, serious consultancies, and I've seen a quote for thousands of dollars for exploratory work to give an estimate. Of course that was for a million-dollar project - smaller projects might not have this opportunity.
Other companies had some interesting approaches where they gave order-of-magnitude estimates and then refined the estimate iteratively.
Most time estimates aren't necessary in the first place and it's aggravating that we need so many of them. Yes it's mostly a stick to beat devs with.
But since folks reading this will probably need to make estimates anyway, this tends to be accurate: Estimate it'll take as long as the most recent similar thing you've done (real time, not heads down time). Resist the urge to trim out parts of the previous task that aren't related (you'll have new yak shaving) or related to mistakes you've learned not to make (you'll make new ones). If it's really different from anything previous, look for a time you had to learn something and implement a new system based on what you learned (you're allowed to be meta).
After many years in large infrastructure transformation projects, Johnny’s Rule of Threes now goes like this:
1. The first time you do something, you can guess how long it will take and what the detailed steps will be, but that is all. You should still plan it, but during execution of this phase you must take copious notes. Do not expect it to be correct.
2. After you have done something for the second time — using what you learned from the first time — you should end up with an accurate schedule. This schedule contains all of the steps required to do the thing along with accurate estimates of time and cost.
3. The third time you do the thing, you are able to do it to schedule and to budget.
I can see how this would work for software boutiques doing fairly similar projects each time, or projects with very similar parts, but not in complex products. Much less for software that does not use any kind of framework or follows any kind of recipe. Sometimes it's just step 1 in a loop.
I agree. My theory now is that it's due to the fractal nature of the tasks in software development. To design at a high level and break out the work, you have to make some assumptions about the ground-level designs (i.e. how individual components talk to each other). As you start implementing those, you'll often encounter unexpected constraints that require you to rework the high-level design. As you progress through implementation, you find more and more tasks that need to be done; eventually the number of tasks generated levels off and starts to decay, and it's only then that you really have an idea of how long you need (assuming no large bugs surprise you at that point).
I find even heuristics like double your estimate or padding it aren't useful because there's so much variance.
Flip it around: it's really hard to budget, plan for staffing, make commitments, etc. without the ability to give estimates of this sort. The best thing to give yourself when giving estimates of a difficult sort is the freedom to be wrong, and the ability to express how confident you are in such an estimate.
> Can we stop pretending we can forecast the unknown?
Except that you're wrong.
We predict quite well, thanks. The problem is that everybody ignores the predictions.
Most people predict approximately where the 50% probability is with maybe a little fudging. And they tend to be pretty good at it.
The problem is that everybody just adds those up. And that's the disaster from a statistical viewpoint.
Things can only come in so early, but they can come in infinitely late. A blown schedule on an early dependency throws everything out of whack far more than a late dependency would.
We can roll these up in a proper way. We can run Monte Carlo simulations and get "real" numbers. People have done this and the results are remarkably accurate.
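A sketch of the rollup problem - the parameters below are invented, but any right-skewed task distribution behaves this way. Each task's 50% estimate is honest, yet just adding them up badly understates the schedule:

```python
import random

random.seed(1)  # reproducible toy run

# Ten tasks, each with a right-skewed duration whose *median* is 3 days,
# so each individual 50% estimate really is 3 days.
TASKS, TRIALS = 10, 10_000
MEDIAN = 3.0
SKEW = 0.8  # invented lognormal sigma

totals = sorted(
    sum(MEDIAN * random.lognormvariate(0, SKEW) for _ in range(TASKS))
    for _ in range(TRIALS)
)

# Adding up the per-task medians is NOT the median of the total:
print(f"sum of 50% estimates: {TASKS * MEDIAN:.0f} days")
print(f"simulated 50% total:  {totals[TRIALS // 2]:.1f} days")
print(f"simulated 90% total:  {totals[TRIALS * 9 // 10]:.1f} days")
```

Because each task can only come in a little early but arbitrarily late, the simulated totals sit well above the naive sum - which is the proper rollup the comment describes.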
The problem is that someone in management will always undercut the realistic estimate for personal gain. And then the project winds up taking even longer than the realistic estimate in the end.
And, the worst part is that this is RATIONAL. The only way to get a project completed is to start and get the sunk-cost fallacy rolling in the powers-that-be.
In practice, for anything that's new, your project devolves into a set of spikes for development because your time-boxed investigation leaves unknown unknowns everywhere. Which is fine in that perfectly spherical Agile world, but it makes less-than-perfectly-spherical stakeholders very upset that they are not getting to use the Agile stick to beat over their developers' heads, as they have been trained to do.
Maybe a spike takes a week. Maybe a spike takes an hour. I can't estimate that.
Which would be fine, but then management or the PM expects me to loop back around when a spike is complete. I'd rather just continue engineering a solution when I'm done investigating, which is the natural progression of things.
I'd rather treat {get requirements, investigate, implement, debug, deploy} as a single atomic task, rather than splitting it across N meetings for planning the sprint, planning the spike, meeting with the PM for more requirements they forgot to put in the user story, describing it at a retrospective, showing it at a demo.
Now I know somebody will say "But that's not how agile works!". Well, we have a few agile "coaches" that were embedded in our teams who would disagree.
What are you doing in a spike? If you can't work out after a day the rough complexity of the task (bearing in mind you should be working on a small deliverable) there's something seriously wrong. You're not meant to be doing the work in the spike. If your research spike shows that it could be a can of worms, then the outcome of it should be another ticket for a proof-of-concept for another sprint, but with specific goals (e.g. "get a POC working that does X, Y & Z").
> Maybe a spike takes a week. Maybe a spike takes an hour. I can't estimate that.
Timebox it then! If you think the spike is going to take a day, and it turns out you'd need a week to even get the prototype working, then that's a successful spike -- you dramatically increased the lower bound on your estimate.
(Sadly by the asymmetry of estimation it seldom happens that you think it's going to take 5d and it ends up taking just 1d).
> Which would be fine, but then management or the PM expects me to loop back around when a spike is complete. I'd rather just continue engineering a solution when I'm done investigating, which is the natural progression of things.
This has nothing to do with agile. Feature X may sound great if it takes 2 weeks, but if it takes 6 months I may not consider it worthwhile right now. Engineering is not just about building something, but dealing with constraints like time and cost.
Sure, that works for adding something without a lot of connections. But if you are tasked to add a feature that has many hooks into a complex system (plus additional unforeseen hooks that you won't realize are needed until you implement it), it's basically a stab in the dark. You can guess a time, then triple it.
>That's when I knew it was time to leave that team
Currently in that situation. My "agile" estimate blew up by a factor of as much as 10, because the ask was conceptually very simple, even to the domain experts I consulted. And by bad luck, the way I was implementing the story, the issues unfolded one at a time, rather than all at the beginning, where we could have broken things up into more stories with additional time.
I mean, it happens in engineering sometimes. We engineer our way out of it, and the engineering is solid in the end. But no, we blew up our story, so all that working nights and weekends to get back on track was for naught.
In retro we would have asked 'what happened and could we have avoided this?'. Then we would have broken up the now expanded unfinished work, and asked the PM if they want to continue knowing it is 10x more work than expected. If yes, cool we pull in the tickets next sprint and keep going. If no, we wrap up anything in progress and maybe come back to it later.
Shit happens. The end of a sprint is there to highlight issues like this so the business side can re-evaluate if continuing is still a good idea.
I don't want anyone on my team working nights and weekends to finish a sprint task (system going kaput is the only time I would reach out for help). Anyone letting their team work normal sprint tasks on nights/weekends is not going to keep their team very long. They will either quit or burn out and stop working.
I've had to give estimates for my entire 20-year career, half of which wasn't 'Agile'. So first, Agile has nothing to do with this; in fact, Scrum tries to overcome this by splitting up tasks into smaller chunks with frequent demoing, re-prioritization, etc. It was much worse before this.
But to everyone: if management is going to beat you up with your estimates, maybe find a place where management doesn't? The pathological thing described in all of these situations isn't Scrum, Agile, or giving estimates... it's the managers who beat up employees.
Agree completely. There is a difference between management pushing and beating up though. Sometimes I wonder if people get upset at normal pushing/asking for clarification. I've seen both from management, but have also seen engineers get unreasonably upset at simple questions around an estimate.
What I've found is that when engineers are sensitive to giving estimates, it's a symptom of some sort of insecurity around their work. They're probably also feeling that they are 'outside' the decision-making/power structure.
What helps is first identifying why an engineer is feeling insecure about their job - imposter syndrome, or some sort of life events you may not know about. If the engineer hasn't been having issues delivering, then you want to really talk with them and find out what it is. Why do they feel they're going to get in trouble or 'beat up' if they don't hit the estimate? If they have been having issues, then you should be identifying why those issues have been happening, but that is a whole different set of issues than what I think this thread is talking about.
If they are feeling outside, this is probably evidence that the engineer feels they've been shut out of giving input on the 'how' or 'why' questions surrounding the design of the project. They may also be resistant to opting into what the goal/product is. Perhaps the goal isn't clear, perhaps the goal isn't well defined, or there are some bigger issues.
Finally, if you as a manager aren't constantly getting estimates as the project is progressing, especially if you aren't using an agile framework, then you need to. If you are using one, have you pointed your backlog without input from the team? Are the scores stale, and if you repointed with new info, would the scores be much different?
If you (anyone reading this) are a manager and you think your job is to get an estimate and drive your engineers towards it, then you are doing only a superficial part of the job.
>My "agile" estimate blew up by a factor of as much as 10, just because the ask was conceptually very simple
Please do not feel bad, nor let anybody make you feel bad about this. It's absolutely natural for this to happen. If anything you could use it to ask for a junior or intern to delegate implementation details while you dig further into the coal mine of this feature.
"Simple is hard." What you can learn from this (beyond team pushback) is what kinds of questions can be asked to figure out if a simple ask is masking 10 other things.
Broken, but very very common, IME. All of the agile-phile places that I have had the misfortune to interact with had things like sprint commitments, and etc. Very abusive and quite the opposite of 'people over process'.
I know that kind of environment too. Usually some department head is a YES person, saying the delivery can be done in some timeframe X. Then timeframe X is told to the dev team, and the dev team says X + 50 days. Then the dept. head goes back to the sales team, and by that time it's too late. Then the entire organization pushes down on the dev team with fury. The product is of course released in X + 75 (because you always need to double or triple the dev team estimate).
Those environments are toxic and the sign of toxicity is that the estimation process is not up for discussion (AKA the dept head keeps doing it again and again and no heads roll)
There's a balance to it for sure. It sounds like estimation was taken far too seriously on your team if it was affecting you outside of work too. However, estimation is one of the primary skills of a software engineer. It's always hard to estimate well, but it's infinitely harder for a less technical person to do it for you. I think it's important to understand that and do one's best to improve at it over time.
I just screamed this at someone (over IM, so not really screaming): estimates in scrum are meant to measure complexity, not time to completion. There are still people in 2019 saying things like "1 point is 1 day of work" and I want to murder them. The whole point of scrum is that time estimation is futile.
Estimate complexity and then measure your velocity as tasks get completed and you can make a rough forecast of future velocity.
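The forecast arithmetic itself is simple; a sketch with invented numbers:

```python
import math

# Points completed in the last few sprints (invented history).
completed = [21, 18, 25, 20]
backlog_points = 130  # complexity estimate of the remaining work

# Velocity is just average points per sprint; the backlog divided by
# velocity gives a rough sprint-count forecast, not a promise.
velocity = sum(completed) / len(completed)
sprints_needed = math.ceil(backlog_points / velocity)

print(f"velocity ~{velocity:.0f} points/sprint")
print(f"rough forecast: about {sprints_needed} more sprints")
```

The point is that time falls out of measured throughput rather than anyone promising "1 point = 1 day".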
Estimates are like the cones of uncertainty NOAA publishes during hurricane season. A simple task I have done before is at the bottom of the cone.
Possible complexity and unknowns push my estimates further out the cone. Less than a year, but more than 3 months is my standard answer for those random 'how long will this feature take' that I only have a vague idea about [0]. I then follow up asking if they would like to schedule a few weeks of research to close the cone a bit.
The point is that you have to manage the person asking for the estimate. Teach them that unknowns means something could take a day or a month.
I've only had one person really be a jerk about it, and my response was to make the estimate whatever they wanted. If they were not going to listen to me, then there was no point in giving an estimate at all. That response was from my younger, smart ass self though YMMV.
[0] This also depends on what is being asked. How unknown are the unknowns? For example, is the feature clearly visible in another product?
If you are a good engineer and a valuable member of the team people will know, whether you hit your estimates or not. It’s almost like the better you are at actually producing the less you need to worry about the red tape. If you are invaluable and crushing projects no sane manager is going to fire you over your estimate accuracy, but if you are doing poorly and not producing they may point to it as a problem area. I have only worked for small-mid size teams though so maybe it is not like that for huge ones.
I usually reply: Give me a week to look into it then I will have more insight into how long it might take.
But you are right: the problem is you can't estimate (correctly) how long it will take to do something your organization has never done before; that's what should be made clear to them.
At the same time it is true that estimates get better as we start working on it, and the work can start by focusing on the unknowns to figure out how difficult they are to do. If that is a plan they agree to.
Sometimes it's good to give people what they ask for [insist on], even if it does not make sense.
No anxiety needed.
Then calmly explain that the reason things didn't work out as [they] expected is that in the real world things don't work the way they want or fantasize about.
Suggest a better approach. If they refuse and keep doing [asking] the same thing all over again and expect a different result - then let it be their insanity [or anxiety], not yours.
There are ways to get better at estimates. And a proper process will help you to do it. Unfortunately most of the orgs do not do it.
1) We know that "people are bad at estimates". That is why the estimate should come from the team (not one person on the team). Most of the orgs that I know do not do this. And to be honest, they are not set up to do it, because people on a team do not have overlapping skills. Ideally you have a team where people can take over other people's jobs (to a certain degree). You can then, during grooming, play "scrum poker" to estimate (where it is harder for people to cheat by following what the lead person says). It is ok for somebody to say "I have no idea". The AVERAGE (not the MODE) is used for the estimation. If the estimates are more than 2 cards apart, have the two people at the extremes explain why they think it should take the small number of points and the large number of points, respectively.
2) If it is hard to agree on the estimation, the item probably needs a "spike" where more assessment or design is done in order to make clear what needs to be done. In my team we also agree that if a story has more than a certain number of points, we say it is too big and break it down.
3) During the retrospective or a specific meeting, we have the team look at the time an item really took and, when it was more than 2 cards off, try to understand what was under- or over-estimated.
Usually after about 5 cycles, people start developing a sense of what the right amount of story point should be for a story.
If you are always left by yourself with estimates, and you never properly scope the task beforehand and reflect on the variance afterward, it will be very hard to become good at estimation.
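The mechanics of the scrum-poker round described in 1) can be sketched like this - the deck, names, and votes are all invented:

```python
# A common planning-poker deck (an assumption; decks vary by team).
CARDS = [1, 2, 3, 5, 8, 13, 21]

# One round of simultaneous votes (invented).
votes = {"alice": 3, "bob": 13, "carol": 5, "dan": 5}

# Use the AVERAGE, not the mode, as the estimate.
average = sum(votes.values()) / len(votes)

# If the extremes are more than 2 cards apart, the two voters at the
# extremes explain their reasoning and the team re-votes.
low, high = min(votes.values()), max(votes.values())
spread = CARDS.index(high) - CARDS.index(low)
if spread > 2:
    print(f"discuss: {low} vs {high} are {spread} cards apart, re-vote")

print(f"average estimate: {average:.1f} points")
```

Voting simultaneously is what keeps the lead person's number from anchoring everyone else.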
Spot on! I’ve also been developing software for over 20 years and my estimates have never improved. I think an order of magnitude is the only reasonably useful estimate, which can then feed into the question “is it worth doing it or not?”
I think there is a way to give an estimate (have been doing it myself for a decade), even for dependencies that are not known from the start.
(I mean reasonable dependencies. No one can account for unknown unknowns, but software engineering is not an art so much as a skill; we have well-established processes to minimize surprises.)
Is this something that HN audience is interested in?
> I think there is a way to give an estimate (have been doing it myself for a decade), even for dependencies that are not known from the start.
I agree with this. The problem is that estimates vary, and that management teams often don't understand the way they vary. It's one thing if your estimates are precise and are generally accurate within 20%; it's another if your estimate is "it will likely take between six months and three years", and a third of the time the actual required time falls outside even that range.
... and yes, there are many projects for which such a wide estimate wouldn't be unreasonable; nor would it be unreasonable for there to be unexpected breakthroughs or challenges that significantly impact the timeline.
> Is this something that HN audience is interested in?
I can't speak for the community, but I think the only way you can know the answer to this question is to post it :)
It just sounded like teasing because it's a famously hard problem.
But having read your proposal, I actually totally believe that that could result in accurate time estimates. To summarize, remove the unknowns by planning things out beforehand, then count how many hundreds of small pieces you'll need! That sounds plausible as long as what you're doing is straightforward. (E.g., no crazy database optimizations needed to make things fast enough, no crazy ML techniques needed to make things accurate enough, no crazy algorithms needed to solve NP-hard problems, etc.)
You are right: non-trivial things are hard to estimate - as tasks may not even have an upper time limit.
But in the majority of software development projects the tasks are trivial, and it is possible to enumerate all the little details and their dependencies.
I was thinking of writing a tool to support the process instead of using Excel, but unfortunately development has stalled due to lack of time on my part.
Sorry, tried to answer and apparently I was "posting too fast".
One may start by enumerating the features that need development (a.k.a. "user stories"). Also enumerate external dependencies and team overhead (more people working together means more overhead). For each feature enumerated in step 1, try listing as much detail as possible, including open issues and unknowns.
For example, list all the UI elements under development. List all the individual functions / use cases that need distinct functionality (classes, functions) to be developed. List all the test cases. List external APIs and enumerate the different ways each API can be used. Enumerate failures and possible recovery actions.
I'll give you an example: a login dialog for an application. This starts as a two page requirements document, which balloons to around 120 items using this method.
This alone should give a good idea at the start of the project. Keep doing it (enhancing the level of detail) while tracking the ratio of initial estimate to actual time; that will help in estimating the remaining portion of the work.
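The ratio-tracking step can be sketched in a few lines. This is a hypothetical illustration of the idea, not the commenter's actual Excel sheet; the function name and the sample numbers are made up:

```python
# As enumerated items complete, maintain the ratio of initial estimate to
# actual time, and use it to correct the estimate for the remaining work.

def remaining_forecast(completed, remaining_estimate):
    """completed: list of (estimated_hours, actual_hours) pairs."""
    est = sum(e for e, _ in completed)
    act = sum(a for _, a in completed)
    correction = act / est if est else 1.0  # no history yet: trust the estimate
    return remaining_estimate * correction

done = [(4, 6), (2, 3), (8, 15)]      # estimated vs actual, in hours (made up)
print(remaining_forecast(done, 20))   # corrects 20h of remaining estimated work
```

The same correction factor carried across projects is effectively the personal multiplier other commenters describe, just derived from data instead of a gut constant.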
The problem (the way I see it) is that we don't really have the tools to do this kind of tracking and analysis. Jira and similar systems don't even come close.
Here is an attempt to apply the methodology using Excel:
This is roughly how I was taught to estimate. It is accurate but takes time and everybody hates the result. Management halved my estimates without telling me, made fun of my mentor’s estimates, complained about needing to know everything upfront (not very agile eh!), etc. I think you are right and estimating is more of a solved problem than we think, some people just don’t want to admit it!
And then the customer decides that they don't want to move forward with the project because it's too expensive. Oh, and they don't want to pay you for all the time you spent estimating because no one pays for estimates up front anyway.
But actually, once you show the customer what's involved, and give them a realistic estimate, it might reduce the chances of cancellation. The discussion might be steered towards what they really want, or prioritization of deliverables.
A mockup (and multiple iterations thereof) might be necessary as well.
> I still can't estimate how long something will take me when I've never done it before
In that case you're meant to do a spike to research the issue. You can timebox the spike to half/whole a day. Then use that knowledge to help your estimation for the following sprint.
I've become convinced over the years that upper management doesn't really care about 'estimates'. What they care about is 'commitments'.
Every team I've been on has had upper management drive team leaders to get 'commitments'. They want some form of emotional investment in the work. That way, when scope has arbitrarily changed or a massive problem has led to overrun, they can stare at you like a whipped puppy-dog and say "I know it's 10PM and you want to get home to your wife, but you promised us, you committed, to getting this done on time..."
Pure emotional exploitation. The ideals of agile are laudable. I was the excited bannerman of my first agile experience and I read the manifesto with a lot of hope. However high-minded agile is, the language is too easily co-opted into the worst sins of management. In many places it exists solely to blackmail free overtime out of engineers, get away with micro-management, and ride engineers with false deadlines under a guise of hipness and modernity.
Estimates are just there for a false sense of stakeholding, or at best a ballpark cap so you don't work yourself into suicide. They can't be made accurately even if we could estimate properly, because all estimates are constantly second guessed by people who are making judgmental jokes about sandbagging, or asking "Are you really sure it'll take that long?" in voices that practically scream "chilling effect". More than a few possibly realistic estimates are torpedo'd by managers who are looking at their roadmap and tsking about how they don't line up.
I would even say accurate estimates are actively undermined. I've worked at companies which have positive work images abroad, and even there any team which was on time found either its sprint budget slashed, or its scope increased until it was forced behind. The underlings won't do overtime if they don't feel the squeeze.
Story points, velocity, burndown, and any other metric you can think of that Agile courts are absolutely useless, because this industry refuses to come to terms with Goodhart's Law. I just watched a 10 million dollar project go up in flames. Their sprint graphs were great, and a constant buzz was going about the company about how these teams were setting the standard, meeting their metrics and goals consistently. Only recently did the CEO get down to business and figure out that all the metrics were bogus, and that for the past year all of the PMs and teams had been gaming the system to hide the fact that they were way behind schedule and over budget. The project was quietly scrapped and nothing changed. The shell games continue to actively sabotage data. I only know of the fallout because of who I sit next to.
Most of the benefits of agile seem to fail basic contact with humans unless they are backed by outstanding and visionary leadership. Most companies do not have that, yet they still find value in switching to Agile. Why? Maybe because even in total collapse it provides an unparalleled system for squeezing out more overtime, in my cynical opinion anyways.
I don't really know what else to offer in place of Agile. I'm not that intelligent, but I don't feel the industry can come up with a good model of software development culture until it stops what it's doing and starts acknowledging the problems arising from basic psychology, politics, and data gathering.
Nobody in charge cares about the estimate of an individual task. They care about progress toward program goals. With experience and skill, estimation mistakes tend to come out in the wash.
Any software development methodology or process is going to come up short if it ignores the fundamental nature of software development, which is: We do not know what we're doing until we do. Any process or methodology that tries to extract promises out of the software development organization about things that are still in the we do not know what we're doing phase is going to result in disappointment at best and serious harm to the overall effort at worst.
And it turns out that it's not unusual for software developers (and consequently software development organizations) to spend more time in the we don't know what we're doing phase than the we do know what we're doing phase. That sucks from a management point of view, but no amount of process is going to make the problem go away. The best you can do is manage the risk. We don't need good processes or methodologies nearly so badly as we just need management enlightenment about how software development actually works.
That's the truth, except, I can't tell clients that "I'll know how long it will take when I'm halfway done." The best I can do is look at prior work and multiply by the LarryDarrell Constant of 1.5.
I think one of the top skills for management / senior dev is to be able to translate between software estimate uncertainty and the business reality surrounding the schedule.
For example, if you ask, "How long will this project take?"
You could just say, "I don't know. At least two weeks, but it might take as long as six months." This may be accurate but it's not helpful.
Or you could say something like, "In a half day I could put together a preliminary design, and identify the most uncertain parts of the project. From there we could figure out what is needed to tighten the schedule or adjust the requirements."
And I think that one of the more common problems with managers in larger orgs is that they can't tell the difference between somewhat arbitrary deadlines (we want this for Q3 to look good in our org, but we can break this project into smaller milestones and push it back if necessary) and deadlines that reflect the reality of business (we want this for Q3 because this will have cascading effects on other parts of our business, and if there are problems we need additional resources or an adjusted scope).
Any time there is a deadline, the manager should have a clear understanding of the consequences of missing that deadline, and be able to communicate that to the engineers. The efforts to come up with a schedule and milestones should be collaborative. Too often there are mandates that have unclear consequences for missing them, and when schedules slip you have no idea what you are supposed to prioritize.
Funny thing. If you do this organizationally, then you eventually end up making larger and larger estimates. IMO it is better for management to understand the inherent risks associated with engineering estimation and build the padding on their end.
I usually do 2x my expectation. I keep track of my prior estimates and compare against my real time. I normally miss by about 30% - 50%, this leaves me a fair margin. It’s also been improving with time, which means I’m getting better at assessing my skills and problems.
For me, I tend to work quicker than other contractors / employees, so I still often beat out the other quotes (time or money). That's not always the case for others, so the margin of safety may not be there. In that case, multiply by a smaller constant.
I kind of lean towards double the schedule and cut the scope in half and hope that maybe we can deliver a little extra credit work.
It's important to note that padding schedules is not just done in software; it's a common practice in all sorts of engineering. It's not bad that we do this, it's bad that management often doesn't understand that we do this and doesn't understand why. The understanding-why part is actually really important, because it means management can be expected to do their part in managing the uncertainty, or at the very least be careful about not making the situation worse.
It's a tough nut to crack. Think about fields like architecture and engineering, where the historical knowledge is well over three thousand years. Yet they still have failures, go over estimates, and generally suffer the same issues that we do, but at much greater costs.
I was prepared for our weekly "I Hate Agile" post, but this one is actually really great. It's a lot of the arguments I make to Agile haters. The fundamental problem that drives most agile failures isn't in the team's execution, it's in the business' expectations. One side is signed up for incremental delivery, and one side is set up for a fixed scope and deadline and the result is misery. I think this article makes a lot of good points. Scrum isn't the complete answer, but it's a big step in an organizational transformation. I think it's suffered a lot from being oversold.
> One side is signed up for incremental delivery, and one side is set up for a fixed scope and deadline and the result is misery.
This is a brilliant summary, thank you.
The best 'agile' experiences I've had are situations where the 'clients' are directly involved, often within the same organization. Instead of a hard scope or deadline, there's just a shared interest in producing a valuable product efficiently, and the users are on-hand throughout the process for feedback and reevaluation.
The worst experiences have been waterfall contracts, developed by an internal simulation of agile. The software team does frequent "releases" to business or management, who provide feedback and feature requests, but the actual recipients are uninvolved outside of occasional demos, or contacted only indirectly by non-programmers. The result is almost always thrashing, with time and effort spent pointlessly satisfying the forms of agile even though the real timeline and customer feedback are unyielding.
In large enough orgs, internal clients can end up being just as bad as external clients, or even worse since they have a direct line to your PM, can track your feature board, etc. yet there is basically no sense of camaraderie or shared goals.
I'd say generally IME they are still preferable, but occasionally can be more painful.
I often sit between the business and tech orgs, so the way I explain it to the business is this: I can get you detailed status reports and metrics, but they will slow progress and be expensive. So think about why you need that information: if there are legitimate business reasons with dollars attached, go for it. If it’s just to soothe your anxiety about the timeline, a therapist will be far cheaper and more effective.
“One side is signed up for incremental delivery, and one side is set up for a fixed scope and deadline and the result is misery.”
That's just one example of the general value I like to ascribe to "AINO" (Agile In Name Only): it gives you a useful mental model of the delta between what you have and a known good process, making it quite easy to name the missing pieces. This usually doesn't make it easier to actually fix their absence, but at least you know where to start with damage mitigation. When waterfall fails, you just throw your hands in the air and say "more of the same".
Actually, I think Scrum is just a first step to give the development team some autonomy to create a bubble in which it can play by its own rules. AFAIK, mediating between the development team and the expectations of the management is the job of the Scrum Master [1]:
> The Scrum Master serves the organization in several ways, including:
> ...
> - Helping employees and stakeholders understand and enact Scrum and empirical product development;
So, I am not saying that it is an easy job. Many managers just hear what they want to hear, so it can be quite difficult. But if your team has such issues, be sure to support your Scrum Master with good arguments to help him make managers understand what it means to use Scrum (and become Agile).
Agile never gave organizations a holistic, viable alternative to Waterfall. Because there’s a difference between theory and practice. Product work is more about practice. When we complain about “AINO” (Agile In Name Only), we’re not being honest with ourselves.
I agree with most of the article, especially the keep-learning part.
All Agile did was put software development teams unfairly under a microscope.
I believe Agile has been tremendously beneficial for the industry globally, especially in some subtle ways. For example, Agile says you have to communicate a lot if you want to get software done. Here is the subtlety: if, today, you stop telling the average programmer to communicate, they will stop and go back to silo mode, "naturally". At least some of them. If you think about software at the industry scale, you have to think about a wide population, which requires processes.
I was once working in a nice little company where, one day, they introduced agile. It was so beneficial. Before, the bosses thought that talking was simply a loss of productivity. You're a programmer, right? So code, don't talk. So we would develop software without talking to each other. Afterward we had daily meetings, scrum things and stuff. Our velocity skyrocketed.
You have to realise that before Agile, a fair portion of all software development projects that were started would simply bust and never get shipped. The code is a complete monster or the budget is nuked. When I started at the company I just mentioned, I started working on a codebase that was the worst code I've ever seen in my life. You would touch one line and everything would stop working. It was a condensed piece of spaghetti with hacks on top of hacks on top of hacks. Software architecture? That requires some talking and thinking; forget about that, not permitted.
Now, I agree with you that agile is not much use for a hacker who consistently gets software done and deeply understands what's happening.
edit: not that I'm an agile guru or anything. Actually I only know the thing superficially. I show up at the meetings, when I'm asked how hard something is, I answer, then I mess around with my office friends. Still, I can appreciate that it works much better than any process the bosses can come up with.
> You have to realise that before Agile, a fair portion of all software development projects that were started would simply bust and never get shipped.
I have plenty of gripes with Agile, but there's definitely a "victim of its own success" aspect to the whole thing.
"Agile in name only" is frustrating compared to a good system, but in many cases it's still far better than what came before it. Basic ideas like "we should expect requirements to change" and "if the programmer doesn't know what their code will be used for, something is wrong" weren't necessarily accepted prior to agile. Projects that are chaotic, mismanaged messes under AINO might well have been orderly deathmarches or cancelled outright in the past. Some of this progress is technological (source control, post-release patching, digital distribution), but some of it really does owe to Agile.
It reminds me a bit of the scientific method. The elaborate eight-step thing taught in school feels like silly boilerplate, but it's shocking to realize that "ideas are tested by experiment" was a genuine breakthrough from a past that made major choices like disease treatments and scurvy cures based on 'reasoning' without testing them at all.
> You have to realise that before Agile, a fair portion of the software development projects that were started would simply bust and never get shipped. The code is a complete monster or the budget is nuked.
The difference is that of ivory tower planning and the following phases of development, testing and a “big bang” release (waterfall) vs working with an MVP with the purpose of releasing as soon as possible and then work in iterations based off of actual feedback and demand (agile). If you manage to nuke your budget or create a monster of a code base already at the MVP stage no methodology is going to save you.
I really do believe agile (or at least "not waterfall") has reduced the rate of major software flops. But I'd be fascinated to know how often it turns failures into successes, versus revealing failures earlier in the project cycle. Both are valuable, obviously. It's just interesting that part of Agile's value is in revealing engineer dysfunction or fundamentally bad ideas at the MVP stage instead of the completion stage.
That's a fair point, and I agree agile development would not be possible without the internet. But working incrementally based off of customer/user input is only possible after the first release. What was observed here was the huge number of projects that would previously fail without ever getting to that point.
I have the subjective sense that the rate of massive, irreparable flops in software has gone down a lot. (Very anecdotally, the latest "permanent failure" on Wikipedia's list of major failed software projects is from 2014. That list is worth a read regardless.)
Projects still get cancelled, budgets and timelines get blown out, features and testing get compromised. But at least in dedicated software companies, it's pretty rare for something to get through 90% (or 150%) of its allotted development time and then be completely discarded as unsalvageable. (For technical reasons; market changes are a different beast.) Projects in technical crisis are either apparent sooner or have usable elements, rather than failing outright at the end of a waterfall.
Of course, there's a lot of room to debate what changed and why. If Agile led projects to fail sooner or less dramatically, that's still important, but less notable than if it changed them into successes. And a lot of technological advances have helped too; things that might have been outright failures as shrinkwrapped software can become late or overrun projects in the era of digital distribution.
I was interested in comp sci precisely because I didn't have to talk to people, an area where every interaction is a performance. I want to be alone - what's so wrong with that?
If you are writing software just for yourself, there is nothing wrong with that. If you are writing software for others, be prepared to be steamrolled by people who know how to write good code AND how to communicate with consumers of the software (i.e., paid customers, OSS devs, enterprises, etc., that one will depend on what kind of software you are writing).
I think that a lot of developers with a mindset similar to yours tend to underestimate how important good/valuable feedback (and communication in general) is. And I am saying that as someone who initially went into comp sci for reasons similar to yours. Some of the best engineers I ever worked with had amazing communication skills, and it acted as a x10 multiplier to their technical skills and overall productivity.
Feel ya. Surprisingly enough, I'd say that I've not had to attend so many boring meetings in my career but certainly I'm just lucky. Also, once I did get a manager who was an "agile guru" and he was annoying. He talked to me like he's the jedi of software development and I'm his padawan. Thankfully, he got fired.
I had a professor at university who acted that way about Agile. He thought he was literally Uncle Bob incarnate. But when you looked at his credentials, he had never once worked in a real production environment, he had only ever taught.
Nothing is wrong with that, but don't expect to keep a job where you don't have someone to do all the business and project planning and coordination for you, at the expense of your salary. If your computer science talent is good enough, that might work for you.
I had a similar sentiment - As a programmer, I did not want to be communicating with a client, I wanted to be left alone in a sense that I only ever had to interact with people that understand how coding works.
I think it's doable; say you are the sole developer of a massively used library. The programmers being your only users - they read the API docs, submit an occasional feature request/bug report.
Heh. I have a reverse sentiment. As a programmer, I want to communicate with the actual users who'll be using my product. But without two layers of intermediaries on both our and customer side, who end up turning this whole thing into a game of telephone, ostensibly in the name of all the other important stakeholders.
The programmer users are arguably "clients" in this context too. Much the same sorts of issues, different labels.
One need only look at the bug tracker on the average "massively used library" to see that you will be communicating with clients a great deal. Your dream of a small group of perfect programmers filing neat and accurate feature requests or bug reports doesn't match any reality I've ever seen. To expect them all to "understand how coding works" is also likely a pipe-dream - users of all ability levels are out there writing code, filing bug reports for things that aren't bugs, asking questions in forums that aren't meant for those questions, mad at you because the direction the project has taken isn't the one they or the company they work for want/need, etc etc...
Users of a library are still users. You can write a library in an ivory tower, just like you can write applications in an ivory tower, but that kind of software is written for yourself and only incidentally for anyone else. People might still use it if there are no viable alternatives, but don’t expect them to be very happy about it. Especially because scarcity of communication also implies scarcity of documentation.
You're abdicating your responsibility of developing working software. Your job is to solve problems, using code where appropriate. Your job is to understand the entire system and explain how the corporate system above interacts with the technology you are tasked to develop.
If you can't do that, what is the difference between you and a group of outsourced employees making pennies on the dollar?
you’re presupposing he wants to fit into that holistic model, as a senior person. maybe he is fine plugging away at code, the task defined, like a monk transcribing texts.
the world needs both kinds, and a range in between.
Ultimately, these days programming is table stakes - IMHO you get zero credit for knowing how to code. The best developers are those who are able to step up and actually interact with people to articulate and understand problems. Whether that's with fellow developers about code, with managers/stakeholders about technical things, or with end-users/clients about users needs.
You're more than welcome to want to sit in isolation and code away, but I would suspect you would find it difficult to do this and be happy. I've never worked in a team that would work with someone like that.
There is nothing wrong with that, but careers under capitalism are about _signaling value_. I hope you find a place (or have found a place) where you're appreciated. :)
As far as your career - salary and promotions and recognition - you can't just sit in a corner and be quiet. People have to recognize that you're good at what you do and it's hard for others to see when you only communicate in code check-ins.
You have to start attending meetings, speaking at meetings, and being useful beyond the code.
I'm not sure why you're being downvoted but some of the best career advice I've heard is someone saying you should assume when you go to work that you're working under a communist dictatorship. I.E. Toe the party line, make your boss look good, (pretend to) eat up the company propaganda, assume the leader(s) (C-suite) will do whatever they want, when they want, especially giving themselves bonuses regardless of company performance.
It's a weird contrast to normal life, considering a lot of us live in democracies, but once I understood this and started acting accordingly, it helped me handle my work life.
I don't know if that's great advice, but its certainly advice that will help get you promotions and salary bumps.
Like anything there's a grey zone here. Understanding the political realities of a business is highly beneficial and will help you move forward with the company and let you know when to pick your battles. But I've watched people who do nothing but this lose the trust of those at their level. And that trust is crucial for agile development. I also personally feel like pushing back on the company when appropriate can indeed provide a lot of value to the organization. "Why are we doing this meeting?" "We need another week for testing" "I'd like to see the roadmap you are planning". Just don't push back all the time.
It is probably good advice in most cases, but it depends on the individual shop and culture. It's like whether you really should always wear a three-piece suit to interviews regardless of what they request. Some are flatter, others are more hierarchical (ironically, practice can fail to line up with org-chart structures).
Some are all about the politics; others don't have the luxury of self-delusion, or actually value the sort of brutal honesty needed to say "I have looked at the new framework - while trendy, it is buggy, inefficient crap." The ones who lack the luxury tend to be smaller, but small size is no guarantee that they'll say no to the flavor aid.
In all areas of life we judge those around us not by the true facts, but on our limited knowledge. Working to improve that knowledge can result in a change in judgment. It seems completely reasonable for there to be a person who does good work but isn't known for doing so, and as such is viewed worse than they should be. By working to increase others' knowledge of the work they do, their evaluation in the eyes of others will improve.
Of course, it isn't straightforward or simple in practice. There are those who lie and misrepresent, and if you are too obvious about your intent you will be viewed as manipulative.
I don't see why this deserves either the label of BS or a ban from HN?
Perhaps some think it's uncomfortably close to the 'slur' of 'virtue signalling' that people throw around. While I'm sure the intention was nothing but pure in how it was used here, it did make me second-read the comment to assess whether it was being used in a negative way, and to really understand the point being put across.
> > All Agile did was put software development teams unfairly under a microscope.
Most managers crave control over their teams; this is not something that Agile introduced. If anything, Agile let them use the sort of control they were already demanding towards more useful and productive goals, by introducing bazaar-like practices to centralized software development (release early, release often; shorten feedback loops as much as possible; make extensive use of refactoring, software testing and XP principles); while at the same time not being altogether incompatible with self-organizing development teams (these were mentioned in the original Agile manifesto, after all).
Eh? I'd compare it to No True Scotsman, not AINO: I mostly hear people insist that anything which doesn't work isn't Agile.
I'm extremely grateful for the changes agile wrought on the software industry, and while I think many of the best insights have become commonplace, I don't think formal Agile methods are outdated or exhausted yet. But I do think formal Agile is extremely hard to do right, and has very common failure modes of knowingly-unrealistic planning, unproductive meetings, excess design changes, and tech debt neglected to ship MVPs.
One measure of a methodology is how much value it provides when it's used right, and I agree that Agile shines here. But it's also worth asking how easy a method is to get right, and how gracefully it devolves when things aren't perfect. My experience is that agile advocates commonly neglect those parts, dismissing widespread frustration with 'bad agile' on the basis that if it had been implemented perfectly, those issues wouldn't have come up.
I have a theory that commercially successful software development methodologies are like diets: they have to be almost impossible to follow. This ensures that when you fail to lose weight/achieve bug free software, you blame yourself for not following the rules exactly, rather than the rules for not working.
At the very least, I think it's common to judge methodologies by "how well does it work when it's done right?", when we should be using a three-part test.
1. How much value is there if it goes right?
2. How easy is it to get right?
3. How gracefully will it break down if it's not done perfectly?
Methodologies don't have to shine on 2 or 3. NASA-style clean-room coding is the gold standard of hard-but-effective; you have to suck it up and commit entirely with a great team, but the results are unbeatable. Waterfall was "good, bad, bad", but used by lots of less-exceptional teams with hideous results.
Agile, I think, improved #3, but not #2 so much. It's commonly sold by appealing to how much it can potentially help, but I think the more honest pitch is "it's tough to do right and it'll sting if we get it wrong, but it hurts a lot less than getting waterfall wrong."
Then maybe there should be one rule (as was my most successful dieting effort): Is what you're doing helping make or save money?
I know such a simple rule is probably causing a lot of you to cringe, but I can very easily fit just about any activity of value you or your team embarks on into that question, and it's a rule the entire company can get behind.
That's probably a very apt metaphor, but there's an upside too: many diets actually do at least temporarily help people in spite of their irrational basis, just because $random_diet causes the dieter to observe and reflect on their eating habits more carefully in general. Even a nonsense diet like "eat only foods colored yellow or red" probably does a lot of people a lot of good, compared to their usual state of being completely unconscious of their awful habits. Same goes for software development practices.
I think with this analogy you're presuming that people are exposed to a variety of foods and there is some positive correlation between action and awareness. However, for someone who only ever goes to McDonald's to eat, they'll just switch to french fries and ketchup for your nonsense diet.
"Thou shalt not be negative" (HN) - well I'm sorry. Just some example text:
> both Agile and Waterfall are focused on building. Design is about validating.
or
> So…what’s the way out? It’s a smart focus on clear outcomes, not output, with roadmapped outcomes replacing planned milestones, with trusted product teams, not project teams, empowered to vet assumptions and discover the minimal path to value.
Satire? Enough words to fill a hot air balloon or two. "Smart" focus, "clear" outcomes, "outcomes" vs. "milestones"(?), "trusted" product teams (as opposed to... what?), "empowered" etc.?
Far more interesting than the contents of many links that make it to the HN homepage is discovering the fact that they do so.
Honestly that statement hits the nail on the head for me. There's a huge distinction between an output and an outcome, and a huge distinction between a project plan and a road map.
> between an output and an outcome and a huge distinction between a project plan and a road map.
I mention neither of those.
I'm not complaining about your reply; I think it shows how those texts are (to be?) used: just like art. You project your own experiences into it; it's more about getting you to think. So naturally different people are going to react differently and read different things into the same text. That's perfectly alright - you get something out of it (only) if you put something in. If there is nothing to begin with, which may just be lack of experience (a new programmer), you'll get less out of it. If you have two decades of large-project experience in various companies and teams, you'll get a lot out of it. Let's attempt to make the best of it then, out of any text, if we are presented with that opportunity and the headline got us to click on it :-)
In the spirit of that second paragraph - focusing on outcomes, not specific implementations - here's a rewrite:
Original:
> So…what’s the way out? It’s a smart focus on clear outcomes, not output, with roadmapped outcomes replacing planned milestones, with trusted product teams, not project teams, empowered to vet assumptions and discover the minimal path to value.
Rewritten:
> So… what’s the way out? It’s a focus on clear outcomes, not output. There should be dates for when specific outcomes should happen. Teams should be empowered to find the minimal amount of work needed to achieve the outcomes.
It's a really wordy way to say what any good leadership class would tell you. Focus on what you want more than how you get it, and let your subordinates decide how to get it. Both good product teams and project teams are capable of working that way.
This kinda sums up my problem with endless agile vs. waterfall vs. whatever discussions. The interesting parts aren't really new ideas and the extra buzzwords add nothing.
Interesting, I’d interpret “roadmapped outcomes replacing planned milestones” to mean there should not be predetermined dates and deliverables (planned milestones), but rather, a sequence of value-delivering outcomes (roadmapped outcomes).
A relative order of outcomes could also work, yeah. You'll have a date in there somewhere, though, even if it's not driven off the amount of work. Could be "and if we can't achieve this outcome by X date we give up on this whole outcome".
You don’t hear the name Joel Spolsky much any more, but he was pretty influential in software process thinking in the 90’s - not really for being particularly insightful or original, but more because he was one of the first people who thought of writing a blog about software design. One of his early “observations” about software project management was that “you wouldn’t buy a pair of jeans without knowing how much they were going to cost, why would you expect your manager to sign off on a software project without knowing how much it was going to cost?” The utter blithering stupidity of this perspective highlights how managers look at software design: the purpose of software methodology is to put a fixed cost on every software project. As long as that’s what they’re looking for, software methodology is going to continue to be a lost cause.
Your argument here is against "business". In a dream world, where we could build for building's sake, and create "more perfect" things with no considerations for cost, I'd agree with you, but businesses (in the abstract) are a math function: Does it cost us less to make this than we charge for it? If so, we are a profitable business, if not, we are an unprofitable business.
Managers are there to manage costs, to protect the "business".
I think every owner of a casino knows there is luck involved in their profits, and that the odds are stacked slightly in their favor.
I don't think any owner of a business thinks that there is luck involved in creation of their software systems. They absolutely look at the $10 million per year they are spending on "IT" and look at the $26 million per year in value that spending is unlocking in absolute dollar terms.
But they know that the sum of all cards drawn will be profitable, because their practices guarantee it. Normal businesses cannot guarantee that the sum of all their projects will be profitable.
That's one of the big reasons that I've become a proponent of Donald G. Reinertsen's approach to product development.
He emphasizes a variation of the cost benefit formula + urgency, called Cost of Delay for prioritizing that basically boils down to "Value / Time Remaining" because your cost is the time remaining. It naturally prioritizes work based on shortest time to value because, unless the value proposition is enormous, dividing by estimated time is almost always going to favor the shorter tasks.
This will of course vary by team, but in general it puts a focus on responsiveness and getting the little stuff out of the way first (including tech debt). When you have a formula that doesn't let the little stuff build up, it gets easier to focus on the big stuff too. It also makes it easy to pivot because it totally disregards prior time investments.
It meshes with what agile is supposed to be better than any other methodology that I've encountered to this point.
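To make the "Value / Time Remaining" idea concrete, here's a minimal sketch of that prioritization rule. The task names and dollar/week figures are made up for illustration; the only thing taken from the comment above is the formula itself (divide estimated value by estimated time remaining, work on the highest score first):

```python
# Sketch of Cost-of-Delay-style prioritization as described above:
# score = estimated value / estimated time remaining.
# Tasks and numbers are hypothetical.

tasks = [
    {"name": "big feature",       "value": 100_000, "weeks_remaining": 20},
    {"name": "small bug fix",     "value": 8_000,   "weeks_remaining": 1},
    {"name": "tech debt cleanup", "value": 15_000,  "weeks_remaining": 2},
]

def score(task):
    """Value delivered per remaining week; higher = do it sooner."""
    return task["value"] / task["weeks_remaining"]

for task in sorted(tasks, key=score, reverse=True):
    print(f'{task["name"]}: {score(task):.0f} per week')
```

Note how the small items win unless the big feature's value is enormous: the bug fix scores 8,000/week and the cleanup 7,500/week, while the big feature scores only 5,000/week despite being worth the most in absolute terms.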
Tech debt is not little, or we wouldn't carry it around rather than devote time (and risk of revealing new unknowns) to fixing it. My agile experience has always favored conquering little stuff first, since people recognize stories more than points (which vary in value over time). This, unfortunately, sometimes leads to the hilariously subversive act of splitting a story up into as many tiny pieces as possible so that some people may inflate that "What have you done for me lately" list while more tangible coding is done by others.
splitting a story up into as many tiny pieces as possible
Honestly I think that's the intended outcome from the business's perspective. The smaller the piece of work, the easier it is to assign a cost and a benefit to it, and the easier it is to hand off to another developer.
>... and the easier it is to hand off to another developer.
I have never seen this work out in practice. And it's like a 1%-of-the-time thing anyway.
On the flipside, I've seen quite a few hours wasted adding fake subdivisions to tasks, probably close to 30% of the time. Outside of small, known bug fixes or tech debt cleanup, it's rare for a thing to wholly fit within a sprint.
Cost of Delay metrics do a good job of explaining how the cost of that debt can grow over time the longer it goes unfixed. Some are smaller than others, but uniformly it tends to get harder to fix the longer you wait to fix it.
He also doesn't place any type of value in points or other, bluntly, useless metrics for measurement. The entire focus is around delivered value. You can break it into however many stories you want, the value is the same. The measurement of how many things you delivered or how much time they took is not important to the company.
What's important is how much value has been delivered to the company. It's the only metric that matters.
The subtler point is that if the developer won't make an estimate for how much time a feature will take, then the manager will. And the manager's estimate will tend to be even worse than the developer's, so we should all prefer that the developer makes the estimate even though it will be a bad estimate on an absolute scale.
Note that the manager's estimate may be implicit or explicit, conscious or unconscious, but they will make one in the absence of the developer's. An implicit estimate would look like the manager doing a "gut call" when deciding whether or not to give the go-ahead to a feature, with no numbers attached to time and cost.
The problem is that if the developer gives an estimate, the manager might hold him accountable to it. If the developer can't even meet his own estimate, that proves to the manager that he's a bad developer. (He's not; he's just bad at estimating development time, as is everybody else.)
And that's what Scrum and story points are for: to help you make long term estimates once part of the work has been done. You use the feedback on how abstract story points turn out in practice, to get a rough long term estimate of how long the work is actually going to take. Without being so explicit that anyone can be held accountable to them.
"Yeah, yeah. It doesn't have to be perfect analysis, just a rough idea for scheduling. Not written in stone, ha ha."
"Uh, a week?"
"Okay, great, so if I say 8 days you should have it done by then?"
"Yeah, I hope so."
"Great, thanks."
Day 3: actually doing X requires unforeseen Y and Z which will each take a month.
"We need to add Y and Z to the schedule, which will each take a month."
"There's no room for that in the schedule."
"Then I can't do X"
"We've already costed X and committed to it. We've got to do X. We need to make X happen. When can you get back to me with a path to green for X? And also, let's schedule a post-mortem to figure out why we missed on X."
At the post-mortem, PM doesn't show up: "Uh, turns out we missed on X because I estimated it too quickly and the PM committed to those estimates. I didn't know about Y and Z when estimating, and I could only have detected them as requirements if I had already known to look for them."
Email from PM: "Hey, could you give me an estimate for X'? Doesn't have to be perfect, ha ha."
"A week" is a terrible estimate for complex things that have components that you can't foresee. If you lose "a day" on some problem outside of your control (compiling starts to fail and it turns out someone checked in some code into your branch by mistake), you've basically wasted 20% of your estimate.
Never say a week when there's the chance something will require a month because you didn't understand it.
You need to learn this lesson. A week is a blink of an eye in software, and only things you understand take a week.
"I don't know why, but I don't feel comfortable guessing less because I don't fully understand X."
"Okay, that's not a good way to estimate. Why don't you spend some time to come up with a plan for X and we can estimate it after that. When can I have the plan?"
I've solved this by not panicking about software someone else owns. Worked out OK so far. They can fix their processes and expectations (I'll help!) to get more value out of me, or choose not to. Not my problem. Environment gets too toxic, well, see ya, and good luck (you'll need it).
This would've worked if you'd taken your initial estimate (1 week) - bumped it to the next unit (1 month) - then doubled it (2 months). Maybe you would've been given 9 or 10 weeks, so you'd have the week for X, and 1 month each for Y and Z.
I understand this is a contrived example, but it's striking, when you take a real project that "overran" its initial estimate(s) and apply the formula, just how often the result is really close to the actual amount of time.
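The heuristic above can be sketched as a tiny function. The unit ladder and the doubling factor come straight from the parent comment; the saturation at "year" (so a year-scale estimate simply doubles) is my own assumption, since the comment doesn't say what to do past the largest unit:

```python
# Sketch of the estimate-correction heuristic described above:
# take the initial estimate, bump it to the next-larger unit, then double it.
# E.g. 1 week -> 1 month -> 2 months.

UNITS = ["day", "week", "month", "year"]

def corrected_estimate(amount, unit):
    """Bump the unit up one rung (capping at 'year' -- an assumption),
    then double the amount."""
    next_unit = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
    return 2 * amount, next_unit

print(corrected_estimate(1, "week"))   # 1 week becomes 2 months
```

So the 1-week estimate in the story becomes 2 months, which would indeed have roughly covered the week for X plus a month each for Y and Z.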
> "How long will X take?"
>
> "I don't know, I've never done X before."
I fail to see how your example makes any case on how estimating how many resources a project needs is stupid.
At most your example argues that asking inexperienced and clueless devs to estimate stuff they know nothing about produces highly unreliable information.
The main difference between hacking away at a code base and software engineering is identical to the difference between construction workers and civil engineering. Sure, estimating is hard. Yet if you want reliable estimates you need to check with experienced professionals.
I've worked about 10 years at FAANG companies and I'm yet to meet the experienced and expert devs who are good at estimating things. I think your characterization of "inexperienced and clueless" devs is off.
The reason I think my example illustrates a problem is it's moving away from agile. I shouldn't estimate X without understanding it? Okay, I'll take the time to figure out, look around corners, etc. I realize I need Y and Z. I've got to estimate those, to estimate X, so I start working with other teams to figure these things out, and drawing up plans and schedules and who's doing what when to get all this stuff delivered. Now we're basically at the waterfall model where we're planning everything out before doing it.
The other problem is that all the process is genuinely a waste. If we need X then let me work on it rather than do all the planning and scheduling nonsense. The schedule will be wrong anyway. If I have problems, need help, etc with X then I can raise those concerns and management can react in an agile way.
If you look through the managers who are weighing in on this thread, you’ll notice a common perspective: they don’t trust you to actually work on something. That’s why I have so little hope for any software methodology - even if it starts from something positive (like XP, which was the precursor to “agile”, did), it will be turned into “I know all of my programmers are stealing from me, how can I stop them?”
In part because we still measure effort in hours instead of in wear and tear. That guy answering comments on Hacker News is probably trying to recharge, not steal from you.
I learned recently that the crane operators that unload cargo ships at some major ports can’t work more than four hours at a time. The movements are too precise and take a great deal of attention. I think you can work two shifts per day but there’s a mandated gap of some number of hours between them.
> At most your example argues that asking inexperienced and clueless devs to estimate stuff they know nothing about produces highly unreliable information.
Wow, so much hubris in such as small post.
Unless you have been doing the exact same thing forever (in which case you are at risk of being replaced), new work _always_ comes with a degree of uncertainty. Be it new technologies, new business requirements, or what have you. I have never worked on two identical projects in my life.
The correct answer would be a 'spike' or proof of concept. Either that, or hire those magical developers you seem to have who know everything there is to know.
Writing software is design, not manufacturing [1]. Do companies know accurately in advance how much it will cost to design a new nuclear power plant, aircraft carrier, or jet engine?
There are times when something truly original is being built and it's a valid argument for estimate uncertainty. But a significant portion of the industry is engaged in building CRUD app #237 or Ho-Hum SaaS #17, and a significant percentage of estimates end up wrong because some developer who was bored with his job decided to use a blingy new Javascript framework he had no experience with because he wanted a challenge. Money gets lit on fire again and again this way and it's frankly selfish and irresponsible.
If the software is truly that unoriginal, use something off the shelf. The issue is that in literally every non-toy product I've ever been involved in, the product owners will always turn CRUD app #237 or Ho-Hum SaaS #17 into something significantly more custom and much more complex.
>because some developer who was bored with his job decided to use a blingy new Javascript framework he had no experience with because he wanted a challenge.
That definitely happens, but I think it's a different problem.
> The issue is that in literally every non-toy product I've ever been involved in the product owners will always turn CRUD app #237 or Ho-Hum SaaS #17 into something significantly more custom and much more complex.
YES! The cost of custom-everything design and bespoke gee-whiz is vastly under-appreciated, IMO. CRUDy Android and iOS apps using built-in widgets and simple color theming? Relatively easy to estimate, and pretty damn fast. Custom-everything, we want both to look pretty similar (so, probably Material-ish given current trends, which you'd think would come for free at least on Android but... doesn't), custom animations everywhere, oh I hate that default date picker on this particular Samsung phone (and sure, it's god-awful) can we customize it, and the designer came up with this layout for this form that requires all kinds of twisty-bendy manipulation to reproduce versus something more straightforward but that's what the "stakeholders" approved so let's do that.
Etc., etc., and pretty soon the app's 10x as expensive (no exaggeration!) because you couldn't live with the easiest-for-the-user-anyway default styles with some custom coloring.
And the Web's at least as bad. Often you're also making your page way less usable and breaking it on certain browser/device combos due to the extra effort you put in to make it "pretty", at great cost.
[EDIT] in fact I've in the past suggested providing design points for "pretty gee-whiz versus not-actually-bad straightforward implementation" to make product owners choose between getting e.g. three normal (and OK-looking but not "pretty") features or one pretty feature in a sprint, to make this cost more visible and give owners on a budget more flexibility in shipping features, but it never caught on—in particular designers hate, hate, hate the idea, I think because it might expose that their work is sometimes (often...) more expensive than it's worth (and vastly more expensive than they're being paid, since it also eats huge amounts of developer time) given you've got some developers with even a hint of aesthetic and UX sense on a project.
I wonder if you looked at time estimates from developers of common process applications (like CRUD) in say, the 1980s, who used COBOL or FORTRAN, vs say those using Pascal or C - if those developers of code using extremely well known processes and libraries had better estimates than those using the "newer bling"?
Nothing, inherently, except perhaps the fallacy of comparing a pair of jeans to the rather chaotic and unpredictable world of bespoke systems development. One is inherently known (a pair of jeans you've presumably already manufactured), the other is one big unknown, basically.
He compared to jeans because they're an extremely trivial purchase that someone would give little thought to, but still demand to know what they cost. The point is that in all cases you need that information, with a major software project you simply need it far more.
And that's the core of the problem (which, ironically in the context of the article, agile methodologies were supposed to address): wanting or needing something to be true/exist doesn't make it so.
I think maybe the point they're trying to make is that comparing a software development project that may include a number of unknowns to buying a pair of jeans where there are probably very few if any unknowns is fallacious and a waste of time.
That said, I don't think it's black and white. Some projects, making a simple web site that follows whatever Squarespace template is popular and filling it with some relevant content can be a pretty simple project, even to the point where the buying a pair of jeans comparison makes more sense.
The problem, I suppose, is when you believe strongly that every project can be accurately priced up front. Not every project is a moonshot, but some truly are and it's not always obvious up front which ones are.
And every engineer likes to pretend that their project is a moonshot.
If you are inventing something incredible, why are you someone's employee instead of getting capital investment or self-financing from your last achievements?
"Sweet, easy enough, I'll just use X, Y and Z, no problem."
"Oh but sorry you have to use A, B, and C instead. Oh and it'll need to connect to D for reporting, and we'd like it to sync with E which we've never done but they have an API so it can't be too hard, right? Oh and it needs to fit this design we've already approved that's about as far from native to the platform you're developing for as it could be and has a ton of UX issues that require fundamental changes to fix, which you'll uncover as you work. That's about as easy, right?"
"...."
[EDIT] oh and also one "minor" feature we're going to toss in later is actually big enough to build an entire company around, so you'll have to waste a bunch of time talking us down from it while we quietly lower our opinion of you, and we've accidentally described a few things which are basically impossible by egregiously violating e.g. CAP theorem, or being what's effectively a highly-available distributed filesystem that needs to fit within our timeframe and budget, which is 1/20 what it would take to maybe build that and that alone.
As a consumer, I can buy an already manufactured pair of jeans or an already programmed Adobe Photoshop for a fixed price.
As a producer, the cost of building a jeans factory does not seem to be inherently more predictable than the budget of a software development project.
The recent Foxconn plant in Wisconsin that may or may not be happening is a great example. They committed earlier, and claim now that their initial estimates were incorrect and that a plant in that area will not be profitable for them. They have now stopped the plan entirely and reworked what it is they are going to build completely from scratch. This despite having already committed to building the factory.
Even at scale people can pull out of complex agreements if their estimates turn out to be faulty.
You should be able to put it in a range otherwise you probably don't know enough to build it. You don't want to spend 100 millions on something that will save 1 million a year.
managers and business leaders make decisions based upon those estimates.
Imagine if that cable guy coming to your house said between 1 hour and 2 weeks. Good luck making anything approaching a good decision surrounding the cable guy's visit.
The best way I've seen agile described is "designing code in a way that acknowledges change." I think there's a lot of value in understanding that you may have to go back to the drawing board when designing parts of your program. I have little regard for the entire industry that has risen up around agile, however.
There is a similar problem in the research world; the work must be genuinely novel (and fundamentally unpredictable), but funding organizations such as NSF/NIH/etc. demand a timetable and specific deliverables.
The problem is usually solved in three important ways:
1) making the deliverables as vague as humanly possible
2) promising experiments and not results
3) applying for grants for work that you have already completed, and using the money to work on the next grant's project
The problem of having deadlines that are too far out is mitigated somewhat by the pressure to publish. Publishing has fixed deadlines but somewhat flexible deliverables.
Agile is not science-based, it is an ad hoc bag of techniques that seem to have merit. As such you have to be able to reason about what your practices are and you have to relate everything to concrete business goals.
Your goal should never be to become "more agile". You should be looking to improve efficiency, to make sure you are building the right thing, to improve communication so that everyone is on the same page, etc. "Agile" is never going to magically improve anything.
Uncle Bob wrote about this in The Tragedy of Craftsmanship (https://blog.cleancoder.com/uncle-bob/2018/08/28/Craftsmansh...). His perspective is Agile lost its way once the Project Managers stepped in. It seems to me the problem with Agile in practice today is there's too much focus on process and not enough focus on, you know, the actual software being delivered. Add to that a host of less-than-desirable ideas that have taken hold (don't even get me started on emergent architecture) and you realize modern Agile has become a cesspool.
What to do if you're on an Agile team? There's value in the Lean Methodology. Remember, the whole point was to deliver software your customer needs to fulfill their business objectives. Continually assess your Agile practice and see if what you're doing still makes sense for the project you're working on right now and for the current stage that project is in. This continual assessment with a focus on the end goal of delivering software can go a long way to addressing Agile's modern ills.
I had not heard of "emergent architecture" until you implied you hated it, and so I thought I would Google it.
I have no idea what emergent architecture is in the agile context, but the idea of emergence is fascinating to me, and has been ever since I read "Emergence: The Connected Lives of Ants, Brains, Cities, and Software" by Steven Johnson.
We see the idea of emergent design everywhere in nature and society. I'm not suggesting everything can be built and designed "emergently", but certainly some things can be. See the book (and concept of) "the cathedral and the bazaar". The idea of something useful coming into existence without central planning is an enticing one, and so I can see the draw.
Thank you for that book recommendation - I just bought it and am looking forward to reading it :)
I also had not heard the term "emergent architecture" in relation to software development. I'm intimately familiar with the concept of emergent order, though, and my grokking it was part of my "political awakening" in my 20s. It led me from being a libertarian-leaning conservative to an anarcho-capitalist. To put it another way, I believe that the systems created by independent actors acting in their own self-interest naturally produce the most efficient system possible within the constraints of the environment within which they operate.
There's a name for this concept when applied to company management, but I can't recall it at the moment :(. I want to say the word has a Greek root, and it reminded me of "autarchy" ("self organized") but that wasn't what it was called.
You'd probably like Milton Friedman's ideas, and I'd be surprised if you've not already come across his work and YouTube lectures.
There is something to be said for small government, but it can also go too far: actors acting in their own best interest can obviously cause harm to society or to the larger group as a whole (see "tragedy of the commons").
Related to emergence is rapid iteration, and its importance to analysing complexity, which is captured nicely in "Boyd's law". See https://blog.codinghorror.com/boyds-law-of-iteration/, and then see if you can find the referenced paper by Roger Sessions. (Sessions seemed to capture the essence of "Lean Startup" before it was cool...)
Not everyone. Some are very successful with it. The problem is that everybody wants to copy that success and tries to cargo cult themselves through it without understanding it.
"let’s stop pretending Agile was some sort of cure all."
That's certainly good advice. Agile is more of an attitude than a solution, and attitude is not going to solve all your problems. Even if you are as Agile as you can be, you still need to solve your problems. It's just that being Agile might make you more flexible and effective at solving those problems.
And Lean is certainly more comprehensive than Agile. And I don't think they contradict each other. In fact, they're very similar; they're both about empowering the people who are doing the actual work.
"management thinks the focus should still be scope and deadlines and efficiency, ignoring that the deadlines are arbitrary and the requested time estimates are a form of waste."
Then management is not Agile and hasn't properly learned what it is. That is sadly common. Agile is not magically going to solve bad management, but Agile does emphasize keeping management at a distance and empowering developers.
"Agile actually tends to mask the core problem, which is a systemic, bidirectional lack of vertical trust."
Does it mask that? I think it exposes it. Without that trust, Agile cannot work. Agile insists on that trust. I suspect that every case of broken Agile is caused by a lack of trust. Fix the lack of trust (or use a system that works without trust if you absolutely must).
"Did you know story points were actually invented to obscure time and help alleviate this problem? That backfired too, didn’t it?"
Yes I did, and no it didn't. Every single team I've worked in over the past 6 years knew this and kept repeating it every time someone came along and tried to match story points to actual hours.
This first half of the article suggests the author has mostly worked in very toxic management cultures.
My last company (before going self-employed) had an agile environment and it was mostly a good system to work in.
I don't recognize the fairly strictly Agile (not AINO) environment I worked in in that article. Not perfect, but it worked and better than a lot of places that had no good methodology.
I agree that almost all of the orgs I encountered that were Doing Agile were pretty horrible.
On the other hand, I've also been in orgs that were very successful in developing software in a way that we would recognise as being agile. We just didn't make a big deal out of it. And we didn't do standups (most of the time), or 2 week sprints, or retrospectives. We didn't even pair consistently: we split up on trivial stuff, paired on hard stuff, as needed and available.
In one org we started without TDD, and even without having much in the way of automated tests at all. But when we discovered testing and TDD, boy did we ever embrace it. Not because anyone told us to (mid-to-late 90s, so no Agile Manifesto), but because the difference for us developers was night and day.
So develop software. Concentrate on what helps you develop software. In my experience, you will probably discover something recognisably agile all by yourself. So when you discover that, help yourself to what helps you. Don't make a big deal out of it.
What you are describing is exactly what Agile should be. You have a team of people who care about the craft and they slowly improve processes based on real world feedback and experience. If something doesn't work you drop it and try something else. This happens in environments where people respect each other.
Management does not want to hear that software development is a craft. They want it to be a trade, with fungible workers operating in a predictable process with fungible components. The great tragedy of Agile is that it was co-opted as the vehicle of this transformation. It was seized by management, weaponized to other purposes, and then aimed right back at us.
Yeah, I've come to similar conclusions about Agile. It can be a great way for an already well-disciplined organization to think about the work they're doing. But many, many undisciplined organizations thought that Agile would be a catalyst for them to become disciplined. But the nitty gritty of "doing Agile" required EVEN MORE discipline than what all these organizations were already capable of exercising, so it just highlights all the frustrations everyone already has.
That does seem to be the downside of Agile. It's a collection of maybe a dozen different techniques and practices. But if one of those practices falters, whether it's the TDD, or the business side still wants a certain deadline, or you don't bother to demo at the end of a sprint, then the whole house of cards falls like dominoes..... checkmate!
Sounds a lot like TPS, I assume Toyota did a lot of the stuff before any books were written. Also, my personal experience, the more companies talk about Lean and Six Sigma the less they do it.
Similar with "culture". Companies with a great culture don't talk much about it, they just get shit done without much fuss and typically have some fun doing it. Because getting shit done is fun.
On the other hand, if the company is all about how fantastic the culture is... run! Or at least tread very, very carefully.
Totally agree. Talking about culture or hanging up "inspirational" posters is usually a warning sign. Good organizations live their values and don't feel a need to constantly talk about them.
Got any insight as to why TDD is such a great thing?
From how I've heard it described, it seems to be a way for management to impose rules on how developers can approach writing a feature, and not necessarily a benefit to the devs.
If I understand it properly, it's essentially: write a test so it fails, then fill in the feature gap. The whole write-first bit is just so you don't have to retest later to verify that your test didn't hit a false positive, right? Something a careful person would check anyway...
TDD was discovered as a technique by developers, for their own benefit. Management imposing it can be a bit of an anti-pattern, though it may be well-intentioned. I have certainly pushed teams toward TDD, though primarily by example and by the positive experience you quickly get.
Anyway, TDD has nothing to do with management imposition. It is really diametrically opposed, at most orthogonal.
2. "test didn't hit a false positive"
Yes, that's a good thing, though hardly the main point. You write that a careful person does this anyway. I'd rather have a machine do that for me, and have less of a requirement on me to be careful. Computers are much better at being careful, and I want them to do the tedious stuff.
I also am wary of claims of "you just need to be careful". Probably lots of bugs ahead!
3. So why is TDD so great?
First, it's a design technique. Writing the test first forces you to think about how your new production code will be called. It also forces you to be clear about what the code is supposed to do, precisely, before you write the code. Writing the test tends to be significantly less difficult than writing (correct) code to solve the problem. You then write only the minimal code that is required to pass the test, which surprisingly also tends to be fairly trivial. Also, you know when you're done: when the tests are green again.
So you've transformed one difficult step into two fairly trivial steps. Which sounds weirdly magical, but actually works in practice. Also, since you only wrote the minimal code that was required for the tests to pass, you know that you have good test coverage.
After you've done a few, you're in a place to detect duplication and eliminate it by refactoring. This tends to be somewhat more intellectually challenging. However, you are truly refactoring, that is not changing the functionality, so you have a great safety net with the good test coverage you achieved before.
So you get these small steps that are fun and easy, and you tend not to regress.
Psychologically, it's also very pleasant, compared to the usual flow, which goes somewhat as follows:
1. I have a rough understanding of the problem,
2. I code a solution, which is at the limits of my understanding
(because everything is up in the air)
This part feels very good, "I am the master of the universe"
3. Then I should write some tests, but the incentives are a bit negative
-> if the tests all pass, nothing happens, so why did I write the tests?
also if nothing happens, how do I know I actually tested something?
(not by "being careful")
-> if the tests do show a failure, it's a distinct downer from step 2
(and so I probably subconsciously write my tests so they don't fail)
4. I don't really want to refactor, because my solution from step 2 was already awesome!
TDD:
1. I write a test. It fails appropriately. That makes me happy.
2. I write minimal code. Tests go green. That also makes me happy.
3. I refactor. The code now is much nicer and tests still green. More happiness.
So with most normal development flows you get that initial rush and then a lot of downers. With TDD, you don't get quite the same initial excitement, but instead you get these nice regular dopamine hits without really much downside.
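That red/green/refactor loop is easy to show in miniature. Here's a hedged Python sketch (the `word_count` function and its tests are invented for illustration; plain pytest-style asserts assumed):

```python
# Step 1 (red): write the tests first. Run them before word_count()
# exists and watch them fail -- that failure proves the tests
# actually exercise something.
def test_counts_words():
    assert word_count("to be or not to be") == 6

def test_empty_string_has_no_words():
    assert word_count("") == 0

# Step 2 (green): write only the minimal code that makes them pass.
def word_count(text):
    return len(text.split())

# Step 3 (refactor): with the tests green, reshape the code freely;
# the tests are the safety net that catches any regression.
test_counts_words()
test_empty_string_has_no_words()
```

Each pass through the loop is small and low-stakes, which is exactly where the "regular dopamine hits" come from.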
Am tired, just dead bones tired of all of it. It's all just this or that money making scheme with guys like me caught in the middle. I despised Waterfall, fell into scrum and in the past ten or so years kinda sorta saw one project where agile basically worked right. The rest has been mostly waste. I've worked on billion dollar systems on down to just little nothings. Doesn't matter, companies are struggling to make anything work. I think there's just too much complexity and they all believe IT solutions can manage that complexity. It's failing like never before while expectations have never been higher.
Message to the Fortune 500: just get rid of anyone with the title "scrum master"; they're dead weight. You were fooled, deal with it. I am trying to move my team over to Kanban right now, but we have all this reporting crap up the chain, all designed around CA Rally, the worst productivity tool ever made.
You also need to give people the chance to make mistakes and learn from them. I don’t envy a lot of the young developers who get micromanaged all the time and never have a chance to try something.
Perhaps I've just always been extraordinarily lucky, but I've never heard the developers I work with complain about our Pointing and Development cycle (and I'd like to think I do try my best to 1) proactively ask them for feedback and 2) recognize and squelch any of my own reactionary defensiveness that comes from feedback).
One thing I see a lot in the comments is the pressure that time estimates bring and how rarely they work.
In my teams we NEVER talk about time of delivery with Engineering. I've always been trained to think of pointing as an estimate of "complexity" (e.g. 1 point is roughly the complexity associated with a copy change).
Then, by getting a track record of historical points completed in a week, we can transform our way from feature => complexity => time.
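As a rough sketch of that transform (all numbers invented), once you have a points-per-week track record the conversion is just division:

```python
# Historical record: story points actually completed in recent weeks.
weekly_points = [21, 18, 24, 19, 22]

# Velocity: average points the team finishes per week.
velocity = sum(weekly_points) / len(weekly_points)  # 20.8 points/week

# A new feature is pointed at 55 points of complexity.
feature_points = 55

# feature => complexity => time
estimated_weeks = feature_points / velocity
print(f"~{estimated_weeks:.1f} weeks")
```

Note that time never appears in the pointing conversation itself; it only falls out of the historical velocity afterward.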
Complexity only matters if it can be correlated to time.
For example, if you ask someone to transcribe 10,000 pages of text, and you know that they type 60 words per minute, and that there are an average of 300 words per page, then you can estimate that it will take approximately 833 hours to transcribe.
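The arithmetic there checks out; as a quick sanity check of the stated numbers:

```python
pages = 10_000
words_per_page = 300
words_per_minute = 60

total_words = pages * words_per_page      # 3,000,000 words
minutes = total_words / words_per_minute  # 50,000 minutes
hours = minutes / 60                      # ~833 hours
print(round(hours))  # 833
```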
If we followed your method of estimating stories strictly by complexity, however, this is clearly only a one pointer story. After all, there is nothing complex at all about the dull task of rote transcription, and all quantities are known and established.
When people talk about "complexity" in software development, what they are really talking about is how certain they are in their knowledge about all the things they have to touch. The more things they have to touch, and the more things they are unfamiliar with, the more things can unexpectedly go wrong.
Each task should really receive two estimates:
1. How long do you think this will take?
2. How certain are you of this estimate, given all the factors you are aware of?
That would be too much work, though, so people condense this into a single point value and hope for the best.
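A hedged sketch of what carrying both numbers could look like; the tasks and the divide-by-certainty padding heuristic are invented for illustration, not any standard practice:

```python
# Each task carries both answers: the time guess and how certain it is.
tasks = {
    "rename config field":    (1, 0.9),   # (estimate in days, certainty 0..1)
    "swap auth provider":     (5, 0.5),
    "integrate new database": (10, 0.2),
}

for name, (days, certainty) in tasks.items():
    # Invented heuristic: the less certain the estimate, the wider
    # the pessimistic bound reported alongside the raw number.
    worst_case = days / certainty
    print(f"{name}: {days}d, up to ~{worst_case:.0f}d")
```

Even this crude version makes the point: a "10 day" task at 20% certainty is a very different animal from a "10 day" task at 90% certainty, and a single point value hides that.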
That would be more helpful than the far more common response of just arguing "you are doing it wrong" with absolutely no indication of how to do it right.
The biggest problem I’ve found with Agile hasn’t necessarily been Agile itself but the lack of buy-in from company leadership. Even when we’ve had professional Agile trainers come in and tell them that not only must they embrace it for us to be successful, but that they themselves must go through the training, they usually wave it off as something they’ll get to eventually. Almost immediately we wind up with a bastardized version of Agile that basically doesn’t work, because the leadership doesn’t understand why we “can’t just” do something.
Agile was dead to me when I finished my PhD and joined a company that paid a Scrum consultant more than they paid me, to help us move tickets without knowing what the project was about.
You'll probably find that the most highly paid people in most organisations only have a fairly abstract understanding of the work being done. This is nothing to do with agile.
I am perfectly fine with paying more for a technical project manager, or a team lead who can help us glue things together and see the big picture. But I am not fine with hiring a consultant who doesn't know anything about our product and thinks that Agile has some magic to deliver products.
Agile in practice is a way for Corporate to have its cake and eat it too: get the results and productivity associated with outstanding programmers, using the programming talent they have/can hire, with a minimum of risk exposure. Part of this is the attempt (also seen in other corporate quality efforts like ISO9000) to distill skills into process: to produce a book of rules that, rigidly enforced on relatively unskilled workers, will cause them to perform at the level of highly skilled workers. Hence my repeated quip that "Agile is attempting to emulate a good programmer with an array of average programmers". Unfortunately for Corporate, It Just Doesn't Work That Way.
Now I believe that Agile was started with good intentions, but today it is much more a motte-and-bailey doctrine. The motte being "Agile is the values in the Agile Manifesto". The bailey being "Agile is Scrum and SAFe, and all these meetings, ceremonies, and process are a necessary component of Agile". And I think this is why people fall into no-true-Agile discussions and eventually end up disappointed with something that showed a lot of promise. Agile started off a plea from programmers to Corporate, and it caught Corporate's ear. But once in Corporate's possession, it was deployed to serve Corporate ends only.
I love this talk by Dave Thomas, one of the people whose names are on the original Agile Manifesto.
Among other things he speaks about the simplicity of the Manifesto, about the cultish movement and consulting services which rose around it and the cultish, distorted, complex vortex of BS that has formed around a super simple concept.
The longer I stay in this industry, the more anecdotes I accumulate that suggest that software development is not exclusively software development.
Software development is solutions development; but for a large subclass of problems, custom-built software that automates the tedious, predictable, and repetitive parts of the solution has been and always will be a competitive way to solve them. It's synecdoche, a metonymy for "software development [and other assorted disciplines of problem-solving]".
Software is the hammer in the toolbox. Things that look like nails get hammered. The hammer is so useful to organizations that have many nail-like objects sticking up, that handypersons who only have a hammer in their toolbox can still work continuously, and may never need to do anything else but hammer things. This may become common enough that folks who also keep vice grips, screwdrivers, hand saws, duct tape, and penetrating oil in their toolboxes are forbidden from using those tools. They were hired for the hammer; they should use the hammer. And what's all this nonsense about non-hammer tools, anyway? We've never needed anything but hammers here, and that's the way we like it.
Agile is about advising organizations with strict rules and style guides for nail-hammering to loosen up and let the hammerers determine how to wield their own hammers, and possibly also choose the length and width of the nails.
Despite the amount I actually get paid, working upstream of the problem far enough to eliminate both the nail and the hammer--possibly using a more appropriate tool--is apparently above my pay grade. Agile never fixed that. It never even suggested that a non-hammer world exists. Even the CTO position might as well be chief hammering officer. The 'T' restricts the domain of authority. An organization that is agile instead of just Agile needs to have someone around that can meaningfully restructure the company itself, such that the assumptions already built in to its organizational hierarchy are not inevitably replicated in the processes of all of its subdivisions. That's why Agile never fixed anything. The CEOs still kept telling their CTOs to take charge of all the hammers and pound something out. Those companies founded to be Agile put different assumptions into their organization. When that changes, going through the motions of Agile won't help them any more.
I’ve been a tech lead for 6 years and Agile has given me a lot of imposter syndrome because I can’t seem to make it work for me. The biggest point for our org is to cost work items and realize a consistent throughput for our team on which we can make scheduling predictions.
Unfortunately my tasks tend to be nebulous. They involve investigating very large potential systems and breaking out smaller tasks, many of which won’t be fully understood till we’re in the trenches. How do I cost the time it takes to learn how Spanner will handle my design and update architecture accordingly? How do I cost “interrupt driven” work like getting buyoff on a given architecture? One of my OKRs was literally “Create a security model for a new foundational technology that is acceptable to all integrators, follows Google Cloud best practices, and can be represented in Kubernetes.” How would you break that into (bi)weekly sprints? You don’t even know what work you’ll have to do until you go to a committee of stakeholders and get told where you have to go back to the drawing board.
See, this is about agile going wrong... back in the early days of XP, a term started popping up, "brain engaged", meaning that you don't blindly follow prescribed processes. The manifesto tried to capture this in somewhat frilly language. So be agile (not in the dogmatic process sense of Agile with a capital 'A'). You have to do something like "create a security... blah blah". You don't have to do anything bi-weekly; what you need is a way of doing that where you can get feedback as soon as possible. How do you iterate through possibilities and arrive at a good decision? How do you involve the people who need to be involved in a way that gets good contributions? How do you make it robust? How do you progressively work towards the goal while minimizing risk and failing fast? If nothing else, the core of agile is feedback loops and communication with the various people involved, at the rate those people need.
To expand a little: since the org wants accurate estimates, we need to accurately measure throughput and the total quantity of work. Though throughput trends stabilize pretty quickly, the system falls apart when one task reveals another. Accurately breaking down and costing every subtask at sub-week granularity for a multi-year project is an almost insurmountable task. It requires effectively doing all the work in your head.
I’ve seen one compromise and benefit to the system:
Compromise: when pushed for a completion date, push back by first giving a date by which you'll be able to produce a completion date. This works best during the final phases of development.
Benefit: with weekly throughput info, it’s easy to tell if someone is trying to do too much in a sprint.
Oh look, it's the weekly "Agile is bad" post. I've successfully worked on two teams (including my current job) where we're a truly agile dev team. We value interactions over processes and tools, collaboration over contracts, and responding to change over following a plan.
Agile is not and has never been a cure-all. Scrum doesn't solve any problems. It's supposed to reveal the problems that exist and make them apparent to anyone looking at the scrumboard. If you don't change your team dynamic and just start looking at a JIRA board every day in 30 minute "standup" meetings, you're not agile. You're just cargo culting.
I'm tired of people blaming "agile" -- which is mainly about self-organizing teams and adapting quickly to changing circumstances, for what's really a management failure.
But but but, the problem is Waterscrumfall, not Agile as intended in the Manifesto
Here's the thing. Corporations are hierarchical. Human beings are hierarchical. We're also communal, but start putting us into larger and larger group situations where we're facing competition, stress, and we must engage in concerted effort, and we go into hierarchical mode.
Why does everything tend to go back to waterfall? It's because that's how management likes it, for reasons which go down to basic human nature in the kinds of environments technology companies create. Heck, even the financial markets demand fictional planning and forecasts of the unknowable future. Bosses want to be able to make plans.
One thing which Waterscrumfall has achieved, is that the iterations are much faster. That even fits the "next quarter" mindset of the financial markets.
Very well said. If there were a methodology that started with "these are the people involved, these are their incentives, and this is the expected outcome in game-theoretic terms", then I think we'd be making progress on methodology.
Methodologies are pretty good about that in the short term. If Agile were bad at that, then it wouldn't have spread. I think the problem is more Agile in the long term. It's a more general problem faced by longer term software projects.
The answer might be to have shorter term software projects. Make everything more modular, have a standard backplane into which everything can plug into, and enable competing short term projects. I think today's trend is to make corporations into ecosystems, and this change might be able to shape incentives positively.
> How does the first line of the Agile Manifesto begin?
It is kinda funny. For me the first line has always been:
> Individuals and interactions over processes and tools
And when I am asked to explain Agile, that is always at the core of my message. Form a team, let them do their work and start learning how they can improve how they do it.
From my perspective, the primary role of a (software development) Project Manager/Scrum Master is to create an environment where the team can optimize itself while not drifting into personal conflicts (create a healthy environment for collaboration and feedback). Everything else, like planning/risk management/change management/stakeholder management/reporting/presenting come afterward.
Those other things are important too (and should not be skipped completely), but if the team doesn't work, most of the other tasks create little value.
I don't know a better description of Agile than "meaningless buzzword". Buzzwords are not without value. They are a tool for selling a change without describing all the nitty gritty.
"Agile" is a lie that before 2001 nobody at all had ever done iterative software development, and all software process ideas that came after 2001 are by definition "agile."
"Agile", as with any other buzzword, is not for people who actually understand the details of the change; it's for someone who needs a handle on concepts that would otherwise fly over their heads. Let's stop litigating the term, and move on.
>Quit with the local optimizations already, and realize trust is the №1 issue.
So how do you trust someone who would outsource your position if they believed it would save money and reduce their work load? Management and employees have an inherently adversarial relationship that is poisonous to trust. I think it can still develop at times, but under our current culture where everyone knows someone whose job was made redundant with little recourse for their employment, you might as well be asking for a coconut to grow in an active lava flow.
In my experience, these discussions boil down to anecdotes at twenty paces.
I've seen software development done very poorly and done very well, sailing under a variety of flags. But eyeballing it, far more of the bad experiences than the good get labeled "Agile".
Whether this is because of causal differences or because it's easy to affix any label to anything (and most experiences suck regardless of the label) is hard to tell without a control-group universe were the Agile Manifesto was never written.
I have a theory here... culture beats process, but companies without the culture have to rely on process. So you’ll find that companies with the right culture can deliver better results without needing a bunch of process, but companies without the culture end up having to force it with process.
And tbh, despite how bad “Agile” process can be, it’s a lot better than pure waterfall.
Pivotal on the whole does software development as well as I've experienced. When you hit the sweetspot for our core practices it's really impressive. Everything just sort of flows.
Sometimes it goes off the rails. That is in the nature of things. But it goes off the rails as an exception, rather than as a rule.
If there's a key to the magic it's (1) hire for empathy as well as smarts and (2) reflect on what can be done better.
In addition to the difficulty in making a meaningful estimate, I'm frustrated by how often management really wants to hear a particular answer and won't settle for anything but that answer. It can go like this:
1. Developers size development of X and says it will take roughly 12 months.
2. Management says the rest of the organization is going into system test in 9 months and needs X finished by then. Couldn't there be some way of speeding up the development of X without impacting the rest of the organization?
3. X's developers say no, not really.
Now management tries one or more of the following:
-- find someone else to size the work that X's developers will be expected to do
-- hire some outside "expert" to size X
-- hire contract programmers that claim to be experts in X that will join the X team
-- suggest reduced functionality for X
-- propose some modified "streamlined" component testing for X
-- promise to provide X's required dependencies ahead of their scheduled completion dates so X's development won't be gated by these requirements
Now jump back to step 1. Repeat until X's team relents and says that maybe, just perhaps, they might be able to finish X in 9 months if everything goes perfectly, even though the company has never done X before.
This is now passed up the management chain and it becomes part of the official overall system's schedule that planning, marketing, sales, finance, etc. is depending on.
Iterating over all of these estimates is a big waste of time. Management has already decided that they want X in the system and will keep pushing until an estimate comes back that they accept.
If X's team is lucky, some other team will fail to make their estimated completion date and X will end up with the 12 months it might take to finish X. Every team in the organization is hoping that it will be a different team that first has to admit that they will not make their date. This causes a kind of game of chicken where every manager is unwilling to be the first to admit that their team is going to miss its date. Consequently, every date in the organization cannot be relied upon. The distributed file system needs the TCP stack to be finished on time, but the TCP stack needs the ethernet device drivers to be finished on time, and no one will admit slippage of any dates first.
Yeah, I thought that point might be confusing because I wasn't clear. You're right limiting the scope could reduce dev time and it might often be a welcome suggestion. It doesn't always go that way though and some proposals for modifying the functionality are costly because they can require re-architecting the design even before development starts.
Usually, lightening the ship by throwing functionality overboard in a panic doesn't happen until around half way through a project, but what I was referring to was the sort of impractical proposals that come from non-developers about how to reduce development time.
"Couldn't we remove the ability to customize user icons--everyone in the icon based user view can have the same icon." Okay, why have icons at all if every icon is identical?
"Deadlock, what's that?, do we really have to detect it in the first release?" Hmm...
"Can we leave out distributed shared virtual memory segments?" They weren't included in the original sizing
"Do we have to do persistent client caching?" Well then we will be no better than competing products.
"Does X have to run over both UDP and TCP?" That was originally a requirement. We can go back and examine that in light of other parts of the system depending on X.
I personally found this to be a good methodology: each team member has a clear area of ownership/responsibility. John does X, Jane does Y, Ann tests X, Jeff tests Y. Everyone logs their own tasks. We define and work towards a sprint. No elaborate planning, no estimates. The peer pressure is enough to get everyone moving.
Managers often tell me that my process is chaotic. They want more visibility and deterministic completion dates. We adopt Agile process. Now we have long sprint meetings. I think we get less done.
> So find a good booklist. Follow some good blogs. Here’s a start: If you haven’t read Sense & Respond, Lean Enterprise, A Seat at the Table, and Everyone Is a Change Agent, I suggest you do so pronto. Your leaders too.
I'd add Peopleware to that list.
> Start reading posts by John Cutler, Melissa Perri, Bob Marshall, Allen Holub, Laura Klein, Erika Hall, Neil Killick, and branch out from there.
Anyone care to add their own handful of books and bloggers to these recommendations?
> let’s stop pretending Agile was some sort of cure all
But we won't. That's why snake oil salesmen were so effective: we want to believe in the cure-all. No matter how often you chastise, people'll keep buying it, because they're aspirational. They want there to be greener grass on the other side, and they're willing to try anything to find it.
The business doesn't actually care about Agile at all. It just has heard of the term and realized it has something to do with getting work done better. So leadership will say things like "We're an Agile organization now", not having the faintest idea what that means. A snake oil salesman doesn't need to know how snake oil works; he only needs to know what it promises, and how to sell it.
You can have the perfect software development methodology and people will still do it wrong. But if everyone has the same goal in mind, is highly motivated, and is willing to cross aisles to work with other people to do the best job possible, a methodology will naturally arise. Self-motivating humans can get some crazy shit done. The best working organizations I've ever seen followed no methodology. They just did what was needed to keep a site running well and making money, and it was kind of wondrous.
I'm currently experiencing this, and especially the whole delivery vs discovery, knows vs unknowns and their impacts on estimations. It's quite interesting, especially how differently you have to handle delivery vs discovery projects.
Basically, my team consists of some devops engineers mostly working on maintaining and extending the config management, and some other technical consultants who deliver and execute projects, ideally fully standardized.
For the standard delivery projects, it's extremely valuable to hammer the scope down into a precisely known constant and track the time spent to implement specific parts, as well as the delays incurred for different reasons. This allows for a strong, deliberate optimization of execution time with a measurable, predictable business impact as well as a reliable estimation of these tasks.
Discovery projects on the other hand? Yeah. We're pretty good at giving a lower bound for the time necessary by tracking similar tasks in a job. Extending a database entity is going to take a roughly similar time each time. Packaging, downloading and extracting binaries isn't going to differ that much in time from the last 3 times we did that. If you can identify a bunch of similar tasks you've done some time ago already, you can give a lower bound - "The tasks we consider familiar will take about 2 weeks".
But after that? Who knows. At that point we're rather estimating by asking: When do you stop feeling silly about it? Do you need a year to do this? A month? Two weeks? A week? A day? So usually we hand our project managers something like "This will take 2 weeks we know, plus 3ish weeks we don't know".
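A minimal sketch of that lower-bound bookkeeping, with task names and historical durations invented for illustration:

```python
# Durations (in days) of similar tasks we've tracked before.
history = {
    "extend database entity": [2, 3, 2],
    "package binaries":       [1, 1, 2],
}

# The new project, split into familiar and unfamiliar work.
familiar = ["extend database entity", "package binaries", "package binaries"]
unfamiliar = ["novel discovery work"]

def typical(task):
    # Typical duration: the average of the tracked samples.
    samples = history[task]
    return sum(samples) / len(samples)

# Lower bound: sum the typical durations of the familiar tasks only.
# The unfamiliar bucket stays unquantified -- that's the honest part.
known_days = sum(typical(t) for t in familiar)
print(f"at least ~{known_days:.1f} days, plus unknowns: {unfamiliar}")
```

The output is exactly the shape of estimate described above: "the tasks we consider familiar will take about N days", with the discovery work left explicitly open.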
I wish the post visited each of the principles. I can't find one I disagree with. It's all in the interpretation and implementation. Agile for me is not a single methodology that you develop. It's about refining and redefining processes that work for you team and org.
Principles behind the Agile Manifesto
We follow these principles:
Our highest priority is to satisfy the customer
through early and continuous delivery
of valuable software.
Welcome changing requirements, even late in
development. Agile processes harness change for
the customer's competitive advantage.
Deliver working software frequently, from a
couple of weeks to a couple of months, with a
preference to the shorter timescale.
Business people and developers must work
together daily throughout the project.
Build projects around motivated individuals.
Give them the environment and support they need,
and trust them to get the job done.
The most efficient and effective method of
conveying information to and within a development
team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development.
The sponsors, developers, and users should be able
to maintain a constant pace indefinitely.
Continuous attention to technical excellence
and good design enhances agility.
Simplicity--the art of maximizing the amount
of work not done--is essential.
The best architectures, requirements, and designs
emerge from self-organizing teams.
At regular intervals, the team reflects on how
to become more effective, then tunes and adjusts
its behavior accordingly.
Most of my complaints about agile have centered around estimates. I'm of the opinion that anyone claiming to give accurate estimates for software development is a liar.
But I would like to see an organization that only had the team lead give estimations. This would remove the temptation on individual team members to size too large and fill the time, while also keeping management happy that they're receiving estimates for tasks they requested. Measuring throughput would immediately become a lot more natural since team members wouldn't fall into the trap of "When a measure becomes a target, it ceases to be a good measure."
I was actually just rejected from a job I applied to because I voiced a few small concerns with agile/scrum in the interview. A friend who recently joined that company reported he was doing maybe 10% of the work he and I had been doing before he left my team. Even though I aced the technical portion, they didn't want to risk hiring me since I asked about their willingness to flex the rules of scrum.
Organizations have to find their own way of working. It's going to change based on the size of the organization, its goals, and its people. With the exception of 1.5 years in a software company, most of my organizations have been internal IT. What I find is that user areas, who initiate projects with us, often don't have a clear view of what they want but think it's going to help the organization. Our job is to build models quickly so that they can validate an idea. Only after iterating through a few models can they reliably say the benefit/cost of the idea.
A very good engineer within my organization said to me, "Our job is to get something in front of them quickly so they can figure out what they don't want." Our successful projects have been ones in which we kept to this maxim.
I think this is what the author was trying to say but I had a hard time following his writing style.
The whole point of Agile is to force the entire organization to recognize the fact that we don't know how long it will take to create a satisfactory product. That doesn't mean just dev time but also figuring out what you want it to actually do throughout the process.
Thinking you can write a perfect requirements document and then just design and build it (waterfall) doesn't work.
The idea that you can have any sort of meaningful estimate when you don't have perfect requirements is insane. Estimates don't even work with perfect requirements.
The most effective analogy I've used lately: take your car to a mechanic and say, "My car is making a funny noise, how long will it take to fix?" and see what kind of response you get.
In the same way that software is eating the world, software management is eating traditional company management. Software projects are getting more important, sometimes centrally important to the future viability of more and more companies that aren't traditionally software oriented. Software projects are moving up the stack so they touch every part of an organization.
Agile is being blamed for problems that are really organizational, company-wide issues that a software team has to deal with because they now are central to how everything runs. The author's suggestion to read more general business/organizational management books is apropos, because the overall goal is agility of the entire organization, not just the software team.
Software development has some fundamental challenges that have been faced since the start, and always will. Understanding them matters. Here are some of the big ones.
1. Developers need to communicate with each other. A lot. This takes time. The amount of time goes up non-linearly with the size of the team. (The Mythical Man-Month claims a scaling law of n^2.)
2. Adding process + documentation of various kinds can turn that from unbounded time spent speaking to a large but bounded written effort.
3. Nobody is good at predicting schedules.
4. Good news travels, bad news doesn't (until the last moment).
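The non-linear cost in point 1 is easy to quantify: the number of pairwise communication channels in a team of n people is n(n-1)/2, which is the roughly n^2 scaling The Mythical Man-Month refers to. A quick sketch:

```python
def channels(n):
    """Pairwise communication channels in a team of n people: n choose 2."""
    return n * (n - 1) // 2

# Doubling the team more than quadruples the channels.
print(channels(5))   # 10
print(channels(10))  # 45
```

This is why small teams can get away with informal communication while large ones drown without process: the channel count, not head count, is what grows.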
The traditional solution was to solve #1 with #2, put a lot of effort into schedules for #3, and get blindsided over and over again by #4. The average result was that the average delivered software project took over 2x what was estimated, cost over 2x, and was delivered with under half the promised features. But, no matter how delayed, usually didn't start slipping the promised deadline until about 2 weeks before the first official release deadline. And these were the success stories, since most software projects never really got delivered.
What Agile opened up was the perspective that there were other approaches possible. For example we can solve #1 by having small teams, which makes #2 unnecessary, solve #3 by making projects small enough to not need good prediction, which creates a feedback loop to avoid #4. And that is not the only valid tradeoff.
But this is not a one size fits all. Real world data shows that a team hits peak throughput at 5-8 people, declines, and doesn't get back to the same overall throughput until 20+. (After which it is almost linear, but with much lower individual productivity.) The teams as drawn on the org chart don't matter. The teams as they actually interact, do.
If you understand your options, and understand the tradeoffs, you can figure out how to customize the right solution to the people/organization. If you don't understand the tradeoffs, the best that you can do is blindly copy what worked somewhere else and pray that you got it right. And if it turns out that you didn't, develop superstitions about what you should have done instead.
> "So let’s get on board with continuous learning, and let’s stop pretending Agile was some sort of cure all."
Did anyone - who has seen shiny new panaceas come and go - believe it was a cure-all? What it is, like MVP, is a way to mitigate risk. It doesn't eliminate risk. It can only mitigate it.
Furthermore, it still boils down to team and people. No tool will save a dysfunctional team. The key isn't the tool. The key is the team agreeing on the tool, and sticking to it. That is, the tool is not a proxy for deep and solid teamwork.
IMO focusing more on systems than people in management is a mistake.
To the extent your work isn't repetitive, the only job process can do is set a cadence on communication & decisions. Yes 2-week scrum does this but in my experience it distracts companies from the real goal of management -- attaching subject matter experts to projects, i.e. concentrating time & resources towards a goal.
Process can get rid of bad people and share around your best people, but unless you have an assembly line, beware of any other claim.
I think most of the people who wrote the agile manifesto have said "Agile" has been warped into something they don't agree with. It's supposed to empower teams, not be enforced from on high to increase pressure on them.
"Agile transformations" are bad, but I still think agile itself is fine. Teams of self-organized, competent developers tend naturally toward agile-like practices. I've seen agile's story-centric model of development help new developers grow by giving them responsibility for specific, testable units of work.
As a lot of people have already said, the problems arise when managers expect agile to give them software on a timeline.
So, Agile solved one problem for when requirements are less than clear. Small batch sizes improve responsiveness at a small cost in throughput, like the old kernel HZ setting. Now you've got 99 problems, and responsiveness ain't one.
Yet folks expect Agile to solve everything for some reason, and forget all the other issues around bags of meat working together. Those continue to need handling and still get out of control at times.
It is a balance: we don't know how long something we've never done before is going to take, and at the same time, we often need some time boundaries so we don't go off on a tangent building some wildly ambitious re-write of the world.
Perhaps the question we ask in projects shouldn't be "how long will this take?" but "how long do you want to spend on this problem?".
"Stop unfairly putting dev under a microscope and letting everyone else hide in a black box. Why aren’t we just as concerned with how strategy teams operate? Or how legacy architects are constraints in the system?"
Are other ladders in an organization not subject to performance review?! Managers? PMs?
I just always double what I think it will take. I've been doing software now for 20 years and I have NEVER seen something come in on time. If I think it will take a month, I say it will take two. This has saved me more times than I can count, since I pad for the unforeseen circumstances that will occur.
The root cause of the problem is that the workplace is inherently undemocratic. While we live in democratic societies, we work in feudal organizations. Unless we fix that, nothing will change.
“I estimate that 75% of those organizations using Scrum will not succeed in getting the benefits that they hope for from it . . . Scrum is a very simple framework within which the “game” of complex product development is played. Scrum exposes every inadequacy or dysfunction within an organization’s product and system development practices. The intention of Scrum is to make them transparent so the organization can fix them. Unfortunately, many organizations change Scrum to accommodate the inadequacies or dysfunctions instead of solving them.”
This was Ken Schwaber, co-creator of Scrum. 10 years ago.
Ten years later, things have not gotten much better for big corporations that were born with leaders used to waterfall.
I agree with almost everything in this article and I completely agree with what Tootie wrote above: "The fundamental problem that drives most agile failures isn't in the team's execution, it's in the business' expectations. One side is signed up for incremental delivery, and one side is set up for a fixed scope and deadline and the result is misery."
As the article rightly notes, being Agile is not just building faster. It is delivering "value faster". That also means "do not build what does not need to be built", and every 2 weeks being willing to scratch what you have worked on. Instead, in most big companies the business people present shiny objects to the leadership, and their goal is to build those shiny objects. Nobody wants to say "we were wrong". The result is projects where people work for a year and then the project is abandoned. Even if the idea was good, nobody wants to work on little improvements.
It seems that Agile, Scrum, and most of the classic agile lingo have a bad connotation for most people these days. Some (over)simplifications that were made to make Agile easier to adopt ended up making things worse.
People in big corporations are used to the "flavor of the month." Most people really do not look for a revolution there. They are just looking for ways to seem smart, impress their bosses, and get promoted. That's how Scrum got in. And that is why it will never realize its full potential there. They do not want to change how they think and how they do things.
I run a pretty successful team inside a bigger organization. I wanted to use story points: start from scratch and have the team get a sense for what 1 point is. And my boss really wanted us to use story points too. He had only one requirement: we had to use the equation 1 story point = 1/2 day of work...
Trying to explain to him that one of the reasons you use story points in the first place is to stay away from time did not go anywhere. All the other teams were using that equation, and we could not be the only team that wasn't.
Frankly speaking, I think it is hard to work at the "inter-team" level if you cannot use the same unit. That's why in Agile you want truly autonomous teams. But big organizations are not organized that way. So you need to re-organize the company differently... but assuming there is the desire, who is going to drive that change? Who is going to take the risk of changing something that works, in some fashion, into something that is not going to work well for a few months?
In my humble opinion, it is almost impossible to change an existing waterfall organization and make Agile work inside of it. Only a leader with a lot of courage, trusted by everybody in the company, may be able to make it work.
Most of the people who hate Scrum, Kanban, or other flavors of Agile have experienced the "Cargo Cult" Agile preached in big companies, and usually end up thinking that Agile is sooo screwed up. A big corporation will usually embrace some things that really help (ex: standup meetings) and change a ton of other things in ways that make things worse.
If you like the Agile mindset, you are going to be a lot more successful if you start from scratch with new, young people or people who really want to commit to making things great.
For a new organization it is easy to start with a simple "Agile template" and make sure that the team is really on board with a continuous-improvement mindset. Keeping regular retrospectives will make things evolve quickly in the right direction.
People get far more done if they spend less time labeling things and thinking there's a new canned technique that's better than what's been done before.
I have tried Scrum. I have also tried various pick-and-choose hybrid approaches, which were generally less successful: for example, having scrum-like sprints and daily stand-ups, but still a fixed feature set and fixed deadline. This doesn't really work. (You can have deadlines in Scrum, and you can have a fixed feature set, but you can't have both!) This happens when management does not really buy into the process. The place where Scrum worked was where management was willing and able to try something new and willing to learn.
For what it is worth, I have also seen classic waterfall be successful. But I have also seen several big-scale disasters with waterfall, and in my experience waterfall carries much larger risk and is much harder to salvage when it goes off track.