When Agile jumped the shark (deathrayresearch.tumblr.com)
20 points by ljw1001 on July 15, 2012 | 25 comments



This was a bit meandering, but I feel his pain.

I was very skeptical at first, but I've become a big fan of story points: they decouple estimation from scheduling, and that's a good thing.

Note that they are not "...a new, fuzzy unit of measure..."; likewise, they are not a "...metrics sleight-of-hand..."

This is a very simple, yet powerful exercise. Relatively size the things you have to do. Now, without caring about what the points are, select what you can do in the next time-frame. Measure your ability to deliver against this.

Over time, your relative estimates get better and your ability to commit gets better. The kicker is that none of this has a damn thing to do with scheduling. Once you can reliably tell me you're going to deliver 10% of the remaining total work in the next two weeks, I know for a fact you have 18 more weeks left in the project. Then I can do release planning and work dependencies based on real-world data and without breathing down a developer's neck. Meeting schedules doesn't have to be (and shouldn't be) the high-stress thing many of us make it out to be.
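
To make the arithmetic concrete, here's a rough Python sketch of that forecast. It's only an illustration of the idea above; the point values are invented and nothing here is a prescribed formula.

  # Rough sketch of velocity-based forecasting; every number is invented.
  def weeks_remaining(remaining_points, points_per_sprint, sprint_weeks=2):
      """If the team keeps delivering at its measured velocity, how many
      more weeks does the remaining (relatively sized) backlog need?"""
      return remaining_points / points_per_sprint * sprint_weeks

  # 10 of 100 points done in the last two-week sprint -> 10% of the total,
  # 90 points left, so roughly 18 more weeks.
  print(weeks_remaining(remaining_points=90, points_per_sprint=10))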

The author can't seem to get scheduling out of his head. If you're constantly wondering how many days a story point is, don't use story points. You don't understand them.

Note that for such a simple idea, there are several gotchas here. Most teams screw up story points and velocity. Anybody have the PM that divides the points by the sprints remaining and then announces what the velocity will be? Or how about the ScrumMaster that empirically determines current velocity and then tells the team how many stories they can do for the upcoming sprint? Ouch. Lots of bad practices out there. That doesn't make story points bad, though. Just makes most people suck at them. (Shameless plug: I have a book on being a ScrumMaster and an upcoming one on backlogs. http://tiny-giant-books.com/scrummaster.htm )


Decoupling estimation from scheduling is not something that was invented after the Agile Manifesto.

The rule for serious estimators has always been to estimate size first and schedule second. I'm talking wisdom that's been around for decades. Pretty sure Boehm was talking about this as far back as the first COCOMO. It's literally older than I am.

I've only ever seen agile use time as a unit of size (the "ideal day") and it leads to the sort of trouble you're thinking of: conflation of size estimate with schedule estimate.

Non-time estimations might have been expressed in SLOCs, representative objects, function points and so on. Inputs to estimation could include expert opinion (PERT, Wideband Delphi), historical records and judgement (COCOMO) or early process artefacts such as the number of paragraphs in a requirements document (PROBE).

I'm only 31. It shouldn't be the case that I am the "old fart" here. Sometimes I feel like I'm the only one talking about work, damn good work, that was done before agile came along and apparently hit the memory-flush button on an entire industry.


Agile is best practices around iterative and incremental development. That's it. It's a marketing term. Of course most of this stuff has been around forever. Most folks who "get" Agile say something like "This rocks! It's like we used to do things when we had a lot of fun and kicked out a shitload of code."

From observing teams the interesting thing is how many people used these techniques back in the day, stopped using them over time, and then don't want to go back because it's something "new". People are strange.


> Agile is best practices around iterative and incremental development. That's it.

I wish I could broadcast this on TV.


You should attend one of my classes. I ask what Agile is and then tell everybody it has no meaning at all. It's just a big blanket term we use to re-wrap a lot of that older stuff (and some new stuff too).

In my opinion the biggest problem Agile has comes from the people who like it. Skeptics are fine. I can show you this stuff works. But people who did "Agile" once, became cheerleaders, and got set in their ways can be insufferable.


I'm mostly peeved by the goldfish memory this industry exhibits.

Partly a function of the pyramidal demographics, I suppose.

The ACM and IEEE doing the whole Smaug routine with the treasures of decades of research isn't helping either.


> Once you can reliably tell me you're going to deliver 10% of the remaining total work in the next two weeks, I know for a fact you have 18 more weeks left in the project.

Really? I would automatically estimate 38 more weeks.

The difference being that once someone is confident of their figures for time periods that long, they are generally wrong by a factor of 2.

In essence this is a disagreement about what "reliably" means. But in general time estimates tend to be "the smallest number that nobody can prove is wrong" rather than "how long this is likely to take". And you can tell that because there is a nice little plan, and a breakdown of how long each step could take, with no big fudge factor for "Things we don't know about yet, like the requirement that we'll get told about in 2 weeks."

Cynical? No, realist(*). ;-)

* Cynics always claim to be realists.


> The kicker is that none of this has a damn thing to do with scheduling. Once you can reliably tell me you're going to deliver 10% of the remaining total work in the next two weeks, I know for a fact you have 18 more weeks left in the project.

Wait, am I missing something, or did you just say that this has nothing to do with scheduling but then extrapolate an estimate in order to come up with a schedule? (not trolling, legitimately confused)


Yes. What you end up doing is getting good at determining what percentage of remaining work you can accomplish in each sprint. This information can be applied offline to scheduling. You get a schedule without ever having any conversation about duration or time. The "magic" works because by picking what you can do in a sprint, you are associating a certain percentage of work with a time unit. So it's not like there is no time discussion. It's more accurate to say that you don't keep multiple sets of books.
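
As a rough illustration of "applied offline to scheduling", here's a hedged Python sketch. The sprint history, start date, and function name are all invented for the example.

  from statistics import mean
  from datetime import date, timedelta

  SPRINT_LENGTH = timedelta(weeks=2)

  # Fraction of the current remaining total delivered in each past sprint
  # (invented history for illustration).
  history = [0.08, 0.11, 0.10, 0.12]

  def forecast_finish(fractions_per_sprint, start):
      """Project a finish date by assuming future sprints keep burning the
      backlog down at the average historical rate."""
      rate = mean(fractions_per_sprint)
      remaining = 1.0              # whole remaining backlog, normalized
      finish = start
      while remaining > 0:
          remaining -= rate
          finish += SPRINT_LENGTH
      return finish

  print(forecast_finish(history, date(2012, 7, 16)))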

Like a cool programming concept, it's one of those things that once you see, you're like wow! But granted it can be a head-scratcher beforehand. Like I said, I came at this as a skeptic. Once you see it work in multiple teams, however, you don't want to go back to the old way. It just doesn't make any sense to do it like that.


  > Or how about the ScrumMaster that empirically determines
  > current velocity and then tells the team how many stories
  > they can do for the upcoming sprint?
Honest question: how should a team manage what will be done in the next sprint without basing it on velocity?

edit: formatting


This also applies to T-Hawk's question.

Velocity is the empirically-measured percentage of backlog complexity that the team can reliably deliver. It is determined (and varies over time) as the problem gets solved.

It's a measurement, not a control factor. The team has control, not some number. Look at it this way: each sprint when you look at the work, you keep getting better and better at executing: your unit tests are dialed in, the CI server is rocking, you figured out that cool thing where you cut the amount of code in half, and so on. So when you look at what you can honestly do in the next sprint, that's an important piece of business feedback that other people need.

You're "recalibrating" the release plan, which you should always do because stuff changes. In Agile, when you pick what you can do at the beginning, the whole schedule slides around. Business guys know at the front of the sprint that things are out of whack. That's great. It gives them the entire sprint to figure out what to do. In other systems they don't know until the end, which sucks for everybody.

Doing it backwards provides absolutely no new information to the business. You're saying the problem hasn't changed, the team hasn't gotten any better or worse, and there are no new risks. Instead of lending your expertise to the people who are looking at the big picture, all you're doing is playing some kind of game.

Combine this with other bad stuff like never re-estimating or re-stating the backlog (or having a 3,000-item backlog) and it's like taking a sports car and putting a lawn-mower engine in it. Yes, you can call it Agile, but you're really doing it in a very painful and inefficient way. That hurts. Don't do that.


I think Mike Cohn has been _real clear_ on the subject and he says story points are NOT a measure of complexity, they're a measure of effort, which he (and I) connect directly with time.

If points are not a measure of complexity then velocity can't be a measure of complexity either, since it's just points/time.

I don't always agree with Mike, obviously, but I think he's about as knowledgeable on agile scheduling as anybody out there.


You know, as soon as I typed "complexity" I knew we'd end up here.

I believe we're deep into semantics. The fact is, whether you call it complexity, effort-till-delivery, or chopped liver, the process is still the same. And it still works.

And while I like story points, frankly I could give a rat's ass what Cohn has to say about it. He's a nice guy, and he's obviously an expert, but I'm much more interested in what works than in anything else. Story points work because we pick them up and use them for stuff, not because Cohn calls them one thing or another or recommends doing them a certain way.

Remember the point here is that the predictive modeling process the team uses evolves and becomes more accurate over time. It's really nowhere near as important where you start as that you keep getting better. I've started teams using a 2-tier H/M/L process for assigning points and it works fine. No matter how they think about it, the team gets better over time and it ends up giving them powerful predictive capabilities. That's the key item here. You're getting wrapped up in the weeds.

Sorry, but when I talk about insufferable Agile people, the number one annoying thing they do is name drop. Really bugs me. People drop some famous author's name and then all rational thought shuts down. The great one has spoken.

I'm very sorry you seem to not be getting the answer you wanted. Like I said, I feel your pain.


If they're not fuzzy, how big is a point?


It's not fuzzy. It's unit-less. The number has no size or meaning outside the list of other sizes representing the work for that particular team for that particular sprint. It doesn't transport between teams, it doesn't mean the same thing from sprint to sprint, and it doesn't represent any qualities of the underlying system. You're asking for the definition of something that's just a relative number. It's like asking how long a piece of string is.

That sounds whacked, so let me try again.

What I'm doing when I take a list of future system behaviors and ask a team to relatively size them is creating the parameters to a model that doesn't exist yet. The process of creating, revising, and implementing the model is what the team does each sprint. That's why initial estimates are so whacked (and they should be). Yes, for your particular team on your particular project maybe you get to the end and say "Hey, on average a story point amounted to 1.5 days." But I have no earthly idea why anybody would do that or care about that translation. It would be a _very bad thing to do_. You only know the "answer" once you're done. And the answer is empirically determined by application of the model; it's not a guess.

Remember the old quadratic equation? ax^2+bx+c=0? Story points are like knowing "b". So how big is "b"? See? Makes no sense. It's just part of a parameter used to do other things. "Size" has no meaning here.


> It's not fuzzy. It's unit-less.

Except that somebody will come along and just use it as a proxy for some unit they care about. And that will probably be time. Tada! You're right back where you started.

Counting things out in magic beans doesn't make the problem go away. Humans are pretty good at this exchange-rate business.

You might as well estimate in domain-specific terms that you can later actually verify and calibrate. Such as SLOCs, function points, object points, modules + tests and so on.


Somebody might come along. And they might not. The point is you don't have to create an exchange rate. It's not needed. When people do that they do it because they don't know what the hell they're doing. As I said, if you don't know what the hell you're doing, don't use story points.

> You might as well estimate in domain-specific terms

Actually no. You could use your astrology chart to estimate, but the purpose here is to get better and better as you go along. Remember that we re-estimate each sprint (or at least we should). So the point is the informal model of execution evolves over time. The old way was constructing this huge model and spending maybe 20 hours creating the perfect estimate, then executing. The new way is spending 2 hours each sprint over 10 sprints, increasing accuracy each time. You want a simple, less complex model that can be rapidly iterated. Lots of little bad guesses converging on an answer based on real-world execution data.
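
Here's a toy Python simulation of the "lots of little bad guesses converging" idea. Everything in it (story sizes, velocity, how fast the guesses improve) is invented purely to show the shape of the process, not to model any real team.

  import random

  # Invented example: re-estimate the remaining stories each sprint and
  # re-run the cheap forecast, instead of one big perfect estimate up front.
  random.seed(1)
  true_sizes = [random.uniform(1, 8) for _ in range(40)]  # what the work "really" costs
  velocity = 12.0                                         # actual points finished per sprint

  done = 0.0
  sprint = 0
  while done < sum(true_sizes):
      sprint += 1
      # Guesses get less noisy as the team learns the domain.
      noise = max(0.05, 0.5 / sprint)
      estimate_remaining = max(0.0, sum(s * random.uniform(1 - noise, 1 + noise)
                                        for s in true_sizes) - done)
      print(f"sprint {sprint:2d}: forecast {estimate_remaining / velocity:4.1f} more sprints")
      done += velocity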

This is getting much longer than an HN thread. I would caution you that things like COCOMO exist as a way to mathematically model how time interacts with development. It's a model, not an explanation. Don't confuse tweaking numbers in a spreadsheet with controlling or changing execution. Various models have various uses. Technology development does not break down in a scientific-management kind of way. Yes, you can decompose technical problems into tiny pieces and solve them. But people working together are not robots, and their work cannot be broken down in the same fashion as an algorithm can. Wish I had time to go into this. Check out this blog entry. http://tiny-giant-books.com/blog/agile-backlogs-sigh/

Thanks for the chat!


You're welcome. I agree that it can't be done purely algorithmically and I have an umpteen-thousand word essay on why coming down the pipe one of these days. I seem to be getting nigh-Yeggesque these days.


> Except that somebody will come along and just use it as a proxy for some unit they care about. And that will probably be time. Tada! You're right back where you started.

That can happen. But it doesn't have to. You can get adept at giving people the raw data and then reminding them that any interpretation is their own.

But there's something subtler at work. If you have a new version of the product ready every week that stakeholders look at, and if you allow them to adjust the plan weekly, then they will start to change the plan. As they recognize the feedback-driven nature of things, they stop pretending that their math on points can predict anything. Instead they use the points to control outcomes. E.g., by cutting scope and fending off nice-to-haves to make the dates they want.


Story points vs time estimates is not an either-or question. You can use both. My team at my job does. We do time estimating for each two-week sprint (Scrum) so we know what will fit, and have fuzzier point estimates in the backlog for longer-term planning by the product owners. Both levels of abstraction are appropriate in the right context.

To DanielBMarkham in a sibling comment:

> Or how about the ScrumMaster that empirically determines current velocity and then tells the team how many stories they can do for the upcoming sprint?

That's what we have, but is that bad? Isn't that how the points should be used to gauge predicted velocity? Of course it shouldn't be treated as a concrete, inviolate, set-in-stone prediction, but that methodology turns out to be pretty accurate for estimating.


It used to be that you had Earned-Value Management and the concomitant EV Charts. One could take the first derivative of the current point on the chart (or perhaps do something fancier involving moving weighted averages) and use that to predict[1] how the EV chart would play out in future.
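
For anyone who hasn't seen it, the extrapolation being described looks roughly like this (Python sketch; the EV numbers and budget are invented):

  # Invented earned-value data: take the recent slope of the EV curve
  # and project it forward linearly.
  ev_by_week = [0, 4, 7, 9, 12, 14]   # cumulative earned value, arbitrary units
  budget_at_completion = 60

  window = 3                          # weeks in the moving average of the slope
  recent_rate = (ev_by_week[-1] - ev_by_week[-1 - window]) / window

  remaining = budget_at_completion - ev_by_week[-1]
  print(f"Linear extrapolation: ~{remaining / recent_rate:.0f} more weeks")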

But EV charts and first derivatives are old fashioned and hokey. Practically waterfall!

Instead we use the latest in agile management: burndown charts and velocity. Totally different.

[1]: Though see every software project ever for an example of why linear extrapolation is a hilariously foolish way to settle on a firm estimate.


My big problem with estimating in story points is that it ignores the reality that how long something takes is directly tied to who is doing the work. A junior eng, a senior eng, someone who just started, someone who isn't familiar with that piece of the system, the person who wrote and has always maintained that piece of the system. These people will all do a given task in a different amount of time. Possibly an order of magnitude differently when comparing a junior eng not familiar with the code to the code's author. This is okay. Embrace it. Plan for it.


I would argue that this is actually an advantage of story points. You don't know who will be working on it at estimation time. But a 2-pointer should take any developer about twice as long as it would take the same developer to do a 1-pointer. If you have many super fast, very senior developers you get more points done and have a higher velocity. That's where the speed will be reflected and you can see how much your team can get done in one iteration. This allows you to plan your iteration without having to care who works on any given story.
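
A quick sketch of that point in Python (the team names, backlog size, and velocities are all made up):

  # Same relatively-sized backlog; raw team speed only shows up in velocity.
  backlog_points = 120

  velocities = {"senior-heavy team": 40,   # measured points per sprint
                "mostly-new team":   20}

  for team, velocity in velocities.items():
      print(f"{team}: ~{backlog_points / velocity:.0f} sprints for the same backlog")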


On a sufficiently large project (more than a few people) the differences in productivity average out, as do to some extent estimation errors. I never expect any individual estimate to be correct; it's more important to get an accurate estimate of how far off your estimates are in aggregate.
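
A toy simulation of that averaging-out effect (Python; the error model and numbers are invented, so treat it as illustration only):

  import random

  # Individual estimates can be way off, but over many tasks the errors
  # largely cancel, so the aggregate lands much closer.
  random.seed(0)
  estimates = [random.uniform(1, 10) for _ in range(200)]
  actuals = [e * random.uniform(0.5, 1.5) for e in estimates]   # each off by up to +/-50%

  worst_single = max(abs(a - e) / e for e, a in zip(estimates, actuals))
  aggregate = abs(sum(actuals) - sum(estimates)) / sum(estimates)

  print(f"worst single-task error: {worst_single:.0%}")
  print(f"aggregate error:         {aggregate:.0%}")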


Not totally different, but simpler, which counts. Earned value presupposes things like someone putting a value on individual deliverables and there being a budget that matters. Neither happens much in commercial software, in my experience.

Since most software cost is headcount * time * cost-per-person, tracking time is a pretty good proxy for that.

Of course no metric matters if you produce bad code and call it progress.



