Hacker News




What's supposed to be the take-away from this? Is it to prove that knowing one or two bits of trivia and maybe a formula, by rote, can make your (unaided) estimation of related things vastly more accurate? That's all I'm getting from it, but maybe that's what's intended.

Examples: if I didn't happen to have an accurate-enough figure for the diameter of the Earth in miles, plus a formula for the surface area of a sphere, plus roughly the proportion of the Earth's surface that's land, all in my head, there's no chance at all I could produce a useful-for-any-purpose-whatsoever estimate of "area of the Asian continent" without researching it (at which point I could just look up a fairly exact figure, without knowing any of that).

Year of Alexander the Great's birth: well, I happen to know roughly when Aristotle was active and that they were alive at the same time. Otherwise, again, I'd produce a useless-for-most-any-purpose guess.

Total US currency: I bet knowing something like the current annual GDP of the US would at least narrow that down, and that's something someone might plausibly have at hand (I don't; my guessed range on that would be hilariously bad).

If you have a sense of blockbuster movie budgets and/or returns, which one can acquire from paying attention to entertainment headlines, it's easy to come up with a reasonable range for Titanic's box office receipts. And so on.

Is the point that trivia's highly valuable, actually, if you have to estimate a bunch of arbitrary stuff purely from memory?


You're supposed to put down ranges such that you're 90% confident the correct answer falls within each one, and then you can expect roughly 9 of the 10 to be "correct" (that is, the actual answer is included in your range).

The point of the test -- as shown by the response graph after it -- is that when someone asks us for a 90%-confidence estimate, we don't really understand what that means, and end up giving 30%-confidence estimates. People need to understand what they do and don't know, and reflect their level of uncertainty in the width of the range.

If I have a trivial task that I've done a hundred times before, I might say that it'll take me 45-60 minutes to complete, and 90% of the time I'd be right. But if it's something I've never done before, and I don't understand the steps or complexity, I might say that it'll take me between 30 minutes and 8 hours.

This scales up, too. For a larger project that I understand well, I might say 6-8 weeks, while for something I don't understand, I might say 4-12 weeks.

Over time, I can determine if I make good-enough estimates by checking to see if 90% of the time the actual time to delivery fell within the stated range. It doesn't matter if it's at the beginning of the range, end of the range, or right in the middle. I just need to hit somewhere in the range, 90% of the time.
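That self-check is easy to automate. Here's a minimal sketch in Python, with made-up numbers standing in for a real log of past estimates:

```python
# Hypothetical past estimates: (low, high, actual), all in hours.
# The numbers here are invented purely for illustration.
estimates = [
    (0.75, 1.0, 0.9),
    (0.5, 8.0, 6.5),
    (2.0, 4.0, 5.5),   # a miss: the actual fell outside the stated range
    (1.0, 3.0, 2.0),
    (4.0, 12.0, 11.0),
]

# A hit counts anywhere in the range: beginning, middle, or end.
hits = sum(low <= actual <= high for low, high, actual in estimates)
hit_rate = hits / len(estimates)
print(f"hit rate: {hit_rate:.0%}")  # well-calibrated 90% ranges should land near 90%
```

With only five data points the hit rate tells you very little, of course; the signal only becomes meaningful once you've logged dozens of estimates.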

For example, for the Alexander the Great question, I don't have a clue. Your mention that he was a contemporary of Aristotle actually made me realize I'd believed he was much more recent than he is. So I might give a range of something like 1000 BC to 1 AD, because I recall that Aristotle was definitely BC, but I don't really have much confidence as to when. Looks like the right answer is 356 BC, so my estimate was correct, even though it had a wide range. Giving people (like your manager) a wider range also communicates your uncertainty, which is a useful piece of information for them to have. The issue is that I think many engineering managers simply won't accept a true 90% estimate if it's wider than they know what to do with from a planning perspective.


Yeah, I get that, and I name narrower ranges in a business context (while trying to keep the upper bound from dropping too much), because most biz-school types don't get it. They tend to think you're full of shit, being passive-aggressive, or otherwise deliberately obtuse; or else it doesn't actually matter, they just wanted a happy number to put on a PowerPoint, and now you're making their life more difficult for no reason.

For that matter, they tend to be pretty bad at anything even adjacent to experimental design, but god help you if you point out that the data they're so proud to be presenting to the C-suite next week is, actually, meaningless (rare is the C-suite that'll catch it and call them on it, anyway, so from the presenter's perspective it's almost beside the point; a disturbing amount of "data-driven" leadership is pure fairy dust).

For a good proportion of programmers, I'd expect education and professional work experience to have made them comfortable with wide ranges being typical and honest for many "90%" estimates, while business-social experience has convinced them, correctly, that honest estimates aren't what a hell of a lot of people actually want, and that they really do "make us appear ignorant or incompetent" (from the book) in the eyes of the people who control our budgets and wages.


Yeah, I definitely agree that the business types just don't get it. If you give a wide range, they either think you're messing with them, or are incompetent.

Personally I tend to try to narrow the range as much as possible while keeping my upper bound as fixed as possible, but that doesn't always work either, and I think I still unconsciously try to go too far toward making everyone happy, and underestimate.

I think part of it is just our collective feeling of powerlessness that keeps us in this uncomfortable position. If a significant number of developers were to put their foot down and give real estimates that actually express uncertainty properly, and stick with them, management would start to understand, or at least accept, what's going on. But "getting a bunch of random people to change their behavior all at once" isn't a reliable strategy, so here we are, and here we'll continue to be.


To make things a bit more fair, I'd definitely allow that plenty of business types do get it, but enough don't that treating them all like they do, before you know them fairly well, is probably a bad move career-wise, especially if you're not senior and important enough to thrive despite pissing some of them off. Enough either don't understand or don't want honest estimating that the best you can really do is play the game wisely and hope it goes OK, until/unless you develop a rapport with a "good" one. I'd further admit that being one of the "good" ones may not actually be that useful in a business type, overall, especially as far as their personal career prospects go. They're just... to be approached differently.


The quiz asks for a range that will include the correct answer 9/10 times. Not a point estimate. If you don't have the relevant trivia in your head, broaden the range proportionally. I think the intended point is that we're bad at judging probabilities. Getting 100% right (within range) is probably supposed to be as concerning as getting 0% right.


Continue reading, especially the part titled 'How representative is this quiz of real software estimates?', and you'll get to the author's point.


Ah, right. I've always taken that effect in software estimation to be a result of business people hating how wide an actual "90% accurate" estimate is. Not many managers will accept "8 to 30 months" as a 90% estimate at the start of a project with a fairly typical set of unknowns. In particular, they seem to really hate ranges whose width isn't quite a lot smaller than the lower bound itself. But maybe it's actually driven by the people doing the estimating.

For my part I definitely tend to squeeze my "90%"s down to more like "30-40%" when asked for a "90%", for that reason. I might try out an honest and accurate estimate on someone I kinda know, and suspect won't quietly re-evaluate me as a useless moron or "one of those asshole 'programmer' types who doesn't get business" in response, though.



