Coding, Fast and Slow: Developers and the Psychology of Overconfidence (hut8labs.com)
246 points by durumcrustulum on April 23, 2013 | 72 comments



I cannot identify with this article.

Personally, I know my estimates are wrong before and after I make them, and I've never believed any estimate I made ... I only make estimates because on every project I've worked on, either managers or clients wanted them. So it was for their peace of mind.

On the other hand, I always pad estimates enough to get something working, like a proof of concept, because in case the shit hits the fan, you at least want to show the client or the manager that some progress has been made.

This is yet another of those articles that speak about the pitfalls of the Waterfall model while hinting at the virtues of the various Agile methodologies, and it's kind of nauseating. On one hand I sense some arrogance on the part of the writer; on the other hand it reminds me of how awful consultancy work is, because consultancy only works well when you're doing the same thing over and over again.

My favourite development methodology is easy to explain. It's a simple cycle: hack <-> ship. I also lie when giving estimates.


Lying in estimates is pretty much a necessity; there are always a lot of unknowns going in. One of those essential people-management skills is to lie and fudge with enough art that you don't lose too much face when the estimates turn out wrong.

I never deliver an estimate with any measure of confidence in my words or tone, and I always have a short list, made up or not, of things that might push it back.

If someone were to try to nail me down and is expecting me to put my credibility on the line with a hard estimate, I'd say, "let me get back to you on that after I do some research," then promptly forget about them and look for another project. There are enough reasonable people in the world to work with that I don't have to put up with unreasonable ones.


I, too, cannot identify with this article.

Maybe it's because, first, I'm really not overconfident about my abilities; in fact I usually underestimate them. Second, I'm quite a good estimator of all things, including project timelines. In fact, I've been asked to make timelines for various projects because I have a good grasp of all the factors.

I think that's the key. You need to understand most of what goes into the project—as well as how long it will take to learn and execute on the things you don't know.

You learn all this from experience doing your work, but I think more importantly, I am good at estimating timeframes because I have a broad knowledge of not only programming, but also human psychology, collaboration, motivation, social factors, and unknowns and doubts. It takes a little bit of the entire human experience to accurately paint a picture of what someone (or a team) is able to do. You need to understand the complex nature of the problem, and then use the facts rather than trusting your gut.

I have a good liberal arts education and fantastic parents to thank for this viewpoint (one of the reasons I still detest the insufferable anti-university movement—if you only learn what you need or want to know, you get a horrible education).

Am I overconfident in my ability to estimate? Probably. But no one cares; I'm usually right.


If you read the linked Kahneman article, it actually addresses your comment. It's like the stock market: some percentage of traders will get lucky, and inevitably they will attribute their success to their judgment.

No, wasting yet more years of your life sitting in classrooms does not confer the ability to predict the future. What does confer that ability is being lucky enough to be handed a series of projects where the hard parts are relevantly similar to your previous projects and, fortunately, nothing went very wrong on the novel parts. And it's great that it worked out for you that way, but that doesn't mean the OP was wrong.


While some people attribute their luck in the stock market to skill, it is still possible for someone to have actual skill in making money in the stock market. Some high-profile billionaires and numerous quants have demonstrated this. The fact that you made money doesn't mean your income is solely, or even primarily, attributable to luck.

Likewise, it's possible the grandparent has been lucky that the projects they've forecasted aligned well with their areas of knowledge and hit no problematic snags. It's also possible, however, that one can develop the ability to make mostly-correct predictions about project timelines. Being a rarity doesn't mean it's flat-out impossible, or even unbelievably improbable.


I also lie when giving estimates.

This is the only part I disagree with. You can't lie about something you don't know, and it's the nature of an estimate that you cannot possibly know whether it's right... if you did, it wouldn't be an estimate. You may present part or all of an estimate that's just not what you actually believe, but it might still end up being right if the right things go wrong.

Estimates suck. I'm pretty good at estimating, and doing it still sucks; they are never exactly right, for too many reasons to enumerate. Unfortunately, in many situations (e.g. consultancy, as you mention), they are important sales tools... I'd venture to guess I'm not the only small business owner that considers estimates one of the many good reasons to move from a consulting to a product development model once the means are available.


> You can't lie about something you don't know

Sure you can. You can use made-up regulations, bring up something scary-sounding that actually has no bearing, anything that the client can intuitively grasp that isn't, "I don't actually know this particular python library and I know it will take me 2 weeks before I can adapt it to the app."


Actually, I never make good estimates, so now I just multiply by 3. I also believed in the hack/ship cycle, but after several rounds I found the product became unmaintainable and it got harder and harder to add new features. Does this mean I need to improve my coding skills to make this cycle work?


Developers tend to have trouble recognizing and factoring in their own past failures. They mistakenly assume that the next project will be free of all the little gotchas and time sinks that plagued almost every one of their past projects. I've had developers working for me who had just finished a project that they said would take "a couple of weeks" that took a couple of months or so. They'd turn around at the next little project of similar complexity and, with a straight face, say "oh, a couple of weeks". Comedy ensued.

I tend to make pretty good estimates in my own consulting for two reasons: I usually bill on a "not to exceed" basis and I keep really good records of how long things have taken me in the past.

When I size up a project, I break it down into chunks comparable to chunks that I've sized in the past. I look at those past chunks and I add up how long they ACTUALLY took to complete. I read the notes that I took during implementation to refresh my memory on the kinds of troubles that I encountered. I figure that the same complexity plus a similar workload means similar ACTUAL hours, and I resist my developer tendency to assume the best-case implementation scenario. Also, by comparing past project estimates vs. actual work, I've established an amount of inevitable overhead.

I throw all these actual hours chunks into a spreadsheet that adds them up and includes my previously calculated overhead and normally I'm pretty happy with how on-target I end up being.
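
To make that concrete, here's a minimal Python sketch of that bookkeeping, with invented chunk names, hours, and overhead factor (the real version would live in a spreadsheet or time tracker):

    # Sum the ACTUAL hours of comparable past chunks, then apply the
    # overhead factor derived from past estimate-vs-actual comparisons.
    past_actual_hours = {            # invented historical data
        "auth flow": 22.5,
        "crud admin": 31.0,
        "reporting": 40.0,
    }
    overhead_factor = 1.25           # assumed, calculated from past projects

    def estimate(chunks):
        base = sum(past_actual_hours[c] for c in chunks)
        return base * overhead_factor

    print(estimate(["auth flow", "crud admin"]))  # -> 66.875 hours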

[edited typos and readability]


I think the estimate I'll give depends on who is asking. When my boss asks how long something will take, I lowball it, because if I gave myself plenty of time in the estimate, it might sound like I'm planning to work really slow. There's not much risk in underestimating, either. It's easier to ask for more time once you are half done with something. The fact that you're already half done puts you in a stronger position.

I think I can estimate much better when I feel free to say any number at all, that I just don't know, or even that the project is not worth doing.

I don't think managers should take estimates at face value, but consider the tone and the context and make their own estimate. Or, perhaps, not ask the employees to do their own estimating at all. It seems like the burden should be on the manager, and he/she may have to be creative. At my last company, they were crazy about making developers estimate every little thing, and it created an oppressive atmosphere that contributed to me leaving. For the reasons in the article, accurate estimating is almost the same as doing the work itself. And the estimate feels like a promise, one made without complete knowledge and in awkward circumstances.

"I tend to make pretty good estimates in my own consulting for two reasons: I usually bill on a "not to exceed" basis and I keep really good records of how long things have taken me in the past."

You have every incentive to judge accurately, because there is tremendous risk in making a commitment with too much uncertainty. You'd be better off not signing a contract than risking a wildly optimistic estimate, so naturally, you're only going to take projects where you can estimate pretty well.


This sounds a lot like Evidence Based Scheduling proposed by Joel Spolsky some time ago:

http://www.joelonsoftware.com/items/2007/10/26.html

The only additional thing he proposes is running some Monte Carlo simulations to create a probability distribution of how long the project will take.
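
Roughly, EBS divides each new estimate by a randomly sampled historical velocity (estimated/actual ratio) and sums the results over many trials. A minimal Python sketch, with made-up velocities and estimates:

    import random

    # Historical velocities (estimated hours / actual hours), invented here;
    # Spolsky's EBS tracks these per developer.
    velocities = [0.9, 0.5, 1.1, 0.7, 0.6, 1.0, 0.4, 0.8]
    new_estimates = [8, 16, 4, 24]  # hours for the new project's tasks

    def simulate_total_hours():
        # Divide each estimate by a randomly chosen past velocity.
        return sum(e / random.choice(velocities) for e in new_estimates)

    trials = sorted(simulate_total_hours() for _ in range(10000))
    for pct in (50, 75, 95):
        hours = trials[int(len(trials) * pct / 100) - 1]
        print(f"{pct}% chance of finishing within {hours:.0f} hours")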


"And the problem is that, hidden in the parts you don’t fully understand when you start, there are often these problems that will explode and just utterly screw you."

I completely agree! I also find myself telling my bosses that 80% of the feature takes 20% of the effort/time. Then a wall is hit and the remaining 20% takes 4 times longer (80% of the time).

ADD: And as you go along, the cone of uncertainty hopefully narrows, but when you hit one of the exploding "we didn't knows!" your original estimated time/effort is too small. We try to account for these by including some "uncertainty buffer", but then the stakeholders get mad when they see these large effort/time values at the start of a project. It seems stakeholders don't really want to hear the truth and the possibly ugly, only the sweet sound of nothing going wrong.

Giving those the-stars-align-and-nothing-unforeseen-happens estimates does a disservice to everyone, but then you have other developers/consultants/etc. competing and vying for work, so they try to undermine and undercut the more realistic estimates, and we find ourselves in a downward death spiral similar to trying to compete on price alone.


I tell my bosses that before I can give a solid estimate I need to have two things done first:

1) I've eliminated most (or all) of the unknowns.

2) I know how I am going to do the thing I am supposed to do.

Sometimes I know the system and I know how to implement what's asked. But if I'm working in some of the legacy code we have, the above two tasks easily take about 50% of the total time.


A project I'm currently involved with suffers from a few things:

1) The unknowns never end, because the client keeps changing the freaking requirements!

2) They don't like our estimates and keep asking us to find another implementation that meets their requirements in less time.

3) They're basically trying to get us to implement the various solutions while "estimating", so we are doing the actual work while "estimating". The estimate is supposed to be used for determining how much, and how, the development is to be paid. So we are burning non-billable time estimating (doing "the work") in order to shorten the estimated billable implementation effort. The calendar date of delivery really isn't changing, but how much is going to get billed is decreasing. And if, after doing the work while "estimating", we try to say the amount of billable hours is more than the time to deliver from now, they're going to say that can't be. Then we'll either have to dig our heels in and say we did billable work while estimating, or not collect on billable effort. It's so messed up. If I was in charge, I'd have fired the customer by now.


If I was in charge, I'd have fired the customer by now.

Some people don't know what bad business is. I find myself worrying less and less about the experience level of developers I have to work with and worrying more about the experience level of clients as clients.


Imagine if Gandalf acted the way managers and clients do with time estimates when he sent Frodo and Sam to Mount Doom.

"You haven't destroyed the One Ring yet? Why not? You said it would be destroyed by Friday."

If you trust your people to carry the ball as far as they can every day until it is in the end zone, you will get better results.

This is why in my consultancy I have stopped giving time estimates except for very small projects. I take 50% upfront and 50% upon completion so our interests are aligned.

The time and stress of estimates are just not worth it in software development.


That's a nice sentiment, but I don't see how you can apply it to any situation that involves estimating a project for a client, based on the client's requirements.

I wish the article actually addressed the problem of estimating the time/effort for the whole project. What I'm dying to learn is what one is supposed to do when the client says "I need a system that does X, Y and Z. How long do you think it will take you and how much would you charge for it?"

One thing I'm sure you can't do is say "I don't know, because making accurate long-term estimates is fundamentally impossible. But hey, I know we're going to do a great job, so why don't you just trust us?"


I think what the article is highlighting is that the "it needs to do X, Y and Z for the implicit budget I have in my head as a client" model is fundamentally broken, and discussions need to address the inherent risk; otherwise estimates will continue to be wildly wrong.

It's hard because it takes a mature operator to say "no, I won't give you a total time estimate for what you just explained to me in 5 minutes."

The context needs to be: crafting software is difficult, so what's important is that you prioritize your desired features in case delivering those early ones blows out and uses up your budget. It's then about hiring developers you can trust to drive the project professionally and honestly, to work hard, and not to stuff you around. "Let's talk about why we think we are the ones worthy of your trust, and the sorts of things you should be looking for in hiring a developer."


If the client is asking for an implementation as per your example, you already have a major problem (and are in a race to the bottom as a supplier). The spec should be expressed purely in terms of business outcome (e.g. I need a system that can handle more transactions within the next 3 months). (Not to mention that the most interesting and valuable requirements are usually the ones that the client doesn't yet know about.)

Second problem with this scenario is that if you are charging a fixed amount for a complete system, you are assuming all the risk (and essentially selling a form of product). Generally that's a lousy deal for a client on build cost and maintenance of a bespoke system. Few desired business outcomes are so unique that they require a bespoke solution - so this should really be a very niche business, not the norm at all.


Usually the best reply is along the lines of: Refining the requirements is an iterative process; we'll both learn more on the way about exactly what the system needs to do and how it should do it. So let's break it down into small steps. We'll implement X first, then you pay us for X based on how long it turned out to actually take, then we can move on to Y. If you're unsatisfied at any stage you can walk away and keep everything you've paid for thus far.


I'd be curious where you find these customers that don't care when something gets done. I have yet to find them.


Often they don't care but they ask anyway. Just to keep you on your toes and just to show that they still care about finishing it.


That's not a realistic comparison. More realistic would be if Gandalf said "If the ring is not destroyed by Friday, Sauron will definitely defeat us, and any effort spent on trying to get to Mount Doom will be wasted, and may cost us the chance to defeat Sauron a different way. Given that, Frodo, can you destroy it by Friday?"

Of course, in most shops Gandalf will keep all of that to himself because he hopes that Frodo can get it done sooner (it's just walking, after all) and he doesn't want Frodo to half-ass it, take his time, and not deliver until the last minute. So he'll just ask Frodo when it can be done.


"it's just walking, after all"

I read that in the common dismissive tone that I hear regularly and cringed. I'm not sure it was your intent but when higher-ups try to ascertain the level of complexity/work with little to no knowledge of the problem, these sentences fly.

Having been one of those managers, I understand that they're largely trying to be optimistic and justify their own demands but as a project drags on, it depresses the hell out of me.


You read it exactly right. I've had project managers tell me, in that same tone, that they disagreed with my six month estimate for a project, and said they could do it in a weekend. I tried offering to swap jobs for a while, but they never went for that. I'm not surprised; more often than not I wasn't just the tech lead and architect on my projects, but I did most of the project management too.


> I have stopped giving time estimates except for very small projects. I take 50% upfront

50% of what?


the dough


I think his point is that without a time estimate, you don't know how long it is going to take and thus if you are charging hourly, you don't know what the 100% is that you're taking 50% of. This obviously doesn't matter if you are charging flat rate.


Right. Except that I'd submit that it matters even more when charging a flat rate, as you're bearing all the risk of an underestimate.


I solve this by quoting a price between $xx and $xx and taking 50% of the smaller amount; then, if the job takes longer than I estimated, I have some room to charge more at the end.


How do you come up with the range?

How do you know how much risk you're taking on that the upper end of your range is not seriously underestimated?


I've just learnt through experience. There is obviously still some risk involved but it's reduced. When I'm working on projects that are similar to ones I've worked on before I use a smaller price range and if I'm working on something totally new to me I will use a larger price range to keep the risk as minimal as I can.


That's an interesting approach, but in fact your range is a time estimate, so it's not quite true that you've stopped doing time estimates; you've just stopped telling the customer exactly what they are.


This is a pretty common point of view of novice developers.

They have no idea how to do accurate estimations other than guessing, or giving in to a management target date and not understanding the difference between targets, estimates and commitments.

The next step is nearly always, rather than learn to be a good estimator, to declare that if they don't understand how to do something, then no one does.

The idea that if you haven't bothered to learn how to do something, no one has, is an amazing act of hubris, and is characteristic of a certain sort of person who often does more damage to a project than they contribute.

The solution is to either stop giving bad estimates based on nothing, or to learn to do it properly. Giving bad estimates because you don't know how to estimate is a form of engineering malpractice. Declaring that because you are an incompetent estimator who has not bothered to learn the craft, no one else can estimate either, is pure immaturity. It is like object permanence in Piaget's theory of cognitive development, where he notes that infants from birth to around age two don't believe a ball exists once it leaves their field of vision.


It's not just a point of view, it's backed up by studies. Did you actually read the article? Plenty of grizzled veterans back this up anecdotally also.


I agree that studies show that ad hoc, "straight from the hip", and "wild assed guessing" methods don't work to produce accurate estimates.

Not just "plenty of grizzled veterans" report that they don't know how to estimate; the vast majority of the industry is incompetent and ignorant about estimation, a sign of the deep-seated unprofessionalism of the industry.

Neither of these things change the fact that lots of us are able to do accurate estimation and the methods for doing so are well known and have been for decades.


You've twice now stated that you're able to do accurate estimation and that "methods for doing so are well known".

Yet you've failed to cite any sources to back up this claim. My experience among novice, intermediate and experienced developers has been that, while order-of-magnitude estimation is often achievable, truly accurate estimation is not. Software is far too 'soft' to fit our expectations every time we try to build something.


Did you learn how to do it properly? How?


Read "Software Estimation: Demystifying the Black Art" by Steve McConnell [1].

[1] http://www.stevemcconnell.com/est.htm


Well, just from skimming some of the pages on Amazon, it seems to back up a lot of the original article's points.

e.g. on p. 25 McConnell lists project outcomes by project size:

   1000 LOC -> 81% on time, 4% late, 2% cancelled
   10,000,000 LOC -> 14% on time, 21% late, 65% cancelled
Smaller project, better estimates.

Also notice that McConnell uses terms like "Approved Product Definition" and "Requirements Complete". It's a rare project, particularly in the consulting world, that can nail down requirements to the degree that you get really accurate estimates. For packaged software products it's doable; for other things, not so much.


Yes, I can and do estimate accurately, as can many others. Estimation is a basic engineering skill.

McConnell's book, linked to by the other contributor, is a sensible introduction to doing reasonably accurate estimates for projects that aren't incredibly huge (10MLOC+). Very accurate estimates (within 5-10%) and estimates of incredibly huge projects are also possible but require specialist estimators with extensive training and practice.


Could managers perhaps acquire this basic engineering skill? Or is there some actual work involved, so they can't be bothered?


It all totally depends on the task. My previous job was at a web agency. You give me a spec for your website and I could fairly well nail down how long it was going to take if I was doing the work.

I've found that it's important to give a quote (to your managers) that you almost feel uncomfortable with. Not so long ago I had to quote on integrating a 3rd party ad system into a site. A 30 minute job. I'd already spoken to the 3rd party and didn't trust their abilities to supply working js. I quoted a week (even though it seemed completely unreasonable to me). We used it all.

Over time we developed a policy that if we were integrating with an external system we would only do it on a time and materials basis. We might give a guess at how long it might take - but always caveated with a "this is probably wrong" (and we'd totally overestimate to avoid the surprises).

I'm now building new technology with a friend (who's not a developer). I don't really make predictions about how long anything is going to take. At first he found it a bit frustrating that there's so much uncertainty - but after witnessing again and again how long it can take to solve a single issue he's come to accept that's just how it is.

Two very different worlds.


Using three estimates can help. Create optimistic, likely, and pessimistic estimates for each significant development task, then plug the values into the Program Evaluation and Review Technique formula (pioneered on the Polaris nuclear submarine program in the 1950s) to get a reasonable expected value:

    t_expected = (t_optimistic + 4 * t_likely + t_pessimistic) / 6


This also helps indicate what tasks will take the longest, allowing managers to prioritize based on business needs.
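
A minimal Python sketch of that calculation, with invented task names and hours:

    # PERT three-point estimate: (optimistic + 4*likely + pessimistic) / 6
    def pert(optimistic, likely, pessimistic):
        return (optimistic + 4 * likely + pessimistic) / 6

    tasks = {  # hypothetical (optimistic, likely, pessimistic) hours
        "schema design": (4, 8, 20),
        "import pipeline": (16, 24, 60),
        "admin UI": (8, 12, 30),
    }
    # Sort longest-first, so the riskiest tasks surface for prioritization.
    for name, (o, m, p) in sorted(tasks.items(), key=lambda kv: -pert(*kv[1])):
        print(f"{name}: {pert(o, m, p):.1f} expected hours")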


We used something a bit like this at IBM years ago. I'm pretty sure we weighted the pessimistic estimate higher than the likely estimate though.


The best approach I've found to estimation based on hourly rate is to always give ranges, even for the smallest tasks, and to not be afraid of giving very wide ranges when appropriate (which is frequently). The low end of the range should be your actual best guess estimate, what you would allot if you didn't use a range. The high end is a worst case scenario that should reflect the complexity of the task and the potential for unforeseen obstacles. "2-8" hours is a perfectly valid range for a task.

This serves a few purposes.

1.) It trains the client to think about the project in a way that is actually consistent with the process of building software.

2.) It lets the client psychologically attach themselves to the low end, the best case scenario, which can be helpful for getting the estimate approved, but doesn't give them grounds to feel that you missed the estimate when reality sets in and things take longer.

3.) It lets developers create estimates much faster, because they don't have to pin themselves to a single (likely to be over-optimistic) number.


You can only get two of: 1) schedule, 2) quality, 3) quantity. So if your client / manager insists on keeping the schedule you figured out months ago, before you had full information, either quality will have to suffer or you'll have to cut features. There's no other way.


This doesn't necessarily work as well in practice as it sounds like it should.

Faced with the above, a lot of managers will say "screw the quality, we have to ship all the features on time, we can patch later." The problem is that when the manager says that, he's thinking "well, of course all the features will still work, except maybe in a few 1% edge cases." But in practice, to achieve drastic schedule cuts by sacrificing quality, you often end up with code that only works in a few 1% edge cases and fails in 99% of cases. At this stage the manager will get angry and accuse you of trolling or gross negligence.

So I wouldn't actually use the 'pick two' theory; it's too likely to go horribly wrong.


That'd be one horrible manager. I know they exist, but that's a problem I can't fix :).


Up to two. You can also have one, or possibly none.


Yes yes, of course. I forgot the "up to" part :).


Most dev estimates are inevitably expressed in ideal hours. Even assuming no potholes, you need to multiply those by a load factor to get an approximation of real hours. The industry-standard load factor is between 2x and 5x, typically around 3x once a team is up to speed. Therefore management needs to multiply all estimates by 5 at the start of a project, ramping down to 3, and pray, whilst adding some contractual contingency for the potholes.

The corollary of this is that a management KPI can be to reduce load factor down to around 2 - which becomes a concrete measure of their effectiveness at removing impediments. (And yes this means that increasing productivity is a management task and not a developer issue.)
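
A toy Python sketch of that arithmetic (the ramp schedule is as assumed above):

    # Convert ideal hours to real hours with a load factor that ramps
    # down from 5x at project start to 3x once the team is up to speed.
    def real_hours(ideal_hours, load_factor):
        return ideal_hours * load_factor

    estimate = 80  # ideal hours from the dev team
    print(real_hours(estimate, 5))  # project start: 400 real hours
    print(real_hours(estimate, 3))  # up to speed:   240 real hours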


Isn't an estimate wrong practically by definition, since it is an 'approximation'? Or, the other side of the coin: without specified tolerances, all estimates are right, i.e. "This will take 3 weeks +/- 3 years".


Kahneman's work was in calibrated predictions. We can view time estimates as a special domain of calibrated predictions.

Of course, time estimates are uncalibrated. We can fix this by adding some implicit calibration -- I will use 90%, because this is the calibration I have trained myself to feel such that it is 90% accurate.

If I make 100 time estimates (in the form of "Project X is completed at or before date Y") with 90% confidence, and 50% of them are correct, I'm overconfident. If 100% of them are correct, I'm underconfident. Both are bad.

To solve this problem in the real world, I'd bet that most people make wildly underconfident predictions externally, and make wildly overconfident predictions internally.

But yes, you will nearly always be wrong if you make predictions of the form "Project X will be done at unix time Y," and you can nearly always make the prediction "Project X will be done by the year 3000, assuming the organization is still there and has not decided to abandon the project."
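
A toy Python check of that calibration logic (the outcomes here are invented):

    # Score a batch of 90%-confidence predictions of the form
    # "project X is completed at or before date Y".
    # True means the project actually finished by the predicted date.
    outcomes = [True] * 50 + [False] * 50  # invented results
    confidence = 0.90

    hit_rate = sum(outcomes) / len(outcomes)
    if hit_rate < confidence:
        print(f"{hit_rate:.0%} correct at {confidence:.0%}: overconfident")
    elif hit_rate > confidence:
        print(f"{hit_rate:.0%} correct at {confidence:.0%}: underconfident")
    else:
        print("well calibrated")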


Sure, humans suck at estimating. But I think without addressing the purpose(s) of estimating it's easy to rat hole. I think that at the very least we have the following.

1. Estimating work helps determine if the schedule needs to expand or a feature needs to be cut. Sometimes it helps determine a need for more headcount but beware the mythical man month.

2. The process of estimating also unearths project risk. How risky is a particular item? Well, humans also suck at estimating risk so it's hard to say. But at least the risks are stated openly and everyone knows about them.

3. Estimating makes many assumptions explicit. Or at the very least, they're often implied as "large tasks". This may or may not be the same as #2.

I'm not about to make any claims about magical methodologies that will work for every team everywhere, and I don't have any Agile training to sell anyone, but I think those goals are worthy ones. And I don't think the actual estimates matter so much as the act of estimating and talking through the assumptions.

In my own experience the problem with estimates isn't that people are bad at making them but that people are too afraid of getting them wrong. Or too afraid to show someone else that they're unsure of anything because we're all supposed to be "experts", right? Unfortunately those are seen as individual weaknesses instead of group strengths. So people game their estimates and cover insecurity with bravado and then we have articles like this. Harrumph.


When optimistic, make an estimate and increase the time-unit (hour -> day -> week -> month). If you are breaking it down and making somewhat accurate estimates ... still double it.


1) After you increased the time-unit - double the result.

2) If you made a detailed estimate - triple it (not just double).
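
A tongue-in-cheek Python sketch of those rules (the unit ordering is assumed):

    # "Bump the unit, then double (or triple, if the estimate was detailed)".
    UNITS = ["hours", "days", "weeks", "months"]

    def pessimize(amount, unit, detailed=False):
        bumped = UNITS[min(UNITS.index(unit) + 1, len(UNITS) - 1)]
        factor = 3 if detailed else 2
        return amount * factor, bumped

    print(pessimize(2, "hours"))        # -> (4, 'days')
    print(pessimize(2, "weeks", True))  # -> (6, 'months')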


In my experience time-based estimates are pretty much always wildly inaccurate, but people can get pretty good (good enough for project management) using complexity-based estimates. Benchmark a small task whose complexity the group agrees on and go up in orders of magnitude from there. Then look at velocity over time. You get a pretty accurate picture of your throughput in high-, medium-, and low-confidence scenarios (high confidence we can do this much, medium confidence we can do this much more, etc.).
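
A minimal Python sketch of reading those confidence levels off historical velocity (the sprint numbers are invented):

    # Complexity points completed in past sprints, sorted ascending.
    history = sorted([21, 13, 18, 24, 16, 20])

    def percentile(data, p):
        # Nearest-rank percentile of sorted data.
        return data[min(int(len(data) * p), len(data) - 1)]

    backlog_points = 120
    # High confidence assumes a slow sprint; low confidence a fast one.
    for label, p in [("high", 0.1), ("medium", 0.5), ("low", 0.9)]:
        sprints = backlog_points / percentile(history, p)
        print(f"{label}-confidence: about {sprints:.1f} sprints")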


It's not overconfidence; it's the full realization that all time estimates are bullshit, and the consequent disrespect for anyone who demands them.


I think estimation is task-dependent.

For instance, compare a well-documented API from a competing company that you know you can access in a familiar RESTful manner, versus an established player with a SOAP interface that has documentation (which is wrong), no available libraries for the language you're completing the task in, and a support team that wrongly identifies the XML you're sending them as correct. The latter is going to take an unpredictable amount of time. Even doing a vaguely related SOAP task a few months previously hasn't enabled me to predict the duration of the unpredictable nature of SOAP, only that it will take longer.

The biggest difference in duration occurs when teaming up and working in a pair with an experienced developer I've worked with in the past. Even then, it's not predictable how much quicker this will be, or indeed whether there's any guarantee of solving a tricky problem at all. All you can say is that it's likely we'll solve it quicker.

Smaller tasks: Build a CRUD around editing one entity, authentication/authorisation for that and admin area. Estimatable.

More complex tasks that can't be broken down further: Unsure.


> I have vivid memories of that time — my self-image had been wrapped up in being “a good programmer”, and here I was just hideously failing. I lost sleep. I had these little panic attack episodes.

I don't understand these people. If I'm in a situation where I am way above my estimate, I just accept that the estimate was wrong and make sure I make constant progress. It will get done when it will get done.


I tend to avoid giving estimates until things are very tightly nailed down. This is because I know I am wrong in my estimates: wildly, horribly, terrifically wrong.

And I kind of have a problem giving wrong numbers to someone who will then use these numbers for planning. It causes dishonesty and error to propagate throughout the system.


I've run into this problem and I've gotten much better at estimating. Solution? You should schedule your sprints exactly by degree of difficulty (in descending, not ascending, order). During my design phase I focus on what the hardest problems are and always schedule my sprints to do them first. If you've abstracted away your planned functionality well enough you should be able to isolate that problem and go there first, no matter how much other functionality is intertwined with the feature. That way as you go further along in the project it gets easier, not harder, to meet your deadline.


I remember someone experienced saying somewhere:

"Take your estimate, and multiply it with 8"

So, if you estimate two weeks, multiply that by 8 and the estimate becomes 16 weeks: about four months.

I've actually used this rule many times, on big and small projects, and it seems to work. At least it works better than underestimating the amount of work myself and just trying to impress the producer of the project.

Also, some people just want to believe it can be done in those two weeks. Hahaha!


I totally identify with this article.

At my last job our boss would usually double our estimates before giving them to customers. The trick is he wouldn't always double them, so you couldn't count on that when making an estimate. Looking back, it was probably better than any of the methodologies we tried implementing that never caught on.


My estimates are always off the mark...unless there's a "real" deadline; at which point I go into unconscious mode and get whatever needs to be done, done.

Necessity is the mother of invention.

If the deadline is a vague ballpark date, then I almost always miss it, making up excuses with each subsequent delay.


It's all fun and beautiful with 4-hour tasks, until you get a system where you need 1-2 weeks per task just to get into the context. In posts like this I always miss some practical advice on how to break those large tasks into small ones.


That comes with experience. It's nearly impossible to teach because the right way to break large tasks into small ones depends on the task. It can always be done, though, because the act of programming itself involves breaking large things into small things, then smaller things, then smaller, ..., then individual lines of code.


Yes, but then there's also the issue that by the time you've analyzed and broken down a large project enough to accurately estimate it, you've also done half the work of planning the system. This doesn't work so well when estimates are unpaid and project approval is far from guaranteed.


My solution to this has been to work a project in two phases. This is for 6-9 month projects. The first phase is billed hourly, to develop the project requirements and milestones. In this phase, I'd break things down as much as possible into the number of days I thought tasks would take and develop a very detailed plan. The client never saw this plan; it was for me to develop the schedule. I'd also keep notes of any potential "gotchas" that came to mind. Ideally, this detailed plan has things broken down into 1-3 day-sized bites. If I think something will be easy, I use 1 day. If I think probably 1 day, I'd put 2. If I think I might run into trouble, I'd put 3 days.

Now go back and look at the gotcha list. What can be done to eliminate these contingencies? Maybe you need to do some investigating or experimenting right now, before doing the schedule, to eliminate the uncertainty (and get paid for this under the hourly contract). If you can't eliminate the uncertainty, you have to keep it as part of the schedule, as an assumption: "Assumes the X library will be available by <date>". This assumption list is crucial, because if someone else doesn't meet their deadline, it lets you off the hook and you can make a schedule adjustment.

When you present the project schedule, have milestones with deliverables at reasonable intervals, like 4-8 weeks, and take payment for each one. This has lots of advantages:

- if the client scraps the project (often happens in big companies), you've been paid for your work.

- if you miss a deliverable, you will know early that there is a problem with your schedule. It's better to tell a client early that there is a problem and make adjustments rather than try to make it up and act like everything's fine

- if you _really_ miss deliverables, the client has time to can your ass and get someone else to do the work

- when you know you have to actually turn something over to a client to prove that you have met a milestone, it's a powerful motivation to stick to the schedule and not get distracted too much.

So, hourly consulting part first, then fixed part. If a client wouldn't agree to the hourly part to develop the schedule, I probably would pass on the project.


I can explain my bad estimates with a theory:

http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect


tl;dr



