Also be aware of the selection bias when it comes to estimations.
The companies A, B and C all put in estimates for a product based on similar premises.
Company C gets selected because their estimate is the lowest.
This could be because company C is the best and can do the work quicker but it is also possible that company C has just made a low estimate pretty much by chance.
Estimates that are wrong (lower than the actual cost) are more likely to be selected, so it always looks like estimates are bad even if all the estimates submitted actually cluster around the true value.
No, I mean, the process of inviting tenders or bids is distinct from estimating.
Bidders will inevitably perform some sort of estimate, even if it's only a wild guess. But even the most rigorous internal estimation process may then be passed to management or sales staff who will use it only as a starting point for the bid.
The estimate didn't change; instead it was an input into the bid.
And as the grandparent comment demonstrates, buyers often pick the lowest bidder.
The problem is that business documents like a bid, an estimate, a goal, a plan and so on all share similar surface features. So it's easy to mix them up.
The classic is "that estimate is too high. Change the estimate".
Conceptually, an estimate is an immutable object. What you're really being asked for is a new estimate that is closer to a goal.
I recently read Software Estimation (Steve McConnell, author of Code Complete). My main takeaway was to distrust developer intuition and instead build estimates from data. One example is to look at the duration of past projects of "similar size". Another is to count the number of distinct "features" or "components" and build an estimate from those. Both still require some qualitative analysis, and that will be the weak point of any estimate. The goal is to minimize inputs from developer intuition and use as much quantitative analysis as possible when estimating.
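A minimal sketch of that counting approach, with made-up component types and historical averages:

```python
# A minimal sketch of count-based estimation; component types and the
# historical day figures are made up for illustration.
historical_days_per_component = {
    "screen":       4.5,   # average build time from past projects
    "report":       2.0,
    "api_endpoint": 1.5,
}

new_project_counts = {"screen": 12, "report": 5, "api_endpoint": 20}

estimate_days = sum(
    count * historical_days_per_component[kind]
    for kind, count in new_project_counts.items()
)
print(f"Estimated effort: {estimate_days:.0f} developer-days")  # 94 here
```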
My main takeaway from McConnell's book was not to give an estimate as a single point number (e.g. "it will be done in 5 weeks") but rather as a range (e.g. "5-8 weeks"), because a single number gives the impression that it is the target date when it is not. But unfortunately most of the time all I get is blank stares when I give estimates as a range, and the question "why don't you just give me the time it takes to finish?" :(. I wish more people who do project management would read at least the first two parts of McConnell's book on software estimation.
I ran into the same problem, and eventually the only workable solution was to use ranges within the software development group, but to only communicate the high end of the range to the project management group and executive team. Most of the time our projects came in "under budget" this way, which seems self-serving, and sometimes projects we thought we could complete on-time were shelved as being too-expensive, but whenever we tried to give those groups better information (eg: the range) they always went straight for the low-end and gave that out to marketing and other groups as a committed deadline date.
I don't think Steve McConnell was suggesting that developer intuition was not relevant. In some of the formulas he suggests for estimation he includes a factor for how accurate the team was in estimating previous projects. Ultimately each developer will need to estimate individual features or other small units.
The other big take away from that book is that estimates should not be a single number but rather a range. If the client insists on a fixed price bid then you start with the higher end of your estimated range and add in something for the risk of a fixed price bid.
One important lesson to take from McConnell's book is that you should not rely on a single sort of estimation. Serious estimators will estimate in distinct ways and then compare the outcomes. If they are widely divergent, it's time to investigate why.
This is actually a solved problem in some industries using a technique called "3 point analysis". I've no idea why it hasn't really caught on among developers; it's very useful. Essentially it goes like this:
1. Estimate the very best possible outcome. How long would something take if everything went to plan?
2. Estimate the very worst possible outcome. If everything that can go wrong does, how long will the task take?
3. Estimate the most likely outcome. Knowing what you do about the client, the task, the tech, etc., how long is it likely to take?
(Point 3 is the normal way of estimating.)
The advantage of 3 point analysis is that you can use the difference between the times to estimate risk. If the best and worst outcomes are similar, then there's little risk of things going horribly wrong. If the best and the likely times are similar but the worst is much longer then there's something that can completely screw up - identify what that is and defend against it. If the likely and worst times are similar but the best time is much shorter, identify why that's the case and try to make sure it happens. And so on.
Better yet, by taking the aggregate best times for a set of requirements and comparing it against the worst, you can get a good view of the overall risk of a project.
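A rough sketch of that roll-up in Python (the task names and hour figures are invented):

```python
# A rough sketch of rolling up three-point estimates; the task names and
# hour figures are invented.
tasks = {
    # (best, likely, worst) in hours
    "auth flow":     (8, 16, 40),
    "report export": (4, 6, 8),
    "API migration": (16, 24, 80),
}

for name, (best, likely, worst) in tasks.items():
    # A wide best-to-worst spread flags a risky task worth investigating.
    print(f"{name}: likely {likely}h, risk spread {worst - best}h")

total_best = sum(best for best, _, _ in tasks.values())
total_worst = sum(worst for _, _, worst in tasks.values())
print(f"Project range: {total_best}h if everything goes right, "
      f"{total_worst}h if everything goes wrong")
```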
I kid (sorta). It misses the core issue: why are developers' estimates so terrible? Three terrible estimates don't make a "good" one.
The post hits on some of the points, but IMHO misses a major one. Estimations are based on experience, yet developers are constantly building brand new types of products. If all you built was "blog engines" over and over again, you would get damned good at estimating them. The trick in software is as soon as something needs to be reused over and over again, it is abstracted, put in a library, built into a framework, so you can make your product different (read: build something entirely new) -- and EVERYONE sucks at estimating the cost of building things entirely new.
Think of the (entirely new in 1962) Lunar Module... they thought it would cost at most 350 million. It cost 2.2 billion. Those were very smart people with a major stake in getting the initial number right, but it was just a crazy ass guess because they had NEVER BUILT ONE BEFORE.
You are absolutely right. It's impossible to estimate how long it will take to complete something you have never done before in your life. Unless the new code being written is absolutely trivial, estimates are going to be guesses, at best.
The only half-decent solution I found so far is iterative development. Take a feature and break it down into big rocks. Further break down the big rocks into little chunks - each of which you estimate would take you less than a day to complete.
Only after completing your first "big rock" in this way can you use your new data to gauge your velocity and give an estimate for completing the rest of the work that's worth the paper it's written on.
> Take a feature and break it down into big rocks. Further break down the big rocks into little chunks - each of which you estimate would take you less than a day to complete.
Experimental psychologists have found that decomposing tasks increases estimate accuracy, regardless of how or why the task is decomposed. It's called the "unpacking effect".
Unpacking works great when it's clear what the tasks are going to be. In my experience the bulk of the work does not go into the project or tool itself, i.e. completing the tasks, but rather into resolving issues.
I usually estimate that writing the code is anywhere from 10-20% of completing the project.
Some research shows that 1 kloc of code contains between 15 and 50 defects, and that code review will pick up only 70-90% of them. This still leaves a lot of bugs. A while ago I wrote up some meta research on estimating code review: http://specialbrands.net/2011/10/24/code-review-in-scrum-xp-...
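To put rough numbers on that, a back-of-the-envelope sketch for a hypothetical 10 kloc codebase (the codebase size is the only number added):

```python
# Back-of-the-envelope residual-defect range using the figures above;
# the 10 kloc codebase size is an assumption for illustration.
kloc = 10
injected_low, injected_high = 15 * kloc, 50 * kloc   # 150 to 500 defects written
review_catch_low, review_catch_high = 0.70, 0.90     # fraction caught in review

remaining_best = injected_low * (1 - review_catch_high)    # 15 escape review
remaining_worst = injected_high * (1 - review_catch_low)   # 150 escape review
print(f"Defects escaping review: roughly {remaining_best:.0f} to {remaining_worst:.0f}")
```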
> I usually estimate that writing the code is anywhere from 10-20% of completing the project.
McConnell's book has a really excellent table of items that are left out of estimates. Nuts-and-bolts things like "API documentation", "deployment scripts", "status emails" and so on.
> This still leaves a lot of bugs.
Right. The classical software engineering theory is to build multiple quality gates and to study them for defects yielded. I recall that inspection stomps everything else; automatic testing came second.
The problem of time-to-fix is subtly different from time-to-release, though. Software with known defects is regularly in use.
Decomposing as far as you can is a good practice ... you'll still be way off, because you think you know all the problems that you're going to need to solve. So about halfway through the project your list is going to be longer and management is going to be freaking out.
The elephant in the room: Debugging. I don't know how many times I've been asked, "When are you going to find that bug?" Sometimes they are very easy. Sometimes they are very hard. For the really hard bugs, you need to be clever, scientific, determined, methodical and LUCKY. I don't know how to know when I'm going to run into some luck.
> you'll still be way off, because you think you know all the problems that you're going to need to solve
Absolutely; but I think it's reasonable to assume that a more accurate estimate is a more valuable estimate. You can also track estimate error over time and use that to adjust future estimates.
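A minimal sketch of that adjustment, assuming you keep a log of past estimated-vs-actual figures (the numbers here are invented):

```python
# A minimal sketch of calibrating new estimates against historical error;
# the (estimated, actual) day pairs are invented.
history = [(10, 14), (5, 9), (20, 26), (8, 10)]

# Average ratio of actual effort to estimated effort on past work.
calibration = sum(actual / estimated for estimated, actual in history) / len(history)

raw_estimate_days = 12
print(f"Calibration factor {calibration:.2f}; "
      f"adjusted estimate {raw_estimate_days * calibration:.1f} days")
```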
> The elephant in the room: Debugging. I don't know how many times I've been asked, "When are you going to find that bug?"
The problem is that in any sample of n=1, variability is huge. It's like saying "I am going to pick one person out of the global population. How tall is he or she?"
Well of course I can't know that. The sample is too small. But if instead I am told "we are going to pick ten thousand people at random from the global population; how tall are they in total?" then I can use previously-gathered statistics to give a range of estimates that will probably be accurate.
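A quick simulation illustrates the point; the height distribution parameters are rough guesses, not real data:

```python
import random
import statistics

# Illustration: the relative spread of an aggregate shrinks roughly as 1/sqrt(n).
# The height model (mean 167 cm, sd 10 cm) is a rough guess for illustration.
def total_height(n):
    return sum(random.gauss(167, 10) for _ in range(n))

one_person = [total_height(1) for _ in range(500)]
crowd = [total_height(10_000) for _ in range(500)]

print(statistics.stdev(one_person) / statistics.mean(one_person))  # ~6% of the mean
print(statistics.stdev(crowd) / statistics.mean(crowd))            # ~0.06% of the mean
```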
> Estimations are based on experience, yet developers are constantly building brand new types of products. If all you built was "blog engines" over and over again, you would get damned good at estimating them. The trick in software is as soon as something needs to be reused over and over again, it is abstracted, put in a library, built into a framework, so you can make your product different (read: build something entirely new) -- and EVERYONE sucks at estimating the cost of building things entirely new.
That's it. Most, actually all, of the literature on software estimation I've seen ignores this simple fact. It's feasible to estimate cost and time for building houses, even skyscrapers, if you do that all the time. But how long does it take to climb a never-climbed mountain?
> But how long does it take to climb a never-climbed mountain?
This estimate could be derived in a number of ways. By analogy to other mountains, by creating a statistical model of climbing time based on parameters like gradient and climber skill, or by aggregating expert judgements of the mountain's likely climbing time.
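A toy sketch of the analogy approach, with entirely invented mountains and figures:

```python
# A toy sketch of estimation by analogy: average the most similar past climbs.
# All mountains and figures here are invented.
past_climbs = [
    # (elevation gain in metres, technical grade 1-5, days taken)
    (1200, 2, 1.0),
    (2500, 3, 3.0),
    (3100, 4, 6.0),
    (2800, 3, 4.0),
]

def estimate_days(gain, grade, k=2):
    # Rank past climbs by a crude similarity measure, then average the k closest.
    ranked = sorted(past_climbs,
                    key=lambda c: abs(c[0] - gain) + 500 * abs(c[1] - grade))
    return sum(days for _, _, days in ranked[:k]) / k

print(estimate_days(gain=2700, grade=3))   # averages the two most similar climbs
```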
A better example of a difficult-to-estimate task was the Manhattan Project. How long would it take to build the first atomic weapon? That was a truly novel problem, insofar as it would require fundamental physics research to solve.
But almost none of our work is of that nature. To be sure, there are unexpected requirements and complexities, but we are almost never performing fundamental research to achieve our ends.
> but we are almost never performing fundamental research to achieve our ends
"Fundamental research" is a matter of perspective. If a developer does not know Rails, he must perform fundamental research into building a new blog engine. While the unknowns, and variability, are significantly smaller, they are still unknown until he does the research. If she were to trod off the beaten path, even slightly, he could run into a community known, but developer unknown, roadblock/design flaw that didn't show up in basic research. (ex. A Rails blog engine vs a multi-threaded Rails blog engine)
While all estimates are relative to the context, I disagree that fundamental research is. Fundamental research is about discovering or inventing something nobody has ever done before.
What will be discovered or invented is an unknown; it may be unknown that it is unknown. Research is basically an open-ended search process. There's no foreseeable end-state to base an estimate on -- though there are terminating conditions we can use to end research (ran out of money, made an amazing discovery that will be passed to engineers to commercialise, gathered sufficient data to write a publishable paper, etc).
Take the linked post's example of AIDS research. How long will it take to cure AIDS? There is simply no way to know. We can produce estimates, but because it is a research problem, any such estimate would have to have wildly broad ranges to be accurate ("somewhere between 10 and 100 years").
In cases of pure research, you can't estimate the end-goal. Until it's over you may not even know that there were new end-goals you were unaware of (eg. 3M's Post It notes).
And even then, large capital projects are routinely wildly off estimates. I think the delta in "how far off we were" just tends to be larger more frequently.
In the Industrial Megaprojects book I quoted elsewhere, the single best predictor of every measure of project performance was "Front End Loading" -- how completely the project has been studied and specified before it is funded. There are at least three FEL stages. Companies that muck up FEL-1 are doomed to failure, companies that muck up FEL-2 or FEL-3 are doomed to poor returns.
You've gotten to the first step, which is to provide 3 estimates. The next step is to break estimates into smaller parts, estimate them individually, then roll it back up to build a pseudo-statistical profile of probable outcomes.
This is the PERT 3-point estimation technique. It's taught in every project management course in the world.
It so happens that almost nobody uses it in practice, because it is a pain in the rear end to do in Excel. It also so happens that I'm working on a tool to make it easy[1].
All I have right now is a landing page, proof-of-concept code and some articles. Oh, and a pile of research papers and books. It's turned out to be a deeper rabbit hole than I expected.
Build your tool as a plug-in for existing popular project management tools like Rally. No one will use yours if they have to enter user stories and tasks in two places.
I would love a tool that could analyze best / middle / worst case estimates for tasks plus account for predecessor relationships between user stories and generate a probability distribution chart of possible release dates.
I'm aiming to make it interoperable with other tools as a later step (first I need to launch the basic product). But it won't necessarily be a software project tool that's first off the rank.
What if the API you are planning on using can't quite do what is needed, but you only find out 2 weeks into the project? That's a worst case, but it's hard to foresee these things.
that "solution" is, basically, ridiculous and not adopted for good reason.
A quote from Michael O'Church I quite like:
> Let's say that you have 20 tasks. Each involves rolling a 10-sided die. If it's a 1 through 8, wait that number of minutes. If it's a 9, wait 15 minutes. If it's a 10, wait an hour.

> How long is this string of tasks going to take? Summing the median time expectancy, we get a sum of 110 minutes, because the median time for a task is 5.5 minutes. The actual expected time to completion is 222 minutes, with 5+ hours not being unreasonable if one rolls a lot of 9's and 10's.

> This is an obvious example where summing the median expected time for the tasks is ridiculous, but it's exactly what people do when they compute time estimates, even though the reality on the field is that the time-cost distribution has a lot more weight on the right. (That is, it's more common for a 6-month project to take 8 months than 4. In statistics-wonk terms, the distribution is log-normal.)

> Software estimates are generally computed (implicitly) by summing the good-case (25th to 50th percentile) times-to-completion, assuming perfect parallelism with no communication overhead, and with a tendency for unexpected tasks, undocumented responsibilities, and bugs to be overlooked outright.
[1]
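For the curious, a quick Monte Carlo run of that dice example bears the numbers out (sketch only):

```python
import random

# Quick Monte Carlo check of the dice example: 20 tasks, each one d10 roll.
def task_minutes():
    roll = random.randint(1, 10)
    if roll <= 8:
        return roll
    return 15 if roll == 9 else 60

runs = [sum(task_minutes() for _ in range(20)) for _ in range(100_000)]

print(sum(runs) / len(runs))                          # ~222 minutes on average
print(sum(1 for r in runs if r >= 300) / len(runs))   # a real chance of 5+ hours
```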
The point is, you are unable to know the worst possible outcome. Since it will have huge input into your average, your average is therefore useless. Perhaps the project isn't specced fully enough and, during implementation, a dev discovers an ugly or impossible feature interaction. You now have to either cut features or rearchitect another feature, ie the dice come up 10 in Michael's analogy.
An example from my background: I work on ml code. A learning algorithm was written in parallel with a particular loss function (essentially, this tells you if your estimates are good or bad). The person writing the spec assumed that a different loss function could be dropped in later. Indeed, this is how the math works on paper. But loss function number 2 required a different data layout between the machines. The vision was writing a new loss function should be one or two lines of code dropped into the middle of a hot loop; the reality was it took almost a month of work.
All estimates, in theory, have large and unknowable corner cases. Your problem may require the invention of a new algorithm. Your office may burn to the ground. An asteroid the size of Rhode Island could strike the planet. And so on.
All estimates are uncertain because there is no knowledge of the future. This is true of every profession, of every project.
When we make estimates we have to accept that they are uncertain and that the degree of uncertainty is governed partly by the problem domain.
As part of my research on estimations I've been working through Edward Merrow's book Industrial Megaprojects. It's based in part on a large database of megaprojects that his firm consulted on.
He considers Monte Carlo-based methods at best pointless, at worst harmful to outcomes (p. 324):

> The average megaproject cost estimate when Monte Carlo simulation was used overran by 21 percent, with a standard deviation of 26 percent and with a sharp right skew. When Monte Carlo simulation was not used, teams were actually more sensitive to basic risks as they set contingency.

> ... The use of Monte Carlo simulation has no relationship to success of megaprojects or any of our other five figures of merit of projects: cost growth, cost competitiveness, schedule slippage, schedule competitiveness or production attainment.
Many developers don't believe that management is going to listen to their estimates, so they either avoid giving an estimate or they tell management what they want to hear.
A friend of mine worked at a place that was 'trying' agile (at the behest of top management) and had them entering estimates for tickets and recording time for them.
My friend heard from his boss that his "estimates were too high", and when he found out that the lead developer wasn't making estimates OR putting time in, he stopped, and started looking for his next job.
A lot of the poor estimates come from not taking time to break out the features and subtasks of the project. Planning. Because clients never want to pay for planning.
We spend about 35% of the budget on planning and estimates. And they're usually fairly close, for projects where we've done similar things in the past.
When I worked at a smaller dev / consulting shop, we'd be lucky if we got 5-10% of the budget to spend on planning. Short sighted to say the least. Projects succeed or fail based on planning.
Of course the developer is estimating because there are requirements missing. Hence the word "estimate". It's more that the client doesn't understand the word estimate.
Every time there's a discussion on estimation the Planning Fallacy is brought up as a kind of trump card.
But it's not. First, psychologists have done research on how to ameliorate it. See the "unpacking effect" paper I linked elsewhere. Kahneman and others have also pushed using "reference classes". In the estimation literature this is called "estimation by analogy" and it is a well-established technique.
Which reveals the next thing about the Planning Fallacy: it comes from the psychological literature. There are at least two other bodies of literature which need to be dealt with before we give up entirely. The first is the ordinary project estimation literature; most of the advanced stuff is by people from the Operations Research field. The second is work done by statisticians on what they call forecasting, which is essentially parametric estimation of time-series data.
These three bodies of literature seem to have evolved largely in isolation. I haven't seen OR papers talking about the psych research, I haven't seen the psychologists citing the International Journal of Forecasting, and so on.
You can make the argument that this story reveals important features about taking estimation seriously.
1. The author gave a point estimate, instead of a range.
2. The author didn't decompose the task. They took the highest level view and performed a back-of-the-envelope calculation. A closer reading of a differently-scaled map would have revealed many of these delays, giving at least one or two orders-of-magnitude changes in the initial estimate.
3. The author did not seek out historical data or models for this kind of project. Hiking and bushwalking are well-known. There are tables of travel time which can be used to establish a better idea of the outcome. Experienced walkers could have been sought out for their expert judgement of the initial estimate.
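(On point 3: even a crude rule of thumb like Naismith's rule -- roughly an hour per 5 km of distance plus an hour per 600 m of ascent -- beats a straight-line reading of the map. A toy version, with a hypothetical leg of the walk:)

```python
# Toy walking-time model in the spirit of Naismith's rule:
# about an hour per 5 km of distance plus an hour per 600 m of ascent.
def naismith_hours(distance_km, ascent_m):
    return distance_km / 5 + ascent_m / 600

# Hypothetical leg: 25 km with 1,200 m of climbing.
print(naismith_hours(25, 1200))   # 7 hours, before any allowance for terrain or rests
```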
It is not as good a parable as people make it out to be. The problem was not an unknown or unknowable problem domain. It was due to ignorance of the basics of estimating.
I don't know; in software projects the country is not just unknown but potentially unknowable; no-one may have walked that kind of terrain before. Then there are no experts and it's useless to predict. And diving deeper into the problem may involve actually solving it.
That's my issue with software planning. I can often DO the damn project in the time it takes to estimate doing it. What responsible course of action can I take then? Investigate but don't save the code, give the estimate, then go back and type it in? Blow smoke, give an estimate for the deep dive but call it the project?
All estimates are wrong, what matters is improving their business value by making them more accurate.
That some estimates will be less accurate due to uncertainty is not by itself a reason to totally abandon estimating.
The generalised case of your argument is that estimation is imperfect in all cases, therefore, we must abandon estimation. This is known as the Nirvana Fallacy: "we have a partial, imperfect solution. Because it is not complete and perfect, it is worthless".
Even modest improvements in estimate accuracy can have enormous value.
By the way, estimating is not planning. An estimate is an estimate. A plan is a plan. They are different things.
I wasn't talking about uncertainty at all, so I'm not sure why that comment. And no, it's not really cool to pretend I said something else so you can debunk that.
I'll repeat myself then: the task of diving into a problem to estimate its time is, for many problems, approximately equal to the time to solve the problem. And I honestly have a hard time figuring out a responsible approach in those cases.
> Wasn't talking about uncertainty at all, so I'm not sure why that comment.
Because you wrote:
> in software projects the country is not just unknown but potentially unknowable
Which is uncertainty.
> I'll repeat myself then: the task of diving into a problem to estimate its time is, for many problems, approximately equal to the time to solve the problem.
Non points-scoring questions: does that happen to you often? Can you give an example? How did you know that the time to estimate would be equivalent to time to implement?
I agree that estimates themselves have a cost/benefit ratio and that sometimes it is going to be negative.
I write in C++. Estimating can require prototyping classes and methods; this IS development, often all that is needed to deploy.
E.g. I'm being asked to estimate the time for an audio-monitoring feature to play a tone when someone speaks yet their voice is muted in the space (Sococo Teamspace has spaces where people work together). The voice-level feature is already in place; it's being used by the mic-selection dialog. So the whole estimate involves showing the GUI engineer the API he's already using.
I have to fill out forms online, define the 'feature', time-box development and keep this record up-to-date as the work progresses.
OR I just doorbell Tom and say "Tom, use the same API as the mic-selection dialog". In fact, it's taken longer to type this message than the fake work I have to do for this.
So that's a degenerate case. Other cases involve changing timers for idle connection probing (done before the request was finished being uttered); investigating silent-participant overhead (which I did during the sprint planning meeting using netmon on the idle participants in that meeting); aggregate audio packets via our media node to reduce router overhead for bursts of UDP.
That last one is illustrative. To make the estimate I reviewed the media send path for the right place to put in the code (half an hour). Then I decided the transport layer was the ideal place to aggregate packets using a Nagle timer. I identified 5 cases (idle; aggregate packet under construction; oversize packet; normal packet to aggregate; normal packet that blows the aggregate limit). The constructors for packets need changes to allow header extensions to identify the aggregation boundary for unpacking.
That took a couple of hours. Plus the time to enter the tickets and put in the estimates.
The work will take a few minutes, since I have identified everything that will need to be done. The 'estimation' process has dominated the project time. I can be done with the project before the project manager even notices the tickets I've entered into the database!
So the whole estimation/recording process is some silly circle-jerk to make management feel involved. It wasted my time, delayed the project and kept me from doing more useful tasks that our customers could really benefit from.
Btw sorry for my snarky tone above; I was in the middle of this sorry process when I resorted to reading HN/commenting to let off steam.
Wow. I don't take it personally. I gather that your internal process requires all changes to have estimates, regardless of task.
Estimates can be performed multiple times. Conceptually what you've done here is performed 3/4s of the total work (study the change, identify change points, perform basic design) before proceeding to estimate.
You've come across the Cone of Uncertainty in miniature: all that extra work substantially reduced uncertainty and so the estimate range was tiny.
But because of the size of the task, you have a clearly pathological case where estimation had a net negative value. This is a good case of where proceeding directly to the work would have been more effective.
I think one thing we don't do enough of is prototyping. When we've built something before, we can usually estimate how long it's going to take this time if the differences are not huge and qualitative. But if we're building something new, or doing something that we've never done before, just thinking and guessing is a terrible way to estimate. We should learn to take the time to try things to see how difficult they are.
That's because in most cases if you build a prototype that works your boss/client doesn't want you to waste time making a better version, they just want the one that works now and doesn't require a hundred more (billable) hours from you.
The classic rule of thumb is to double the estimate and bump up the unit of measure. So an estimate of 1 hour becomes 2 days; 1 day becomes 2 weeks, etc.
I've run across this topic on 3 occasions in the last week in various forms: online, with a team, and on a LinkedIn board. Wow, must be something in the air.
So here's the short version:
1) Agile relationships with customers boil down to Time and Materials. You're there working sprint-by-sprint doing stuff. Each sprint the customer and the team agree on what it can do. The team decides, but the customer can always fire them. This isn't estimation, this is just how an Agile engagement is supposed to work.
2) Most Agile teams split estimating into two parts: how difficult it is to deliver, and when it can be done. It's important to understand that by separating these concepts, you're not pressured to make time commitments when spitballing story difficulty. That's a good thing. Decouple those concepts and leave them decoupled.
3) As a trailing indicator, you should use past team performance as an estimate of how the big picture is going to play out. [insert long discussion here about the various ways to do that]
4) None of this is business or contract management. That's another can of worms. Yes, it all gets mixed up sometimes, but there are ways to keep visibility at the right level to let all the players work effectively.
5) None of this will make-up for working in a crappy environment. Sucky situations still suck.
Estimating can be a pain in the ass, but it doesn't have to be. I think we've reached the point where most of the pain is gone if you set the environment up the correct way. (fingers crossed)
Shameless plug: I've got a no-nonsense Agile Tune-Up email series in case anybody's interested. It covers estimation (along with many other Agile team topics) http://bit.ly/15sz0Pl
ADD: As you can tell from the many comments, this is also a topic that has been done to death. There are more methods out there than you can shake a stick at. (Hence my reluctance to dive in with "This is the perfect way to estimate!"). But the key takeaway is this: lots of simple, quick estimates that converge over time beat any kind of complex model done only once. If you only get one thing, that's the thing to understand. We keep trying to "fix" this by creating models, when the real answer is that over time the team learns to estimate each project as a separate entity. The more engaged it is with the problem and the farther along it is, the better the estimates get. It's a funnel. Don't spend a lot of effort trying to polish up the first part of the funnel: rather, make a commitment to using empirical knowledge and re-estimating to provide better and better estimates as you go along. There's a lot more value added to everybody's job that way.
Hell, I don't care if you use KLOC, some kind of FPA, phases of the moon, or your lucky astrology mood watch to estimate, as long as you continue to re-estimate as you go along and your estimates quickly converge on the actual result.
> It's important to understand that by separating these concepts, you're not pressured to make time commitments when spitballing story difficulty. That's a good thing. Decouple those concepts and leave them decoupled.
I was taught by my software engineering professor to estimate software projects by size first and then to transform that into effort, then from effort into cost/schedule. He emphasised, like you do, that they need to be treated separately.
In his day size was projected in kSLOC or function points. These days you'd use story points. It's the same principle, in my opinion.
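A bare-bones sketch of that size -> effort -> schedule chain; every calibration number here is invented:

```python
# A bare-bones sketch of size -> effort -> schedule; every calibration
# number here is invented.
story_points = 120          # size of the backlog
points_per_dev_week = 6     # from this team's recent history
team_size = 4

effort_dev_weeks = story_points / points_per_dev_week   # 20 developer-weeks
schedule_weeks = effort_dev_weeks / team_size            # ~5 calendar weeks, ignoring overhead
cost = effort_dev_weeks * 40 * 90                        # at a notional $90/hour

print(effort_dev_weeks, schedule_weeks, cost)
```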
That technique tends to break down when you have multiple agile teams working through a long, complex dependency chain to deliver a single project. It's also problematic when your customer has to make fixed schedule commitments to their customers based on your work.
Complex programs and dependencies are handled the way complex projects and dependencies are handled: visibility, simplification, and just-in-time architectural support. Works at the team level, works at the program level.
Fixed calendar dates are also not an issue. Remember, the goal is to separate delivery from scheduling. There's no magic fix to make you automatically hit dates, but you can generate real numbers on what it takes to meet arbitrary calendar dates if that's your situation. You should know, for instance, that 2 more teams need to be spun up within the next week if your group is going to hit the date.
It's a myth that Agile team techniques don't scale out. This was true several years ago but a lot has changed since then. A better way of describing the situation is that there are multiple models for scaling out Agile teams and estimation, many of which have years of real-world traction. We don't have the same number of data points as we do with, say, how well stand-ups map to the average team, but we have data and we have demonstrable traction on the problem. All of the normal project and program management tools are available to Agile teams and programs.
For a discussion of how simple systems can scale out into complex projects, check out my 15-minute video. (Sorry for the additional plug, but it is relevant here) https://vimeo.com/57146799 Also for a review of the various Agile Program management systems out there, https://vimeo.com/64452664