How do we estimate?

This content is syndicated from George Dinwiddie's blog by George Dinwiddie.

There have been some web posts and Twitter comments lately that suggest some people have a very narrow view of which techniques constitute an estimate. I take a larger view: any projection of human work into the future is necessarily an approximation, and therefore an estimate.

I often tell people that the abbreviation of “estimate” is “guess.” I do this to remind people that they’re just estimates, not data. When observations and estimates disagree, you’d be prudent to trust the observations. When you don’t yet have any confirming or disproving observations, you should think about how much trust you put into the estimate. And think about how much risk you have if the estimate does not predict reality.

This does not mean, however, that you have to estimate by guessing. There are lots of ways to make an estimate more trustworthy.

Using more people to estimate independently is one common technique, and it provides a reasonableness check on the result. Wideband Delphi techniques go further by re-estimating until the predictions converge (or stalemate). People have widely adapted James Grenning’s “planning poker” to perform this procedure. In theory, having multiple independent estimates misses fewer important points and gives us a more trustworthy result.

In practice, the various estimates are often less independent than we think. A group that works closely together can often guess what the others are thinking about the kind of work they commonly do. In addition, some participants often telegraph their estimates before others have decided, spoiling the independence. A further problem is that variations in skills and abilities give some people an advantage in estimating work aligned to their strengths, yet the estimates of those less familiar with the work are often given equal weight, skewing the results. This is especially true when estimating things that have been broken down into small amounts of work.
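For what it’s worth, the converge-or-stalemate loop at the heart of Wideband Delphi and planning poker is easy to sketch. Here is a minimal illustration in Python; the estimator behaviours, the spread limit, and the round limit are all made-up assumptions, not a prescription from either practice:

```python
# Minimal sketch of the converge-or-stalemate loop behind Wideband Delphi
# and planning poker. The estimators, spread limit, and round limit are
# illustrative assumptions, not part of any defined practice.

def estimation_rounds(estimators, item, max_rounds=3, spread_limit=1):
    """Collect independent estimates; re-estimate until they converge or stall."""
    previous = None
    for _ in range(max_rounds):
        # Each estimator produces a value; after round one they also see
        # the previous round's spread, which is what drives convergence.
        estimates = [estimate(item, previous) for estimate in estimators]
        if max(estimates) - min(estimates) <= spread_limit:
            return estimates, "converged"
        previous = estimates
    return estimates, "stalemate"

# Hypothetical estimators with different starting biases.
def optimist(item, previous):
    return 2 if previous is None else min(previous) + 1

def pessimist(item, previous):
    return 8 if previous is None else max(previous) - 2

def pragmatist(item, previous):
    return 5

print(estimation_rounds([optimist, pessimist, pragmatist], "checkout story"))
```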

Estimating relative to other work is easier for people, and therefore more reliable than estimating in absolute terms. I can look at two similar rocks and guess which one is heavier, or if they’re about the same, without knowing what either one weighs. This is the genesis of “story points.” Once we’ve assigned a value to one piece of work, then we can estimate others as multiples or fractions of that reference. Using affinity grouping, we can gather together all the work items that seem about the same size.
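To make the relative idea concrete, here is a small sketch of sizing against one reference story and then affinity-grouping the results; the stories and the team’s relative judgements are invented for illustration:

```python
# Sketch of relative sizing and affinity grouping. The stories and the
# relative judgements are invented for illustration, not real data.
from collections import defaultdict

# Each story is judged as a multiple (or fraction) of the reference story,
# without claiming to know how long any of them will actually take.
relative_to_reference = {
    "login form (reference)": 1.0,
    "password reset": 1.0,
    "search results page": 2.0,
    "payment integration": 4.0,
    "fix typo in footer": 0.5,
}

# Affinity grouping: gather the work items that seem about the same size.
groups = defaultdict(list)
for story, multiple in relative_to_reference.items():
    groups[multiple].append(story)

for multiple in sorted(groups):
    print(f"about {multiple}x the reference: {', '.join(groups[multiple])}")
```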

Unfortunately, we often have a harder time seeing the size of development project work than we do of rocks. Using the rock metaphor, we might be trying to compare a chunk of talc with a piece of uranium ore. Apparent size is sometimes deceiving. People also have a tendency to hold onto absolute references. They want their story points to be comparable from team to team, or from year to year. They want to adjust their estimates after the fact so that items that took about the same amount of time are given similar values. “We estimated that as a 2 but it turned out to be a 5.” They try to fix the story points to an absolute time or work reference, and in the process they make them less trustworthy by undermining the relative comparison that made them useful in the first place.

Estimating based on recent history is an excellent way to improve the reliability of estimates, especially for the short term. The XP practice of Yesterday’s Weather is one example of this. “If we completed 24 story points last iteration, we’ll probably complete about 24 story points this iteration.” Bob Payne and I took a look at some data we had from teams with whom we’d worked and found that we could generally do as well, or better, by just counting the stories instead of estimating them in points. In other words, saying “If we completed 8 stories last iteration, we’ll probably complete about 8 stories this iteration” had about the same predictive power as using story points, and was a lot quicker to calculate. This was true even when the story estimates varied by about an order of magnitude. Others, such as Vasco Duarte, have noticed the same phenomenon. Taking the story points out of the equation seems to remove some of the noise in the data, and certainly removes some of the effort required. If you want to get better, use what I call the Abbreviated Fibonacci Series, which has the values “1” and “too big.” Split the stories considered too big. You’ll accrue benefits beyond better estimates.
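Here is a rough sketch of what that comparison looks like; the iteration history below is made up for illustration, not the data mentioned above:

```python
# Yesterday's Weather, sketched two ways from made-up iteration history.
# The numbers are illustrative, not data from any real team.

history = [
    {"stories": 8, "points": 24},
    {"stories": 7, "points": 26},
    {"stories": 9, "points": 22},
]

# Forecast by story points: expect roughly what was completed recently.
points_forecast = sum(it["points"] for it in history) / len(history)

# Forecast by simply counting stories, with no point estimates at all.
count_forecast = sum(it["stories"] for it in history) / len(history)

print(f"next iteration, by points: about {points_forecast:.0f} points")
print(f"next iteration, by count:  about {count_forecast:.0f} stories")
```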

If velocity gives us a frequency measurement in stories per iteration, then its inverse is cycle time. Cycle time is the time it takes to complete one story, equivalent to a wavelength measurement. Once a team has some track record, you can generally expect these numbers to settle down into something fairly predictable. Because these estimates are based on data, many people are tempted to treat them as data themselves. Remember, though, the disclaimer of investment managers: “Past performance is no guarantee of future results.” Even if the team has a consistent track record, there may be a black swan or three right around the corner.
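The relationship itself is just arithmetic. A quick sketch, with an assumed iteration length and story count:

```python
# Velocity and cycle time as two views of the same throughput.
# The iteration length and story count are assumptions for illustration.

iteration_days = 10        # working days in one iteration
stories_completed = 8      # observed velocity, counted in stories

velocity = stories_completed / iteration_days   # stories per day (frequency)
cycle_time = 1 / velocity                       # days per story (wavelength)

print(f"velocity:   {velocity:.2f} stories per day")
print(f"cycle time: {cycle_time:.2f} days per story")
```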

Of course, all things are not always equal. Organizations have a distressing tendency to change the makeup of teams, which changes the rate at which the team accomplishes work. The work itself may change, and so may the team’s skill at dealing with the work.

These are just three categories for improving the trustworthiness of estimation. There are many other estimation techniques. Most have advantages, and all have disadvantages. Even with our best attempts at improving estimates, the true goal is accomplishing the work. Ultimately it’s better to apply energy to that goal than to chase ever better estimation.
