5 reasons why forecasting is hard and 6 things to do about it


I once interviewed a project manager whose CV claimed that she consistently delivered projects ahead of time and under budget. That is certainly no small achievement, so I asked her what her secret was. She answered, “I take the estimates from the developers and multiply by 5”.

I used that story as a funny anecdote for a long time. In “real life”, who would ever manage to sell something if they charged that much? However, I have since been working on some projects that have made me wonder if she was too conservative!

On paper, Agile forecasting is trivial:

  1. Measure how much the team can get done within a particular timeframe (in Scrum typically a sprint).
  2. Forecast, based on the work remaining and the progress so far, either how long it will take to complete the desired functionality or how far through the backlog you will get by a particular date.
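
For example, a minimal sketch of that arithmetic might look like this (the function names and numbers are invented purely for illustration):

```python
import math

def sprints_remaining(backlog_points, velocity_per_sprint):
    """How long will it take? Remaining work divided by observed velocity, rounded up."""
    return math.ceil(backlog_points / velocity_per_sprint)

def points_done_by(sprints_available, velocity_per_sprint):
    """How far through the backlog will we get in the time available?"""
    return sprints_available * velocity_per_sprint

# Illustrative numbers only
print(sprints_remaining(backlog_points=120, velocity_per_sprint=20))  # 6 sprints left
print(points_done_by(sprints_available=4, velocity_per_sprint=20))    # 80 points done
```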

In practice, however, forecasting is very hard. As the popular quote points out: “Prediction is difficult, especially when dealing with the future”.

Why does forecasting go wrong?

1. We never know less than at the start of the project

There are many reasons why we need to know how long something will take: Is it worth doing this thing at all? How much should we charge the customer? When can we start this other thing?

Unfortunately, as we haven’t yet started the project, we are forced to base our estimate on a lot of assumptions:

  • The requirements won’t change
  • We will have the people in the team we think we will
  • We’ll be using a particular technology
  • There isn’t hidden complexity, at least not more than we think there is

Etc etc.

If just one of these assumptions is wrong, our forecast will be wrong. These early estimates are highly unreliable, yet they are still what we often measure the success of our project against.

2. We don’t start on time

When a project finishes later than we thought it would, it will have knock-on effects. In most cases, we will have scheduled another project to start right after that project, which means that the new project will be late from the start. How could we then expect to finish that project by the date we thought?

Further, if we don’t adjust the plan for the next project after that one, that project will be late too. It might even be that it won’t have started by the time we expect it to finish!

3. The velocity can be unpredictable

It doesn’t matter so much whether we measure our velocity as story points per sprint or (similarly-sized, small) user stories per sprint. The idea is still the same: do, measure and forecast.

However, there are still many reasons why we end up fooling ourselves:

  • By definition, estimates are guesses and will never be perfect. If we use story points, something the team estimated as a ‘three’ might well end up taking longer than a ‘five’. That’s normal and to be expected.
  • For our velocity to be consistent, we need a stable team. Often, despite our best attempts, this will not be the case. People leave and join, get ill and go on holiday. Such is life.
  • Dependencies can end up blocking us. If someone we’re depending on doesn’t finish in time, we are likely to be late too!
  • Defects and technical debt from things we’ve delivered before may come back and bite us.

And so on.

4. Things change

One of the main ideas in agile is to let requirements emerge. We will have a fairly good idea at the start of what we will be delivering, but it might well change as we learn more. That is a good thing! It means we deliver a better product than if we stuck to the original plan.

However, this makes forecasting hard. We estimate based on what we think we will be delivering. If that changes, the estimate will change.

Our desire to find out as much detail as early as possible, in order to provide accurate estimates, will always be in direct conflict with the need to respond to change. Creating a backlog where every item has got a lot of detail will lock down our requirements too soon. Once we’ve already spent all that effort generating requirements, it would feel (and be!) wasteful to throw all those requirements away.

5. People are optimists

Last but not least, people are optimists. I know – horrible people!

Developers tend to overestimate how much they can get done. Often, they are even encouraged to do so. If they give a big estimate, the product owner will be disappointed. If they give a small estimate, the product owner will be happy. Most people prefer to make others happy.

Also, people give in to peer pressure. If you think something might require a lot of work but the others say it will be quick, you don’t want to come across as negative. Even methods like planning poker suffer from this. Sometimes, people seem to focus on picking the “right” number, i.e. what they think the others will pick, rather than coming up with an estimate they think is right.

This optimism bias can also sneak into our supposedly empirical forecasting. If the forecast looks worse than we were hoping, we may be tempted to say things like:

“Oh, that sprint was unusually screwed up, so let’s ignore that number”, or

“Yeah, this is what the forecast says but we should be able to increase the velocity as the team gets up to speed, so don’t worry too much about that”.

As the project progresses, we will learn more and more. Somehow, this tends to lead to estimates growing. For instance, when we break big stories down into smaller ones, the bits always seem to add up to more than the big story they originated from. I have never seen a burn-up chart where the scope doesn’t go up and up until the product owner eventually decides to descope some functionality.

What can we do about it?

1. Don’t give people a perception of the forecast being more reliable than it is

People believe precise numbers much more than rounded numbers. If today is 2 February and I tell you I will post another blog post on 1 March at 12:00, you’d expect that’s exactly when it will be available. If I instead tell you I will post it in a month’s time, you’d probably not hold it against me if I didn’t post it until 5 March.

Don’t make people put more faith into your numbers than they should by giving them what looks like a precise number.

Likewise, avoid giving dates before you are confident you will be able to start development when you think you will. As we saw above, the previous project might well run over. You are in a much better position if you forecast a number of sprints rather than an exact date.

2. Learn more before forecasting

If we know the least about our project at the start, how can we possibly make a forecast? Well, we need to find out more!

One trick we can use is to pick a few of our epics and analyse them in more detail, breaking them down into granular stories that we could implement. Estimate these smaller stories (or count them, if you’re a #noestimates kind of person). Then make a guess about the size of the other epics. Might they be roughly the same size?

But how long will the stories and epics take to implement? Well, we should have some idea after our first sprint. Until then, we can give a guess, but it’s just a guess.
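
To make the idea concrete, here is a rough sketch of that extrapolation. Every epic name and number below is invented for illustration:

```python
import math

# Two epics broken down in detail; the rest only guessed at (all numbers invented)
analysed_epics = {"checkout": 8, "user accounts": 11}  # stories per analysed epic
remaining_epics = 5                                     # epics we have not broken down yet

avg_stories_per_epic = sum(analysed_epics.values()) / len(analysed_epics)
estimated_total = sum(analysed_epics.values()) + remaining_epics * avg_stories_per_epic

stories_per_sprint = 6  # a rough guess until we have real sprint data
print(math.ceil(estimated_total / stories_per_sprint))  # a first, highly uncertain sprint count
```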

3. Fix time, not scope

If estimating is so hard and we won’t know how long something takes until we’re done, how could we cost our project or plan our roadmap?

The unfortunate answer is that nothing in Scrum will help us produce better up front forecasts. Sure, once we have started work, the methods we have available will give us some indication. However, if anything changes, the forecast changes. It’s only a picture of how things look right now. The more work we have completed, the more reliable the picture we will get.

The purpose of Scrum is not to produce the best possible estimates. Instead, we’re maximising the value we deliver within the time we have available by starting with the most important features first. Rather than building our product layer by layer, where we’d get nothing of real business value until every layer has been completed, we build our product in thin, vertical slices. Each sprint, we’ve got a potentially shippable product.

This is really important. What this allows us to do is to turn the question around. Rather than asking “How long will it take?”, we can ask “When do we need it by?” or “How much is it worth?”. We create a time box and deliver as much value as we can within it. Sure, we may not deliver every feature we thought we would, but we will deliver the most important ones.

Once we have used up our budget, that’s as far as we got. Let’s move on to the next thing. Or we can keep delivering more of the features if they are more valuable than the next thing.

4. Use a velocity range

The most common forecasting method is probably to use the average velocity, either since the start of the project or, for instance, over the last 3 sprints. Another option is to use the median velocity. Both methods are simple but unfortunately not particularly reliable.

Let’s look at a couple of examples to illustrate this:

  • Sprint velocities: 3, 19, 7, 3, 3. The average velocity is 7 but I would be very hesitant forecasting based on this number. In most sprints, we’ve been nowhere near this velocity!
  • Sprint velocities: 3, 10, 10, 2, 10. The median velocity is 10 but that again seems very optimistic. The average is just 7 points so how can we assume we’ll do 10 points per sprint?
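
A quick sanity check of those two examples (a trivial sketch, just the arithmetic):

```python
from statistics import mean, median

spiky = [3, 19, 7, 3, 3]
rosy = [3, 10, 10, 2, 10]

print(mean(spiky), median(spiky))  # average 7, median 3: most sprints were nowhere near 7
print(mean(rosy), median(rosy))    # average 7, median 10: the median looks far rosier than the average
```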

A better way is to use a range for the velocity. One tool I often use for this is Mike Cohn’s velocity range calculator. It’s a simple web tool where you enter the velocity for each of your sprints so far (you need at least 5 sprints’ data) and get a range within which, in theory, you can be 90% certain your velocity will fall.

We can use this range to forecast a best case and a worst case, again with 90% confidence. It may feel uncomfortable to not be able to give a more precise number than 4 – 11 sprints but if that’s what the data tells us, it’s much more likely to be right than if we say 7 sprints.
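
To turn such a range into a forecast, the arithmetic is simple. The sketch below is not Mike Cohn’s actual calculator; the remaining backlog and the 90% velocity range are assumed numbers:

```python
import math

remaining_points = 60                 # assumed remaining backlog
velocity_low, velocity_high = 6, 16   # assumed 90% velocity range

best_case = math.ceil(remaining_points / velocity_high)   # everything goes well
worst_case = math.ceil(remaining_points / velocity_low)   # everything goes slowly

print(f"Forecast: {best_case} – {worst_case} sprints")    # Forecast: 4 – 10 sprints
```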

After all, it’s better to be roughly right than precisely wrong.

5. Don’t give people the answer they want to hear

As mentioned above, it is easy to fall into the trap of giving someone the answer they want to hear. We prefer to make people happy. Therefore, we need to be very careful to stay honest.

  • We should never assume the velocity will increase as people get more familiar with the project. While this seems to be a fair assumption, quite the opposite could happen. There may be hidden complexity or dependencies we haven’t spotted, leading to the velocity falling.
  • A positive trend in velocity, where it has been going up for a few sprints in a row, is no reason to assume it will continue to increase. Maybe we’ve been able to sort out some fundamental process issues to increase our velocity but to increase it further would be much harder.
  • When we wish the estimates were lower, we need to be careful not to pressure the team or wear them down with repeated estimation sessions until they give us the answer we want.

6. Learn to live with the uncertainty

Finally, we need to get used to the idea that we will never be able to precisely predict how long something will take. The only time we can be 100% certain of the delivery date is when we are done.

Having a precise plan, which we have put a lot of effort into, can work like a comfort blanket and make us feel safe. However, this false sense of control is likely to create more damage than it fixes.

Rather than trying to find ways of creating more accurate estimates and precise forecasts, we need to learn to accept and work with uncertainty:

  • Don’t feel tempted to break down all the stories at the start, even if that might lead to more reliable estimates. As we’ve seen before, this locks down our requirements too soon. There is no point delivering on time if it turns out we’re delivering the wrong thing.
  • Don’t spend time getting a more precise estimate than you need. For example, when it comes to making decisions, if one option is sufficiently better than the other, chances are even very rough estimates will tell you this. If not, more detailed estimates probably won’t help you anyway. In this case, you may want to consider spiking the different options and base your decision on actual data.

After all, trying to produce accurate estimates is very expensive. Let’s focus our efforts on delivering value instead!


What are your thoughts about forecasting?  Have you got any methods to make it easier and more reliable? Please share your thoughts in the comments below.

Magnus Dahlgren – Scrum Master (CSP) and aspiring Agile Coach
