Creating an environment that enables self-managing teams

Only one out of every seven Scrum teams is self-organising, according to the State of Scrum survey. That is a shockingly low number as it means a lot of organisations miss out on a significant part of the benefits of agile!

“The best architectures, requirements, and designs emerge from self-organizing teams”.

So says the Agile Manifesto. In this blog post, I will look at what this “self-organisation” is about and how to get it going.

Different degrees of self-organisation

“Self-organising teams” is a fairly broad term, and self-organisation comes in different degrees:

  • Traditional manager-led teams, where a manager assigns tasks and manages the work process, are not normally considered to be self-organising. Still, they tend to have a slight degree of self-organisation to them, if only in how the individuals execute their given tasks.
  • Self-managing teams are not only responsible for completing their tasks but also for coordinating between themselves who does what. They are also responsible for monitoring their progress, re-organising the work and adjusting their process as necessary, based on that progress or lack thereof.
  • Self-designing teams take self-organisation to the next level. They choose not only who does what but also who is on the team. Typically, this starts with a self-selection workshop where the people involved agree which teams are needed and what roles and skills each team requires, and then choose for themselves which team they want to be in. As the work progresses and they learn more, they can adjust the make-up of the teams as they see fit.
  • Self-governing teams represent the highest degree of self-organisation and basically do away with all levels of management. No one tells anyone else what to do. Anyone can come up with an idea and invite others to join a new team to deliver it. The vision of the company does not come from a founder or CEO but from the employees themselves. So far, very few companies operate in this way, but the Swedish software consultancy Crisp is one example.

In this article, I will be focusing on self-managing teams. This is the most common form of self-organisation practised by Scrum teams and a starting point for any further degree of self-organisation.

Autonomy leads to increased efficiency

Putting the team in charge of their process empowers them to find the ways of working that suit them. After all, they are the people closest to the work and are therefore best positioned to judge what works and what doesn’t.

However, with power also comes responsibility. It’s no longer good enough to say “I did what I was told, so it’s not my fault” or to blame the process when something fails. If the process isn’t perfect (and it never is!), it is up to the team to improve it.

  • Autonomy creates more motivated team members. Take yourself as an example. What do you prefer? Being told exactly what to do or deciding yourself how to tackle something? Autonomy is one of the most important factors for motivating people, according to Dan Pink in his book Drive – The Surprising Truth About What Motivates Us and the corresponding TED talk.
  • Motivated team members generate better results. When someone cares about the outcome, they will deliver the work to a higher standard.
  • Better collaboration, faster learning and quicker responses. Again, the team are closest to the action. Having to wait for a manager to generate insights and make decisions will slow them down.

How to establish self-managing teams

We can’t force self-management upon the team. Ordering someone to be self-managing would be self-contradicting to say the least! Instead, we need to create an environment where self-management can happen.

We need several ingredients to create such an environment:

  • Small, stable teams
  • Talented team members
  • A shared goal
  • Appropriate constraints
  • Trust
  • Support
  • Embracing continuous improvement

Let’s look at each of these in more detail!

Small, stable teams

To make the right decisions, people need to have the relevant information. The members of self-managing teams are no different. They need to know what the others on the team are up to. That’s the only way they can organise the work between them in the most effective way.

A self-managing team will succeed or fail based on how well the communication between the team members is working. The bigger the team, the harder the communication gets. If you work on a team with three other people, you need 15 minutes to talk to each of your team mates for five minutes. On a team of 20, you would need more than an hour and a half to do the same. Just imagine how much harder it would be to keep track of what the others are up to!
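The arithmetic above can be sketched in a few lines. The five-minute figure is from the example; the channel count is the standard n(n-1)/2 formula for the number of distinct pairs:

```python
# A sketch of why team size matters: one-to-one talking time grows
# linearly, but the number of communication channels grows quadratically.
def minutes_talking(team_size: int, minutes_each: int = 5) -> int:
    """Time for one person to talk to every team mate."""
    return (team_size - 1) * minutes_each

def channels(team_size: int) -> int:
    """Number of distinct pairs on the team: n(n-1)/2."""
    return team_size * (team_size - 1) // 2

print(minutes_talking(4))             # 15 minutes on a team of four
print(minutes_talking(20))            # 95 minutes on a team of twenty
print(channels(4), channels(20))      # 6 channels vs 190 channels
```

The channel count is the real killer: going from four people to twenty doesn’t just sextuple the talking time, it multiplies the number of conversations to keep track of by more than thirty.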

Similarly, people need a chance to get to know each other well enough. Once they learn what skills and little quirks the others have, they will know who, how and when to ask for – or offer – help.

Talented team members

A team made up of very inexperienced team members or people with limited skills will struggle to self-manage. Without the appropriate skills and experience, it is very hard to know what is likely to work and what isn’t.

When the team members don’t have sufficient experience or skills, they need appropriate training and support before they can become self-managing.

However, let me make very clear that this doesn’t mean that everyone in the team must be highly experienced. We all have a responsibility to make sure those who are new to the industry get a chance to gain that valuable experience. A mix of skills and experience can even benefit the team, as those who have been doing something for a long time may be stuck in their ways, and someone with less experience can bring a new perspective.

A shared goal

Self-managing teams are responsible for making many of the tactical decisions. This makes it more important than ever that the product owner has a clear vision for the product and can communicate that vision to the team. It’s simply not good enough for a product owner to keep the vision to themselves and drip-feed requirements to the team. Otherwise, people will end up pulling in different directions.

After all, self-management doesn’t mean that everyone can do whatever they feel like. They still serve a greater purpose. Only through understanding that purpose can team members ensure that their decisions get them closer to the goal.

Appropriate constraints

The team needs clarity about which decisions they can and cannot make. A team without clear boundaries can easily overstep their responsibilities without realising it. This could move them out of sync with the rest of the organisation and the bigger vision. However, perhaps even more likely, a team that doesn’t know what they are allowed to do can end up taking an over-cautious approach, out of fear of making decisions they shouldn’t.

One example of useful constraints is technical standards that make sure the code from each team works well together. For example, two teams working on the same website choosing different JavaScript frameworks would be very wasteful. They would duplicate effort and the end users would suffer from having to download two frameworks.

Another important constraint is for the product owner to clarify the boundaries of the self-management. Which decisions can the team make themselves about the product and which will the product owner make?

Trust

Self-managing teams must be responsible for their own success. For self-management to be more than a pointless buzzword, the teams need to be allowed to fail. A team will never become truly self-managing if someone steps in and solves problems for the team whenever they arise. It doesn’t matter whether that someone is a traditional manager, a scrum master or a product owner. Why should the team bother organising themselves if someone overrules their decisions as soon as there is a potential problem?

The thought of failure may scare both managers and the team itself but luckily, Scrum provides several safety mechanisms:

  • The team monitor their progress towards the sprint goal. In the daily scrum, they can adjust their plan based on how things are going.
  • The product owner should be available to answer questions throughout the sprint and to clarify misunderstandings about the requirements.
  • In the sprint review, the product owner and customer inspect the resulting working software. Even if it is hopelessly wrong, the team should never end up wasting much more than a sprint’s worth of effort.
  • In the sprint retrospective, the team looks at what went well and what didn’t. This way, they improve their way of working and learn from mistakes ahead of the next sprint.

Support

Rome wasn’t built in a day. It would be quite a shock for any team to suddenly be left to their own devices. Just stepping away could create a management vacuum. That could either lead to chaos or to someone, such as a senior developer, stepping in and taking on the role of manager. Neither would result in good self-management.

When the team members are not used to working in self-managing teams, they will need support to get going. This would typically be provided by the Scrum Master (or even an agile coach). However, as we saw above, they mustn’t make decisions for the team. Instead, the scrum master should help the team realise when a decision is needed and help them make the decision themselves.

I have also seen how some people like being told what to do. It makes them feel safe. No longer getting the steer they need can make them feel fear and uncertainty. Consequently, these people will need extra support to find their place in the self-managing team. Ultimately, it may even turn out that they may not be the right people to work in such a team.

Continuous improvement

Finally, a self-organising team will never come up with the perfect solution straight from the start. Neither would a traditional manager! That’s not the point.

Sometimes the team will implement fantastic solutions. Sometimes they will do things that don’t work very well at all. Most likely, the solutions will be somewhere in-between; not a disaster but they could be better.

That’s why self-managing teams need continuous improvement to reach their potential. Self-management is not just something that happens once and we’re done. It takes hard work to get going and to keep it going.

Continuous improvement is never finished

Perhaps surprising to some, there are no “best practices” in agile. If there was, there would be no way we could get any better. What a horrible, defeatist thought that would be!

Since things can always get better, there will always be a way to deliver more, quicker, without compromising on quality. We keep finding ways to make it easier and cheaper to make changes as we find out more about the problem our product is solving.

That’s why we strive for continuous improvement, always looking out for things to change and improve. When it comes to our way of working, we never settle for “good enough”.

Many small changes beat one big change

We have learned to maximise the value of the products we build by delivering them in an iterative and incremental fashion. The same method applies to how we can optimise our way of working too. Small frequent changes are easier to implement and make stick than big revolutionary changes.

While each change is small, they all add up. As the Swedish proverb goes:

Many small streams make a big river

If each 2-week sprint we manage to find a way to deliver just 2% more value, the gains compound: after a year (roughly 24 working sprints), we’ll be delivering about 60% extra value each sprint. By the end of year 2, we will be delivering 160% more value than when we started!
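The compounding is easy to verify. This sketch assumes roughly 24 working sprints a year, which is what the figures above imply:

```python
# Compound effect of small per-sprint improvements.
def compound_gain(rate_per_sprint: float, sprints: int) -> float:
    """Total percentage gain after a number of sprints of steady improvement."""
    return ((1 + rate_per_sprint) ** sprints - 1) * 100

year_one = compound_gain(0.02, 24)   # roughly 60% extra value per sprint
year_two = compound_gain(0.02, 48)   # roughly 160% extra value per sprint
print(f"After one year: {year_one:.0f}% extra")
print(f"After two years: {year_two:.0f}% extra")
```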

Continuous improvement comes from the team

In continuous improvement, the team suggest and implement the changes. They have the necessary knowledge about what they are doing to be able to judge what improvements are likely to work or not.

If the managers are uncomfortable giving away this control, it is worth bearing in mind that ownership of the process helps the team feel committed, and a motivated team delivers better products.

Aim for perfection, then take one step at a time

As there is no end state, we need to set our goals very high. No individual change is likely to achieve the goal but successful changes will get us closer to it. This way, the changes will build on each other and get us closer to perfection, one step at a time.

For example, let’s say we identify that one thing preventing us from delivering frequently is the amount of time we need to spend manually testing releases, as there are no automated tests. We are unlikely to be able to go from 0% test coverage to 100% in a sprint. If we tried, we wouldn’t have time to do anything other than write tests and even then, we probably wouldn’t reach the coverage we were aiming for.

Instead, a good first step might be to start adding tests for all the new code we add from now on. Next, we might identify some key areas that are central to the product, or particularly time consuming to test manually, and add tests for these.

The biggest risk is not trying to improve

If we want to improve things, making changes is unavoidable. As the saying often attributed to Albert Einstein goes:

“The definition of insanity is doing the same thing over and over again, but expecting different results”

However, each change we make comes with some amount of risk. Things may improve or they may not. They could even get worse!

If we made a big change, one which took several months to implement, it would be very disappointing (and costly!) to discover it wasn’t a change for the better. We would have put a lot of time and effort into establishing our new processes. And because we thought this change was the one that would make all our lives better, we would also have put on hold other changes that in reality might have made a bigger impact.

When we instead make small changes, all the time, the risk of each change is much smaller. We will be able to see much quicker what works and what doesn’t and adjust course as needed.

That’s why continuous improvement is essentially trial and error. We do our best to come up with the changes we believe will make a positive impact. Some things will work but others won’t. By sticking to the changes that were beneficial and abandoning those that weren’t, we will get better.

The kaizen cycle

Another way to refer to continuous improvement is kaizen. This is Japanese and basically means improvement (kai = change, zen = good).

A kaizen cycle (essentially the plan-do-check-act cycle) consists of four steps:

  1. Plan. Come up with a hypothesis. “If we change x, we will get improvement y”
  2. Do. Conduct an experiment to prove or disprove the hypothesis.
  3. Check. Evaluate the results. Did the change make the impact we were hoping for?
  4. Act. Based on the result, refine the experiment and repeat the cycle

Each change we decide to implement is in essence an experiment. We try something for a sprint and if it was useful, or if we’re not sure yet, we keep doing it for another sprint. If it didn’t work, we stop doing it and try something else. Therefore, we should avoid making changes that would be hard or costly to revert. For example, if we think an electronic task board will be better than our old physical one, buying a huge touch screen to give it a go is probably a bad idea.
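One way to keep these experiments honest is to write each one down with its hypothesis and verdict. A minimal sketch; the structure and the example experiments are invented for illustration, not taken from any Scrum artefact:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str   # Plan: "If we change x, we will get improvement y"
    observed: str     # Check: what actually happened
    keep: bool        # Act: keep the change or revert it

log = [
    Experiment("Pairing on all stories will halve review wait time",
               "Review queue dropped from 3 days to 1", keep=True),
    Experiment("A 60-minute stand-up will improve coordination",
               "Meeting fatigue, no visible benefit", keep=False),
]

# The changes we stick with are exactly the experiments that paid off.
kept = [e.hypothesis for e in log if e.keep]
print(kept)
```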

Changes can be made at any time

In Scrum, we drive continuous improvement each sprint through the sprint retrospective. Each sprint, the team discusses what worked well and what could be improved and agrees on changes to make in the next sprint.

However, that doesn’t mean that it is only during the retrospective that changes can be made. They can happen at any time!

If something bad happens during the sprint – let’s say we deployed buggy code to the production system and caused a lot of disruption for our users – pause once the immediate issue has been resolved and look at how to prevent it from happening again. No need to wait for a sprint retrospective!

Also, as the sprint retrospective tends to look at the last sprint, it may be a good idea to have a retrospective that looks at a longer time frame every once in a while. This could be at the end of a big release or phase in our project or whenever you feel the time is right.

Evaluate the changes

Follow up in the next retrospective to see what effect the changes from the previous retrospective have had. Have they made things better?

While it is possible to simply ask “How do you feel that change worked?”, you will get a more objective result if you are able to measure the impact. For instance, if the issue you’re trying to solve is how long you need to wait before the test environment becomes available, measure the time and compare it to the previous sprint.

The important thing here is to look at the relative change. We’re not so much interested in whether we reached the perfect state. We probably didn’t! Rather than asking “Did we avoid introducing any new bugs this sprint?”, ask “Did we introduce fewer bugs this sprint?”. That’s all we need for the change to be successful and worth keeping. We can then build on it in the future to improve things further.

In some cases, it may be necessary to continue another sprint before being able to make a conclusion: “It seems as if it worked quite well but this sprint was a bit special because x happened, so let’s see how it goes next sprint”. That’s fine too!

A couple of final thoughts

Frustratingly, not everything that prevents work from being delivered as efficiently as possible will be within the control of the team. In fact, many of the biggest problems are likely to fall within this category. This can quite easily result in a sense of retrospectives being pointless. We keep raising the same points over and over again and nothing improves. It is therefore important to not limit the continuous improvement to the team. It is the responsibility of the Scrum Master to engage with the management to solve these problems.

Another thing to bear in mind is that if we try to make too many changes at once, it will be hard to know what is successful and what is not. Also, if everything keeps changing, it will be hard to keep track of how things are supposed to be working this week. Therefore, limit the number of changes that happen at once. For example, you may want to agree to only proceed with the top 3 suggestions in your retrospective.

3 steps closer to perfection is a pretty good outcome!

Root cause analysis using cause effect diagrams


A guaranteed way to waste our efforts when trying to fix a problem is to attempt to fix the symptoms rather than the root cause. Sure, we may be able to make the problem go away, but as the root cause is still there, the problem is likely to come back. Either we will encounter the same issue again, or a different problem will pop up.

The most well-known method for root cause analysis is the simple but surprisingly effective “5 Whys” method. This method suggests that the root cause of almost any problem can be found through simply asking the following 5 questions:

  1. Why?
  2. Why?
  3. Why?
  4. Why?
  5. Why?

Often, this simple method is all we need to do to get to a cause worth solving. We don’t even need to get that hung up on the number 5. Sometimes, we might get there in 4 “whys”, sometimes it may take six. The point is to dig deeper than what we would otherwise do.

Sometimes, though, a slightly more thorough analysis may be required. This is where cause effect diagrams come in. I first came across this method in Henrik Kniberg’s excellent book Lean from the Trenches and have been a fan ever since.

This is a great tool that you can either use on your own (I use it sometimes to help me get my head around problems) or with a group, for example in a sprint retrospective.

How to create cause effect diagrams

The method is based on the same idea as “5 whys”, but is slightly more refined.

  1. Start by writing the problem you’re trying to solve on a sticky note and place it in the middle of a whiteboard.
  2. To understand more about the problem and to find out whether it is a problem worth solving, ask “So what?”. Why does the problem matter? List a few effects of the problem, place them on the whiteboard and draw arrows from the first sticky note.
  3. Dig into the causes of the problem by writing answers to “Why?” on sticky notes and drawing arrows.
  4. Repeat, digging deeper into each effect or cause as you see fit.
  5. Identify the root causes you want to address and possible solutions to these.

When you struggle to answer “why?” for something, chances are you’ve encountered a root cause.

Sometimes, we end up with loops in the Cause Effect Diagram. These are vicious circles. Highlight them with double arrows or a red pen. An example:

Cause effect diagram loop

We release infrequently. Therefore, each release ends up containing a lot of user stories. Big releases are scary, so we deploy less frequently.

Breaking self-amplifying circles like this can have a big effect so this may be one area to focus your improvement efforts on.
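In a large diagram, spotting loops by eye gets harder. Since a cause effect diagram is just a directed graph, a small script can find the cycles. This is a sketch with an invented diagram, not a tool from Kniberg’s book:

```python
# Find a vicious circle (cycle) in a cause-effect diagram, modelled as
# a directed graph: {cause: [effects it leads to]}.
def find_cycle(graph, start, path=()):
    """Depth-first search returning the first cycle found, or None."""
    if start in path:
        return path[path.index(start):] + (start,)
    for nxt in graph.get(start, []):
        cycle = find_cycle(graph, nxt, path + (start,))
        if cycle:
            return cycle
    return None

# The release example from the text, as a graph.
diagram = {
    "infrequent releases": ["big releases"],
    "big releases": ["scary releases"],
    "scary releases": ["infrequent releases"],
}
print(find_cycle(diagram, "infrequent releases"))
```

On a whiteboard you would of course just trace the arrows, but the same idea scales to diagrams captured in a tool.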

Conclusion

Cause effect diagrams are a great way to analyse and visualise a problem. They will not only help you find the causes but also the effects, helping you understand why – or if – it is important to solve the problem.


Do you have experiences using cause effect diagrams or do you have any other tools you use for root cause analysis? Share your experiences in the comments below!


#noestimates is just story points done right


Estimation has long been a natural part of software development. Therefore, an approach like #noestimates, which gets rid of estimation, can seem quite suspicious. After all, there are good reasons why we estimate. We need to know how long something will take, so that we can decide whether it’s worth doing or how much the customer should be paying. We also want to know whether we are on track to deliver by our deadline, so we need to know how much work is left.

What is #noestimates?

#noestimates is an idea aiming to avoid spending time on estimation when simpler methods can provide a similar or even better result. It’s not one specific method but rather a desire to find ways to make decisions and forecasts without estimation.

Some interpret this as simply going by gut feeling, but the more accepted approach is to use empirical methods. One of the most prominent proponents of #noestimates is Vasco Duarte. I’ll be basing this article on the method described in his book No Estimates – How to measure project progress without estimates.

His method is very simple:

  1. Do some work
  2. Measure the progress
  3. Forecast, based on the rate of progress

If you’re using story points, you will recognise these steps from what you are already doing. However, compared to using story points, you will notice there is one step missing: estimation. No surprise there as it is a method under the umbrella of #noestimates!

Tracking velocity in a #noestimates world

The lack of story points might make you uncomfortable. After all, how do we measure velocity and make forecasts without them? The answer is simple: rather than measuring the velocity as the number of story points completed per sprint, measure it as the number of stories completed. Duarte calls this the story velocity.
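A sketch of what forecasting on story velocity looks like in practice; the numbers are invented for illustration:

```python
# Forecast remaining sprints from story velocity: no story points,
# just counting completed stories per sprint.
def sprints_remaining(stories_left: int, completed_per_sprint: list[int]) -> float:
    """Divide the remaining stories by the average story velocity."""
    story_velocity = sum(completed_per_sprint) / len(completed_per_sprint)
    return stories_left / story_velocity

# Hypothetical: the last four sprints the team finished 7, 9, 8 and 8 stories,
# and 48 stories remain in the backlog.
print(sprints_remaining(48, [7, 9, 8, 8]))  # -> 6.0
```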

Could this really work?

Well, there are a few principles needed for it to work reliably.

  1. Slice the user stories thinly
  2. Time-box and prioritise
  3. Update the forecast frequently

Let’s look at each one of these in a bit more detail.

1. Slice the user stories thinly

Duarte recommends trying to make each user story no bigger than two days or, preferably, small enough to fit into half a day or a day’s development effort. The quickest way to know if a story is small enough, he suggests, is simply to ask, “Can you have this done by tomorrow?”.

This way, all stories will have roughly the same size, so no estimation will be necessary.

Also, we need to make sure that each story is independent and valuable on its own. This way, we can choose not to implement some stories without impacting the rest of the project.

One thing to note, however, is that breaking a feature into stories is a waste of time if we decide not to implement that feature or if we get feedback that significantly changes it. Rather than breaking down every feature at the start, Duarte uses a second velocity to forecast when features will be delivered: the feature velocity. This measures how many features we deliver per sprint.

2. Time-box and prioritise

As we have no estimates, we need a way to determine how much things will cost. Duarte suggests using time-boxes for both projects and features.

Time-box the project. Rather than estimating the cost, decide how much time we have available to spend. Order the features in the backlog by value. This way, if not all features can be completed, it will be the least valuable features that are left out.

Time-box features. Don’t let any feature take longer than a month. Order the stories within features in priority order. If not all stories are completed, either throw the rest away or move them into a new feature and place it in the backlog based on how valuable it is. This way, the feature velocity will be more reliable and we are likely to be able to drop some low priority stories.

3. Update the forecast frequently

As #noestimates is a very lightweight way to produce forecasts, it is easy to do frequently. Every week or sprint we can predict when we are likely to be delivering certain features and what we’re likely to have delivered by the deadline.

This frequent forecasting enables us to manage scope throughout the project. If we see we’re not on track, we can decide to drop low-value features or remove low-value stories from features.
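One way to run this check each sprint is to project a range from recent velocities and compare it against the backlog. A sketch with invented numbers:

```python
# Hypothetical scope check: given the sprints left before the deadline,
# how many stories can we expect to finish at recent velocities?
def stories_by_deadline(sprints_left: int, recent_velocities: list[int]):
    """Return a (pessimistic, optimistic) range of completed stories."""
    low, high = min(recent_velocities), max(recent_velocities)
    return sprints_left * low, sprints_left * high

low, high = stories_by_deadline(5, [7, 9, 8, 8])
print(f"Expect between {low} and {high} stories by the deadline")
# Anything further down the value-ordered backlog than that range is at
# risk and a candidate for dropping or slimming.
```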

Hang on, those are all good principles!

The principles above hopefully don’t feel particularly controversial:

  • Small stories are a good idea for many reasons. They are easier to implement, create a better flow and enable more flexibility and quicker feedback.
  • Prioritising features and stories based on value is hopefully a no-brainer by now.
  • Time-boxing the project rather than fixing the scope allows us to swap features and stories in or out as we learn more, while avoiding expensive death marches.
  • Time-boxing features gives us a checkpoint: should we really keep adding more capabilities to this feature or is it more valuable to move on to the next one?
  • Frequent forecasting and managing scope along the way is how we avoid unpleasant surprises and can make sure we make the decisions we need to, when we need to.

What’s the difference between story points and #noestimates?

While #noestimates is a big mind shift from the traditional upfront estimation in days and hours, it is not a huge change if we are already using story points. It’s rather the natural progression, as we strive to establish a flow of small stories. When every story is so small it is one or two story points big, what’s the point of story points?

Perhaps surprisingly, though, #noestimates doesn’t mean no estimates. There are a lot of assumptions that are very much like those we make when estimating:

  • When breaking down big stories into small ones with the goal that none is bigger than a couple of days, we’re effectively estimating how much effort each story requires. We may not use estimation cards and it may be quick, but it is an estimate nevertheless.
  • When using our feature velocity, we make assumptions about the size of features, namely that they will be roughly the same size as each other (using a time-box to make it more likely they are).
  • When forecasting, we assume that the rate of progress will be consistent (perhaps within some interval with a maximum and a minimum value), which requires all things to remain equal. A lot of things affect the rate of progress. The most obvious one being adding or removing (or even swapping!) team members.

What #noestimates does, rather than eliminating the need for estimates, is to reduce the time spent on estimates.

Which one is better?

I wouldn’t go as far as saying #noestimates is better. Story points work well for many teams and are an empirical method too. However, they are not without challenges.

Even with methods like planning poker, estimation using story points can be time consuming. Discussions like “It’s a 3! No, it’s a 5!” can go on far too long if we let them.

It can also be very hard to fully separate the estimation of size from duration when using story points. Even after 10 years of using story points, I often catch myself thinking “this will take someone most of the sprint so it’s a 5”.

Last but not least, it is easy for story points to get inflated over time, by teams giving stories higher and higher estimates. There is even an incentive for doing so: the velocity will look higher!

With #noestimates, the way to game the story velocity is to create small stories so that we can ship more of them. This means delivering value quicker. That is certainly something I can live with!

Don’t leave the sprint planning meeting without a plan

A sprint planning meeting has two purposes:

  • Agreeing a sprint goal
  • Creating a sprint backlog

I have seen some teams where both these artefacts have been one and the same. The sprint goal is to complete a list of stories. The sprint backlog is that list of stories.

This is certainly one way to do it but not a good one.

I touched on what constitutes a good sprint goal in my previous blog post about preparing for the sprint planning meeting. Today, I will be looking a bit more at the sprint backlog.

There is more to a sprint backlog than user stories

The Scrum Guide defines the sprint backlog as “the Product Backlog items selected for this Sprint plus the plan for delivering them”.

What this means is typically a list of user stories together with the tasks needed to deliver them.

Stories are thin, vertical slices of functionality that include all the different types of work that make up working software, such as UX, development and testing. Often, stories will require the involvement of several team members with different skills.

Tasks, on the other hand, are the things the team need to do to deliver a story. Typically, a task should be possible to complete by one person (or a pair). Some examples could be “Create UX assets”, “Write feature files” or “Create database migration”.

Why breaking stories into tasks matters

Creating tasks for the sprint backlog will help us in several different ways:

  • Understanding the work. The task breakdown itself helps the team make sure they understand the story and the work involved. If any requirements are unclear, this is likely to surface during the task breakdown if not before.
  • Ensuring the sprint goal is realistic. If the team realise while doing the task breakdown that there is too much or too little work in the sprint, they can go back to the product owner and suggest alternative approaches or adjust the sprint goal as necessary.
  • Making progress (or the lack thereof) visible to the team. Rather than a story sitting “in progress” for a week, tasks should be moving across the board daily. This allows the team to better see how their work is progressing and adjust their plan when problems arise.
  • Enabling collaboration on stories. When the tasks making up each story are visible on the task board, it is much easier for team members to share the work. Rather than each person having “their own” story, people can more easily see what they can do to help meet the goal. Stand-up updates move from being “Yesterday, I worked on this story and today I will do the same” to “Yesterday, I created the view so someone should be able to pick up the test automation. Today, I will be doing the CSS.”

How to break a story into tasks

What the tasks will be will obviously vary from story to story and it is hard to give any hard and fast rules. A good goal to aim for is for a task to take between 4 and 8 hours. Keep asking “what do we need to do to do that?” until the tasks are small enough.

That is, small enough but not too small. Trying to identify every single little task in the sprint planning is likely to be a waste of time. No matter how hard we try, we won’t think of everything we need to do. Better, as Mike Cohn suggests, to aim to identify about two-thirds of the tasks and make sure to set aside time in the sprint for the additional things that come up.

We don’t have to break down every story in the sprint planning meeting

Some teams only break down the first few stories in the sprint planning and then do a mini-kickoff when getting close to running out of broken down stories during the sprint.

This can be a good way to prevent the sprint planning meeting from getting too long. Another benefit is that it reduces the risk of forgetting the details between when a story was broken down and when it is brought into progress.

The downside is that it makes it a bit harder to judge which stories are likely to fit into the sprint. For the stories towards the bottom of the sprint backlog, that might not matter so much.

Estimate the tasks – or don’t

Some find estimating each task in hours useful. Adding up all the hours can be a good sanity check to see if the work fits into the sprint. If doing this, don’t assume a two-week sprint consists of ten eight-hour days. We can almost treat the hours as another velocity: “We tend to get through 100 hours of work each sprint”.

A quicker alternative to estimating each task, particularly if tasks are roughly the same size, is to just count the number of tasks.

Whichever method we choose, we will still be able to create a sprint burn-down if we want to. It will show the number of hours remaining or the number of tasks remaining. No big deal.
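Whichever unit we pick, the burn-down arithmetic is trivial. Here is a minimal sketch in Python; the figures are made up for illustration:

```python
# Sprint burn-down: remaining work after each daily stand-up.
# Works the same whether the numbers are task counts or estimated hours.

def burn_down(total, completed_per_day):
    """Return the remaining work after each day, starting from the total."""
    remaining = [total]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

# Example: 40 tasks in the sprint backlog, tasks completed per day.
print(burn_down(40, [3, 5, 4, 0, 6, 5, 4, 5, 4, 4]))
# [40, 37, 32, 28, 28, 22, 17, 13, 8, 4, 0]
```

A flat stretch in the series (like day 4 above) is exactly the early warning signal we want the team to spot and discuss.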

It’s impossible to predict everything that will happen in a sprint

Even if a sprint is a short time, we will not be able to predict everything that will happen. Maybe the production system will break and we need to deal with that. People might get ill. Upper management might take the whole team out for a couple of hours by calling an all-hands meeting. Or things might just take longer than we thought they would.

Therefore, trying to optimise a sprint down to the last hour is likely to fail. Ironically, it will even make it less likely to meet the sprint goal.

Use simple tools to manage the sprint backlog

When each task takes half a day to a day for a person to complete, we will end up with a lot of tasks in each sprint. What’s the best way to manage them? This is down to personal preference. Just make sure it’s the team managing their own tasks rather than the PO or Scrum Master micromanaging the work in the sprint.

The important thing is that it should be easy for the team members to add, remove and change tasks during the sprint as they learn more. If the tool we choose makes that hard, it’s probably not the most suitable tool for the task(s)!

A successful sprint planning meeting is down to the preparation

Have you ever been in a sprint planning meeting that sucked? I have. Admittedly, I have found myself running a few too!

The worst sprint planning meetings I’ve experienced have all been down to a lack of preparation. In this blog post I will look at how the right amount of preparation makes sprint planning much less painful.

Give the backlog sufficient attention before the sprint planning

The product backlog needs a bit of tender loving care before the sprint planning. Not only will this make the sprint planning meeting run a lot quicker and smoother, it will also make it much more likely that we’re able to bring the top items from the backlog into the sprint.

The most common way to do this is of course to schedule a backlog grooming meeting during the sprint to look at what’s coming up in the next sprint or beyond. By scheduling such a meeting a week before the sprint planning meeting, there will be a bit of time to resolve any outstanding questions.

Involve the team in the preparation

It shouldn’t be down to just the product owner or business analyst to identify all the requirements and – even worse – create the solutions. There is a lot of expertise in the team, so make the best use of that. If we don’t, there is a good chance that what seemed like a perfectly good idea will be shot down in the sprint planning.

Exactly how we do this can take many forms and we need to find out what works for our team. One option is to include everyone in the grooming. As a bonus, people will be well familiar with the story by the time it gets to sprint planning.  Another is to use a smaller group of people and maybe rotate the responsibility in the team. A third is to do some preparation first in a small group and then share the findings. We need to experiment to see what works best!

Make sure the stories are ready – but don’t take it too far!

What we strive for is to make sure we know enough about the story to be able to complete it in a sprint. Exactly what this means will vary from story to story, depending on how complicated it is.

In many cases, a short bullet point list of the acceptance criteria (and an estimate to make sure the story is not too big – break it down further if it is!) will be all we need. However, when the story is more significant, a bit more detail may be beneficial. This could for instance include basic sketches of the user flow or even mockups of a few key screens. Preparation could also include some technical investigations to understand what needs doing, to surface any dependencies etc.

It is a good idea to prepare a few more stories than is likely to fit into the next sprint. This way, if it for some reason turns out we are unable to start the top few stories, we will have other stories ready to pick up in the sprint planning.

However, make sure not to overdo the upfront specification work. The goal is to find out just enough so that the rest can be worked out in the sprint. For instance, requiring complete, signed-off UX designs before something is brought into a sprint is a surefire way to limit the agility in the sprint.

Have a good idea about the sprint goal before the sprint planning

Some teams consider their sprint goal to be the list of user stories they agree to bring into the sprint. While this is one way to identify what to do, it’s usually not the best one.

A good sprint goal helps create focus and make it clear why we are doing what we are doing. It will also allow a bit of flexibility. One such goal might be “Create a first version of the shopping basket, to allow a round of user testing”. It’s clear what we’re doing and why. If we start running out of time in the sprint, we can adjust the stories in the sprint, while still achieving the goal. “Do we really need discounts right now?”

Trying to construct such a goal from a random selection of stories can be very difficult. “Complete all the stories in the sprint” doesn’t count as a sprint goal! One way to try to avoid mixed bag goals like these is for the product owner to have a good think about what would be a good outcome of the sprint beforehand. What is the most valuable thing we can do right now? All those other things, while they might be urgent, are they really that important?

This is not to say that the product owner gets to decide on their own what the sprint goal is. Only the team can decide how much work to bring into the sprint. The sprint goal agreed between the product owner and the team at the end of the planning meeting may therefore end up being slightly (or sometimes completely!) different from the initial goal.

Make sure to have the details at hand in the planning meeting

One small but oh so important bit of preparation is to make sure that the outputs from the discussions we’ve had so far are readily available to look at in the sprint planning meeting. I prefer simple tools, so I like to fold print outs and sketches and attach them with a paper clip to the index card of the story they relate to. The equivalent when using an issue tracking system like Jira is to link or attach all relevant bits to the ticket.

Whichever way we choose, avoiding digging around on network drives and wikis will make the sprint planning meeting run a lot smoother.

Final words

The Scrum Guide allows a generous eight-hour timebox for planning a one-month sprint. That is enough time to make most people rather poke a fork in their eye!

With the right preparation, the sprint planning meeting can and should be reasonably quick.

Mainly, all we want to do in the meeting is to agree a goal and create our sprint backlog. Most of the stories will already be well-formed and estimated. We’ll also have made sure we have all the necessary information at hand to clearly and quickly explain each story. Unavoidably, there will be some adjustments needed, not least following the sprint review, but hopefully nothing too taxing.

I realised while writing this blog post that I must have been in about 300 sprint planning meetings by now. In my experience, two hours tends to be about right for a two-week sprint. One hour for going through the stories and then another hour for the task breakdown. Don’t take my word for it, though. Experiment and see how much time you need!

Determining value using Cost of Delay


This short blog post is the last one of three this week in which I explore methods for determining which features are valuable.

In the previous parts, we covered the following:

  • Value Poker – a technique for estimating the relative value between features
  • Impact Mapping – a visual planning technique that helps us prioritise based on how each feature contributes to our objective

Arguably, both these techniques are largely subjective. We make assumptions based on intuition, saying “this is more valuable than that”. Hopefully, we then validate these assumptions as early as possible during the build.

In this part, we’ll be looking at how to quantify the value more objectively: the Cost of Delay.

What is Cost of Delay?

The idea behind Cost of Delay is to calculate the impact different delivery dates have on the financial bottom line. For instance, this analysis is useful when determining in which order to tackle our deliverables to maximise value.

Why use this method?

“Urgent” and “valuable” are two very different things. Just because something is more urgent than something else doesn’t necessarily mean that we should do it first – or at all! By calculating the Cost of Delay in monetary terms, we can more objectively weigh alternatives against each other, either to decide which option to pursue or the order in which to tackle them.

How to use it

The first step when using Cost of Delay is to create a model describing how the value of each of our options is affected by the delivery date. We then use these models to compare different scenarios.

Cost of Delay is typically made up of the business value of the feature, the decay of that value over time and the value of information discovery (e.g. avoiding the risk of additional costs).

Depending on what it is we’re analysing, the Cost of Delay will look different:


  1. Long life-cycle, peak unaffected by delay – For example, let’s say that migrating to a new hosting provider will save us £500 per month. Delaying this migration will cost us £500 each month we delay.
  2. Long life-cycle, peak affected by delay – In some cases, a delay will limit how much revenue we will be able to make. For example, a competitor beating us to market might mean we won’t be able to get as many customers as we otherwise would.
  3. Short life-cycle, peak affected by delay – When the opportunity has got a short lifetime, a delay will significantly limit how much revenue we are able to make. A website selling merchandise for the Rio 2016 Olympics would have missed much of its potential revenue if it didn’t launch until the second week of the competition. After the closing ceremony, there would probably be little point launching at all.

An example of how to calculate the Cost of Delay

Let’s say we have an old legacy system with a monthly support cost of £3,000 and that we are deciding between two options:

  1. Spend 3 months migrating the functionality to our new system, where the support cost would be £1,000 per month. This would save us £2,000 per month.
  2. Keep using the old system a while longer. Instead, spend 5 months building a mobile app, which will give us £3,000 additional revenue per month once finished.
Option     Time to build   Cost of Delay (£ / month)
Option 1   3 months        £2,000
Option 2   5 months        £3,000

 

Which one of these should we build first? Let’s do the maths:

  • Option 1 first, then Option 2 (delaying Option 2 by 3 months): Cost of Delay = 3 × £3,000 = £9,000
  • Option 2 first, then Option 1 (delaying Option 1 by 5 months): Cost of Delay = 5 × £2,000 = £10,000

Let’s do Option 2 first!
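The same comparison can be sketched in a few lines of Python. This is a rough model of the sequencing decision above, not a full Cost of Delay analysis:

```python
# Total cost of delay for a given build order: while each option is being
# built, every option still waiting accrues its monthly cost of delay.

def total_cost_of_delay(order):
    """order: list of (name, months_to_build, cod_per_month) tuples."""
    elapsed, total = 0, 0
    for name, months, cod in order:
        total += elapsed * cod   # cost accrued while this option waited
        elapsed += months        # building it pushes everything later back
    return total

option_1 = ("Migrate legacy system", 3, 2000)
option_2 = ("Build mobile app", 5, 3000)

print(total_cost_of_delay([option_1, option_2]))  # 3 * 3000 = 9000
print(total_cost_of_delay([option_2, option_1]))  # 5 * 2000 = 10000
```

With more than two options, the same function lets us compare every ordering, which is the basic idea behind sequencing by cost of delay divided by duration.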

Final thoughts

This is just a very brief introduction to Cost of Delay. I recommend further reading before putting this method into practice.

I was first introduced to the concept of Cost of Delay at an Adventures With Agile meetup with Don Reinertsen, which is available on YouTube. This is a good watch if you want to learn more!

The obvious challenge with this method is to create the models to describe the Cost of Delay. Don’t let this put you off, though. As with the methods discussed in previous posts, the process itself is a useful exercise to learn more about what value means for our product.


Have you been using Cost of Delay? Was it useful? Please share your experiences in the comments below.

Determining value using impact mapping


This short blog post is the second of three this week where I describe methods for determining the value of features.

In the previous part, we looked at Value Poker, a method for working out the relative value of features.

In this part, we’ll be looking at impact mapping.

What is impact mapping?

Impact mapping is a visual method that helps us take a step back from the features and think about what we’re actually trying to achieve. Starting from the big goal, we create what is effectively a mind map of roles (users and others) and what they can do to help us (or prevent us!) reaching the goal. The final step is to identify the features that would allow this to happen.

Why use this method?

The impact map, and the discussions we have while creating and maintaining it, helps us understand which features will contribute the most to our goal. Importantly, the map also makes visible the assumptions we make (“Social sharing tools will increase the number of new users”), which allows us to design experiments to verify these assumptions.

Impact mapping is a very collaborative method where we can involve both stakeholders and representatives for the development team. The result is a big picture (quite literally!) that allows us to understand how each feature contributes to our goal. This way, we can prioritise features against each other, based on their impact. This map, which will evolve over time, will help us improve our roadmap decisions and the ordering of our backlog.

How to use it

Typically, the impact map is built up in a workshop with the relevant stakeholders and representatives for the team.

In the session, everyone collaborates to identify the following, one level at a time:

  1. Goal – This is the most important bit. What is the one big goal we’re trying to achieve? Make sure this really is a goal (objective), rather than a deliverable. For instance, a goal could be “6 million weekly signed in users”, whereas “New sign in system” is not a goal in itself.
  2. Actors – Who are the users of our product? Who else is impacted? In the example above, we might list “Signed In Users” but also “Guest Users” as well as “Customer Services”.
  3. Impacts – How can the actors we have identified help us achieve the goal? How can they prevent us? How else are they impacted? Some examples: Signed in users may help us by inviting friends or visiting more frequently (so that we can class them as “weekly”). Customer Services might be impacted through additional support requests.
  4. Deliverables – What can we deliver to achieve / mitigate the impacts? For example, we might encourage users to invite friends by us adding social media share buttons or a referral system (maybe offering them some kind of award). Note that not all deliverables have to be in the shape of features to build. In our example, we may consider recruiting additional support staff.
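For those who like to see the structure in code, the four levels above can be sketched as a simple tree. The goal, actors, impacts and deliverables below come from the example in the list (an illustrative sketch, not a prescribed format):

```python
# An impact map as a nested mapping: goal -> actors -> impacts -> deliverables.
impact_map = {
    "6 million weekly signed in users": {
        "Signed In Users": {
            "Invite friends": ["Social media share buttons", "Referral system"],
            "Visit more frequently": [],  # impact identified, no deliverables yet
        },
        "Customer Services": {
            "Additional support requests": ["Recruit support staff"],
        },
    },
}

def deliverables(tree):
    """Flatten the map into (actor, impact, deliverable) rows for a backlog."""
    rows = []
    for goal, actors in tree.items():
        for actor, impacts in actors.items():
            for impact, items in impacts.items():
                rows.extend((actor, impact, item) for item in items)
    return rows

for actor, impact, item in deliverables(impact_map):
    print(f"{item} -> {impact} ({actor})")
```

Flattening the tree like this also makes plain that every backlog item traces back to an impact and, through it, to the goal.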

From impact map to backlog

If stretched for time or if your impact map grows very big, you may want to use a method like dot voting to identify the top few most important items on a level and limit the exploration in the next level to those.

An impact map allows us to have important discussions, both while building the map and off the back of the results. Some examples:

  • Features that don’t fit into our impact map, and therefore don’t contribute to our goal, can be descoped. This saves us from spending effort on building them.
  • Depending on how important we consider an impact to be, the deliverables contributing to that impact can be moved up or down the backlog.
  • Where multiple features contribute to the same impact, we can discuss which feature is likely to contribute the most. Let’s prioritise that feature!

Final thoughts

The usefulness of impact maps stretches far beyond prioritising features against each other. The exercise helps us come up with the features in the first place. The way in which we do this ensures that each and every one of them contributes to the goal we’re trying to achieve. Last but not least, an impact map highlights the assumptions we make, so that we can verify them.

All good stuff!

For more reading about impact mapping, see the book Impact Mapping by Gojko Adzic.


What’s your experience with impact mapping? Please share your thoughts in the comments below.

Determining value using value poker


The product owner is responsible for making sure the product backlog is ordered in a way that maximises value. The goal is to enable the team to deliver as much value as possible, as early as possible.

However, this can be easier said than done. How do we determine what is more valuable than something else? Often, we will be comparing apples and pears. What is more important: simplifying the checkout process or adding the possibility to subscribe to newsletters?

This week, in three short blog posts, I will be looking at different ways to try to determine this value.

Still, after 10 years as a Scrum Master, I know of no perfect way of doing this. Nevertheless, it is incredibly important that we give it our best shot. Nothing will improve our effectiveness as much as building what’s valuable and skipping what isn’t.

Let’s get started with the subject of this first post, value poker.

What is value poker?

Very similar to planning poker, value poker is a “game” where each participant votes by showing a number from a deck of cards. The difference from planning poker is that we’re estimating relative value rather than relative effort.

Why use this method?

Value poker is a quick and even rather fun way to estimate the value of features. It allows all the relevant stakeholders to get involved, while avoiding lengthy discussions. The main idea is that estimating the relative value between features is much easier than trying to estimate value in absolute numbers.

The value poker method can be particularly useful when the Product Owner has several different stakeholders to manage, each with their own priorities.

How to use it

Choose an appropriate timebox (2 hours?) to give enough time to spend just a few minutes on each item. You can use an egg timer to keep time for each item if necessary.

Invite all the relevant stakeholders. For a bit of extra fun, you can make the seating arrangement resemble the BBC programme Dragons Den, with the stakeholders sat in chairs next to each other at the front of the room.

Some use special estimation cards for value poker, with numbers 100, 200, 300, 500, 800, 1200, 2000 and 3000. The only purpose of the bigger numbers, compared to normal story point estimation cards is, it seems, to make the stakeholders feel more important! Normal story point planning poker cards (1, 2, 3, 5, 8 etc) should work just as well.

A value poker session

Typically, the product owner facilitates the session and the format is as follows:

  1. Establish a baseline for the value estimates by using a well-understood feature as reference and declaring, for example, that this feature is worth 300.
  2. Present the first feature and explain how it contributes to the product’s objectives. Typically, the Product Owner would present each item but in cases where ideas come from different people, people could present “their own” features. Take care not to mention any estimates of required effort; this session is purely focused on value.
  3. Answer any questions the participants have.
  4. On the count of 3, all participants simultaneously reveal their value estimates, compared to the reference feature. The more valuable they think the feature is, the bigger the number. If all participants are showing the same number, this is your value estimate.
  5. If there are differences in numbers, let the people with the highest and lowest numbers explain why they chose the numbers they did. Then let everyone re-estimate based on this information.
  6. If still no consensus, either rinse and repeat until there is, or simply calculate the average. In the latter case, the number doesn’t necessarily have to be one that’s available in the deck of cards. Just make sure to agree what rules apply before starting the game!
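The consensus rule in the last step can be captured in a small function. A sketch, assuming the group agreed beforehand on “average when no consensus” (names and votes are made up):

```python
# One value poker round: consensus if everyone shows the same card,
# otherwise fall back to the average, per the rules agreed up front.

def resolve_round(votes):
    """votes: mapping of participant -> card value. Returns the agreed value."""
    values = set(votes.values())
    if len(values) == 1:                      # everyone showed the same card
        return values.pop()
    return sum(votes.values()) / len(votes)   # agreed fallback: the average

print(resolve_round({"Ann": 300, "Bo": 300, "Cy": 300}))  # 300
print(resolve_round({"Ann": 200, "Bo": 300, "Cy": 800}))  # ~433.3
```

Note that the fallback can produce a number that isn’t in the deck, which is exactly why the rules need agreeing before the game starts.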

Final thoughts

The value poker method can be a good way to ensure the value for each feature is based on the perspective of the business. However, we must not forget that the Product Owner is always ultimately responsible for, and therefore has the final say about, the ordering of the backlog.


Have you tried value poker? Did you find it useful? Please share your experiences in the comments below!


Choosing the right metrics for continuous improvement


I have come to a realisation lately that I’ve left a useful tool lying at the bottom of my agile toolbox without using it as much as I should. This tool is metrics.

Sure, I’ve been measuring velocity and have been using it to forecast delivery. From time to time, I’ve also dug deep into Excel exports from Jira to support some of my observations (“On average, we need two sprints to complete every story”).

What I haven’t been doing though is picking a good, basic set of metrics and tracking them over time as part of our continuous improvement.

Therefore, writing this blog post serves as much as a way to gather my own thoughts about metrics as sharing the result on the blog.

What’s the point of metrics?

“Scrum is founded on empirical process control theory” says the Scrum Guide. All this grand statement really means is that rather than just purely guessing, we base our decisions on observations. We experiment and we learn.

To help with these observations and to see how we’re improving over time (or not!), we measure things. Metrics are useful for several different purposes:

  • Forecasting – Based on progress so far, what are we likely to finish by when? Frequently updating our forecasts and keeping them realistic allows us to adjust our plans by removing, adding or reordering backlog items as needed.
  • Product performance – How well is our product doing? Are the features we’re rolling out making the impact we intended them to? If not, let’s experiment and see how we can improve!
  • Continuous improvement – Where do we see opportunities to improve our way of working? And once we’ve made changes, are we seeing the improvement we were hoping for?
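As a quick aside on the forecasting bullet, the arithmetic is simple enough to sketch. The backlog size and velocities below are invented:

```python
import math

# Forecast: how many sprints until the remaining backlog is done,
# based on the average velocity of recent sprints.

def sprints_remaining(backlog_points, recent_velocities):
    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    return math.ceil(backlog_points / avg_velocity)

print(sprints_remaining(120, [22, 18, 20]))  # average velocity 20 -> 6 sprints
```

Re-running this every sprint with the latest velocities is what keeps the forecast honest as the backlog is reordered, grown or trimmed.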

In this article, I will be primarily looking at the third point.

How do we make good use of metrics as part of our continuous improvement?

1. Pick metrics that matter and are within the team’s control

Let’s get a couple of obvious but important points out of the way first.

Firstly, for a metric to be useful as a tool for continuous improvement, it needs to be within the team’s power to affect it. There is no point tracking something where nothing we do will make any difference. The exception, of course, is if having this data can help us convince someone outside of the team why something needs to change.

Secondly, there needs to be a direction we want the metric to move in, up or down (or perhaps we want it to hold steady). Making the metric move the right way must have a positive impact. Otherwise, what’s the use of the metric?

For example, “Number of lines of code added in sprint” is rarely a useful metric. There is no good or bad direction for this metric to move. A high value could either indicate that we have done a lot of work or that we’ve written bad code with a lot of duplication. As long as the application does what it needs to do, the number of lines of code going down would be a good thing!

2. Don’t impose the metrics on the team

Good metrics trigger discussions within the team. Together, the team members identify problems or opportunities and decide how to address them. For this to happen, the metrics need to be understood and accepted by the team.

Therefore, when identifying metrics, ask the team. Explain what the purpose of metrics is and let them decide what to measure. This will make sure the metrics are genuinely useful, rather than some box ticking exercise that keeps the Scrum Master busy.

3. Make the metrics visible

For the metrics to have the desired effect, they need to be visible to the team and easy to interpret. A simple line or bar chart on a whiteboard is likely to have a much bigger effect than a spreadsheet stored on a wiki or network drive somewhere.

Also, make sure to not just show absolute numbers. A line chart going up or down, comparing this sprint’s measurement to the previous sprints will give a much better understanding of whether we’re going in the right direction.

4. Don’t measure too many things

It is easy to get carried away and start tracking a lot of things. Particularly if we’ve got a shiny tool to do it for us or a Scrum Master who loves Microsoft Excel. However, if we’ve got 14 metrics moving in the right direction, we might pay less attention to the 15th. That’s a real shame if that’s the one that affects our bottom line. Therefore, settling for a small number of metrics makes clear what’s important.

Rather than trying to identify everything that we can measure, we should agree some basic metrics to make sure we’re doing alright in the areas that are important to us. Also, we need to make sure that these metrics are cheap and easy to collect. It needs to be possible to collect them at least once per sprint. Otherwise, the feedback loops will be too long and we won’t be able to do anything before it is too late.

5. … but have more than one metric

So, just a few metrics are better than a lot of them; does that mean using just one single metric is even better? Probably not.

Having additional metrics, such as adding one to measure quality (e.g. the number of bugs) can help us make sure we don’t make the wrong trade-offs without realising.

If looking at one metric in isolation, chances are we will be able to improve it quite easily but at what cost?

6. Add further metrics when you need them

Rather than trying to identify up front everything that may ever become useful, start with a small set of metrics. Then add more when needed.

For example, let’s say that we’ve concluded that our infrequent releases are preventing us from delivering value to the users as quickly as we could. Let’s track the number of days between releases (or even releases per day!).

On the other hand, if we’re already releasing frequently – or have bigger problems to address – let’s not bother with this metric.

7. Don’t encourage people to game the metrics

If our metrics start being used in ways they weren’t intended, they will lose their reliability.

One such unintended way is if they start being used to measure performance, e.g. by comparing teams to each other. Another is when management requests that the team improve the metrics, for example by linking a bonus to reaching a high velocity. Or, indeed, explaining in no uncertain terms what might happen if they don’t!

The quickest and easiest way to increase velocity is to increase the estimates. If people get a bonus for making bigger estimates, how can we possibly trust our velocity from then on?

8. Measure to improve the system, not the performance of individuals

Even worse than team performance metrics are individual metrics and goals. For instance, let’s say we create a leader board for who on the team completes the most backlog items. Maybe this information is even fed into their performance reviews.

The problem with this approach is that it moves the focus away from the team doing their work together as effectively as possible. Each person will feel the need to optimise their work. We get sub-optimisation at the cost of the team’s total performance. Why spend time solving the complicated but important problems that will deliver the most value? It’s much quicker and easier to just do the simpler items in the sprint.

Some possible metrics to choose from

With all the above in mind, the following list may be a starting point to pick a few metrics from.

  • Productivity – Are the changes we’re making to our ways of working having a positive effect? (And, obviously, forecasting.) Possible metrics: release burn-up / burn-down; velocity (or the number of items completed per sprint if using #noestimates); started vs finished items per day; cycle time in days; cumulative flow diagram.
  • Sustainability – Can we keep working as we are for a long period of time? Possible metrics: team happiness, such as a Spotify-style team health check or something more lightweight.
  • Quality – Are we introducing a lot of bugs? Might we be taking shortcuts? Possible metrics: total number of open bugs; number of bugs created / resolved during the sprint.
  • Value – Velocity is one thing, but do we spend our efforts on the right things, making a difference? Possible metrics: value delivered (measure impact or ask the PO / customer); focus factor (how big a proportion of the points we complete are part of our big goal and how much is maintenance etc.).
  • Responsiveness – Being agile means we can act quickly. Possible metrics: lead time (how long it takes from when we have an idea / create a user story until it is completed).
  • Predictability – If our throughput varies a lot, forecasting will be hard, as will identifying trends to judge the effectiveness of our continuous improvement. Possible metrics: unplanned work (items brought in during the sprint); velocity variation.

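As an example of how cheap these metrics can be to collect, velocity variation boils down to a coefficient of variation. A sketch with sample figures:

```python
import statistics

# Predictability: how much does velocity vary between sprints?
# Coefficient of variation = sample standard deviation / mean.
# Lower means a steadier, more forecastable team.

def velocity_variation(velocities):
    return statistics.stdev(velocities) / statistics.mean(velocities)

steady = [20, 21, 19, 20, 20]
erratic = [8, 35, 14, 30, 13]
print(round(velocity_variation(steady), 2))   # ~0.04
print(round(velocity_variation(erratic), 2))  # ~0.59
```

Both teams above average 20 points per sprint, but only the first one can be forecast with any confidence.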
 

A useful source while writing this article was the presentation Data Driven Coaching by Troy Magennis – well worth a look!


What metrics do you use and what impact have they had on how you do things? Share your experience in the comments below!