What can we do with all this history?

In Kanban, behaviour changing data is key. We will visualise absolutely everything we do, and track it diligently. We do this so that we can use real-world examples to enable us to give accurate, tangible forecasts for our projects, and identify bottlenecks and inefficiencies so we can continuously improve.

Here at Red Badger, we have, for the past few years, recommended Kanban to our clients. Some take to it more readily than others. While we had previously transitioned businesses from the more traditional Agile model of Scrum into Kanban, Fortnum & Mason was the first project where we used Kanban from day one. Their confidence in our expertise allowed us to build a strong foundation for a project that is still going strong, over two years after it first began.

With those two years comes a hell of a lot of data. We have released code into production 317 times since the project began, and in the last year alone we have shipped over 300 user stories. So your first thought would be that our forecasts must now be alarmingly accurate, right?

Wrong. Because maths is hard.

As it turns out, too much data can be just as worthless as too little, so how do you figure out where to draw the line?

Kanban: The Basics

For the uninitiated, Kanban is an Agile framework focused on the “flow” of work. Rather than prescribing the sprints and ceremonies used in the more traditional Scrum methodology, Kanban is all about facilitating the team to reach a cadence that allows them to deliver continuously and consistently.

There are many ways to forecast within the Kanban framework, but here at Red Badger we utilise Little’s Law, illustrated below.

[Figure: Little’s Law — Lead Time = WIP ÷ Throughput]

This formula can also be switched around to allow you to calculate one of the three variables using your historical data, thus providing a forecast that often proves much more accurate than the estimation process of Scrum.
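To make that concrete, here is a minimal sketch of Little’s Law and its rearrangements. The function names and the example numbers are illustrative only, not figures from the project:

```python
# Little's Law: Lead Time = WIP / Throughput.
# Rearranging it lets us forecast any one of the three variables
# from historical values of the other two.

def lead_time(wip, throughput):
    """Average lead time (weeks) given WIP (stories in progress)
    and throughput (stories finished per week)."""
    return wip / throughput

def throughput(wip, lead_time):
    """Average throughput (stories/week) given WIP and lead time."""
    return wip / lead_time

def wip(throughput, lead_time):
    """Average work in progress given throughput and lead time."""
    return throughput * lead_time

# Hypothetical example: 12 stories in progress, shipping 4 a week.
print(lead_time(12, 4))  # → 3.0 weeks, on average, per story
```

The appeal over Scrum-style estimation is that every input is observed, not guessed: you count what is in progress and what was delivered, and the forecast falls out of the arithmetic.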

How Much is Too Much?

It’s never going to be clear when you first start, but your data will always let you know when it is becoming less useful. The most common way this manifests is when a notable variable change does not result in a shift in your averages. For instance, a change in team size should, after a couple of weeks, start showing an effect on your average Throughput and Lead Time. However, after reducing the team from 6 devs to 4, we noticed that even after 6 weeks, our Throughput remained steady.

It quickly became clear that the sheer volume of data meant that we had hit an average that was no longer affected by outliers. This is covered within the Central Limit Theorem, which states:

“given certain conditions, the arithmetic mean of a sufficiently large number of iterates of independent random variables, each with a well-defined expected value and well-defined variance, will be approximately normally distributed, regardless of the underlying distribution.”

As a consequence of this, we noticed difficulty in forecasting using our data in its current form. It’s always a bad sign when you run a forecast past your team and they laugh at it because it’s so ridiculous. Always heed the laughter of a developer.
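The effect is easy to demonstrate with made-up numbers (these are illustrative, not our real figures): after enough history, an all-time average barely registers a genuine change in pace, while an average over just the recent weeks tells the truth.

```python
# Hypothetical weekly throughput: 100 stable weeks at 10 stories/week,
# then a team-size reduction drops the real pace to 6 stories/week.
weekly = [10] * 100 + [6] * 6

cumulative_avg = sum(weekly) / len(weekly)   # average over all history
recent_avg = sum(weekly[-6:]) / 6            # average over the last 6 weeks

print(round(cumulative_avg, 2))  # → 9.77 — barely moved after 6 slow weeks
print(recent_avg)                # → 6.0 — the actual new pace
```

A forecast built from the 9.77 figure would promise nearly twice the delivery rate the team can actually sustain, which is exactly the kind of forecast that gets laughed out of the room.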

Making the Most of History

You have all that data, but it isn’t helping you. So what can you do?

  • Create a moving average - The reason your averages aren’t changing is because there is simply too much data for several outlier weeks to affect it. So instead, make the window at which you calculate your averages narrower. Take a ten week period (or 8, or 4, it’s definitely worth mucking around with different lengths of time), and base your averages off that. Keep the period the same, always working back the same number of weeks from your current data point. This allows those big variable changes to reflect in your data far more quickly, giving you a better overall view of the world.
  • Compartmentalise - split your project into milestones and create an average from each section. Work backwards from the single task level up to the “epic” level. This creates a less granular, but still well-defined, data-point average for each piece of functionality you have delivered. This works well for projects with clearly defined goals or milestones and a team size/skillset that remains constant, but perhaps less so where the flow of work is more business as usual.
  • Start from scratch - This should only be done in the most dire of circumstances. 9 times out of 10 all your data needs is a little love and attention. Occasionally, however, the data you have may be representing your project so badly that you should archive it for posterity, and start from scratch. You’ll have those same early project wobbles that affect your data, but sometimes a full refresh is exactly what you need to bring the project back to a meaningful place.
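The first option, a trailing moving average, can be sketched in a few lines. The window length and the throughput numbers here are hypothetical; as the bullet above says, it is worth experimenting with different window sizes:

```python
from collections import deque

def moving_average(values, window=10):
    """Trailing moving average: each output point averages only the
    last `window` data points, so old history stops dominating."""
    recent = deque(maxlen=window)  # automatically drops the oldest value
    averages = []
    for v in values:
        recent.append(v)
        averages.append(sum(recent) / len(recent))
    return averages

# Hypothetical weekly throughput, with a slowdown halfway through:
weekly_throughput = [10, 9, 11, 10, 4, 5, 6, 5, 6, 5]
print(moving_average(weekly_throughput, window=4)[-1])  # → 5.5
```

Because the window always covers the same number of recent weeks, a team-size change starts pulling the average towards the new reality within a single window length, rather than being drowned out by months of older data.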

The list above is by no means exhaustive, and by and large the main thing to remember is that as a Project Manager, what you track and how you track it will constantly evolve and change. There is no such thing as a “perfect” process, only one that is well-tended to and respected by the team using it.

Also, maths is hard.

You can only get good process when you've got good people. Come be a good person with us by checking out our vacancies!
