A real-life tale about using numbers, data, and charts as a helpful tool for managers.
By Diego Sierra, COO at Digital Beings
We’ve all been there. Be it from your client or from your boss, sooner or later you face that awkward statement: “I don’t think we are doing a great job” or “the team is not as productive as it should be”. You feel like your team is working fine (or not), and you feel it in your gut. That feeling is probably right. If there’s one thing I’ve learned after many years in IT, it’s that there’s nothing better than what your stomach tells you when it comes to figuring out whether you are on the right path. But wouldn’t it be great if you could demonstrate that empirically, backed by rock-solid data? Of course it would! And that’s where metrics come in as the ultimate answer to all of those unwanted statements.
Tons of articles and papers have been written about metrics, so we won’t just list metrics, explain their purpose, and show how to calculate them. Instead, we will share a real-life experience from an actual project and how metrics helped us manage its execution in a controlled way, resulting in better overall performance and a more predictable project.
Setting the scenario
The project goal was to rebuild the web platform that supports the business of an online tour operator: a company that sells trips to multiple destinations, leveraging social experiences to create a community of loyal travelers who get together to organize adventures all across the world.
When setting up the project, we defined the KPIs we wanted to track to measure our team’s performance. This step is often skipped in the early stages of a project, when all your energy is focused on getting things rolling as fast as possible, and that’s probably the first mistake most Project Managers make. You need to make sure the way you will measure productivity is set from the get-go, so that you can:
- Define the metrics to use and, just as important, which ones NOT to use.
- Start collecting data from day one.
Just by using a Scrum-driven task management tool (JIRA, Redmine, or Trello, to name a few), you gain access to standard agile metrics like the Sprint Burndown chart, Velocity, or the Cumulative Flow diagram. We won’t focus on those out-of-the-box metrics; instead, we will elaborate on two tailor-made metrics we defined for specific things we wanted to track:
- Estimate Precision –> determine how accurate our team’s estimates are.
- Learning Curve –> calculate when a new member of the team is fully ramped up.
This was especially important in our case because the client was extremely date-oriented. As a manager, I wanted to make sure the schedules we committed to as a team were realistic and achievable, and to keep an eye on the trend of estimate gaps to ensure our team was getting better at predictability. We came up with a report that shows the distribution of tickets per developer (John, Mary, Peter) on a scale of -20% to +20% deviation between the original estimate and the actual effort per task.
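The Estimate Precision calculation behind that report can be sketched in a few lines. This is a minimal illustration, not the project’s actual tooling, and the developer names and (estimate, actual) figures are made up for the example:

```python
# Sketch of the Estimate Precision metric: percent deviation between the
# original estimate and the actual effort per ticket. Data is illustrative.

def deviation(estimate_hours, actual_hours):
    """Percent deviation of actual effort vs. the original estimate.
    Positive means the ticket took longer than estimated."""
    return (actual_hours - estimate_hours) / estimate_hours * 100

# Hypothetical (estimate, actual) pairs per closed ticket, per developer
tickets = {
    "John":  [(8, 10), (5, 7), (13, 16)],
    "Mary":  [(3, 4), (8, 11), (5, 6)],
    "Peter": [(5, 5), (8, 7), (13, 13)],
}

for dev, pairs in tickets.items():
    per_ticket = [deviation(e, a) for e, a in pairs]
    print(f"{dev}: avg deviation {sum(per_ticket) / len(per_ticket):+.1f}%")

# Team-wide average deviation across all tickets
all_devs = [deviation(e, a) for pairs in tickets.values() for e, a in pairs]
print(f"Team: {sum(all_devs) / len(all_devs):+.1f}%")
```

Bucketing each ticket’s deviation into bands (e.g. -20% to +20% in 10% steps) then gives the distribution chart described above.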
Peter was remarkably predictable: in most cases he was hitting his estimates or even beating them. Caution alert: it’s not necessarily good to have an engineer who regularly delivers work under the original estimate; it may be a sign of someone buffering a lot and being too conservative.
At this point, the average deviation for the whole team was +22%.
Mary and John had some catching up to do compared to Peter (our baseline), so we took the obvious action: have Peter double-check Mary’s and John’s estimates. This ultimately made us much better overall, reducing the average deviation to +13%, which was pretty good for our context.
Another challenge for us was reducing ramp-up time to a minimum. Even though we try to avoid rotating people across projects, it happens more often than we’d like. In this case, with a team of 5 developers assigned for a period of around 6 months, chances were that sooner or later we would have to bring new people onto the team. So we defined a rule: if a new team member didn’t reach the pace of their peers within 1 Sprint, we would simply re-assign them to something else and try to bring back the ramped-up person (e.g. if the original team member had been sent to help put out a fire somewhere else). Again, we needed to measure this as precisely as possible and define a way to measure learning-curve time. Tickets were classified on a complexity scale from 1 to 5 and, as early as the very first Sprint, we defined the baseline using the weighted number of tickets solved per developer (number of tickets closed x ticket complexity). The following chart shows the metric we used to see how “ready” a new dev was in comparison to the rest. You can see that Jack joined the team in Sprint 4, replacing John, and was able to pick things up pretty quickly, ending the Sprint with a weighted value similar to the rest. He even managed to increase his productivity and became one of the most effective members of the team. Good for him!
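The weighted-tickets metric above can be sketched as follows. Again, this is an illustrative snippet with made-up ticket data, not the project’s real numbers:

```python
# Sketch of the learning-curve metric: weighted tickets per developer per
# Sprint, where each closed ticket contributes its complexity (1-5 scale).
from collections import defaultdict

# Hypothetical closed tickets: (developer, sprint, complexity)
closed_tickets = [
    ("John", 1, 3), ("John", 1, 2),
    ("Mary", 1, 4),
    ("Peter", 1, 3), ("Peter", 1, 3),
    ("Jack", 4, 2), ("Jack", 4, 3), ("Jack", 4, 3),
]

def weighted_per_sprint(tickets):
    """Sum of complexity points per (developer, sprint)."""
    totals = defaultdict(int)
    for dev, sprint, complexity in tickets:
        totals[(dev, sprint)] += complexity
    return dict(totals)

totals = weighted_per_sprint(closed_tickets)
# Sprint 1 average across the three original developers = the baseline
baseline = sum(v for (d, s), v in totals.items() if s == 1) / 3
print(f"Jack, Sprint 4: {totals[('Jack', 4)]} (baseline {baseline:.1f})")
```

Comparing a newcomer’s per-Sprint weighted value against the Sprint 1 baseline is what lets you decide, after a single Sprint, whether they have reached the pace of the rest of the team.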
When thinking about metrics, it’s very important to follow a clear management goal and be blunt in determining what you want to accomplish by measuring a certain aspect of the operation. Be it “increase team predictability”, “avoid long learning curves”, or any other ultimate purpose, the data you collect and the reports you produce need to be totally in line with that spirit. You want to avoid what I call the “dashboard disease”: having a nice report full of charts and stats that you really don’t use! Most of the projects I’ve executed needed no more than 3 core metrics to ensure success and close monitoring. So, next time you open your management tool and see all those beautiful (yet unused) trend charts, pies, and bars, take a moment to rethink which of them reflect valuable data directly linked to the most important business goals you are trying to address.