I’d Rather Be Coding: Gathering Metrics

Monday, July 1, 2013

The fact that so few development teams want to gather metrics on what they are doing has always baffled me. The simple answer is that they’d “rather be coding.”

By Nate McKie, CTO, Asynchrony

For a long time, I’ve wanted to write about activities that are a critical part of Agile and Lean but somehow fall by the wayside, even for the most competent development teams. While most teams try to do their best in managing the various tasks associated with software development and project management, sometimes team leads and members need a reminder about what is important to the success of the program.

The fact that so few development teams want to gather metrics on what they are doing has always baffled me. Aren’t these the same people who calculate the most profitable ways to farm gold in World of Warcraft, destroy others at fantasy sports with their predictive algorithms, and optimize their route to work through careful tracking and Monte Carlo simulations? Why wouldn’t they want to analyze every facet of how they perform as a team, to make sure they always make the best choices and don’t waste time and effort?

The simple answer is that they’d “rather be coding.” Gathering metrics takes thought and effort, and ultimately they’ve got deadlines and customer priorities to think about. Does the customer really want them to spend time on metrics anyway? Wouldn’t the customer be angry to see the team crunching numbers instead of adding that next new feature?

The essential problem is this:

1. No team is perfect. There is always room to improve.

2. Improvement brings greater productivity, fewer errors, and/or more predictability (all aspects highly valued by the customer).

3. Without baselines and measurements, there is no way to know whether improvement has actually taken place.

Without metrics, even the best team that really wants to improve is left with “gut feelings” and “well, nothing bad has happened yet” to determine whether a change in their process, tools, or methods has actually added value. I would maintain that this is a real waste of time. If your team is feeling pain and makes a change to deal with that pain, but doesn’t measure anything before and after the change to evaluate it, then they are effectively trying a different rain dance every time the current one doesn’t bring rain. There is no way for them to know whether what they are doing is actually solving the problem, and they are relying on macro-level effects (which could have unrelated causes) to tell them when to change again.

Deep down, teams know this. However, there are several “fear factors” involved with metrics:

  • How do we know what we are measuring is the right metric?  What if we spend a lot of time gathering a measurement and it doesn’t tell us the right information?
  • What if we start getting evaluated by the measurements we create?
  • What if the measurement tells us something we don’t want to hear?

It’s important for teams to follow a process when a measurement is constructed, to make sure that it is not just a waste of time.  There are five important questions that need to be answered every time the team decides to gather a metric:

1. What question are we trying to answer with this measurement? Decide what it is you need to know. Are you trying to determine if your quality is slipping? Is the concern how long features take to develop? Has the customer observed slow response times? It’s helpful to relate the problem back to a risk to your project, which usually comes down to either time (meeting the deadline) or money (exceeding the budget). Getting down to the root cause of the issue helps clarify what needs to be observed.

2. How will this measurement be gathered? How will the team actually start collecting this data? Will it need to be done manually, or is there a tool that can do it for you? This is also the time to determine whether the measurement is actually worth doing; if you need to spend two weeks building a tracking system to effectively obtain the data, perhaps the benefit of having it is not great enough to overcome that investment. In that case, try to find a simpler method that gathers lower-fidelity data that can still be useful.

3. How will the gathered data be stored? It’s not enough to gather the data; in almost every case, what you care about is the trend of the data, the way it changes, and not the value on a particular day. How will you reliably store this information so that you can access it later when you want to analyze it? Hopefully, you have a tool that takes care of this for you, but if you don’t, even putting it on a chart on the wall where everyone can see it can add value.

4. How will the data answer your question? You should have an understanding - before you start - of how and when you will analyze the data to answer the original question.  For a trend, determine what trend direction is “good” or “bad.” Decide when the data will be reviewed, and by whom (again, maybe it’s an automated process that does this review). Should action be taken when the data hits a certain “bad” level (call a meeting, send an email, break a build)?  Should you have times when you evaluate whether or not the measurement still makes sense to gather?

5. How do we know these decisions are being acted on? It’s not enough to make the decisions; you need to follow up to make sure the actions decided on are being executed. Ask someone outside the project to hold you accountable. Announce what you’re doing to a group of peers who will expect to see results. Make it a defined part of your agenda at meetings and demos. Otherwise, you really are wasting your time.

(Coincidentally, these five are the same questions that CMMI would have you ask yourselves in the Measurement and Analysis process area.  So, it’s not just coming from me.)
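To make the five questions concrete, here is a minimal sketch in Python, assuming the team’s question is “is our defect count rising?” and that a count-per-day CSV file is an acceptable store. The file layout, the alert threshold, and the baseline rule are all hypothetical choices made for illustration, not a prescribed method:

```python
import csv
import datetime
import statistics
from pathlib import Path

# Hypothetical "bad" level the team agreed on in advance (question 4).
ALERT_THRESHOLD = 5

def record_measurement(path: Path, day: datetime.date, defect_count: int) -> None:
    """Questions 2 and 3: gather the data point and store it as a dated row."""
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "defects"])
        writer.writerow([day.isoformat(), defect_count])

def evaluate_trend(path: Path) -> dict:
    """Question 4: compare the latest point against the historical baseline,
    and flag when the agreed "bad" level is crossed -- the trigger for the
    follow-up actions of question 5."""
    with path.open() as f:
        counts = [int(row["defects"]) for row in csv.DictReader(f)]
    latest = counts[-1]
    baseline = statistics.mean(counts[:-1]) if len(counts) > 1 else latest
    return {"latest": latest, "baseline": baseline, "alert": latest > ALERT_THRESHOLD}
```

A nightly job might call `record_measurement` with the day’s defect count and notify the team whenever `evaluate_trend` reports an alert. The point of the sketch is that the question, the storage, the trend rule, and the trigger are all decided before the first data point is collected.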


Getting good measurements isn’t easy. You need to work to plan the measurement, to take the measurement, and to analyze and report the results. However, without measurements, your team is truly “flying blind,” and without putting thought into the measurements to be performed, you’re just relying on “blind faith.” Take the time to gather metrics the way you take time to design your code, and the insight they bring will pay dividends.


Nate McKie is cofounder and CTO of Asynchrony, an IT consulting firm in St. Louis, Missouri. In his role, Nate drives the technical aspects of the company and teaches agile techniques to clients. For more information on related topics, visit www.asynchrony.com.
