A few words about meaningful metrics.
When I was implementing a change in working practices a while back in my development team, my boss at the time said, “Well, okay, but I want you to show me that your changes are making a difference.” What’s the metric for better software? I knew all about the dangers of measuring things like lines of code; that wouldn’t do anyone any good.
So I opted for something low-tech. I asked people. I asked team members and wider stakeholders once a fortnight: “How are we doing?” I broke it down into ten questions, varying depending on the person’s role, and over time I could chart our progress. We went from around 30% satisfaction to 80% in four months and then plateaued. The plateau gnawed away at me, but by that time interest had moved on; the team was doing a great job and there were more pressing problems elsewhere.
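For what it’s worth, the tally itself is trivial to automate. Here’s a minimal sketch in Python; the respondents, the scores, and the 0–4 scale are my own illustrative assumptions, not the actual survey:

```python
# A minimal sketch of the fortnightly survey tally described above.
# The respondents and scores are hypothetical; assume each answer is a
# score from 0 (very unhappy) to 4 (very happy).

from statistics import mean

# One survey round: each respondent answers ten questions.
responses = {
    "team member A": [3, 2, 4, 3, 3, 2, 3, 4, 2, 3],
    "team member B": [2, 3, 3, 2, 4, 3, 2, 3, 3, 2],
    "stakeholder X": [1, 2, 2, 3, 2, 1, 2, 2, 3, 2],
}

def satisfaction(round_responses):
    """Average all scores and express them as a percentage of the maximum."""
    all_scores = [s for scores in round_responses.values() for s in scores]
    return 100 * mean(all_scores) / 4  # 4 is the top score per question

print(f"This fortnight: {satisfaction(responses):.0f}% satisfaction")
```

Run one of these per fortnight and you have the series to chart; the interesting part was never the arithmetic, it was asking the questions.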
I subsequently spoke to a programme manager who used a similar, but simpler, technique. He asked: “How would you score us out of five this month?” And then the follow-up question, if it wasn’t full marks… “What would we need to do to make it five?”
I much prefer this version: it’s simpler, more direct, and clearer about what to do with the results. It reminds me of the guerrilla approach to lean product development: putting a mock-up in front of someone in Starbucks and (eventually) asking “Would you buy this product? What would it take for you to buy it?”
I’ve used lots of other metrics for different things relating to performance, output and productivity: bugs resolved per week, lines of code per class, and so on. But I don’t think I’ve ever found any for this kind of thing that are so simple and that connect the work and the output so effectively.
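To illustrate how mechanical those metrics are, here’s a minimal sketch in Python that counts lines of code per class in a single source file (the file name billing.py is hypothetical). It’s exactly the kind of number that is easy to collect and hard to connect back to the work:

```python
# A minimal sketch of one of those mechanical metrics: lines of code
# per class in a Python source file.

import ast

def lines_per_class(path):
    """Map each class name to the line span of its definition."""
    with open(path) as f:
        tree = ast.parse(f.read())
    return {
        node.name: node.end_lineno - node.lineno + 1
        for node in ast.walk(tree)
        if isinstance(node, ast.ClassDef)
    }

for name, loc in lines_per_class("billing.py").items():
    print(f"{name}: {loc} lines")
```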
I stumbled upon the topic of Software Metrics a couple of days ago, while going through the questions in the ScrumAlliance CSP Application.
One question in particular caught my attention:
“How were the benefits measured and communicated?”
As you said, there are many metrics out there that we could use in order to “measure” the benefits.
But it seems to me that none of them really analyzes the whole system; they tend to look at its sub-parts instead, and that’s why I see them as more or less misleading.
One does not have to dig very deep to find the “right” metric to prove one’s point of view, whatever that might be, good or bad.
The low-tech technique you’ve described above is among my favorite methods as well.
I like it mainly because it is simple and it looks at this complex system in a holistic way.
And really, who knows better than the team members and the stakeholders “how we are doing”?
Still, I find it very difficult to make others (read: upper management) understand and really buy into the outcomes of such a survey. Some regard it as merely anecdotal and think of it as very soft, because it relies on subjective opinions rather than on hard data like the number of defects or test code coverage.
Have you ever had such issues?
If yes, how did you deal with them?
Hi Dan. Yes, I have come across such issues, and I think the trick is to (a) make sure the questions are meaningful, and (b) have the metrics follow from the questions.
Regarding (b), be aware that my low-tech survey followed on directly from a director asking me to demonstrate that the team was improving. If I’d gone in with that survey technique when the director really wanted to improve revenue, then it would have been the wrong tool to apply.
So that takes us back to (a), making sure the questions are meaningful. When we’re asked to measure a change, it’s important that there is a purpose to the measurement, and that we can perceive a difference in the world with and without that change successfully applied. As long as we can perceive a difference, we can describe it and measure it.
As a rule of thumb I’d say software metrics are less important than real-world metrics (if that’s the right thing to call them). Software has to be about something. Brilliantly designed software that doesn’t help the business is less useful than poorly designed software that does.
Someone like an Engineering Director will probably be concerned with software metrics. In that case code coverage and method sizes are perhaps meaningful. But there’s no meaning beyond the engineering department. Those metrics are pointless to the Commercial Director.