Sometimes a piece of analysis work can be very open-ended, or at least there is a danger that it might go on too long. Examples I’ve come across are reviewing some old code, going out to understand a process, implementing a spike, looking at a competitor, etc. It’s not that such activities are inherently bad, nor that they are the wrong activity in this particular instance, but very often they need a good deal of focus.
In “How to Measure Anything”, Douglas Hubbard faces a similar problem when seeking to measure seemingly-intangible things. He says that we cannot measure perfectly, so how do we know how much measurement is sufficient? To help focus things he asks a powerful question:
What is the decision this measurement is supposed to support?
By answering this question we are better able to answer two further questions: (i) How much of an answer is sufficient? And (ii) How might we approach it?
I find Hubbard’s question is valuable not just for measuring, but for all kinds of potentially open-ended assignments like the ones mentioned earlier.
For example, suppose we are about to embark on a new piece of work building on top of some old code, and someone suggests that we first review the old code before launching into the work. We may vary our approach depending on what decision we want to support. If the decision is about whether we should bring in some additional skills to the team, then we would need to look at the code to see if it uses any critical technologies we’re not familiar with. Once we’ve established that, then we’ve done our job. If the decision is about how we should set expectations on delivery dates, then we need to assess the code for its overall maintainability, but only in so far as it will influence our estimation (a simple scale of “good code” to “horrible code” might be sufficient). If we’re expected to get on with the new piece of work regardless, then stopping to look at the old code won’t change anything, and we may as well not do it.
Hubbard also frames his question in slightly different ways, but they all centre on asking what tangible difference we expect our analysis to make. I’ve used the question many times to reduce a potentially lengthy piece of work to something that gets results much more effectively than it might otherwise.