In many development teams I work with there is a “definition of done”. This is a checklist (explicit or implicit) of things that need to have happened before a user story can be marked as done. It might include various kinds of testing, demoing and (ideally) deployment.
But that only works during the development phase, where the output is quite tangible. Much product development also has a discovery phase, which is about early investigation rather than product build. Here, the output is much less tangible—it’s typically insight and understanding. Those things will be presented as tangible artifacts, of course, such as documents, posters or PowerPoint presentations, but what constitutes “good” is less obvious.
The problem also applies to investigation work in the middle of a project. In the past I’ve written about my natural aversion to “investigation cards”, because I feel they’re often used as an excuse to avoid getting into the difficult work; I prefer an iterative approach that tackles discovery work methodically. But in either scenario we need to know when to stop, because we’ll never reach a state of perfect information. We need a “definition of enough” for investigation or discovery work, just as we have a “definition of done” for build work.
I can think of three ways to form a “definition of enough”:
- Timeboxing is by far the most common approach. It’s used both for one-off investigation cards, where a piece of work is timeboxed to a handful of days, and for (larger) discovery phases, which are typically timeboxed to 4-8 weeks in the GDS model. This is simple, and I find in practice it generally works for everyone.
- Using a confidence threshold is more sophisticated. This might be more appropriate for a piece of work within a discovery phase. Here, we say “We are very unsure about whether or not hypothesis X is true; we will work until we are 80% certain one way or the other.” The 80% figure is a threshold we have chosen: it’s not total certainty, but we have decided it gives us enough confidence. “Hypothesis X” may be anything appropriate, such as the wording of a customer value proposition, or a choice of technology. (There’s a sketch of how this stopping rule might work after this list.)
- We could also work just to make a decision. This might be seen as a variant of using a confidence threshold, but not always. It requires being clear about what decision we have to make, and then working until we are confident enough to make it. Identifying the key decision is the thinking behind Douglas Hubbard’s approach to measurement.
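To make the confidence threshold concrete, here is a minimal sketch of the stopping rule in Python. It assumes a simple Bayesian update after each piece of evidence (say, a customer interview); the prior, the likelihood figures and the evidence sequence are all illustrative assumptions, not part of any team’s real process:

```python
# A minimal sketch of a confidence-threshold stopping rule, using Bayes' rule.
# All the numbers below are illustrative assumptions, not recommendations.

THRESHOLD = 0.80          # stop when we're 80% certain one way or the other
P_SUPPORT_IF_TRUE = 0.7   # assumed chance evidence supports the hypothesis if it's true
P_SUPPORT_IF_FALSE = 0.3  # assumed chance evidence supports it anyway if it's false

def update(prior: float, supports: bool) -> float:
    """Update our belief in hypothesis X after one piece of evidence."""
    p_e_true = P_SUPPORT_IF_TRUE if supports else 1 - P_SUPPORT_IF_TRUE
    p_e_false = P_SUPPORT_IF_FALSE if supports else 1 - P_SUPPORT_IF_FALSE
    return (p_e_true * prior) / (p_e_true * prior + p_e_false * (1 - prior))

belief = 0.5  # start genuinely unsure
evidence = [True, True, False, True, True]  # e.g. outcomes of five customer interviews

for i, supports in enumerate(evidence, start=1):
    belief = update(belief, supports)
    print(f"After item {i}: P(hypothesis X) = {belief:.2f}")
    if belief >= THRESHOLD or belief <= 1 - THRESHOLD:
        print("Definition of enough reached: stop investigating.")
        break
```

In practice the likelihood figures would come from judgment rather than data, but the shape of the rule is the point: every new piece of evidence moves our belief, and we stop as soon as it crosses the threshold in either direction.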
For any particular piece of work we still need to fill in the details: How long is our timebox? What is our confidence threshold? What decision do we have to make? But hopefully this provides a useful framework to start with.