When we shift from talking about so-called “risks” to talking about uncertainty, our conversations become more realistic. It can also offer us more control over something that might otherwise seem out of our reach. In this blog post I want to make explicit the distinction between so-called “risks” and uncertainty.
The title of this post is courtesy of Johanna Rothman, who tweeted the idea a couple of weeks ago. It’s true she was summarising what I was saying at Agile Cambridge, but she captured it more neatly than I did. Not only did her tweet get a bit of retweet love, but I’ve also spoken to others who saw an earlier, similar presentation and said that this was the primary lesson they took away.
But what does this distinction really mean?
One way I’ve found to think about it is by answering a slightly different question. When I stood up and spoke about this, someone in the group asked, “Isn’t this about probabilities?” Well, yes, it is. But there are two very different ways to use probabilities, and those two ways distinguish so-called “risks” from uncertainties.
The first way to use probabilities is to talk about “risk events”. This is the typical kind of thing that goes into a risk register. For example, a company selling a product which uses personal data might say, “A security breach could cause severe brand damage.” If this were put into a risk register, people would typically ask themselves “What is the likelihood of this?” and “How much impact would it have?”
The “likelihood” question is one of probability. It’s usual to assign a percentage to the security breach taking place. The security breach is the “risk event”. They would then usually also assign a value to the consequential brand damage (the impact).
For the sake of argument, let’s say they assign the values 4% and £1m. In other words, there is a 4% chance of suffering £1m of brand damage (e.g. lost sales) due to a security breach.
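To make that concrete, here is a minimal sketch in Python of the arithmetic. Multiplying likelihood by impact into an “expected loss” is a common risk-register practice rather than anything specific to our example, and the figures are the hypothetical ones above:

```python
# A minimal sketch of risk-register arithmetic, using the
# hypothetical figures from the running example.
likelihood = 0.04      # 4% chance of the risk event occurring
impact = 1_000_000     # £1m of brand damage if it does

# A common register practice: combine the two into an "expected loss".
expected_loss = likelihood * impact
print(f"Expected loss: £{expected_loss:,.0f}")  # £40,000
```

Notice that the whole calculation hangs off one very specific event.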
This is the traditional way of working with probabilities, especially if you’re using a risk register. Perhaps you can see the problem. There may be a 4% chance of suffering £1m of damage. But that’s for a very specific level of security breach, one that incurs £1m of damage. What about a very minor breach? Or a severe-but-not-catastrophic breach? Or one that occurs in an isolated area of the product? Those other events are excluded, because we’re only looking at the probability of a specific event.
(And as an aside, on the subject of that £1m… is that exactly £1m? Over £1m? Up to £1m? £1m ± £100,000?)
The second way of using probability is to use a probability distribution. This is a curve that represents a range of outcomes rather than a single event, and it allows us to express that there may be a bias towards one end of the spectrum or the other. In our running example we can say things like “there’s a 4% chance of a breach costing £1m or more, but there’s a 30% chance of a breach costing anything up to £200,000”.
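To show what this buys us in practice, here is a minimal sketch that fits a curve to those two statements and can then answer questions about any range of breach costs. The lognormal shape is my assumption purely for illustration (the argument doesn’t depend on which distribution you pick), and it uses only the Python standard library:

```python
# A minimal sketch: fit a lognormal curve to two stated beliefs
# about breach costs, then query any range. The lognormal shape
# is an illustrative assumption, not part of the original argument.
from math import log
from statistics import NormalDist

std_normal = NormalDist()

# Beliefs from the running example:
#   P(cost >= £1m)   = 4%
#   P(cost <= £200k) = 30%
z_hi = std_normal.inv_cdf(1 - 0.04)  # z-score of the 96th percentile
z_lo = std_normal.inv_cdf(0.30)      # z-score of the 30th percentile

# Solve for the parameters of ln(cost) ~ Normal(mu, sigma).
sigma = (log(1_000_000) - log(200_000)) / (z_hi - z_lo)
mu = log(200_000) - z_lo * sigma

def prob_cost_at_most(x: float) -> float:
    """Probability that a breach costs up to x pounds."""
    return std_normal.cdf((log(x) - mu) / sigma)

print(f"P(cost >= £1m):   {1 - prob_cost_at_most(1_000_000):.0%}")  # 4%, by construction
print(f"P(cost <= £200k): {prob_cost_at_most(200_000):.0%}")        # 30%, by construction
print(f"P(cost <= £500k): {prob_cost_at_most(500_000):.0%}")        # ~78%: a new question
```

The two inputs are recovered by construction; the payoff is the third line, a question the single-event view simply couldn’t answer.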
By thinking about a probability distribution (rather than a single percentage) we are looking at something much more fluid, ranging from one end of a spectrum to the other. This is the shift away from a risk (a single event) and towards broader uncertainty.
Not only is this more realistic, but it allows us to think about the problem in a more rounded way. In our running example we shift away from worrying about the one mysterious catastrophic thing that will lead to £1m of losses, and instead start to understand that security is a wide-ranging issue influenced by many different factors, from the very small to the very large. This can open up more points of control.
When I spoke to Johanna about this she said she visualised the hurricane-shaped cone of uncertainty. The cone of uncertainty says that project estimates may be off by a factor of perhaps four in the early stages, but that the range narrows as the team gets more hands-on experience with the development and the problem domain. The width of the cone is the range of uncertainty, and it shrinks over time because we bring things more under control as we gain experience with the project.
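As a rough illustration, here is how a single-point estimate widens and narrows along the cone. The 12-month figure is invented for the example, and the narrowing factors are the classic textbook ones rather than anything Johanna or I measured:

```python
# A minimal sketch of the cone of uncertainty. The stages and
# narrowing factors are illustrative, loosely following the
# classic published cone; the 12-month estimate is hypothetical.
STAGES = [
    ("Initial concept",       4.0),   # estimates may be off by 4x either way
    ("Approved definition",   2.0),
    ("Requirements complete", 1.5),
    ("Design complete",       1.25),
]

nominal_months = 12  # hypothetical single-point estimate

for stage, factor in STAGES:
    low, high = nominal_months / factor, nominal_months * factor
    print(f"{stage:<22} {low:>5.1f} to {high:>5.1f} months")
```

Each line is a range, not a number; as the cone narrows, the project doesn’t become less risky so much as less uncertain.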
So that is the difference between so-called “risks” and uncertainty, as seen through the lens of probability. When we make the shift to uncertainty we see things as more fluid and more realistic, and that can offer us more points of control.
You are very kind to credit me. You actually said the words! I do like this idea that the further out the risk is, the less we know to manage/mitigate it. We still need to be aware of it, but it’s more difficult to manage it. If we think of the “known unknowns” and the “unknown unknowns” as in the Cynefin model, we might have a better idea about what to do about risks.