by Matthew Leitch (www.WorkingInUncertainty.co.uk) and Nik Silver, August 2016
Some people deal with risk and uncertainty in their lives better than others. Survey evidence suggests only a weak correlation between good judgement in one kind of risky situation and good judgement in another kind, but there is still a correlation. In this article we will explore how people do it best. We will call people who behave like this “risk experts”.
- Introductory story
- Some benefits of risk expertise
- A general skill or a trait?
- Illustrative scenarios through a management cycle
- The risk expert’s mindset
- What could you do differently?
In the world of management consultancy, selling services and delivering successfully on those sales is fraught with uncertainty. Attempts to make a sale don’t always come off, and there are many pitfalls along the way to delivering something that helps the client without too much pain.
One of the authors used to work for a partner at a leading consultancy who was excellent at following up sales leads with clients. Because this partner was cautious, his colleagues assumed he was a negative, poor salesman, yet he hardly ever failed to get work from these leads. Additionally, the partner never got his people into a project that turned into a nightmare. The client would say what they wanted and this partner would say “Mmmm, the trouble is….”, then start talking them through various things that might happen or be discovered once work started. After a few minutes of this, clients were happy to agree to his counter-suggestions and clearly felt they were dealing with someone who knew what he was talking about and could be trusted.
You might think that this would be standard behaviour for a leading firm but it was not. Other partners would often say “yes” to anything a client asked for, with results that ranged from not getting the work, to getting work that was immensely stressful, to getting work but having a client who would not pay because the project had turned out to be useless.
This partner was a risk expert in following up sales leads. He regularly encountered a situation with great uncertainty (in the sale and in the delivery) and got results that were superior to those of his colleagues. Looking back, we can see he did this by:
- thinking ahead and taking action now to avoid nasty problems later;
- being open with his clients about potential problems, so they trusted him more;
- being open with his clients about potential problems so they were more prepared for them later; and
- proposing alternative plans that dealt with the inevitable uncertainties more effectively.
This is one kind of uncertain situation and there are many others. Developing risk expertise is beneficial for most people so, after describing some of the specific benefits, we will go through a sequence of situations that cover a management cycle.
Here is how a risk expert might benefit in particular situations. As we mentioned at the start, few people are expert in all situations, but you probably know one or two people who seem to enjoy many of these advantages.
|Situation|Advantages for risk expert|
|---|---|
|Unexpected change of circumstances that makes a favourable course of action easier or more desirable than usual.| |
|Unexpected turn of events that makes a planned or desired course of action more difficult in some way.| |
|Receipt of, or possession of, incomplete or inconsistent information.| |
|Receipt of information which is subsequently found to be incorrect or misleading.| |
|An apparently stable situation.| |
|In competitive situations, such as games and negotiations.| |
You might use some of the following words for a person who consistently enjoys the advantages in the table above: lucky, resourceful, entrepreneurial, charmed, wily, and so on.
Some of these words suggest forces at work which are outside the control of the individual, and which one is therefore helpless to influence. But we believe that to a significant degree these are skills which can be learned.
What we might see as bold or entrepreneurial behaviour will sometimes be the result of knowing how to do ambitious things in a relatively safe way. What looks like luck may be the result of skilfully stacking the odds in your favour. What looks like resourcefulness may just be the result of listening to and acting on early feedback.
We suspect that most people who are experts in dealing with uncertainty have that ability only in particular types of situation – probably the ones they have most experience with. However, there may be others whose expertise extends over so many types of situation that they seem to have an almost complete mastery of risk. In Matthew Leitch’s scenario-based tests of handling uncertainty (try them here: http://www.workinginuncertainty.co.uk/theme_individual.shtml), a few people seem persistently good and a few others persistently bad.
To explore the behaviour and beliefs of risk experts it will help to consider some realistic scenarios in which uncertainty is important. These scenarios are in a rough chronological order, from dealing with new information to reviewing past work. In each scenario we will briefly consider an inappropriate response and then see how a risk expert might behave instead.
Imagine you are in a meeting and someone says “We’re getting a lot of complaints about the product.” How do you respond?
The wrong way is to take that information at face value, start questioning why there are complaints, and move rapidly on to assigning blame and issuing instructions to prevent further problems.
The expert recognizes that very little management information is entirely reliable and a statement like this needs to be dealt with skilfully to avoid confusion and wasted time. How many is “a lot”? What period of time is involved? Why were the comments classed as complaints? What exactly were the complaints about? How does this level of complaints compare with the past? Does the person reporting the complaints have some kind of ulterior motive for making the claim?
Not only is there measurement uncertainty around the current rate of complaints, but there is also uncertainty as to how important the current rate is. Perhaps the rate is very low and mostly driven by a few unreasonable or mistaken customers. Perhaps, just occasionally, several of those complaints occur at the same time, purely by chance, creating an apparent rise. Perhaps the level of complaints is high nearly all the time, so “a lot” of complaints is normal.
And yet, despite all these reasons for caution, there might be something new and important going on. Just because the risk expert thinks the information is not entirely convincing it would be a mistake to behave as if nothing had been said about complaints at all.
Follow-up questions are needed to clarify these points, if they are not already obvious from the context. These might show that the information so far is unreliable. In that case the sensible response is probably a modest initial look at what people might be complaining about, combined with actions to get better information.
Someone who isn’t a risk expert might be perfectly capable of responding to the original challenge, but we can see that if the information isn’t exactly what it seems to be then they are likely to get into a muddle. There is likely to be unarticulated confusion about the nature and impact of these (apparent) complaints, and ultimately this would probably lead to an unsatisfactory conclusion taking too much time and effort.
By contrast we can see that the risk expert would:
- recognize vague information (e.g. “a lot”) and seek to make it specific, so that conversations remain focused and confusion is avoided;
- put any information in context (e.g. how present levels compare to past levels), so that the significance of the problem is clearer;
- understand the nature of randomness, so that people don’t over-react to coincidence; and
- be able to recognize a possible underlying problem (e.g. the difficulty of getting reliable information) to address and avoid such problems in future.
In short, the risk expert would recognize the uncertainty and respond skilfully.
Imagine you are chairing a meeting and two colleagues are arguing determinedly about their competing diagnoses of the following problem: a partner-organization is grumbling and it is looking increasingly as if they want to pull out of a consortium in which we are hoping to bid for a contract. Your colleagues are debating their alternative theories as to what the cause is and their theories point to different strategies to keep the partner involved. The argument goes round and round but without making much progress. You need to sort this out.
Assuming you care about the outcome (rather than just wanting a quiet life) the wrong way to deal with this is to push them to reach some kind of consensus. It also won’t help much to referee their debate to try to give them an equal chance to advocate their position. What is needed is objectivity and truth, not advocacy.
The risk-smart response to this is to realise that their unresolved disagreement is a sign that probably neither of them knows for sure what is wrong.
Having recognized the uncertainty, what is the right way to respond? The first priority is to clarify what it is we are trying to understand and decide, and then plan to get more information that can help with this problem. Trying to decide the diagnosis and solution without more information is unlikely to be successful, however politely the debate continues. If something is agreed it will probably be based on shaky foundations.
For example, did the partner-organization actually say they have taken steps to pull out, or did they just float it as a possibility? Who said that? Is there something happening to them perhaps unconnected to the bid that may be changing their views?
Another important point is that the two views that have been debated so far may not be the only possibilities. A long argument can blind us to alternatives. The risk expert will want to think more widely about possible causes, probably multiple causes, looking all the time for possible actions that could change the outcomes.
A conversation about “causes” can sometimes confuse the search for possible actions with the search for who is to blame. Finding someone responsible for a problem and punishing them in some way, or pressing them to sort out the problem because they caused it, is a valid way of solving some problems, but it is unlikely to be the only action that is worthwhile.
A really expert evaluation of alternative diagnoses involves considering each piece of evidence against all the possible diagnoses, and identifying those diagnoses that are favoured by the evidence more than the others.
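The idea of weighing each piece of evidence against all candidate diagnoses can be sketched as a simple Bayesian update. The diagnoses, prior beliefs, and likelihood figures below are invented purely for illustration:

```python
# Hypothetical numbers only: two debated diagnoses plus a third
# alternative, with prior degrees of belief that sum to 1.
priors = {
    "fee dispute": 0.40,
    "internal reorganisation": 0.35,
    "rival consortium": 0.25,
}

# How likely is one piece of evidence ("the partner missed the last
# two meetings") under each diagnosis? Again, assumed values.
likelihoods = {
    "fee dispute": 0.3,
    "internal reorganisation": 0.7,
    "rival consortium": 0.5,
}

# Weigh the evidence against every diagnosis at once (Bayes' rule),
# rather than checking it only against a favourite explanation.
unnormalized = {d: priors[d] * likelihoods[d] for d in priors}
total = sum(unnormalized.values())
posteriors = {d: p / total for d, p in unnormalized.items()}
```

With these made-up numbers the evidence shifts belief towards the diagnosis that best predicts it, without ever declaring the others impossible – which is exactly the discipline the expert brings to the debate.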
In this scenario we can see that the risk expert would:
- identify and eliminate any confusion or ambiguity in understanding the situation, to ensure that everyone was trying to explain the same issue;
- establish more facts about and around the issue, to break the deadlock and reach a better diagnosis of the situation;
- recognize that often a problem will have multiple causes, as that may provide clues to useful solutions;
- understand that establishing blame and punishment can get in the way of establishing facts and finding the best solution; and
- not rush to match evidence with any particular diagnosis, in case an alternative and better explanation emerges with further thought and analysis.
This may seem like a lot of thinking, but the risk expert would do it deftly, achieving progress in a meeting that would otherwise be tedious and frustrating.
This time imagine you are in charge of a project to open a new terminal building for a major international airport. The building itself is being completed as part of another project but you need to make sure that all the equipment needed is installed and working, everyone knows what to do when customers start using the terminal, and that the various airlines and flights that transfer from other terminals to the new one do so without disruption to passengers.
The wrong way to do this is to rely completely on very detailed planning, intensive training, and contractual obligations, then open the terminal at 100% of its planned capacity on day one. In short, if you think real life will unfold according to what you put in your plan then you will be disappointed.
The risk expert’s approach to this problem is to mix detailed planning, training, and agreements with imagination and a lot of testing and trialling. A key part of the expert’s approach is realistic trials with rigorous identification of any emerging problems, followed by a chance to change procedures appropriately. And they will go round this loop several times.
Even more important is to open the terminal with a low level of traffic and gradually transfer in more flights as the people and systems improve their performance. This is more or less the same testing-and-trialling pattern, but with real business.
From this we can see that a risk expert would:
- recognize that no plan can anticipate everything except in the simplest situations;
- mix testing and trialling into their plans, as that helps surface the kinds of unexpected things that real life might throw at us;
- understand that tests are intended to highlight shortcomings in our plans, and therefore ensure there is time to study the results and change procedures accordingly;
- recognize that the plan-test-study-change cycle may need to happen several times;
- ramp up operations slowly, in order to introduce more real-world experience at a manageable rate; and
- understand that a slow ramp-up also requires scope to study the results and change procedures accordingly.
Planning is an immensely rich area for risk expertise and one simple example cannot illustrate it all. In general, the idea of making a single, fixed plan towards a fixed goal, on the basis of one view of future events, is delusional in most real-world situations. In the above example the risk expert would build a plan with graduated learning steps and would deliver incrementally (in this case ramping up) rather than rely on just one delivery at the end. Other planning approaches include building more encompassing models of our objectives, and considering multiple futures.
While forming plans, one of the important things to think about is what to measure and monitor. This is another area where a risk expert can shine.
Imagine that you are in charge of a programme to tackle the problem of HIV and AIDS on a Caribbean island. How would you measure the progress of your programme?
The wrong way would be to assume that the programme you have created will achieve all the results you expect and planned for. Under pressure to prove your success you might try to gather data showing the number of people with HIV, with AIDS, and dying of AIDS annually. You might think that was enough.
The risk expert would know immediately that this kind of measurement is not enough. True, these measures relate to the ultimate aims of the programme and may be the essential proof everyone wants, but relying on this information alone will provide feedback far too slowly to be of much use in the short term. Also, those outcomes are influenced by other things (as well as the programme), such as changes in tourism, economic changes, the price of condoms, and publicity about HIV from other sources. Discerning the influence of your programme may be difficult.
The attitude of measuring just to confirm expected outcomes is complacent and likely to lead to slow, weak responses to information received.
The risk expert’s approach is to look for a combination of immediate, direct measures of progress and longer term measures of ultimate results. Many things can be measured fairly easily from day to day that will give helpful feedback. How many people are visiting clinics, and how does this compare to what happened previously? How many condoms have been given out, and to how many people? How many new contacts have been made? How much did people know about how HIV is spread before being given an information leaflet and how much do they know now?
A programme like this relies on finding actions that work. The way to find those is to try things and measure the immediate and subsequent impacts.
The risk expert knows this, and a lot more besides, about measurements that are unreliable or imprecise, as well as when to measure and how to coordinate measurement with deliveries.
From this example we can see that the risk expert would:
- recognize that there is uncertainty about what actions will be truly effective and so would measure things to help find what works and what doesn’t;
- know that measurements which provide faster feedback are essential for allowing us to act faster;
- know that measurements which are less influenced by external factors are more robust and therefore more helpful at influencing our decision-making;
- include some measures of the ultimate outcomes of work along with more immediate measures of actions taken and outputs produced; and
- capture information about the situation before the new work started, to provide a baseline for comparison.
The purpose of this measurement is, of course, to take action in response to new information, which brings us to the activity of monitoring progress.
Let’s stay with the HIV/AIDS programme and assume that – following the discussion above – we start with some sensible indicators. How should we monitor progress?
The wrong way is to monitor infrequently and without ever questioning the plan or our understanding of how the world works. The wrong way is to assume that any disappointments are the result of your employees not doing what they were supposed to do, and that therefore the solution is to incentivise them to get back on track (since they are to blame).
The risk expert’s way recognizes that learning is important throughout a programme and that all thinking about it is, potentially, to be revised in light of new information or ideas. Even the objectives of a programme might need to be changed (with appropriate agreement of course) if a new understanding is reached about how to tackle the problem and what is achievable with the resources available.
It’s not just a question of investigating the things that did not go according to plan. Often the most important thing is to understand why some things have gone better than expected. They may reveal a tactic that is really effective and could become one of the main tactics of the programme as resources shift towards it and away from those that have not worked so well.
Monitoring will be frequent and regular, but also often in response to events. It will be driven by rich information gathered specifically to shed light on important uncertainties.
So when it comes to monitoring progress we can see that a risk expert would:
- believe that a plan is just a starting point, and should always be capable of being revised;
- understand that new information will be revealed as a result of the plan becoming operational, and would be open to it influencing any change of plan;
- recognize that this new information may spark new ideas, which may in turn influence changes in our plan;
- be prepared for the objectives themselves to change if evidence suggests that our earlier assumptions were mistaken;
- be on the look-out for unexpected positives, as these can help us improve the plan further;
- monitor regularly and frequently to respond and learn quickly, but also respond to important events when they happen; and
- modify measures used over time, and create most measurements specifically to shed light on some specific areas of uncertainty.
Sometimes the problem is not what information to seek out, but rather that there is too much information, and it is not clear how to use it to make the best decision.
Imagine it is Saturday morning and you are in your local supermarket trying to buy biscuits to offer some rich friends who will be visiting for tea. You are thinking of buying two or three packs of biscuits and spending more than you usually do on yourself.
This simple task is bristling with risk and uncertainty and yet most people would handle it with some skill. It is hard to imagine anyone going completely wrong here, so let’s just think about the ultimate shopping decision-maker to draw out some lessons.
With perhaps thirty different products to choose from, “buying two or three packs” means choosing between 4,495 bundles of biscuits. (That’s 30×29/2 = 435 bundles with two packs plus 30×29×28/(3×2×1) = 4,060 bundles with three packs.) Clearly it is too much to consider every possibility, so the first tactic is to cut down the options using some heuristics.
The expert shopper first eliminates the products that are too cheap or aimed at children only. Let’s imagine that leaves only 15 products.
A further scan eliminates unfamiliar products whose pack does not have a photograph or window allowing the biscuits to be seen. This is a risk-driven heuristic. Let’s imagine it leaves only 8 products.
Individual evaluation of price and attractiveness produces a shortlist of 4 products. Now there are only 10 bundles of products to choose between – a dramatic improvement over 4,495.
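The counts above can be checked with a few lines of Python (`math.comb` gives the number of ways to choose k items from n):

```python
from math import comb

def bundle_count(n_products):
    # Choose either 2 or 3 distinct packs from n products;
    # order doesn't matter and no pack is bought twice.
    return comb(n_products, 2) + comb(n_products, 3)

full_shelf = bundle_count(30)  # all thirty products: 4,495 bundles
shortlist = bundle_count(4)    # after the heuristics: 10 bundles
```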
The final touch is to choose a bundle of varied biscuits to cover a wider variety of tastes. This, too, is a risk-driven tactic. We can imagine that the outstanding candidate with this tactic might be a combination of some organic, chocolate covered biscuits, some chunkier cookies, and some thin, spiced biscuits that look imported.
Our shopping-risk expert would not worry too much about the precise final combination. They would recognize that they have reduced the field to roughly the best 1%, and if the success of afternoon tea is going to be meaningfully influenced any more, then it won’t be from further refinement of their biscuit selection.
This kind of rapid, almost instinctive use of tactics to simplify a complex choice (too many packs) and cover a range of unknown possibilities (the tastes of the visiting friends) is everyday risk management.
Let’s recap, then, on what this risk expert would do. They would:
- recognize that they don’t need to find a single best solution;
- understand that a solution just has to be good enough;
- use quick, simple ideas to reduce the problem to a manageable size;
- recognize that the options available have multiple dimensions (price, target market, packaging, etc.) and exploit these to reduce the size of the problem; and
- choose items to respond to uncertainty about requirements (in this case our visitors’ tastes).
Having taken a decision it is often necessary to communicate to someone else.
For this next scenario, imagine you have just been dealing with a tricky issue and as a result need to communicate a revised policy to about thirty people by email.
The wrong way to do that would be to write the email, send it out, and forget about it, in the belief that if anyone fails to follow the new policy then they can be reprimanded in the usual way.
The problem with this approach is the inconvenient fact that many of those 30 people, if they read your email at all, will not understand it in the way you intended. If you test one of your emails by asking someone from the intended audience to read it and then explain it to you, then you will probably be surprised and a bit disappointed.
The risk expert knows this and conducts that test for important communications. In addition, the expert would check people’s understanding once the email has been sent. They would do this by talking to those whose understanding is particularly important or who are considered reasonably representative of the whole group.
This may seem a minor skill, but the Charge of the Light Brigade was an unnecessary slaughter caused directly by an ambiguous order.
Back in our office we can see that a risk expert would:
- understand that words can be ambiguous, and misinterpretation can lead to problems;
- not be too proud to check their work with someone; and
- after they have communicated their decision widely, check for and address any further misinterpretation that may have escaped an earlier check.
Having taken and communicated a decision, let’s go back to evaluating performance, which is an everyday activity but also an annual ritual in many large organizations.
Do you look forward to the next round of annual performance appraisals at your place of work? Probably not. This dismal ritual has been oppressing employees for many years so it is good to hear that some leading companies are finally dropping it.
It would take several pages to describe in detail the wrong way to evaluate performance, but for our purposes we just need to focus on those aspects relevant to uncertainty.
With this in mind, the wrong way to evaluate performance is against initial targets, regardless of how things might have changed since the targets were set, and regardless of how diligently and skilfully a person has behaved. This approach is particularly inappropriate when a person’s results are not entirely within their control, which is true for many people.
The risk expert’s approach is to first be wary of setting goals against criteria that might realistically change throughout the year. For example, it might be unwise to set targets for delivering milestones in a project plan, as external demands might change how the project runs, and good plans should be open to change anyway.
Next, they would also consider those events and behaviours that occurred since any targets were set. Then they would evaluate the results against the latest views on what outcomes are valuable (e.g. revised project or company objectives). This tactic helps to limit gaming of targets because, if the way people are assessed might change during the year, it is dangerous for them to try to exploit weaknesses in any targets set at the start of a year. Their best bet is to try sincerely to understand what is in the best interests of the organization and work towards that.
In summary, the risk expert would:
- understand that any expectations set at the start of the year are likely to be overtaken by events;
- consider how the individual acted in the light of actual events, expected or otherwise, and value results against the latest view of what is valuable (e.g. current objectives); and
- let the individual know that they will be evaluated on this basis. This ensures that changed circumstances do not undermine appropriate behaviours.
From these examples you can see that the risk expert is not a rigid bureaucrat who believes there is one right way in every situation. The risk expert does not believe you can know everything at the outset and that success is just a matter of exercising control to ensure that everything happens as planned.
Instead, the risk expert is more humble, recognizing that the world is too complex and powerful for such a rigid approach. The expert knows that we don’t know everything at the outset and that we will have to learn along the way. The expert is keen to get more information, and yet able to move ahead anyway, using a variety of tactics to combine progress with learning.
The expert recognizes that surprises and change are inevitable in almost all situations and gets ready to deal with them in a controlled, almost routine way.
The risk expert is open minded, often considering alternative truths or futures, and is neither optimistic nor pessimistic.
Reading these examples, you probably recognized many, though not all, of the risk expert’s actions as ones you would have taken yourself. You probably also noticed actions that you agreed with but would not have thought to use if you were actually in that situation.
If you are one of those who found that most of the risk expert’s actions seemed wrong, then it may be due to the kind of work you have experienced. Do you work in highly controlled conditions, such as within a modern factory that is largely automated? Or perhaps you work somewhere that exists within a very stable and uncompetitive niche, and therefore has not needed to change much for years?
If you would like to improve your skill in dealing with risk and uncertainty, the first thing to do is probably to think about a situation you often deal with where uncertainty plays a big part. This might be from work or home. Is it similar to one of the situations above? Do any of those tactics or perspectives seem suitable to try in that situation?
Of course, real life is rarely as neat as the scenarios we described above. Maybe some elements of your real life situation reflect different parts of different scenarios above. Look at all the tactics or perspectives and see if any seem appropriate to your situation. Think through how you might use different tactics and perspectives, and try one or two in safe situations. See what happens. Build on your successes.
This article is also available on Matthew’s site.