“For what it’s worth, a good streak doesn’t jinx you, and a bad one, unfortunately, does not mean better luck is in store.” — Leonard Mlodinow
When someone tells you that they took a calculated risk, they are almost certainly not telling the truth, to you or to themselves. That’s because, in most cases, there were no calculations at all, just feelings stacked upon internal biases.
The role our feelings and biases play in our estimates of risk is hard to overstate. It’s a whole field of study for psychologists, many of whom have built entire careers on studying the fallacies that distort our estimates of risk.
Estimating Risk
Risk is the product of consequence severity and likelihood. To estimate risk, or better yet, to calculate it, we have to do a good job of estimating or calculating consequence severity AND a good job of estimating or calculating likelihood.
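If it helps to see that product as arithmetic, here is a minimal sketch in Python. The dollar figures and frequencies are made-up illustrations, not standard values:

```python
def risk(consequence_severity: float, likelihood_per_year: float) -> float:
    """Risk as the product of consequence severity and likelihood.

    Here severity is expressed as a cost in dollars (an illustrative
    choice) and likelihood as an event frequency in events per year,
    so the result is an expected loss in dollars per year.
    """
    return consequence_severity * likelihood_per_year

# Two very different scenarios with the same calculated risk:
print(risk(10_000_000, 1e-4))  # $10M consequence, once in 10,000 years -> 1000.0
print(risk(100_000, 1e-2))     # $100k consequence, once in 100 years   -> 1000.0
```

The point of the example is that two very different scenarios can carry the same calculated risk, which is exactly why both factors deserve an honest estimate.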
Consequence severity, how bad things will be when a hazardous event occurs, is something we are pretty good at estimating, at least within an order of magnitude. While we can use detailed models to estimate consequence severity with greater precision, the variables are usually so, well, variable, that getting within an order of magnitude is really all we can hope for. And people’s sense of how bad things are likely to be is usually that good, even without the detailed models.
On the other hand, likelihood (the frequency at which events occur, or the probability that they will occur within a defined span of time) is something we are terrible at estimating. Why? It’s psychological.
The Gambler’s Fallacy vs. Recency Bias
Psychologists have described a couple of biases that explain why we are so bad at estimating likelihood.
The Gambler’s Fallacy is the belief that after a string of bad luck, good luck is due, or conversely, that after a string of good luck, bad luck is due. Despite the laws of probability, not because of them, we want to believe that random systems will shift their probabilities in order to catch up and achieve “balance”. But independent events have no memory: a fair coin that has just landed tails five times in a row is exactly as likely to land tails a sixth time.
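A quick simulation makes the point. This sketch (the streak length and trial count are arbitrary choices) estimates the chance of heads on the flip immediately after a run of tails; if the fallacy were true, the estimate would drift above 0.5. It stays put:

```python
import random

def prob_heads_after_streak(streak_len: int = 5, trials: int = 1_000_000) -> float:
    """Estimate P(heads) on the flip immediately following streak_len tails.

    If the Gambler's Fallacy were true, heads would be "due" after a long
    run of tails and this estimate would exceed 0.5. It doesn't.
    """
    tails_run = 0         # consecutive tails seen so far
    samples = heads = 0
    for _ in range(trials):
        flip_is_heads = random.random() < 0.5
        if tails_run >= streak_len:    # this flip follows a qualifying streak
            samples += 1
            heads += flip_is_heads
        tails_run = 0 if flip_is_heads else tails_run + 1
    return heads / samples

random.seed(42)
print(prob_heads_after_streak())  # ~0.5, no matter how long the streak
```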
Recency Bias, on the other hand, gives more weight to recent events than to events further in the past. If it happened recently, we want to believe it happens all the time. If it hasn’t happened recently, or has never happened in our own experience, we want to believe it never happens. When gamblers succumb to recency bias, psychologists refer to it as the “hot-hand fallacy.”
Relevant Time Frames
Another problem that we face is that our time frames are not especially relevant. We all have a very good sense of what “once a year” means. We’ve heard since we were children that holidays and our birthdays “only come once a year”. Likewise, we have a good sense of what “once a decade” means. It’s something we’ve experienced. At about “once a century”, we start to lose any personal sense of what that means.
What about “once a millennium”? Who really has a feel for what having something happen once every thousand years means? Or, for that matter, if we want to think in terms of fleets, what once a year across a fleet of one thousand means? We can do the calculations, but do we have a feel for it? The Vikings began the colonization of Greenland a thousand years ago, if that helps.
Most tolerable risk criteria, however, call for hazardous events to happen not once every thousand years, but once every 10,000 years, or once every 100,000 years, or for some organizations, once every million years. None of us has a feel for that. For context, ten thousand years ago, people were first introducing cultivation in Mesopotamia. And 100,000 years ago? Does it help to know that is when humans first began gathering wild grains for food?
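We can at least do the fleet conversion mentioned above. A short sketch (the fleet size and the list of frequencies are illustrative choices):

```python
def fleet_interval_years(per_unit_freq_per_year: float, fleet_size: int) -> float:
    """Expected years between events across an entire fleet."""
    return 1.0 / (per_unit_freq_per_year * fleet_size)

# "Once a millennium" for a single unit is roughly once a year fleet-wide:
for freq in (1e-3, 1e-4, 1e-5, 1e-6):
    print(f"{freq:.0e}/yr per unit -> one event every "
          f"{fleet_interval_years(freq, 1000):,.0f} years across 1,000 units")
```

The math is easy. The feel for what the answers mean is what we lack.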
Why LOPA is So Valuable
When it comes to consequence severity, our feelings, our sense of things, are often good enough. To complete the risk assessment, though, we also need a sense of likelihood, and for that, our feelings will fail us. So instead of relying on our feelings about likelihood, we need an estimate based on calculations.
That is what Layer of Protection Analysis (LOPA) does for us. It’s not that it objectively determines likelihood, but that it forces us to document our sense of things and assign numbers that we can defend and that others can challenge. It forces us to do the math that goes into determining a “calculated risk”.
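For the flavor of that math, here is a minimal sketch of the frequency calculation at the heart of LOPA: the initiating event frequency multiplied by the probability of failure on demand (PFD) of each independent protection layer. Every number below (the initiating event frequency, the PFD of each layer, the tolerable criterion) is an illustrative assumption, not a value from any standard:

```python
import math

def mitigated_frequency(initiating_freq_per_year: float,
                        ipl_pfds: list[float]) -> float:
    """Initiating event frequency multiplied by the probability of failure
    on demand (PFD) of each independent protection layer (IPL)."""
    return initiating_freq_per_year * math.prod(ipl_pfds)

# Illustrative scenario: an initiating event once per 10 years,
# guarded by three independent protection layers (assumed PFDs):
f = mitigated_frequency(
    initiating_freq_per_year=1e-1,
    ipl_pfds=[
        1e-1,  # operator responds to an alarm
        1e-2,  # safety instrumented function
        1e-2,  # relief valve
    ],
)
tolerable = 1e-5  # events per year; an organization-specific criterion
verdict = "meets" if f <= tolerable else "does not meet"
print(f"Mitigated frequency {f:.0e}/yr {verdict} the {tolerable:.0e}/yr criterion")
```

Each factor in that product is a number someone wrote down and can be asked to defend, which is the whole point.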
Don’t Guess
How likely? Your guess is as good as your psychological biases. No better. So don’t guess. Don’t trust your feelings about likelihood. Because most of us aren’t even aware of our biases (or we would compensate for them), it is important that we do the math and calculate the risk. We’ll make better risk decisions because of it.