# The Problem with Probability

If you’re the kind of person who became an actuary, the chances are [1] you’re the kind of person who likes to think in terms of probabilities, priors, posteriors and so on. For many of us, the tools of probability and statistics feel very natural, even logical, to use just about everywhere. It is my belief that this can sometimes lead us astray. I’m hopeful that this article will provide food for thought, even if you don’t agree by the end of it.

## Determinism

I’d wager that you have meditated on determinism at some point – that is, the notion that the future is fully determined by the past. Philosophers, scientists, theologians, and artists alike have all contemplated this in their various ways for centuries, if not millennia, and it is a common theme in popular works of fiction.

On the one hand, determinism seems impossible. We all *feel* as though we have a genuine choice about what we do next. If that feeling is valid to any extent, how can the future be determined? On the other hand, our best scientific theories seem to leave no space for anything other than straightforward cause-and-effect. All effects (including, presumably, decisions you make) have prior causes. Those causes were themselves the effects of other causes, and so on, all the way back to the dawn of time.

This notion is so deeply embedded in modern science that it is very difficult to conceive of an alternative. What *would* a scientific theory (or any theory for that matter) look like *other than* a story about cause and effect? There are options, of course, but they tend to strike the modern mind as mystical or antiquated. For instance, the ancient Greek philosophers enjoyed explaining phenomena in terms of the ends they seek out, rather than the causes driving them from behind, in an endeavour called *teleology*. [2]

A more modern approach to undermining determinism is to look for irreducible sources of randomness in nature – for instance, the randomness we find in quantum theory. Although different interpretations of quantum theory exist, they all concede that there are some measurements which will always appear truly random to individual observers. This is a strange source of comfort to some people, who can use this uncertainty to underwrite a worldview where probability is everywhere.

## Probability and Knowledge

When probability theory was first developed by Blaise Pascal, it was used as a tool to win games of dice. It worked more or less how we think of it today: the possible outcomes were counted, and all outcomes were assumed to be equally likely to occur. The probabilities that were calculated represented *long-run frequencies of deterministic physical processes.* The laws of motion are clear and leave no space for randomness, but the forces involved are chaotic enough to render the outcomes unpredictable to us and our gambling opponents. Quantum theory or not, probability theory becomes a useful fiction, even in this fully deterministic setting.
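The counting approach described above can be sketched in a few lines. This is purely an illustrative example (the choice of two dice and the totals queried are mine, not the article's): enumerate every equally likely outcome, then count the share that match the event of interest.

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely outcomes of throwing two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob_total(total):
    """Probability that two fair dice sum to `total`, by counting."""
    favourable = sum(1 for a, b in outcomes if a + b == total)
    return Fraction(favourable, len(outcomes))

print(prob_total(7))   # 6 favourable outcomes out of 36, i.e. 1/6
print(prob_total(12))  # only (6, 6), i.e. 1/36
```

Nothing here is random: the dice are modelled as deterministic objects, and the probabilities are just ratios of counts, exactly as in the long-run-frequency picture.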

In the century following Pascal’s work, Laplace championed a broader interpretation of probability (which confusingly gets referred to as Bayesianism, even though there is no clear link to Bayes). Under Laplace’s view, probabilities can (and should) be used to model ignorance of any kind – not just chaotic physical processes or quantum spookiness.

Practicalities aside, this is in some ways a very natural extension of probability theory. Even just by pivoting from one game of chance to another, we can find value in this interpretation. For example, in the game of poker, two players each know their own cards but not each other’s. The probabilities they calculate for the next card drawn from the deck will, as a result, be different and personal. Each player subjectively holds extra information about the world which can help them arrive at more accurate predictions about an unknown.

There is of course a fact of the matter about what card comes next from the deck, and it is fully determined by the deck’s initial configuration and the shuffle. It is not at all subjective, and there is no room for any quantum-style “true” randomness. The probability (or credence in the Bayesian view) we calculated for the next card is not telling us something about the physical deck of cards sitting in front of us, but rather telling us something about what odds a ‘rational agent’ should accept in a bet if they don’t want to lose money in the long run, in the context of their specific ignorance.

Using probability theory for this purpose has some nice advantages. One important one is that – even if the credences you use are completely preposterous – you will never be vulnerable to arbitrage so long as you follow the laws of probability. You may well be in for a big loss by being *wrong*, but nobody will be able to make a “risk-free” profit against you by pitting your own inconsistent beliefs against each other.
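The arbitrage point can be made concrete with a small sketch. The numbers and ticket prices below are invented for illustration: a bettor who assigns credences to an event and its complement summing to more than 1, and who accepts bets at their own "fair" prices, hands a bookmaker a guaranteed profit, whereas coherent credences leave nothing to exploit, however wrong they may be.

```python
def dutch_book_profit(credence_A, credence_not_A, payout=1.0):
    """Bookmaker's guaranteed profit from selling a bettor two tickets
    at the bettor's own 'fair' prices: one paying `payout` if A happens,
    one paying `payout` if A does not.

    The bettor pays credence * payout for each ticket and collects
    `payout` from exactly one of them, whatever happens.
    """
    price_paid = (credence_A + credence_not_A) * payout
    return price_paid - payout  # certain loss for the bettor if positive

# Incoherent credences (summing to 1.2) lose 0.2 per round, guaranteed:
print(dutch_book_profit(0.6, 0.6))  # approx 0.2

# Coherent credences sum to 1 and cannot be arbitraged,
# however preposterous the 0.9 itself might turn out to be:
print(dutch_book_profit(0.9, 0.1))  # approx 0.0
```

This is all the coherence guarantee buys: immunity to being played against yourself, not protection from simply being wrong.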

But we must remember that we *can* be wrong, and a Bayesian picture does precious little to protect us from that. Even more importantly, we must remember that whether we are right or wrong itself must have a credence – after all, who’s to say we didn’t just get unlucky? In some special cases, we can make theoretical arguments that probabilities correspond to frequencies of physical events, and in those situations we can check our predictions against what we see. We can never be sure that we were right, but we can be sure that we were *probably* right.

But in other cases, we cannot even go that far. To illustrate: what is the probability that somebody proves the Goldbach Conjecture in the 21st century?

I have described all the probabilities we have discussed so far as useful fictions, and yet this one seems different. I submit that the hitherto useful fiction of probability has just become profoundly useless when met with this question, and no longer corresponds to anything: objective, subjective, quantum or otherwise. Whether Goldbach’s Conjecture is proved or not may be fully determined, or it may not. But either way, we can never possibly know the answer, or even pretend to. We can wager about this question, but we will never have any way of distinguishing a well-placed bet from a hopeless one.

Bayesianism as a tool to combat ignorance of any kind tends to slide quietly into domains like this without complaint. In my view, we should be careful to consider whether this makes sense, case by case. Although Bayesians have identified the subjective, probabilistic element to knowledge claims in general, this still only makes sense when the content is about an underlying *physical* process which we have a good theoretical reason to approximate as random.

With a fair die, we do not know with enough precision any of the variables we would need to predict an outcome, nor the initial state of the die as it leaves someone’s hand. The symmetry of a fair die is as good a reason as any to suppose that all the outcomes can be thought of as equally likely to happen for any given throw.

With cards: the deck can be in any one of a very large (but finite) number of configurations. We do not know exactly what configuration it was in before a shuffle, nor what a shuffle does to that configuration. If we gain access to further information – say by being shown some cards already drawn from the deck – we can use that to say some things about what *must have happened* in the shuffling process, and thereby update our knowledge *about what the shuffle did*.
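The update described above can be sketched directly. As an illustrative example only (the choice of aces in a standard 52-card deck is mine), seeing cards leave the deck changes the probability we should assign to the next draw, because it narrows down what the shuffle must have done:

```python
from fractions import Fraction

def prob_next_is_ace(cards_seen, aces_seen):
    """Probability the next card drawn from a standard 52-card deck
    is an ace, given how many cards (and how many aces among them)
    we have already seen leave the deck."""
    remaining = 52 - cards_seen
    aces_left = 4 - aces_seen
    return Fraction(aces_left, remaining)

print(prob_next_is_ace(0, 0))    # 4/52 = 1/13, before any information
print(prob_next_is_ace(10, 0))   # 4/42 = 2/21, after ten non-aces shown
print(prob_next_is_ace(10, 3))   # 1/42, after three aces shown
```

The deck itself never changes behaviour between these three lines; only our information about its configuration does.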

But when it comes to proving mathematical theorems, there is no place for probabilities to enter *except* our ignorance. The theorem may be true or false, and it may or may not be provable. These facts are not probabilistic or even physical in nature. Tempting though it may be to assume we can begin with priors of 50-50 to reflect the two outcomes, there is no reason to think this is rational (unlike with the dice or the cards), and there is no evidence that we can use to update those priors. The question simply does not have a probabilistic answer. Bayes can help us avoid arbitrage, but little else.

## Actuaries

Situations like that can sometimes sneak up on us, especially since we don’t have special words or mathematics to tell a useful probability from a useless one. I’ve seen uses of probability theory which look extremely sensible, and others which I found more dubious, a few of which we will now discuss.

Mortality rates are in my view an excellent example of probability done right. There are many reasons why a person could die at any time, but the number of ways tends to increase with age and covary with other characteristics we can identify. We could of course be wrong at any time, and we could even be systematically wrong since we never know whether we are on the brink of some life-saving technology or, to take a random example, a global pandemic. However, we are not *merely* gambling based on subjective whim – we have good reasons to think that there will be long periods of fairly stable mortality regimes, within which we can model and estimate mortality rates in just the ways that we do.

When it comes to economics we have a slightly murkier picture. Equity prices are widely thought to follow a random walk – we may see disputes about the exact form and parameters of the process, but we have good theories about why equities move the way they do and why we can safely treat those movements as random.
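As an illustrative sketch only (the drift and volatility figures here are invented, not calibrated to any market), the standard random-walk picture treats the log price as accumulating independent shocks:

```python
import random

def simulate_log_price_path(steps, drift=0.0003, vol=0.01, seed=0):
    """Simulate a random walk in log price: each step adds an
    independent Gaussian shock. The i.i.d. increments are the
    modelling assumption that the 'good theories' license."""
    rng = random.Random(seed)
    log_price = 0.0
    path = [log_price]
    for _ in range(steps):
        log_price += drift + vol * rng.gauss(0, 1)
        path.append(log_price)
    return path

path = simulate_log_price_path(250)  # roughly one trading year of steps
```

The point of the sketch is the contrast with what follows: here the randomness is built in by a theoretically motivated assumption, and the model's frequencies can at least in principle be checked against data.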

On the other hand, we have interest rate movements. Prevailing market interest rates are driven largely by the base rate set by the central bank, which is decided by a committee. We do not have any good reason to suppose that the outputs are random at all, let alone independent and identically distributed from one committee meeting to the next. As such, we have no theoretical framework with which to even begin to assess the accuracy of our guesses. Perhaps a convincing argument can be made that central bankers dislike large movements, prefer the status quo, and don’t like setting the interest rate below zero. Even if all of this is true and gives us a vague sense of security, we must recognise that this approach to modelling is not merely less certain than the others – it is qualitatively different.

For my last example, I’ll pick on a soft target from the office. When actuaries get involved in project management, they seem to consistently want to model time-to-completion as a random variable. I’ve seen them choose distributions, estimate parameters, measure error margins, adjust for bias, and generally just show off. In my view they’ve really gotten carried away – shoving probability theory into a knowledge gap and just hoping for the best. Particularly when tasks involve real creativity – like solving a novel problem – it is impossible in principle to predict when it will be completed [3]. Once again, even if we pretend probability theory is useful here, we have no robust reason to think any two tasks are similar enough to be informative about each other, and therefore no way of ever knowing whether any of our predictions were rational.

So, what is the alternative in these cases? Well, we’ve come full circle back to the roots of probability – gambling. Probabilities are only useful insofar as they help us to gamble [4], and that is true regardless of how you interpret probability. And it is important to realise – *we aren’t always forced to gamble.* If we don’t know what could happen and we’re uncomfortable with one or more of the possibilities, more often than not we can take prior action to mitigate the consequences, regardless of how likely we think that outcome is.

If we don’t know what will happen with interest rates, we can hedge away the risk of uncomfortable movements. If we don’t know when a project will complete, we can continuously review progress, change scope, manage expectations, and have backup plans so that an acceptable outcome is achieved on time, even if it was not what we originally planned. We should not neglect mitigations just because we think something is unlikely to happen – especially if our intuition is the only reason we think so.

## Wrapping Up

Probability theory is unquestionably a powerful tool, but from what I’ve seen in my relatively short career, it tends to be the hammer that makes everything look like a nail. Particularly for analytically minded people who came up through STEM, it can be difficult to resist the temptation to use it to stand in for any gap left by insufficient knowledge. Hopefully this article has helped you to appreciate that “probability theory” is not a monolith, and its interpretation and efficacy depend hugely on the setting in which it is employed.

[1] Hopefully by the end, you’ll find this comment funny.

[2] This form of explanation is still common today informally. For example, we’ll often say things like “the gene wants to survive” when talking about evolution. Here the dynamics of a gene through time are explained not in terms of causes and effects, but in terms of an agent moving intentionally from an undesirable state to a desirable one. However, this is not present in the rigorous theory.

[3] For such tasks, estimating the time-to-completion requires knowing how to complete the task. However, for this kind of task, knowing how to complete the task *is the bulk of the task*, so if it could be predicted, it would already be mostly done, and we’d be talking about an entirely different task.

[4] Even if we have to expand the definition of gambling a bit to include any decisive action in the face of uncertainty.

##### Joseph Barnett

May 2024