
Are economists really this stupid? Mervyn King's new book insists that we shouldn't use numbers when forecasting

Former Governor of the Bank of England, Mervyn King, has co-written an alarming book about risk and how to calculate it. Credit: Oli Scarff - Pool/Getty

February 26, 2020

Sometimes you read something which chills you to the bone. The new book Radical Uncertainty: Decision-making for an unknowable future, by Mervyn King and John Kay, did that. Not, it should be said, on purpose, or even directly. But the implication of its existence – that these authors thought that this book needed to be written – means there is something terribly wrong somewhere. 

It added salience to a line I had read recently and which has stuck with me: “If people seem slightly stupid, they’re probably just stupid. But if they seem colossally and inexplicably stupid, you probably differ in some kind of basic assumption so fundamental that you didn’t realise you were assuming it.”

This is roughly how I feel when I read King and Kay, who are, respectively, the former governor of the Bank of England and a respected professor of economics at Oxford. They’re obviously not stupid. But the book seemed amazingly stupid to me. It’s as though we are looking at the world from such different angles that I find it hard to accept we’re seeing the same things.

K&K’s fundamental thesis, as I understand it, is that we ought to be less willing to use numbers to forecast the future; we should instead be more open to the fact that the future is radically unknowable. The examples they use range from Barack Obama having to decide whether or not Osama bin Laden was in the Abbottabad house that CIA intelligence placed him in, to the asteroid that killed the dinosaurs, via the 2007 financial crisis. Their central idea is that, instead of saying “I think there is a 75% probability of X”, we should simply say “I don’t know” whether X will happen, or that we think it is “likely”.


This seems so mad to me that I have decided that we must be talking at cross-purposes.

By way of analogy, imagine a book by two very senior doctors, and clearly aimed at other doctors. Imagine its central thesis was something like “Doctors ought to stop saying ‘Drug trials can tell you what drug works best.’ After all, many drug trials have problems, such as low sample size or funding bias, and some get the wrong answer. And even when they get the right answer, many drugs work on some patients but not on others. If asked whether a drug works, instead of relying on drug trials, doctors should be more willing to say ‘I don’t know’.”

It’s not that anything in there is wrong. But if those senior doctors think that other doctors genuinely believe that drug trials are perfect, and that if a drug trial finds that a drug works it means it works forever on every patient, then that is terrifying. Something has gone terribly wrong with medicine if doctors are rigidly applying “this RCT says Happyapeline works for depression, therefore I shall give it to every patient who arrives saying they have depression, no further questions asked”.

And yet K&K, who presumably know better than I do how economists and hedge-fund managers actually behave, seem to think that those people are doing the equivalent thing: believing that they can put absolute faith in obviously simple rules.

The book has plenty of interesting stuff, for the non-economist, about the difference between economics and, say, physics. Rocket science seems difficult, but it’s actually pretty simple: you can predict the actions of planets for millions of years in advance, or guide a space probe through billions of miles of space, with equations largely unchanged since Newton’s days. 

Partly that’s because there are fewer moving parts, but it’s also because the rules stay the same. The astronomers’ actions will not significantly affect the movements of the planets; their beliefs about those planets will make no difference at all. They could publish a paper about the orbit of Venus, and Venus will sail on, unperturbed – whereas an economist publishing a paper about the expected growth of the economy can affect that economy. If they can predict the movement of the pound against the dollar, they can make an awful lot of money out of it. The law of gravity is stationary; the laws of economics are reflexive, and can be affected by our beliefs about those laws.

But according to K&K, economists (and others) fail to realise this. They put too much weight on their models; they mistake model for reality. That is how the CFO of Goldman Sachs, David Viniar, was able to say “We were seeing things that were 25-standard deviation moves, several days in a row,” about the credit crunch. Standard deviation, or “sigma”, is the measure of how far from the norm something is. Given one “event” a day, you ought to expect 2-sigma events a bit less than once a month; 4-sigma events, every century or so. You would expect 7-sigma days about once every three billion years.

You would expect to see 25-sigma days, bluntly, never. If your model says you’ve seen one, let alone several, in any context, then your model is wrong.
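If that sounds abstract, here is a minimal sketch in Python of what a normal model actually implies – assuming, as the “25 standard deviations” framing does, that daily moves really are normally distributed:

```python
from scipy.stats import norm

# One-tailed probability that a daily move exceeds n standard deviations,
# assuming daily moves are normally distributed - which is the assumption
# the "n standard deviations" framing relies on.
for n in [2, 4, 7, 25]:
    p = norm.sf(n)                     # P(move > n sigma) on any given day
    print(f"{n}-sigma: p = {p:.2g}, i.e. roughly one day in every {1/p:.2g}")
```

On those assumptions a 25-sigma day should come along about once every 10^135 years; if your model says you have just seen several in a row, it is the model, not the world, that has gone wrong.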

This is all fair and good and true. “All models are wrong,” as the statistician George Box said – as, indeed, K&K quote him as saying – “but some are useful.” Don’t ever mistake your model for reality.

So don’t ditch the models; be humble with them. Include wide error bars, and leave space within the model for unknown things to happen. And, of course, make sure you can afford to lose whatever you’re betting, even if the bet seems a good one (which Viniar, Goldman Sachs et al signally failed to do). I think this is roughly what it means to be “antifragile” and “robust” and “resilient”, terms K&K throw around a lot. Expect things to be unpredictable. Make room for what they call “radical uncertainty”.

But K&K’s prescription goes further than that. Instead of putting figures on things, they say, forecasters should instead be willing to ask “What is going on here?” and willing to say “I don’t know”. Obama and the Abbottabad decision is an example they return to again and again.

His advisers were giving him percentage likelihoods that the man in the house was bin Laden, ranging from 95% to about 30%. K&K praise him for, instead of combining all those probability judgments, simply making a decision. Obama is quoted as saying “it’s 50/50”, but K&K say he did not mean “probability = 0.5”. Either he’s in the house or he’s not; you don’t know; it’s radically uncertain, so there’s no point putting numbers on it. Instead, you say “I don’t know.”

But: of course you can put numbers on these things! We can say things like, there are about 32 million households in Pakistan. So if we are confident that bin Laden is in Pakistan and we were to pick one at random, we’d have about a one in 32 million chance of getting the right house. That is, if you like, the base rate of bin Ladens in Pakistani households. We can adjust that number up with new information: sightings, web chatter, or suspicious movements of goods, say. You use judgment and probabilistic reasoning to put numbers on your intuition.
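For illustration, here is a minimal sketch of that kind of base-rate-and-update arithmetic. The pieces of evidence and their likelihood ratios are invented for the example; they are not the CIA’s actual figures.

```python
# Hypothetical numbers, for illustration only - not the CIA's actual evidence
# or estimates. The base rate is one household out of roughly 32 million.
prior = 1 / 32_000_000

def update(p, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = p / (1 - p)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = prior
for evidence, lr in [("courier traced to the compound", 100_000),
                     ("unusual security, no phone or internet", 50),
                     ("tall resident who never leaves the house", 20)]:
    p = update(p, lr)
    print(f"after '{evidence}': P = {p:.1%}")
```

With those made-up likelihood ratios the estimate climbs from one-in-32-million to roughly three-quarters; the point is only that intuition can be turned into numbers, step by step.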

I’m being simplistic, but that’s the basic idea of Philip Tetlock’s Good Judgment Project (GJP), which I’ve written about before, and which is kind of famous at the moment because Andrew Sabisky did well enough at it to earn the title “superforecaster”. The GJP superforecaster prediction teams systematically outperform CIA analysts and others at forecasting things like “Will Scott Morrison cease to be prime minister of Australia before 1 January 2021?” or “When will total cases of Wuhan coronavirus (COVID-19) in California reach 25 or more?”

They do it by putting numbers on things: say, 70% likely or 27% likely. And then they make loads of predictions, and they see if the 70% predictions come in 70% of the time, and so on. If they come in less often, they’re overconfident; if they come in more often, they’re underconfident. 
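As a minimal sketch of that calibration check – with invented forecasts, and assuming each prediction is recorded as a stated probability plus a yes/no outcome – the bookkeeping is simple:

```python
from collections import defaultdict

# Invented records: (stated probability, did the event actually happen?)
forecasts = [(0.7, True), (0.7, True), (0.7, False), (0.7, True),
             (0.3, False), (0.3, True), (0.3, False), (0.3, False),
             (0.9, True), (0.9, True)]

buckets = defaultdict(list)
for p, happened in forecasts:
    buckets[p].append(happened)

for p, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    if hit_rate < p:
        verdict = "overconfident"
    elif hit_rate > p:
        verdict = "underconfident"
    else:
        verdict = "well calibrated"
    print(f"said {p:.0%}: came true {hit_rate:.0%} of the time ({verdict}, n={len(outcomes)})")
```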

Tetlock’s work is briefly mentioned in Radical Uncertainty, but only to be dismissed. They say that these predictions are only predicting well-defined, falsifiable events – “Who will win the 2020 Bolivian presidential election?” – and so cannot allow for radical uncertainty, for the possibility that some totally other thing will happen. These techniques, say K&K, cannot put useful numbers on big, difficult questions. “The sensible answer to the question ‘Will there be another global financial crisis in the next 10 years?’,” they say, “is ‘I don’t know’.”

But this seems straightforwardly wrong. For a start, GJP-style use of numbers can allow for radical uncertainty. The GJP’s Bolivian presidency predictions are: Jeanine Áñez, 27% probability; Luis Arce, 31%; Luis Camacho, 16%; Carlos Mesa, 23%; and “someone else or another outcome”, 3%. They explicitly include in their model the possibility that something totally other will happen. Perhaps the space for uncertainty should be bigger; that’s an empirical question. But it’s there.

If, as K&K seem to be suggesting, the only rational response to radical uncertainty is to say “I don’t know,” then they should refuse to put a number on any outcome: maybe Jeanine Áñez will win; maybe Luis Arce will; or maybe a robot dinosaur and the ghost of Freddie Mercury will materialise in the senate building and expound a new theory of government via the medium of interpretive dance. We just don’t know.

And these sorts of predictive methods do work on big questions like “Will there be another financial crisis?” The methods are agnostic to the size of the question, although the greater the complexity the greater the uncertainty, of course, and you have to be able to define the question (but if you can’t define the question, then what are you even asking?). You might start by examining the base rate of global financial crises per decade – depending on your definition, there have probably been two in the past century, which gives you a base rate of roughly 20% per decade. From there, you look at the fundamentals relevant to the current situation and adjust your estimate accordingly.
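Purely as a back-of-the-envelope sketch – and assuming, which is my assumption rather than anything in the book or the GJP, that crises arrive independently at a constant rate – two crises per century works out like this:

```python
import math

# Two crises per century = a rate of 0.2 per decade. Assuming crises arrive
# independently at a constant rate (a Poisson assumption, and mine rather than
# the book's), the chance of at least one in the next decade is:
rate_per_decade = 2 / 10
p_at_least_one = 1 - math.exp(-rate_per_decade)
print(f"P(at least one crisis in the next decade) ~ {p_at_least_one:.0%}")   # about 18%
```

That gives roughly 18%, close enough to the rough 20% base rate; the real work is in the adjustments.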

Tetlock’s work seems to show, empirically, that this method is more effective than alternatives. He was inspired to start the GJP partly because of the preponderance of “vague verbiage” among predictions by politicians and pundits; “a recession seems likely”, OK, but is “likely” a 10% chance or a 90% chance? It becomes impossible to say when someone is wrong if they can hide behind unclear terms. So it’s darkly amusing to see K&K explicitly recommending a return to vague verbiage: to say “it is likely” rather than “it’s 60% likely”, or “it seems probable” rather than “there is a 75% chance”.

I also struggled to determine what K&K suggested we should do instead. They give lots of examples of people using models and numbers badly, but if you want to determine how much money to save for a pension, or what percentage of GDP to spend on social care, you need numbers. They quote Keynes, approvingly, on how economics is not physics and how you need intuition and judgment and domain knowledge, not just rules. Which is all true, and I’m a sucker for anyone quoting Keynes. But if their prescription is “have really good intuition and judgment and domain knowledge” – basically, “be really good at this” – then I’m not sure how helpful that is.

I’d be very happy if their suggestion was simply that, empirically, people are too keen on spuriously precise numbers, and ought to be more humble with their forecasts. That would make sense. But the repeatedly stated message of this book is that we shouldn’t use numbers at all. Instead, they say again and again and again, we should ask “what is going on here?” and say “I don’t know”.

As I said, it is vanishingly unlikely that K&K are as stupid as this book seems to me. I tried to work out which basic assumption we might not share, and the likeliest candidate was that we simply hang out in very different circles.

I see a lot of journalists and pundits making predictions about the future. And the problem is rarely that they are too willing to make robust, falsifiable predictions with explicit numbers. The problem is they talk a load of guff about X being likely, and then when X doesn’t happen, just quietly pretend they never said anything. If they were forced to give percentage estimates for things it might make them face up to how little they actually get right.

K&K, on the other hand, presumably hang around with economists and finance types, the sorts who collateralised all their subprime mortgage debt in the early 2000s and felt very pleased with themselves. Could it be that lots of them really do build statistical models of the world which say “lots of poor people in the Midwest defaulting on their mortgage at the same time is a once-in-several-trillion-lifetimes-of-the-universe event”, and think, “OK, sounds legit”? Could it be that if these masters of the universe were writing a forecast of the Bolivian presidential campaign, their percentages for Áñez, Arce, Camacho and Mesa would add up to exactly 100, with no room for the unforeseeable at all? Maybe K&K are speaking to those people, and they’re common enough in finance and economics to be the norm rather than the exception. 

It would be terrifying to learn that doctors actually think that “drug trial finds positive result” means “this drug works for all patients in all circumstances”. Similarly, if K&K are right in thinking that economists and finance types need to be told that their models are not 100% precise representations of reality – if they think they never need to put a term for “maybe this model is wrong” into their model, and are just happy to bet all our money on them definitely not having forgotten to carry the 2 at some point in their big spreadsheet – then that, too, is completely terrifying. If K&K are right that this book needed to be written, then: holy shit.

It doesn’t make me want to stop using percentage probabilities. It does, however, make me think that, if the financial system really is full of maniacs like that, then the chance of a massive financial crisis in the next decade is a lot higher than 20%.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.


3 Comments
jndlewis92 · 4 years ago

Incentives matter: 1) the author is strictly an observer (not meant to be disparaging); 2) statisticians justify their existence by the collection of data and the creation of models – these models are not intended to be drivers of risk but explanations of it; 3) practitioners, like Mervyn, take risk and bear the consequences of their actions (financial, reputational or otherwise).

As a “financial services professional” directly involved in risk-taking, I’d say the purpose of a model is to stress-test your thesis, not to predict the future (which is impossible). It’s about testing the margin of safety, a/k/a appropriately quantifying key drivers and simulating how they change in different scenarios (as extreme or benign as you deem fit).

The fundamental issue with all models is that they are backward-looking and often cannot disentangle correlation from causation. Entire fields of study are devoted to improving statistical techniques in this area, but unfortunately, where real-time public policy or capital allocation is concerned, the effects only become apparent ex post. As such, I have great respect and sympathy for Mervyn’s admission of his limits as a policymaker.

mlipkin · 4 years ago

Explains a lot. Massive over-interpretation of statistical models.

Doing statistics properly requires integrity, and humans probably only come up to the mark when they are constantly smacked in the face by physical reality; hence the ‘hard sciences’ actually work.

Does the idea that individual mortgage defaults are entirely independent of each other make sense? No, but “we are making lots of money right now so we must be right” is probably standard finance thinking.

As long as the cash is flowing in, there will be massive over-interpretation of their models – until it isn’t, at which point they get bailed out by the taxpayer.

The 30%-95% probability interval given to Obama re the likelihood of Osama bin Laden’s presence in the house is probably a Bayesian posterior credible interval, and not a particularly useful way of communicating information – even to someone with Obama’s massive brain – so him reducing it to something he can imagine (which, note, is within the credible interval) is fine; nothing to see here. (This example also turns up in a moronic economist article.)
K&K’s delight in saying “I don’t know” is just an exercise in enjoying power. Their underlings are probably not allowed to say this, but they can sit in their comfy chairs saying “I don’t know” and everyone thinks they are being very profound, just because they have reached the top of the tree. But someone somewhere might have to make a decision, and whatever the outcome, K&K are not to blame.

James P · 1 year ago

While I agree that tossing out the numbers in favour of “I don’t know” seems stupid and lazy, I do think a massive increase in scepticism about the sort of models used for policy development in the areas of climate science or pandemic projections is more than warranted.