Forecasting and the Experts We’ve Had Enough Of

The pandemic, the climate emergency and the US presidential election have thrust experts and their forecasts onto the front line of politics in divided societies. The view that we have 'had enough of experts' has become widespread. The phrase originated with Michael Gove during the Brexit referendum, but it is usually quoted misleadingly out of context. What he was actually trying to say was:

"I think the people in this country have had enough of experts... from organisations with acronyms saying that they know what is best and getting it consistently wrong."

The experts he was talking about were economic forecasters. "Getting it consistently wrong" may have been an exaggeration, but his underlying argument is one that academics, and everyone who works in transport, would do well to heed.

Amongst the many uses and misuses of expert opinion, forecasts hold a particular fascination for politicians and the media. Politicians want to reduce the risks of their decisions and deflect any blame if they make a wrong call. For the media, and many of its readers or viewers, expert forecasts fulfil a similar function to horoscopes. When forecasts turn out to be 'wrong' or 'inaccurate', the media and politicians often tell the public that experts of one kind or another must have made mistakes, so public scepticism is hardly surprising. In my view, the biggest mistake is attempting to forecast the future in the first place. With a few limited exceptions, we need to wean ourselves and our political masters off the hallucinatory drug of forecasting.

"I know of no way of judging of the future but by the past," said Patrick Henry in 1775, and that maxim has provided the basis of all forecasting, outside of science fiction or astrology, ever since. However sophisticated the methods, a forecast will only be valid if relationships observed in the past continue to apply in the future. All forecasts are based on data or observations from the past; new combinations of variables may produce new outcomes that no past data can reveal.

Those obvious statements reveal the fundamental weakness in all attempts to forecast human behaviour. Human beings, unlike atoms or molecules, have free will; they can always choose to behave differently in the future from the way they behaved in the past. The relationships between individual actions, social influences and material circumstances are complex. Experimental evidence has shown that putting people in a group, where everyone can see everyone else's behaviour, can lead to unpredictable outcomes. New circumstances may trigger new behaviours, creating relationships between variables which have never been observed before. This is what I call a disrupting change.

The main defence of forecasting is that in practice large groups of people tend to behave in ways which appear predictable over time – and so they do, until a disrupting change occurs. Striking examples include the end of the Baby Boom in the late 1960s, the recession of 2008-9 and the decline in driving per person since the late 1990s. A recent analysis of trip rates performed for the DfT reveals how a disrupting change occurred between 2002 and 2012. The authors used a "year" variable to capture otherwise unexplained changes in trip rates over time. They concluded that "the majority of observed trends in trip rates remain unexplained": adding further variables made no difference.
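The point can be sketched in a few lines of code: a model fitted to a stable past can only project that past forward, so a structural break in behaviour is invisible to it by construction. All the figures below are invented for illustration; they are not the DfT numbers.

```python
def ols(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# A stable pre-break world: hypothetical trips per person, rising
# steadily from 1990 to 2002 (invented figures).
past_years = list(range(1990, 2003))
past_trips = [1000 + 5 * (y - 1990) for y in past_years]

slope, intercept = ols(past_years, past_trips)

# The model faithfully extrapolates the relationship it was shown...
forecast_2012 = slope * 2012 + intercept

# ...but after 2002 behaviour changed: trip rates fell instead of
# rising (again, an invented post-break trend).
actual_2012 = 1060 - 8 * (2012 - 2002)

print(round(forecast_2012))  # 1110: the past, projected forward
print(actual_2012)           # 980: the disrupted present
```

No amount of refinement to the fitted model changes the outcome, because the break is driven by something the historical data never contained.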

Disrupting changes are often followed by critical questioning: why did so few 'experts' anticipate such fundamental changes? A few mavericks who made a correct forecast can always be identified afterwards, but how could anyone have known in advance which mavericks would turn out to be right?

Following the logic above, no forecasting model, however sophisticated, can predict a disrupting change. This means that forecasts will be most accurate where they are least needed – in relatively stable circumstances.

So do forecasts have any use in transport decision-making? Possibly, in a more limited way. The assumption that past relationships will continue into the future may be reasonable when modelling the immediate local impacts of a new road, for example, but it is not reasonable for national traffic forecasts, or GDP forecasts stretching years into the future. It is also unreasonable to base project appraisal on long-term traffic forecasts – over the 60-year horizon required by WebTAG, a forecast is likely to be about as accurate as consulting the stars. (I am not a fan of cost-benefit analysis for several reasons, as I have explained before.)

You may be thinking: OK, but if we scrapped most forecasting, how else could we make decisions? This is a question which some of my colleagues have worked on before. Phil Goodwin has proposed an alternative method for identifying nascent trends which might (or might not) trigger a disrupting change (such as 'peak car'). Glenn Lyons has written about decision-making under uncertainty, and I noticed, as I was writing this article, that he is about to chair this online event, which will debate some of these issues.

I will finish with an appeal to any of you who might find yourselves (as I have on occasion) pressed by politicians or journalists to predict the future – resist if you can, and try to explain why none of us can predict those disrupting changes. To combat public hostility to experts, we all need to be honest about the limits of our expertise.

Steve Melia is a Senior Lecturer in Transport and Planning at the University of the West of England. His next book, Roads, Runways and Resistance – from the Newbury Bypass to Extinction Rebellion, will be published by Pluto Press in January.