Why We Use an Informal Checklist Before Believing Forecasts
People make decisions based on forecasts, and most forecasts suck. We analyse more than 200 forecasts a year for clients. With that many to look at, we run an informal checklist over forecasters and their forecasts to see whether any alarm bells are ringing before we spend a whole lot of time analysing the workmanship and predictions of any particular forecaster's crystal ball.
Applying the Checklist to a High Profile Forecast
For a bit of dark fun, we've used our checklist to make an informal, external assessment of Professor Neil Ferguson's Covid-19 death forecasting model and his continual, high-profile, and influential predictions.
As most people know, the large number of potential Covid deaths projected by his team was the main influence behind lockdowns in the UK and US. His team modelled Covid deaths under three scenarios of non-pharmaceutical interventions (NPIs): do nothing (followed by very few Western countries, most notably Sweden, and a handful of US states and counties); mitigation (roughly followed by most Western European countries and most US states); and suppression (followed by a handful of countries, most notably China).
Here’s how we rate this modeller and his model.
Imperial College Epidemiological Forecasting Model – External Assessment
It's probably obvious from this example that we wouldn't place any reliance on forecasts coming out of this model or from this modeller.
If It Fails the Checklist – Throw It Out
Looking more broadly, it's not always possible to run every one of these tests, but doing a handful should help you decide whether to trust a source enough to have confidence in its projections, whether little alarm bells should be ringing and you should investigate further, or whether you should just throw it out and use something else.
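To make that three-way verdict concrete, here's a minimal sketch in Python of how a pass/fail checklist might be scored. The check names and thresholds below are our own illustrative assumptions, not the actual checklist we use with clients.

```python
# Illustrative checks only - a real checklist would reflect your own criteria.
CHECKS = [
    "track_record_published",    # has the forecaster published past hit rates?
    "code_and_data_available",   # can the model be inspected and rerun?
    "uncertainty_ranges_given",  # are point estimates accompanied by ranges?
    "assumptions_stated",        # are the key assumptions spelled out?
    "out_of_sample_validated",   # was the model tested on data it wasn't fit to?
]

def assess(results: dict[str, bool]) -> str:
    """Map checklist results to one of three verdicts."""
    passed = sum(results.get(check, False) for check in CHECKS)
    if passed == len(CHECKS):
        return "trust: no alarm bells"
    if passed >= len(CHECKS) - 2:  # illustrative threshold for "investigate"
        return "alarm bells: investigate further"
    return "throw it out and use something else"

# Example: a forecaster who publishes neither code nor a track record.
print(assess({
    "track_record_published": False,
    "code_and_data_available": False,
    "uncertainty_ranges_given": True,
    "assumptions_stated": True,
    "out_of_sample_validated": False,
}))  # -> "throw it out and use something else"
```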