Kesten Green and Scott Armstrong propose that simplicity in forecasting requires that (1) the method, (2) the representation of cumulative knowledge, (3) the relationships in models, and (4) the relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision makers.

Their review of studies comparing simple and complex methods found 97 comparisons in 32 papers. None of the papers provided a balance of evidence suggesting that complexity improves forecast accuracy. Complexity increased forecast error by 27 percent on average in the 25 papers that included quantitative comparisons. The finding is consistent with prior research aimed at identifying valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple.*

The findings are published in:

Green, K. C., & Armstrong, J. S. (2015). Simple versus complex forecasting: The evidence. Journal of Business Research, 68(8), 1678-1685.

Despite the evidence favoring simplicity, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers' plans; and (3) forecasters' clients may be reassured by incomprehensibility.

  • A working paper version of the Green and Armstrong paper is available in full text here.
  • A draft questionnaire that can be used to measure the simplicity of forecasting procedures, based on the understanding of would-be forecast users, is available as the Forecasting Simplicity Questionnaire.
  • A spreadsheet summarising the studies examined by Green and Armstrong is available here.

In addition to providing original research findings on the effect of simplicity in forecasting, the published article provides an introduction to a special issue of the Journal of Business Research on the topic of Simplicity versus Complexity in Forecasting. The corrected Table of Contents for the Special Issue is available here; it includes links to the individual papers.



Application of the Forecasting Simplicity Questionnaire at the International Symposium on Forecasting 2015, Riverside

Kesten Green presented a paper, co-authored with Scott Armstrong, illustrating the application of the Forecasting Simplicity Questionnaire using the example of climate forecasting. The paper was titled "Are forecasting methods too complex?"

  • The abstract and slides for "Are IPCC climate-forecasting methods too complex?" are available from ResearchDirect here.
  • An accompanying flyer is available here.


*Crone and Nikolopoulos (of Crone et al. 2011) expressed reservations regarding the authors' interpretation of the main findings of their NN3 competition, and provided the following statement to explain their position:

"Crone, Hibon and Nikolopoulos (2011) report the setup and findings of the NN3 competition for computationally intensive methods, an empirical forecasting competition along the lines of the M-competitions; as such the competition was exploratory in nature and there was no specific expectations prespecified in respect to which complex method is expected to perform better. The study indicated that the ex-post top-10 performers indicated some progress in accuracy, but not quite enough to confirm a breakthrough for NNs in the view of Chatfield's (1993), and that the best neural networks and complex statistical models that competed in NN3 were at par in accuracy with simpler models."

A spreadsheet showing Green and Armstrong's analysis of the Crone et al. (2011) findings is available here.



Page created 1 November 2014.

Updated 28 November 2015.