Scott Armstrong and Kesten Green are making a last call for help with their paper, "Forecasting methods and principles: Evidence-based checklists". Their ambition for the paper is that, through the use of checklists, it will make scientific forecasting accessible to all researchers, practitioners, clients, and other stakeholders who care about forecast accuracy.
Here is the abstract of the paper:
Problem: Few practitioners or academics use findings from nearly a century of experimental research that would allow them to substantially reduce forecast errors. In order to improve forecasting practice, this paper develops evidence-based guidance in a form that is easy for forecasters and decision-makers to access, understand, and use: checklists.
Methods: Meta-analyses of experimental research on forecasting were used to identify the principles and methods that lead to accurate out-of-sample forecasts. Cited authors were contacted to check that summaries of their research were correct. Checklists to help forecasters and their clients practice evidence-based forecasting were then developed from the research findings. Finally, appeals to identify errors of omission or commission in the analyses and summaries of research findings were sent to leading researchers.
Findings: Seventeen simple forecasting methods can, between them, provide accurate forecasts for diverse problems. Knowledge on forecasting is summarized in the form of five checklists, with guidance on selecting the methods most suitable for the problem and on implementing them.
Originality: Three of the five checklists—addressing (1) evidence-based methods, (2) regression analysis, and (3) assessing uncertainty—are new. A fourth—the Golden Rule checklist—has been improved. The fifth—the Simple Forecasting checklist (Occam's Razor)—remains the same.
Usefulness: Forecasters can use the checklists as tools to reduce forecast errors—often by more than one-half—compared to those of forecasts from commonly used methods. Scientists can use the checklists to devise valid tests of the predictive validity of their hypotheses. Finally, clients and other interested parties can use the checklists to determine whether forecasts were derived using evidence-based procedures and can, therefore, be trusted.
The Unscaled Mean Bounded Relative Absolute Error (UMBRAE) is a new way to measure forecast errors, proposed and well supported in Chen, Twycross, and Garibaldi (2017), "A new accuracy measure based on bounded relative error for time series forecasting". The new measure appears to be a promising alternative, and is certainly worthy of further comparative research. Some analysts may want to continue using the RAE until further testing is done. We suggest using both measures in the meantime.
Don Miller and Dan Williams have recently re-posted spreadsheets and X-12 specifications that can be used to implement the seasonal damping method they proposed in "Shrinkage Estimators Of Time Series Seasonal Factors And Their Effect On Forecasting Accuracy," International Journal of Forecasting 19(4): 669-684, and "Damping seasonal factors: Shrinkage estimators for the X-12-ARIMA program," International Journal of Forecasting 20(4): 529-549.
Damping is needed to counteract the excessive seasonal variation produced by classical decomposition and X-12, which is an artifact of the random noise in the data. These articles show that a small but significant reduction in forecast error is obtained when using seasonal damping. The benefit is more pronounced when seasonal factors are determined through classical decomposition, but remains significant when X-12 is used.
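The idea of damping can be sketched as shrinking each multiplicative seasonal factor toward 1. The constant weight below is purely illustrative; Miller and Williams derive the shrinkage weight from the data (their spreadsheets and X-12 specifications implement the actual estimators).

```python
def damp_seasonal_factors(factors, weight=0.8):
    """Shrink multiplicative seasonal factors toward 1.0 (illustrative).

    A weight of 1 leaves the estimated factors unchanged; a weight of 0
    removes seasonality entirely. The fixed default of 0.8 is an
    assumption for illustration, not the Miller-Williams estimator,
    which chooses the weight based on the noise in the series.
    """
    return [1.0 + weight * (f - 1.0) for f in factors]

# Quarterly factors estimated by classical decomposition:
raw = [1.30, 0.90, 0.70, 1.10]
damped = damp_seasonal_factors(raw)  # approximately [1.24, 0.92, 0.76, 1.08]
```

Damping pulls the extreme factors (1.30 and 0.70) in more than the mild ones, which is why it counteracts the noise-driven exaggeration of seasonality.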
The spreadsheet software for seasonal factor damping is available from the Software Page.
Scott Armstrong and Kesten Green are seeking suggestions of relevant experimental evidence that they have overlooked in their new working paper, "Demand Forecasting II: Evidence-based methods and checklists". They describe the problem that the paper addresses as follows:
Decision makers in the public and private sectors would benefit from more accurate forecasts of demand for goods and services. Most forecasting practitioners are unaware of discoveries from experimental research over the past half-century that can be used to reduce errors dramatically, often by more than half. The objective of this paper is to improve demand forecasting practice by providing forecasting knowledge to forecasters and decision makers in a form that is easy for them to use.
The paper is available from ResearchGate.