The initiative and sponsorship for the improvements we have made to the site came from past and present members of the International Institute of Forecasters Board. We are grateful to the Board for their practical help and encouragement. Since the changeover, daily visits to the site have doubled.

On a dangerous journey from Troy to Ithaca, what would you like the captain of your boat to look like?

In a paper published in Science, Antonakis and Dalgas show that judgments based on political candidates’ faces were reliable forecasts of election outcomes. What is new in the study is that judgments made by children (aged 5 to 13) were as predictive as those made by adults.

The wisdom of many in one mind: Improving individual judgments with dialectical bootstrapping

As a forecaster, you will recognize the following two situations:

—The experts you consulted made contradictory predictions.

—Depending on which statistical models, modeling assumptions, or data sets you use, your forecasts differ.

What should you do with these contradictory, yet plausible forecasts? A time-proven solution is to mechanically average the differing predictions. As long as the errors of the predictions are at least somewhat independent, the average will be consistently more accurate than an individual prediction (and sometimes the average will be better than the best prediction). But what if you cannot construct a statistical model and can only ask a single expert? Using a technique called “dialectical bootstrapping,” Herzog and Hertwig (2009) demonstrated that the power of averaging somewhat contradictory predictions can be applied to quantitative judgments made by a single person.
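A quick simulation can illustrate why averaging helps when errors are only partly independent. The sketch below uses made-up error magnitudes (a shared error component plus an independent one per estimate), not data from the study; it compares the typical error of a single estimate with that of the average of two.

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 100.0
N_TRIALS = 2000

def estimate(shared_error):
    # Each estimate = truth + an error shared by both estimates
    # (their correlated part) + an independent error of its own.
    return TRUE_VALUE + shared_error + random.gauss(0, 10)

individual_sq_err = []
average_sq_err = []
for _ in range(N_TRIALS):
    shared = random.gauss(0, 5)      # error common to both estimates
    first = estimate(shared)
    second = estimate(shared)
    avg = (first + second) / 2
    individual_sq_err.append((first - TRUE_VALUE) ** 2)
    average_sq_err.append((avg - TRUE_VALUE) ** 2)

rmse_individual = statistics.mean(individual_sq_err) ** 0.5
rmse_average = statistics.mean(average_sq_err) ** 0.5
print(f"RMSE of a single estimate: {rmse_individual:.2f}")
print(f"RMSE of the average:       {rmse_average:.2f}")
```

Averaging cancels the independent error components but not the shared one, so the average is reliably better than a single estimate as long as the two estimates are not perfectly correlated.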

Try it for yourself: What will be the US inflation rate for the last quarter of 2009? First, make your best guess and write it down. Second, temporarily assume that your first prediction is off the mark. Think about a few reasons why that could be. Based on this new perspective, make a second, rival (“dialectical”) estimate and write it down. Finally, use the average of both estimates as your prediction.
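The three steps above reduce to a single averaging operation once the two estimates are in hand. A minimal sketch, with hypothetical inflation guesses chosen for illustration:

```python
def dialectical_forecast(first_estimate, rival_estimate):
    """Average a first guess with a rival ("dialectical") guess
    made after assuming the first guess was off the mark."""
    return (first_estimate + rival_estimate) / 2

# Hypothetical example: first guess 2.0% inflation; after assuming
# that is too high and reconsidering, a rival guess of 1.0%.
print(dialectical_forecast(2.0, 1.0))  # -> 1.5
```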

In a study on quantitative estimates (e.g., “In what year was electricity discovered?”), the authors showed that this simple technique improves accuracy because it elicits two somewhat independent estimates. They observed similar results in a study in which people twice predicted the representation the Swiss political parties would achieve in the 2007 election: once from their own perspective and once from that of a dissimilar other person. Vacillating between forecasts can be agonizing. But, as dialectical bootstrapping illustrates, being of two minds can also work to one’s advantage.

Herzog, S. M., & Hertwig, R. (2009). The wisdom of many in one mind: Improving individual judgments with dialectical bootstrapping. Psychological Science, 20, 231-237.

Call for Papers: Decision Sciences Institute, 40th Annual Meeting on November 14-17, 2009 in New Orleans, LA USA (1 April 2009)

With the agreement of the Decision Sciences Institute (DSI) program chair, there is an effort to organize a mini-track on Forecasting within the Statistics Track. If you are willing to present a paper during the meeting, we can structure sessions (a minimum of 2 and a maximum of 8 papers) or a tutorial. A commitment to present the paper is binding. The deadline for submitting an abstract is May 1, 2009. If you are interested, please contact Benito Flores.

Decision Sciences Institute (DSI) provides a forum for disseminating knowledge and advancing the science and practice of decision making in organizations. DSI supports the advancement of high-quality research and sponsors an annual meeting for discussing new developments and generating new ideas.

The Forecasting Method Selection Tree is a convenient and effective tool that helps forecasters to use evidence-based forecasting principles to select the best methods for their problems. We have made some important changes to the Selection Tree to bring it up-to-date with current knowledge.

The changes include, first, reclassifying Delphi as a method that can be appropriate for eliciting experts' judgmental forecasts when large changes are expected; second, eliminating game theory and data mining as methods that have not been shown to produce valid forecasts; and, third, completely reworking the section of the Tree that relates to causal methods to include more conditions and, most importantly, the versatile Index method.

We found that visitors were having problems with the sticky pop-up boxes that provide detail on the conditions and methods, so we have switched this feature off for the time being. We are revising the pop-up boxes to fit the changes we have made to the Tree; once that is done, we will re-implement the pop-ups as a simple roll-over feature.

Finally, we are updating the Methodology Tree to reflect the changes we have made to the Selection Tree. Pop-ups are also turned off on the Methodology Tree while we work on these changes.