Generalists in Foresight

Tim Mack

In 1984, political scientist Philip Tetlock, attending a meeting of the U.S. National Research Council’s Committee on American-Soviet Relations, observed how contradictory the participants’ authoritative predictions about the future of the Cold War were. He also noted how readily they dismissed the opinions of equally qualified colleagues on the committee. Accordingly, he began a 20-year project tracking political and economic forecasts from 284 experts (each averaging more than 12 years of senior experience in their specialties), ultimately amassing 82,361 probability estimates about the future.

What Tetlock found was a high percentage of assessments that were partially or completely inaccurate, regardless of the authors’ specialty, years of experience, or access to classified information. This was especially striking when experts declared specific future events to be impossible or nearly impossible (15% of these nevertheless occurred) or to be a “sure thing” (25% of these failed to occur). At the end of the study, Tetlock concluded, “There is often a curiously inverse relationship between how well forecasters thought they were doing and how well they did.”
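For readers who want the mechanics: Tetlock’s tournaments graded such estimates with Brier scores, essentially the mean squared error between a forecaster’s stated probabilities and what actually happened. The short Python sketch below (the numbers are invented for illustration, not drawn from Tetlock’s data) shows why overconfident “sure thing” and “impossible” forecasts are punished so heavily.

    # Minimal sketch of Brier scoring, the rule used to grade probability
    # forecasts in Tetlock's tournaments. All numbers are invented for
    # illustration; they are not Tetlock's data.

    def brier_score(forecasts, outcomes):
        """Mean squared error between stated probabilities (0.0-1.0) and
        outcomes (1 = event occurred, 0 = it did not). A perfect forecaster
        scores 0.0; always guessing 50% scores 0.25."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # An expert who declares sure things (0.99) and impossibilities (0.01)...
    confident = [0.99, 0.99, 0.01, 0.01]
    # ...versus a forecaster who hedges toward the base rate.
    hedged = [0.80, 0.70, 0.30, 0.20]
    # Suppose one "sure thing" fails and one "impossible" event occurs.
    actual = [1, 0, 1, 0]

    print(f"overconfident expert: {brier_score(confident, actual):.3f}")  # 0.490
    print(f"hedged forecaster:    {brier_score(hedged, actual):.3f}")     # 0.265

Both forecasters misjudge the same two events, but the hedged estimates prove far less costly when the “impossible” happens, which is exactly the asymmetry Tetlock documented.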

However, a smaller group within the study achieved much better results than those invested in a single arena of expertise. These individuals weighed the viewpoints of their colleagues and adopted the elements they found persuasive, and they were thus able to integrate apparently contradictory worldviews.

This approach stood in stark contrast to a well-publicized, long-running debate between economist Julian Simon and biologist Paul Ehrlich over the future of the human population and its probable food supply. Ehrlich strongly held that population growth would outrun the global food supply in the near future, while Simon held that innovative technology and new agricultural techniques would continue to solve problems as they arose. As the debate went on, each side amassed ever more data in his own field of expertise, and both ignored other disciplines.

A similar study of what Tetlock came to call “hedgehogs” (those who dig ever deeper into their areas of expertise without considering other factors) examined 22 international banks and a decade of their annual predictions of dollar-to-euro exchange rates. Over that period, these banks (including Barclays, Citigroup, and JPMorgan Chase & Co.) often missed even the direction of change, and 60% of the time the actual exchange rates fell outside the range of all 22 banks’ annual forecasts.

After Tetlock published his results in 2005, the U.S. Intelligence Advanced Research Projects Activity (IARPA) invited him in 2011 to join a four-year prediction tournament among five teams. Tetlock and his collaborator (and wife), psychologist Barbara Mellers, took an innovative approach to team building. Instead of recruiting experts, they issued an open call for volunteers, whom they then screened. The result was a pool of 3,200 individuals, from which they identified a smaller group they called “foxes”: people with wide interests and high curiosity but no special expertise. These foxes were organized into 12-person teams, which regularly outperformed a group of experienced intelligence analysts with access to classified data.

And as in the earlier Tetlock study, the hedgehogs rarely changed (and often excused) their positions when encountering unexpected outcomes, while the foxes readily adjusted their thinking on the issue at hand. This set me to remembering a study futurist Joseph F. Coates led for the Small Business Administration about 25 years ago, in which a team of futurists was called back five years later to refresh its forecasts. A few of those team members simply repeated the same forecasts in detail, but the more interesting participants reviewed each prediction, noting which ones were off the mark and why they felt the error had occurred (citing reasons such as insufficient data and incorrect assumptions). To my way of thinking, this second approach proved more useful to both the client and the rest of the team.

As a result, I concur with the view that the best forecasters treat their own ideas as hypotheses in need of testing. When they are wrong, analyzing why becomes part of their ongoing learning process. The Tetlock studies reinforce my own view that weak signals are best identified and understood by teams of collaborators who cross disciplines and treat their colleagues as research assets to be drawn on rather than an audience to be convinced.

This does not mean that generalists always make the best forecasters (nor do specialists). But it is convincing evidence that the open-minded and curious are likely to produce the best forecasting track records.

Further Reading

“The Peculiar Blindness of Experts” by David Epstein, The Atlantic (June 2019).
Superforecasting: The Art and Science of Predicting by Philip E. Tetlock and Dan Gardner. Wharton Digital Press, 2015.

Timothy C. Mack is managing principal of AAI Foresight Inc. and former president of the World Future Society (2004-2014). Contact him at tcmack333@gmail.com.