Superforecasting - Critical summary review - Philip E. Tetlock

Science and Economics

This microbook is a summary/original review based on the book: Superforecasting: The Art and Science of Prediction

Available for: Read online, read in our mobile apps for iPhone/Android and send in PDF/EPUB/MOBI to Amazon Kindle.

ISBN: 9780804136716

Publisher: Crown

Critical summary review

Whether you’re thinking about changing jobs or making an investment, about buying a house or popping the question – you constantly make decisions in the present based on how you expect the future to unfold. In that way, you are a forecaster. We all are.

Some, however, do this for a job and are seemingly capable of predicting everything from the outcomes of presidential elections to the effects of market crashes. In “Superforecasting,” Philip E. Tetlock and Dan Gardner explain how they do this – and analyze whether we should trust them.

So, get ready to discover the secrets of the art of forecasting and learn how you can improve your predictions.

Scientific determinism: the unbearable lightness of predicting

Imagine a person walking at a certain speed toward a certain destination. Would you be able to predict how long it would take this person to reach that destination? Well, maybe not precisely, but there’s a good chance your guess won’t be way off, right? Now, say you know the exact distance between this person’s current location and the endpoint of the journey, and you have a sophisticated tool that can accurately measure the average speed from afar. It’s not even a guess anymore, is it? Since time equals distance divided by speed (t = d/s), you can now give a pretty precise answer to the introductory question. The accuracy of your prediction doesn’t change the fact that it is still a prediction: you’ve just succeeded in foretelling the future using nothing but a few figures readily available to you in the present.
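
To see just how mechanical this kind of prediction is, here it is as a few lines of Python (a minimal sketch; the figures are made up for illustration and the function name is not from the book):

```python
# Laplace-style prediction from figures available in the present:
# time = distance / speed
def travel_time_hours(distance_km, speed_kmh):
    return distance_km / speed_kmh

# A walker 2 km from the destination, moving at 5 km/h,
# should arrive in 0.4 hours, i.e. about 24 minutes.
print(travel_time_hours(2.0, 5.0) * 60)  # -> 24.0
```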

Thinking along these lines, scientists supposed for centuries that the more we know about the present, the more predictable the future should be. And in 1814, the French mathematician and astronomer Pierre-Simon Laplace became the first to articulate this form of “causal or scientific determinism” in a remarkable essay titled “A Philosophical Essay on Probabilities.”

“We may regard the present state of the universe as the effect of its past and the cause of its future,” he wrote in it. “An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” This imaginary intellect later became known as Laplace’s demon, and it would haunt statistics and probability studies for the next century and a half.

The flap of a butterfly’s wings: the birth of chaos theory

Then, in 1972, an American meteorologist by the name of Edward Lorenz wrote an arrestingly titled paper (“Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?”) that shifted scientific opinion in the opposite direction. Quite by accident, he had discovered a decade earlier that even the tiniest data-entry variations in computer simulations of weather patterns – like rounding down 0.506127 to 0.506 – could produce dramatically different long-term forecasts. “Two states differing by imperceptible amounts,” he wrote as early as 1963, “may eventually evolve into two considerably different states.” That sentence became the foundation of what is now known as chaos theory.

To understand it better, take the formation of clouds, for example. As you know from school, clouds form when water vapor coalesces around dust particles. However, what shape a cloud will take depends on so many complex and continually evolving feedback interactions among the water droplets that it is impossible to predict how a particular cloud will develop at any given moment. It’s not that you can’t try to eliminate all measurement errors – it’s that the very act of measuring creates new ones. And it’s not that the evolution of a cloud is random – it is, for lack of a better word, unique and unrepeatable, mainly because it is highly sensitive to its complicated initial conditions. Change anything in them, however minor, and you’ll get an enormously different outcome.

So, in principle – to use Lorenz’s more famous illustration of chaos theory – “a lone butterfly in Brazil could flap its wings and set off a tornado in Texas, even though swarms of other Brazilian butterflies could flap frantically their whole lives and never cause a noticeable gust a few miles away.” Most systems are simply too complex to be predictable, and minor events that would have made no difference under other circumstances can, under the present conditions, lead to vastly different outcomes.

Illusions of knowledge: street vendors and black swans

To understand Lorenz’s “butterfly thought experiment” better, take, for example, the real-world case of Mohamed Bouazizi, a 26-year-old Tunisian street vendor who set himself on fire on December 17, 2010, after being humiliated by corrupt police.

At the time, Tunisia had so many problems that the self-immolation of an extremely poor street vendor (there are countless like him in the Arab world) should have meant nothing in particular to anybody. And yet – it did: the event sparked the Arab Spring, which eventually toppled rulers in Tunisia, Egypt, and Libya and shook regimes in Syria, Jordan, Kuwait, and Bahrain. Bouazizi wasn’t the first to be harassed by the police, nor was he the first to set himself on fire in protest. But he was, to use a common phrase, the straw that broke the camel’s back. We don’t know why, because there is nothing to compare this event to. And that is one of the major problems of forecasting: how do you predict a precedent?

Well, simply put, you don’t – at least according to Nassim Nicholas Taleb’s exceptional book, “The Black Swan.” If you were a European living four centuries ago and had seen all the swans on the continent, there was no way you could have guessed that black swans lived somewhere in the swamps of Australia. Just as well, no matter how much information you had about the past and the world’s immediate present in 2001, there was no way you could have predicted the September 11 attacks – for the simple reason that nothing like that had ever happened before to equip you with the tools for a correct prediction. Rare events are, by definition, improbable: otherwise, they wouldn’t be rare, would they?

Measuring predictions: the Brier score and the Good Judgment Project

Now, according to Taleb, black swan events – and they alone – determine the course of history. “History and societies do not crawl,” he wrote, paraphrasing Goethe. “They make jumps.” If that is true, Tetlock and Gardner admit, then forecasting is hardly worth bothering with: no one can predict black swan events accurately.

Fortunately, this is obviously not the case. “Look at the inch-worm advance in life expectancy,” they note. “Or consider that an average of 1% annual global economic growth in the 19th century and 2% in the 20th turned the squalor of the 18th century and all the centuries that preceded it into the unprecedented wealth of the 21st. History does sometimes jump. But it also crawls, and slow, incremental change can be profoundly important.”

“The Black Swan” was published in 2007, around the same time the U.S. Intelligence Community – a multibillion-dollar apparatus employing some 20,000 analysts “assessing everything from minute puzzles to major events such as the likelihood of an Israeli sneak attack on Iranian nuclear facilities or the departure of Greece from the eurozone” – created IARPA, an agency founded on the belief that the future is much more predictable than Taleb would like us to believe.

One way IARPA tried to more accurately predict important world events was through a tournament in which “five scientific teams led by top researchers in the field competed against each other to generate accurate forecasts on the sorts of tough questions intelligence analysts deal with every day.”

How did the agency measure their scores? Easy: by using the official forecasting scoring system. Apparently, that exists. The math behind it has been around since 1950, when it was developed by Glenn W. Brier – which is why the results are usually called Brier scores. Put simply, “Brier scores measure the distance between what you forecast and what actually happened. So, Brier scores are like golf scores: lower is better. Perfection is 0. A hedged fifty-fifty call, or random guessing in the aggregate, will produce a Brier score of 0.5. A forecast that is wrong to the greatest possible extent – saying there is a 100% chance that something will happen and it doesn’t, every time – scores a disastrous 2.0, as far from The Truth as it is possible to get.”
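
The arithmetic behind those numbers is simple enough to show in a few lines of Python (a minimal sketch of the Brier scoring rule as described in the quote above; the function name is illustrative, not from the book):

```python
def brier_score(forecast, outcome_index):
    """Brier (1950) score: the sum of squared differences between the
    forecast probabilities and what actually happened (1 for the outcome
    that occurred, 0 for the rest). Lower is better; 0 is perfect."""
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast)
    )

# A hedged fifty-fifty call on a yes/no question scores 0.5 ...
print(brier_score([0.5, 0.5], outcome_index=0))  # 0.5
# ... a perfect forecast scores 0 ...
print(brier_score([1.0, 0.0], outcome_index=0))  # 0.0
# ... and saying something is 100% certain when it doesn't happen
# scores the worst possible 2.0.
print(brier_score([1.0, 0.0], outcome_index=1))  # 2.0
```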

One of the original five teams at the IARPA tournament was the Good Judgment Project (GJP), co-created by none other than Tetlock himself, one of the authors of “Superforecasting.” In the first year, GJP beat the official control group by 60%. In the second year, it did even better, beating not only the control group by 78% but also its university-affiliated competitors by surprisingly hefty margins. GJP even outperformed professional intelligence analysts with access to classified data! Needless to say, after only two years – the tournament was supposed to last four – GJP was doing so much better than its academic competitors that IARPA dropped all the other teams.

Ten commandments for aspiring superforecasters

How did they do it, you wonder? Believe it or not, mostly by adhering to the following Ten Commandments:

  1. Triage. You wouldn’t have been able to guess the winner of the 1952 presidential election in 1940. So, don’t bother predicting black swan events. Focus on questions where your hard work is likely to pay off. 
  2. Break seemingly intractable problems into tractable sub-problems. This is called Fermi-izing, after physicist Enrico Fermi, who was supposedly capable of estimating, with surprising accuracy, things like the number of piano tuners in Chicago with very little prior information. And this is precisely how Peter Backus, a lonely guy in London, guesstimated the number of potential female partners in his vicinity (the calculation is written out as a short code sketch after this list). He started with the population of the city (6 million) and then winnowed that number down by the proportion of women in the population (50%), the proportion of singles (50%), the proportion in the right age range (20%), the proportion of university graduates (26%), the proportion he would find attractive (5%), the proportion likely to find him attractive (5%), and, finally, the proportion likely to be compatible with him (about 10%). The final number (in case you’re wondering) turned out to be 26.
  3. Strike the right balance between inside and outside views. Nothing is 100% unique, not even black swans: so, look for comparable events, even for seemingly one-of-a-kind cases, and use how often things of that sort happen as your starting point.
  4. Strike the right balance between under- and overreacting to evidence. The effort you put into digging up a piece of information doesn’t make it worth more than the evidence already in front of you. Stay impartial and update your beliefs as you examine new evidence.
  5. Look for the clashing causal forces at work in each problem. Hear the other side: the truth usually lies somewhere between the extremes, so forecasting is, above all, an act of synthesis. It requires integrating multiple perspectives and “reconciling irreducibly subjective judgments.”
  6. Strive to distinguish as many degrees of doubt as the problem permits but no more. Nothing is certain – but, just as well, nothing is impossible either. So, “don’t reserve rigorous reasoning for trivial pursuits”: what you should be concerned with is likelihood, not certainty.
  7. Strike the right balance between under- and overconfidence, between prudence and decisiveness. Neither rush to judgment nor dawdle too long around that “maybe.”
  8. Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases. Don’t adjust your measurements after the fact: embrace your past errors and learn how not to repeat them. 
  9. Bring out the best in others and let others bring out the best in you. Superforecasting is a team effort, so “master the fine arts of team management.”
  10. Master the error-balancing bicycle. Learning requires doing, and doing, in the case of forecasting, inevitably results in errors. Balance them: too few will make you overconfident, and too many will call your expertise into question.
  11. Don’t treat commandments as commandments. In a world where nothing is certain or exactly repeatable, the best you can do is follow guidelines. So, treat these commandments as just that: guidelines.

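As promised under commandment 2, here is the Backus-style Fermi estimate written out as a short Python sketch. The proportions are the ones quoted above; multiplied out they give roughly 20, in the same ballpark as the 26 Backus famously reported (the small gap presumably comes from rounding in the quoted figures):

```python
# Fermi-izing: break an intractable question ("How many potential
# partners are out there for me?") into estimable sub-questions and
# multiply the pieces. Figures are the ones quoted in the summary above.
population = 6_000_000          # people in the city
filters = {
    "women": 0.50,              # proportion of women
    "single": 0.50,             # proportion of singles
    "right age range": 0.20,
    "university graduates": 0.26,
    "he finds attractive": 0.05,
    "likely to find him attractive": 0.05,
    "likely to be compatible": 0.10,
}

estimate = population
for label, proportion in filters.items():
    estimate *= proportion
    print(f"after filtering for {label}: {estimate:,.0f}")

# Final estimate: about 20 potential partners with these rounded inputs,
# of the same order as the ~26 Backus arrived at with his own figures.
```
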
Final Notes

Dubbed “the most important scientific study… on prediction” by The Bloomberg View, “Superforecasting” is brimful of thought-provoking scientific discoveries and unforgettably entertaining tales.

Truly, essential reading.

12min Tip

“Foresight isn’t a mysterious gift bestowed at birth,” Tetlock and Gardner write. “It is the product of particular ways of thinking, of gathering information, of updating beliefs.” Learn and cultivate these habits of thought. They can be very helpful.

Who wrote the book?

Philip E. Tetlock is a Canadian-American writer and the Annenberg University Professor of Psychology and Management at the University of Pennsylvania. He is the author of several books, including “Expert Political Judgment” and “Counterfactual Thought Experiments in World Politics.”
