I recently came across this Wall Street Journal article from 2010 by Jonah Lehrer, author of How We Decide. It seemed so relevant to today that I had to share it:
It's an American tradition: In the final weeks before an election, the
airwaves are saturated with pundits and their bold predictions. This
time around, they might be forecasting a decade of tea-party dominance,
or the imminent comeback of the Democrats or a return to recession in
the face of political deadlock. And as these pundits rattle off their
reasons, they sound as if they know what they're talking about.
But do they? Philip Tetlock, a psychologist at the University of
California, Berkeley, has spent 25 years trying to find out. He first
got interested in the subject during the run-up to the 1984 presidential
election, when dovish experts said Ronald Reagan's tough talk to the
Soviets was needlessly antagonizing them, while hawkish experts were
convinced that the Soviets needed to be aggressively contained. Mr.
Tetlock began to monitor their predictions, and a few years later, he
came to a sobering conclusion: Everyone was wrong. Both hawks and doves
failed to anticipate the rise of Mikhail Gorbachev and glasnost, even if
the pundits now claimed to have seen it coming all along.
The dismal performance of the experts inspired Mr. Tetlock to turn his case
study into an epic experimental project. He picked 284 people who made
their living "commenting or offering advice on political and economic
trends," including journalists, foreign policy specialists, economists
and intelligence analysts, and began asking them to make predictions.
Over the next two decades, he peppered them with questions: Would George
Bush be re-elected? Would apartheid in South Africa end peacefully?
Would Quebec secede from Canada? Would the dot-com bubble burst? In each
case, the pundits rated the probability of several possible outcomes.
By the end of the study, Mr. Tetlock had quantified 82,361 predictions.
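Forecasts like these, where an expert assigns a probability to each possible outcome, are commonly graded with a Brier score (mean squared error between the stated probability and what actually happened). The article doesn't say exactly how Mr. Tetlock scored his 82,361 predictions, so this is an illustrative sketch of that standard approach, not a reconstruction of his method:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and reality.

    forecasts: probabilities in [0, 1] that each event would occur
    outcomes:  1 if the event occurred, 0 if it did not
    Lower is better; always answering 0.5 scores exactly 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical overconfident pundit: 90% sure about events that in
# fact happened only half the time.
pundit = brier_score([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0])
# The "coin toss" baseline the article alludes to: always say 50%.
coin = brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0])
print(pundit, coin)  # 0.41 0.25 — the confident pundit does worse
```

The example makes the article's "worse than a coin toss" claim concrete: confident but miscalibrated probabilities are penalized more than an uninformative 50/50 answer.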
How did the experts do? When it came to predicting the likelihood of an
outcome, the vast majority performed worse than random chance. In other
words, they would have done better picking their answers blindly out of a
hat. Liberals, moderates and conservatives were all equally
ineffective. Although 96% of the subjects had post-graduate training,
Mr. Tetlock found, the fancy degrees were mostly useless when it came to
making accurate predictions.
The main reason for the inaccuracy has to do with overconfidence.
Because the experts were convinced that they were right, they tended to
ignore all the evidence suggesting they were wrong. This is known as
confirmation bias, and it leads people to hold all sorts of erroneous
opinions. Famous experts were especially prone to overconfidence, which
is why they tended to do the worst. Unfortunately, we are blind to this
blind spot: Most of the experts in the study claimed that they were
dispassionately analyzing the evidence. In reality, they were indulging
in selective ignorance, as they explained away dissonant facts and
contradictory data. The end result, Mr. Tetlock says, is that the
pundits became "prisoners of their preconceptions." And their
preconceptions were mostly worthless.
What's most disturbing about Mr. Tetlock's study is that the failures of the pundit
class don't seem to matter. We rely on talking heads more than ever,
even though the vast majority of them aren't worth their paychecks. Our
political discourse is driven in large part by people whose opinions are
less accurate than a coin toss.
Mr. Tetlock proposes
forming a nonpartisan center to track the performance of experts, just
as we track the batting averages of baseball players. In the meantime,
he suggests that we learn to ignore those famous pundits who are full of
bombastic convictions. "I'm always drawn to the experts on television
who stumble a little on their words," he adds. "For me, that's a sign
that they're actually thinking about the question, and not just giving a
canned answer. If an expert sounds too smooth, then you should probably
change the channel."
As Mr. Tetlock points out, the
future is impossible to predict. Even with modern polling, we can barely
anticipate the outcome of an election that is just a few days away. If a
pundit looks far beyond that time horizon, to situations with a
thousand variables and very little real information to back up a
prediction, we should stop listening and get out a quarter.
—Jonah Lehrer is the author of How We Decide. His column appears every other week in the WSJ.
Originally published in the WSJ as "Beware Our Blind Seers."