Saturday, December 10, 2005

Believe Me; I'm Not Famous

New Yorker Book Review of Expert Political Judgment: How Good Is It? How Can We Know? by Philip Tetlock.

Tetlock conducted a twenty-year study asking 284 people who opine professionally on economic and political matters for their predictions about future events. The conclusion: they stink.
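(How do you score "stink"? Forecasting studies like Tetlock's grade each prediction with a probability score. Here is a minimal sketch, assuming the standard Brier score; the pundits and all the numbers below are made up for illustration:)

    # Brier score: mean squared error between the probabilities a
    # forecaster stated and what actually happened. 0.0 is perfect;
    # always guessing 50% earns exactly 0.25; higher is worse.
    def brier_score(forecasts, outcomes):
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # Five events: 1 means the event happened, 0 means it didn't.
    outcomes = [1, 0, 0, 1, 0]
    hedgehog = [0.9, 0.9, 0.8, 0.9, 0.7]  # bold, one-big-idea forecasts
    fox      = [0.7, 0.4, 0.3, 0.6, 0.4]  # diffident, hedged forecasts

    print(brier_score(hedgehog, outcomes))  # ~0.39: worse than coin-flipping
    print(brier_score(fox, outcomes))       # ~0.13: much better calibrated

A confident expert can literally score worse than the proverbial dart-throwing chimpanzee. From the review: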
The accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge. People who follow current events by reading the papers and newsmagazines regularly can guess what is likely to happen about as accurately as the specialists whom the papers quote.
...
Knowing a little might make someone a more reliable forecaster, but Tetlock found that knowing a lot can actually make a person less reliable.... The expert also suffers from knowing too much: the more facts an expert has, the more information is available to be enlisted in support of his or her pet theories, and the more chains of causation he or she can find beguiling....

Most people tend to dismiss new information that doesn’t fit with what they already believe. Tetlock found that his experts used a double standard: they were much tougher in assessing the validity of information that undercut their theory than they were in crediting information that supported it....

Tetlock found that, consistent with this asymmetry, experts routinely misremembered the degree of probability they had assigned to an event after it came to pass. They claimed to have predicted what happened with a higher degree of certainty than, according to the record, they really did.... Plausible detail makes us believers....

In 1982, an experiment was done with professional forecasters and planners. One group was asked to assess the probability of “a complete suspension of diplomatic relations between the U.S. and the Soviet Union, sometime in 1983,” and another group was asked to assess the probability of “a Russian invasion of Poland, and a complete suspension of diplomatic relations between the U.S. and the Soviet Union, sometime in 1983.” The experts judged the second scenario more likely than the first, even though it required two separate events to occur.

And, like most of us, experts violate a fundamental rule of probabilities by tending to find scenarios with more variables more likely.
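The fundamental rule in question is the conjunction rule: the probability of two things both happening can never exceed the probability of either one alone, no matter how vivid the combined story sounds. A toy illustration, with probabilities invented for the 1983 scenarios:

    # Conjunction rule: P(A and B) <= min(P(A), P(B)).
    # All numbers here are made up for illustration.
    p_invasion   = 0.10  # P(A): Russian invasion of Poland
    p_suspension = 0.05  # P(B): U.S.-Soviet diplomatic break

    # The joint scenario is capped by its less likely part...
    ceiling = min(p_invasion, p_suspension)  # 0.05

    # ...because P(A and B) = P(A) * P(B given A), and P(B given A) <= 1.
    p_suspension_given_invasion = 0.4  # also made up
    p_both = p_invasion * p_suspension_given_invasion  # 0.04

    assert p_both <= ceiling  # the two-event scenario cannot be more likely

Adding the invasion makes the story more plausible-sounding while making the event strictly less probable.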


"why some people make better forecasters than other people":
Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.

I'd like to think that I am a fox and that my predictions will be better. But thinking I am a fox means I am expressing considerable confidence that I am already proficient at forecasting, which means I am probably a hedgehog. Then again, this back-and-forth of uncertainty within a single paragraph leads me to the certain conclusion that I must be a fox! (And I am very much unfamous; that helps.)

But seriously, folks: the problem of confirmation bias is a serious one. Honestly, I do try to ask myself occasionally whether I am just as myopic as many of those I disagree with. I do tend to read things that I find agreeable; and when I read things I disagree with, I suspiciously consider each claim and think of reasons why the analysis is faulty. I tend to dismiss very strong statements as mere puffery when they come from a source I agree with, while I label similar statements by others as deceptive.

While Tetlock found no partisan bias in the distribution of foxes and hedgehogs among the experts, I wonder whether that would hold in the general population. My confirmation bias says no, but if I had to predict with some money on it, I would say yes.

And of course, doesn't this fox/hedgehog dichotomy sound like an idea a hedgehog would come up with? I suppose the answer is that yes, it's simplistic, but simplicity is useful in communication.

KQED recently aired a program with "IDEO General Manager Tom Kelley on Innovation." He sounded to me like a self-help guy for people who want to make their business work better... and full of shit. What turned me off was his disdain for the "devil's advocate" character people play in a discussion of a new idea. He imagines that many great ideas have been stifled by the devil's advocate. I wonder if this guy has read his history (the Bay of Pigs?!) or a newspaper in the last six years (the dot-com bust? The AOL-Time Warner merger? The war in Iraq?). You would think a person who works in the technology field would be well aware of the dangers of positive thinking and of dismissing criticism ("No, really, it's a new economy. P/E ratios no longer matter."). He is a hedgehog promoting hedgehogism with little more support than pleasant anecdotes (while ignoring the failures) and a persuasive tone of voice. Sure, companies should focus on innovating, but attempts at innovation can often be more damaging than the status quo.

The reflexive nature of an expert's book about experts reminds me of the book on memes, which forwards an idea that is itself a meme.
