Tuesday, January 02, 2007
I’m here to issue a consumer warning: You would get a better handle on 2007 if you wrote out scenarios on scraps of paper, pinned them to dartboards, blindfolded yourself and aimed the darts.
Think I’m being too harsh?
It is one of the best-kept secrets of punditry that the better known a pundit is, the less likely his or her forecasts are to be correct. That’s right – the LESS likely.
Two decades ago the psychologist Philip Tetlock of the University of California, Berkeley began an ambitious project. He set out to test the forecasts of 284 famous Americans who made their living pontificating about politics and economics. They were the sorts of people who opine on chat shows, or get quoted in the newspaper on days such as today...
It wasn’t easy to test the predictions they made in the media, because they were loaded with get-out clauses: words such as “remote chance”, “maybe” and “odds-on favourite”. As he says in his book Expert Political Judgment: How Good Is It? How Can We Know?, the word “likely” could mean anything from barely better than 50/50 to 99 per cent.
So instead he asked them his own questions, each time offering three alternatives. One was the status quo, the second was a move in one direction, and the third was a move in the other. In Australia right now such a question would be: How will the Coalition fare in the next election? Will its proportion of the vote (a) stay the same, (b) increase, or (c) decrease?
He asked his experts to assign a probability to each outcome and examined how they fared a year later.
Over two decades he accumulated 82,361 testable forecasts.
He found that not only were the experts’ predictions not particularly good, but that as a group they performed WORSE than if they had just assigned an equal probability to each of the scenarios presented to them. That is to say, the experts would have done better by ignoring what they knew and assigning option (a) a probability of 33.3 per cent, option (b) a probability of 33.3 per cent, and option (c) a probability of 33.3 per cent.
Put more bluntly: the professional pundits would have done better had they used dartboards.
Tetlock asked the experts both about topics about which they knew a lot (for instance, economists were asked about interest rates, political scientists about elections) and about topics about which they knew little (economists were asked about politics, political scientists about the stock market). He expected the experts to perform better when asked about areas within their fields of expertise. Instead he found no statistically significant difference.
His conclusion: beyond a certain level of general knowledge, the kind you can get from reading the newspaper, extra specialist knowledge doesn’t seem to improve your ability to make predictions.
And there was one group of pundits that performed particularly badly when their predictions were put to the test – the pundits that were the most popular on the chat shows.
Tetlock found that their forecasts were the most extreme, the most attention-getting. They rarely forecast the status quo. (Which is perhaps why they kept getting invited back to appear on television.)
The problem facing experts is that they have the tools (and often the incentive) to convince themselves that their pet theories are right even when a rough glance at the evidence suggests that they are wrong. They know enough detail to convince themselves of things that you or I could not.
When I worked for ABC radio I would from time to time interview fund managers about the likely course of the share market and particular stocks. I had a couple of favourite interviewees. They were the entertaining ones. But I assumed that they would at least have a better idea about the future of the market than would a person off the street. They were after all employed for that expertise.
And yet year after year, in aggregate Australian fund managers have performed worse for their clients than they would have had they just left the money in the top 100 stocks and done nothing.
Without putting too fine a point on it, we would have been better off had we paid the experts not to manage our funds but to twiddle their thumbs.
The last financial year was actually a particularly good one for Australian fund managers. SuperRatings reports that superannuation funds on average made 14.5 per cent. But the share market itself climbed by 19 per cent.
Daniel Kahneman, the first psychologist to win the Nobel Prize for economics, has coined the phrase “delusional optimism” to describe the way in which most of us convince ourselves that we are better at what we do than we really are.
One of the techniques is simply to not look at the evidence. Foreign exchange dealing rooms, funds management houses and hospitals collect masses of data that should enable us to work out just how good each surgeon and each screen jockey really is. But most of it lies unread. Teachers resist attempts to measure their performance.
Another technique is to draw the wrong conclusion when confronted with evidence that we have got something wrong. Each time we are so confronted we make a mental note of the cause of the mistake, so that we don’t make it again. Often repeatedly. We think that this means we are learning from our mistakes.
The more likely conclusion, that we are not very good at the task in hand, rarely occurs to us.
My own view is that we need delusional optimism in order to survive childhood. And when we become adults we often join corporations (or public service departments) in which delusional optimism is encouraged.
In most jobs it counts against you to admit that you don’t know, or are not sure, or that you have doubts.
Whoever has the least doubt gets promoted, becomes manager and gets their optimistic proposals accepted, often with disastrous results.
Kahneman says three quarters of corporate mergers and acquisitions “never pay off”. Yet grander and grander ideas get proposed each year.
The grandest proposed in 2006 was the takeover of Qantas. The grandest completed in 2005 was Sydney’s Cross City Tunnel, now in receivership.
But I’m not making predictions about 2007. I don’t have the expertise.