Freakonomics: Why are we so bad at predicting the future?

An investor peers into a crystal ball for a glimpse into his stock's future.

Kai Ryssdal: Time now for a little Freakonomics Radio. It's that moment every couple of weeks when we talk to Stephen Dubner. He is the co-author of the books and the blog and the podcast of the same name. It is, of course, the hidden side of everything. Dubner, welcome back.

Stephen Dubner: Hey thanks, Kai. I thought we'd start things off today by playing some tape. Some very confident tape. Now, for all I know, some of this may have come from your own radio program.

Reporter: The experts on Wall Street are falling all over themselves to predict the year ahead.

Expert 1: Rising inflation rates will send the U.S. in a downward spiral.

Expert 2: Higher interest rates are going to hurt the housing market.

Expert 3: They're going to make money over time.

Expert 4: Amazon...

Expert 5: Cisco...

Expert 6: Pfizer...

Expert 7: I would bet on it.

Expert 8: Health care mutual funds.

Expert 9: General Motors...

Expert 10: Even good old IBM.

Expert 11: Don't hit the "sell" button just yet.

Ryssdal: Oh, my. I don't even know what that was. I can tell you, though, none of it came from my radio program.

Dubner: "I would bet on it," they say. This is about prediction. And it's not just about the economy. We human beings like to predict the future about just about everything. My Freakonomics friend and co-author Steve Levitt, he has a theory as to why.

Steven Levitt: When there are big rewards for people who make predictions and get them right, and there's zero punishment for people who make bad predictions because they're immediately forgotten, that's a recipe for getting people to make predictions over and over.

Ryssdal: I know what this is. And I'm going to channel Steven Levitt here. This is incentives mattering. If you pay people to make predictions and there's no downside, they'll just predict.

Dubner: That is exactly right. And that would be harmless if we weren't so bad at predicting the future. That's the problem here. If you look at academic studies, one after the next, it turns out that even experts are only nominally better than a coin flip. But there is a massive demand for prediction, and thus arises a big supply.

So let's look at this week, for example. On Thursday, the U.S. Department of Agriculture will release a very closely watched crop yield forecast. Here's Joe Prusacki from the USDA's statistics division.

Joseph Prusacki: People get stressed out about it, because it's the first one of the season, and it's like, "OK, is everything in place? Do we make any changes? Are the computer systems all working?"

Ryssdal: And I'm going to guess that the markets watch this one pretty closely just because it's the first one, right?

Dubner: Absolutely. And the stakes are high. The markets care about corn and soybeans and things like that. So the USDA will send a small army of enumerators, actual human beings, out into the fields to check on the progress of crops.

Here's Phil Friedrich, who's been slogging through the cornfields in Northeast Kansas.

Phil Friedrich: I go out there and measure the fields. I go out there and talk face to face with the farmers, and we collect information. What I collect is hard evidence.

Ryssdal: OK, so how hard is this though? You go out there, you count it up, you get a calculator, boom, you're done. I can do that, and I'm a history/political science guy.

Dubner: You would think so, wouldn't you? Unlike financial predictions or political predictions, you don't have to factor in human psychology. But with agriculture, you've got a little thing they like to call the weather.

Now, last year after the USDA put out its August crop forecast, a spell of hot, dry weather totally messed up that forecast. The corn yield almost immediately had to be revised downward by nearly 7 percent.

Ryssdal: So, Goldman Sachs misses earnings by 7 percent, it's the end of the world, right? What happens when the USDA misses by 7 percent?

Dubner: Well, they're really not held to blame, but it is a big deal in the markets. It depends on what side of the trade you're on, of course. Some people get very, very unhappy. They rely on these forecasts.

Joe Prusacki at the USDA read me some of the emails he got after this forecast was revised.

Prusacki: OK, the first one was, "Thanks a lot for collapsing the green market today with your stupid..." And then there's a word with three letters. It has an "A" and two dollar signs.

Ryssdal: Oh, hey! This is a family radio show.

Dubner: But here's the thing. Joe Prusacki, personally, doesn't get punished if his forecasts are off. His performance reviews at an agency like the USDA are based on whether or not the report goes out on time, not on how accurate it turns out to be. And that's really pretty typical. We don't really punish bad predictions.

But we here at Freakonomics Radio did some digging, and we found someone who's trying to add a little bit of accountability, finally, to the prediction industry. Here's a reporter named Vlad Mixich, who tells us that in Romania, believe it or not, a senator has proposed a law to regulate fortune tellers.

Vlad Mixich: So if I am a fortune teller and you are one of my clients, and I fail to predict your future, I would pay a quite substantial fine. Or if this happens many times, I would even go to jail.

Ryssdal: I actually love it. I think that's fabulous.

Dubner: You've got to love it, don't you? My first thought, of course, is: if fortune tellers are going to be held accountable, for goodness sake, why aren't market analysts and political pundits, maybe even radio show hosts?

Ryssdal: Hey, you are not welcome here anymore. Stephen Dubner, FreakonomicsRadio.com is the website. I predict you might be back, although I can't guarantee that.

Dubner: Uh oh. I'll bring some corn on the cob if I am.

Comments

But we *are* very good at predicting the past -- and getting better and better as social science methods improve. That's worth something!

David Brin has been promoting the idea of a "predictions registry" to hold predictors accountable (or at least, allow people to find out which predictors, if any are reliable) for several years. Here's an article he wrote on the subject http://www.davidbrin.com/predictionsregistry.htm

This line of thinking could be extended to traders and pension managers, who, after all, are making predictions about the securities they are buying on behalf of their clients, are they not? If the value of the portfolio goes up, they get a fat bonus; if it goes down, nothing happens to them. This was also the problem with the CDO market, which rewarded creation of these Weapons of Financial Mass Destruction without consequence for the actual performance of the underlying assets.

The other aspect is how wrong a prediction needs to be before it is considered wrong. In the case of sports matches there's a clear winner and loser. But in the case of crop yields the prediction will never be 100% accurate, and most will consider it close enough.
