Thursday, February 19, 2009

Why the experts missed the crash

"Money Magazine) -- You've probably never wanted expert insight more than today - and never trusted it less. After all, the intelligent, articulate, well-paid authorities voicing these opinions are the ones who created the crisis or failed to predict it or lost 30% of your 401(k) in it.

Yet we can't tear ourselves away. The crisis has brought record ratings to CNBC and its parade of talking heads. You're probably still entrusting your portfolio to the experts running mutual funds. Despite everything, we can't shake the belief that elite forecasters know better than the rest of us what the future holds.

The record, unfortunately, proves no such thing. And no one knows that record better than Philip Tetlock, 54, a professor of organizational behavior at the Haas Business School at the University of California-Berkeley. Tetlock is the world's top expert on, well, top experts. Some 25 years ago, he began an experiment to quantify the forecasting skill of political experts.

By the time he finished in 2003, Tetlock had signed up nearly 300 academics, economists, policymakers and journalists and mapped more than 82,000 forecasts against real-world outcomes, analyzing not just what the experts said but how they thought: how quickly they embraced contrary evidence, for example, or reacted when they were wrong. And wrong they usually were, barely beating out a random forecast generator.

But you shouldn't simply write all gurus off. Tetlock's research found that one kind of expert turns out consistently more accurate forecasts than others. Understanding what makes them better can help you make more reliable predictions in your own life. Tetlock explained it all to Money's former managing editor, Eric Schurenberg, in a recent interview.

Why did so many experts miss the economic crash?

The people intimately involved in packaging [financial derivatives like] CDOs must have had some sense that they were unstable. But their superiors seem to have been lulled into complacency, partly because they were making a lot of money very fast and had no motivation to look closer. So greed played a role.

But hubris may have played a bigger one. Remember Greek tragedy? The gods don't like mortals who get too uppity. In this case the biggest source of hubris was the mathematical models that claimed you could turn iffy loans into investment-grade securities. The models rested on a misplaced faith in the law of large numbers and on wildly miscalculated estimates of the likelihood of a national collapse in real estate. But mathematics has a certain mystique. People get intimidated by it, and no one challenged the models.
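A quick illustration of that "misplaced faith in the law of large numbers" (my own toy numbers, not anything from the interview or from the banks' actual models): pooling loans only averages away risk if defaults are roughly independent. Add a shared shock, such as a nationwide housing collapse that raises every loan's default rate at once, and the pool blows up about as often as the shock occurs, no matter how many loans it contains.

```python
import random

# Toy sketch, not the banks' models: how a pool of loans behaves when defaults
# are independent versus driven by a shared shock. Every parameter below is an
# illustrative assumption.

N_LOANS = 1000              # loans in the pool
P_DEFAULT = 0.05            # baseline default probability per loan
P_COLLAPSE = 0.05           # assumed chance of a national housing collapse
P_DEFAULT_COLLAPSE = 0.40   # per-loan default probability during a collapse
TRIALS = 5000
BLOW_UP = 0.20              # call it a blow-up if >20% of the pool defaults

def blow_up_rate(correlated: bool) -> float:
    blow_ups = 0
    for _ in range(TRIALS):
        # With correlation, a single draw decides whether every loan is stressed at once.
        stressed = correlated and random.random() < P_COLLAPSE
        p = P_DEFAULT_COLLAPSE if stressed else P_DEFAULT
        defaults = sum(random.random() < p for _ in range(N_LOANS))
        blow_ups += defaults / N_LOANS > BLOW_UP
    return blow_ups / TRIALS

print("independent defaults: ", blow_up_rate(correlated=False))  # ~0.00
print("shared-shock defaults:", blow_up_rate(correlated=True))   # ~0.05, i.e. about P_COLLAPSE
```

The point is not the particular numbers but the shape of the result: diversification tames idiosyncratic risk and does nothing about a common factor, which is exactly the scenario the models discounted.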

Americans were shocked at how wrong the experts were. You weren't. Why not?

My research certainly prepared me for widespread forecasting failures. We found that our experts' predictions barely beat random guesses - the statistical equivalent of a dart-throwing chimp - and proved no better than predictions of reasonably well-read nonexperts. Ironically, the more famous the expert, the less accurate his or her predictions tended to be.

Money has written about human mental quirks that lead ordinary folks to make investing mistakes. Do the same lapses affect experts' judgment?

Of course. Like all of us, experts go wrong when they try to fit simple models to complex situations. ("It's the Great Depression all over again!") They go wrong when they leap to judgment or are too slow to change their minds in the face of contrary evidence.

And like all of us, experts have a hard time with randomness. I once witnessed an experiment that pitted a classroom of Yale undergrads against a lone Norwegian rat in a T-maze. Food was put in the maze in no particular pattern, except that it was designed to end up in the left side of the "T" 60% of the time. Eventually, the rat learned always to turn left and so was rewarded 60% of the time. The students, on the other hand, fell for a variant of the "gambler's fallacy." Picture a roulette player who sees a long sequence of red and puts all his money on black because it's "due." Or more subtly, he looks for complex, alternating patterns - the same kind of mental wild-goose chase that technical stock pickers go on. That's what happened to the Yalies, who kept looking for some pattern that would predict where the food would be every time. They ended up being right just 52% of the time. Outsmarted by a rat.
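The 60% and 52% figures aren't mysterious. Always turning left wins exactly as often as the food is on the left (60%), while matching your guesses to the base rates - guessing left 60% of the time and right 40% - wins 0.6×0.6 + 0.4×0.4 = 52%. Here is a toy simulation of the anecdote; only the 60/40 split comes from Tetlock's story, the rest is illustrative.

```python
import random

# Toy version of the T-maze anecdote: food appears on the left 60% of the time,
# with no other pattern. The "rat" always picks the majority side; the
# "probability matcher" guesses left 60% of the time, mimicking the
# pattern-seeking students.

TRIALS = 100_000
rat_hits = matcher_hits = 0

for _ in range(TRIALS):
    food_left = random.random() < 0.60       # where the food actually is
    rat_hits += food_left                    # the rat always turns left
    guess_left = random.random() < 0.60      # the matcher mirrors the base rate
    matcher_hits += (guess_left == food_left)

print(f"always-left strategy: {rat_hits / TRIALS:.1%}")      # ~60%
print(f"probability matching: {matcher_hits / TRIALS:.1%}")  # ~52% = 0.6*0.6 + 0.4*0.4
```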

What makes some forecasters better than others?

The most important factor was not how much education or experience the experts had but how they thought. You know the famous line that [philosopher] Isaiah Berlin borrowed from a Greek poet, "The fox knows many things, but the hedgehog knows one big thing"? The better forecasters were like Berlin's foxes: self-critical, eclectic thinkers who were willing to update their beliefs when faced with contrary evidence, were doubtful of grand schemes and were rather modest about their predictive ability. The less successful forecasters were like hedgehogs: They tended to have one big, beautiful idea that they loved to stretch, sometimes to the breaking point. They tended to be articulate and very persuasive as to why their idea explained everything. The media often love hedgehogs.

How do you know whether a talking head is a fox or a hedgehog?

Count how often they press the brakes on trains of thought. Foxes often qualify their arguments with "however" and "perhaps," while hedgehogs build up momentum with "moreover" and "all the more so." Foxes are not as entertaining as hedgehogs. But enduring a little tedium is worth it if you want realistic odds on possible futures.

So if you were looking for a money manager, you'd want a fox?

If you want good, stable long-term performance, you're better off with the fox. If you're up for a real roller-coaster ride, which might make you fabulously wealthy or leave you broke, go hedgehog.

But it was doomster hedgehogs like money managers Robert Rodriguez and Jeremy Grantham who first saw the crisis coming.

Hedgehogs are sometimes way, way out front. But they can also be way, way off.

Most of the experts who called the downturn are still bearish. Would you expect them to be able to call the rebound too?

No. In our research, the hedgehogs who get out front don't tend to stay out front very long. They often overshoot. For example, among the few who correctly called the fall of the Soviet Union were what I call ethno-nationalist fundamentalists, who believed that multi-ethnic nations were likely to be torn apart. They were spectacularly right with Yugoslavia and the Soviet Union. But they also expected Nigeria, India and Canada to disintegrate. That's how it is with hedgehogs: You get spectacular hits but lots of false alarms.

How can we nonexperts test our own hunches?

Listen to yourself talk to yourself. If you're being swept away with enthusiasm for some particular course of action, take a deep breath and ask: Can I see anything wrong with this? And if you can't, start worrying; you are about to go over a cliff.

Considering how wrong they are, why are the same old talking heads continuing to give advice?

Unless you force experts to be specific, as we did, they can make predictions that are difficult to falsify. You know the cynical cliché: "Never assign a date and a number to the same prediction." That lets you get away with saying things like "Yes, I did say the Dow will hit 36,000, and it will - just wait. I was merely a little early."

Experts are also very good at explaining errors away by concocting counterfactual history. "If only the world had heeded the warnings of, say, [libertarian-leaning Texas Congressman] Dick Armey about Fannie Mae and Freddie Mac, the financial crisis would have been far less severe." This is a ridiculous line of reasoning. Nobody knows what would have happened in a hypothetical world.

Who are you listening to in this market?

I look for a combination of cognitive flexibility and high IQ. Moody's Economy.com chief economist Mark Zandi is not a bad person to listen to. He was somewhat out in front in anticipating this crisis and has a capacity for seeing different points of view. Larry Summers, head of the National Economic Council, also has the kind of intelligence and cognitive style that makes him a good bet.

Could we live without experts?

No way. We need to believe we live in a predictable, controllable world, so we turn to authoritative-sounding people who promise to satisfy that need. That's why part of the responsibility for experts' poor record falls on us. We seek out experts who promise impossible levels of accuracy, then we do a poor job keeping score.

Wednesday, February 18, 2009

Our Galaxy could have 100 billion Earths

There could be one hundred billion Earth-like planets in our galaxy, a US conference has heard.

Dr Alan Boss of the Carnegie Institution of Science said many of these worlds could be inhabited by simple lifeforms.

He was speaking at the annual meeting of the American Association for the Advancement of Science in Chicago.

So far, telescopes have been able to detect just over 300 planets outside our Solar System.

Very few of these would be capable of supporting life, however. Most are gas giants like our Jupiter, and many orbit so close to their parent stars that any microbes would have to survive roasting temperatures.

But, based on the limited numbers of planets found so far, Dr Boss has estimated that each Sun-like star has on average one "Earth-like" planet.

This simple calculation means there would be huge numbers capable of supporting life.
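The arithmetic behind the headline number is a one-liner. The galaxy's star count and the Sun-like fraction below are rough, commonly quoted figures I'm supplying for illustration; only the "one Earth-like planet per Sun-like star" average comes from Dr Boss.

```python
# Back-of-the-envelope version of Dr Boss's estimate. The star count and the
# Sun-like fraction are illustrative assumptions, not figures from the article;
# the one-Earth-per-Sun-like-star average is his.

stars_in_galaxy = 200e9          # order-of-magnitude estimate (assumption)
sunlike_fraction = 0.5           # generous illustrative assumption
earths_per_sunlike_star = 1.0    # Boss's average, based on detections so far

earths = stars_in_galaxy * sunlike_fraction * earths_per_sunlike_star
print(f"~{earths / 1e9:.0f} billion Earth-like planets")  # ~100 billion
```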

"Not only are they probably habitable but they probably are also going to be inhabited," Dr Boss told BBC News. "But I think that most likely the nearby 'Earths' are going to be inhabited with things which are perhaps more common to what Earth was like three or four billion years ago." That means bacterial lifeforms.

Dr Boss estimates that Nasa's Kepler mission, due for launch in March, should begin finding some of these Earth-like planets within the next few years.

Recent work at Edinburgh University tried to quantify how many intelligent civilisations might be out there. The research suggested there could be thousands of them.

Wednesday, February 11, 2009

Coffee grounds next in line as biofuel source

Coffee grounds — currently wasted or used as garden compost — could become a cheap and environmentally friendly source of biodiesel and fuel pellets, says a study.

Spent coffee grounds contain 11–20 per cent oil, depending on their type. "This is competitive with other major biodiesel feedstocks such as rapeseed oil (37–50 per cent), palm oil (20 per cent), and soybean oil (20 per cent)," say researchers writing in the Journal of Agricultural and Food Chemistry.

Scientists at the US-based University of Nevada, Reno, used an inexpensive process to extract oil from the leftovers of making espressos, cappuccinos and other coffee preparations from a multinational coffeehouse chain.

This oil was then converted into biodiesel, which could be used to fuel cars and trucks.

The world's coffee production is more than 7.2 million tonnes per year, according to US Department of Agriculture figures cited in the study. This could yield about 340 million gallons of biodiesel, say the researchers.
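That figure roughly checks out on the back of an envelope. Starting from the 7.2 million tonnes of coffee and the middle of the 11-20 per cent oil range quoted above, and adding two standard conversion factors of my own (biodiesel density and litres per US gallon) plus the assumption that spent grounds are comparable in mass to the coffee produced, you land near the researchers' figure of about 340 million gallons.

```python
# Rough check of the biodiesel estimate. The oil fraction range and the
# 7.2 million tonnes/year come from the article; the density, the gallon
# conversion and the grounds-track-production assumption are mine.

coffee_tonnes_per_year = 7.2e6
oil_fraction = 0.15                  # midpoint of the 11-20% range
biodiesel_density_kg_per_l = 0.88    # typical biodiesel density (assumption)
litres_per_us_gallon = 3.785

oil_kg = coffee_tonnes_per_year * 1000 * oil_fraction
gallons = oil_kg / biodiesel_density_kg_per_l / litres_per_us_gallon
print(f"~{gallons / 1e6:.0f} million gallons per year")  # ~324 million, close to the study's ~340 million
```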

"It is easy and economical to extract oil from used coffee grounds compared to traditional feedstocks," said Mano Misra, an author of the study. Further, coffee oil has some antioxidants which are required for biofuel stability," he told SciDev.Net. After the oil extraction the remaining solid waste from processed coffee can be used as garden compost or fuel pellets.

The process "would be ideal for countries where coffee is produced. A lot of defective coffee beans are discarded into the landfills every year. Processing these beans as well as coffee grounds would be an economical approach," said Misra.

The researchers calculate that in the United States an annual profit of more than US$8 million could be made from biodiesel and pellets from one major coffee chain alone.

• This article was shared by our content partner SciDev.Net, a member of the Guardian Environment Network