
Why are humans risk averse?

After my first foray into computational simulations successfully predicted that losses should loom larger than gains, at least when the stakes are high, I decided to take on an even more complicated phenomenon in psychology and economics: risk aversion.

Daniel Kahneman -- one of the few psychologists to win a Nobel Prize, largely because there is no Nobel Prize for psychology -- earned at least some of his fame by demonstrating, with Amos Tversky, that humans are risk-averse for gains but risk-seeking for losses.

The most famous demonstration of this came from the following experiment (known as the "Asian disease problem"):

The Centers for Disease Control discovers an outbreak of Asian flu in the United States and predicts that, if nothing is done, 600 people will die. Two courses of action have been suggested. If program A is adopted, 200 people will be saved. If program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which of the two programs do you favor?

Most people, it turns out, pick program A, the sure bet. This is interesting because, on average, the two programs are exactly the same: program B will, on average, save 200 people, just like program A. The difference is that program B is riskier.
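To make the equivalence concrete, here is the arithmetic as a quick check (a throwaway snippet, not part of the original experiment):

    # Expected lives saved under each program in the "gains" framing.
    program_a = 200                      # the sure thing
    program_b = (1/3) * 600 + (2/3) * 0  # the gamble's expected value
    print(program_a, program_b)          # prints 200 and 200.0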

This would seem to demonstrate that people are risk-averse. However, you can reword the problem just slightly and get a very different response:

Two courses of action have been suggested. If program A is adopted, 400 people will die. If program B is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die. Which of the two programs do you favor?

Given this scenario, most people pick program B. However, notice that these are exactly the same programs as in the first version of the problem! It turns out that if people think about the issue in terms of lives saved, they are risk-averse. If they think about it in terms of lives lost, they are risk-seeking (they choose the riskier option).

There is no right or wrong answer according to logic, because logic and statistics tell us that programs A and B are identical in expectation.

In my last simulation, I suggested that it actually makes sense for losses to loom larger than gains, even though statistics and logic don't predict this. Maybe the same is true for being risk-averse for gains and risk-seeking for losses. Maybe that pattern is actually adaptive.

Here's how my simulation worked. Each of my simulated "creatures" played the following game: they could either take a sure gain of 10 units of food, or they could take a risky gain (a 50% chance of 5 units and a 50% chance of 15 units). Again, the two choices are statistically identical -- on average, you get 10 units of food either way. Some of the creatures were risk-averse and always took the sure bet; some were risk-seeking and always took the risky bet.

The creatures also played the same game for losses: they either took a guaranteed loss of 10 units or a gamble with a 50% chance of losing 5 and a 50% chance of losing 15. Again, some were risk-seeking and some were risk-averse.

Each creature played both games (the gain game and the loss game) 1000 times. There were 1000 creatures who were, like humans, risk-averse for gains and risk-seeking for losses. There were 1000 creatures who were risk-seeking for gains and risk-averse for losses (the opposite of humans). There were also 1000 creatures who were risk-seeking for both gains and losses.

The creatures all started with 100 pieces of food.
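The original code isn't shown here, so below is a minimal Python sketch of how such a simulation might look. Everything in it is my reconstruction: in particular, the death rule (a creature dies when its food drops to zero or below) and averaging only over survivors are assumptions, not something stated above.

    import random

    ROUNDS = 1000        # each creature plays each game this many times
    N_CREATURES = 1000   # creatures per strategy group
    START_FOOD = 100

    def gamble():
        # The risky option: 50% chance of 5 units, 50% chance of 15
        # (mean of 10, so statistically identical to the sure bet).
        return random.choice((5, 15))

    def run_group(risky_gains, risky_losses, start_food=START_FOOD):
        deaths = 0
        survivor_food = []
        for _ in range(N_CREATURES):
            food = start_food
            for _ in range(ROUNDS):
                food += gamble() if risky_gains else 10   # the gain game
                food -= gamble() if risky_losses else 10  # the loss game
                if food <= 0:  # assumption: a creature dies when food runs out
                    deaths += 1
                    break
            else:
                survivor_food.append(food)
        # Assumption: the averages reported below are over survivors only.
        avg = sum(survivor_food) / len(survivor_food) if survivor_food else 0.0
        return deaths / N_CREATURES, avg

    GROUPS = [
        ("risk-averse gains / risk-seeking losses", False, True),  # human-like
        ("risk-seeking gains / risk-averse losses", True, False),  # reversed
        ("risk-seeking for both", True, True),
    ]

    for label, risky_gains, risky_losses in GROUPS:
        died, avg_food = run_group(risky_gains, risky_losses)
        print(f"{label}: {died:.0%} died, {avg_food:.0f} units left on average")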

Risk-averse for gains / risk-seeking for losses (the human pattern):
52% died
98 units of food left, on average, at the end of the simulation

Risk-seeking for gains / risk-averse for losses (the reverse of humans):
54% died
92 units of food left, on average, at the end of the simulation

Risk-seeking for both gains and losses:
68% died
94 units of food left, on average, at the end of the simulation


While this simulation suggested that being risk-seeking across the board is not a good thing, it did not suggest that being risk-seeking for gains and risk-averse for losses was any better than the other way around. This could be because the size of the gains and losses was too large or too small relative to the starting endowment of food. I tried both larger endowments of food (200 units) and smaller ones (50 units), but the pattern of results was the same.
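Checking that yourself with the sketch above is just a matter of passing different starting values (run_group and GROUPS are my hypothetical names from that sketch, not the original code):

    # Re-run each strategy group at the endowments mentioned above.
    for endowment in (50, 100, 200):
        for label, risky_gains, risky_losses in GROUPS:
            died, avg_food = run_group(risky_gains, risky_losses,
                                       start_food=endowment)
            print(f"start={endowment}, {label}: {died:.0%} died, "
                  f"{avg_food:.0f} units left on average")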

Again, this was a very simple simulation, so it is possible that it does not include the crucial factors that make the human strategy an adaptive one. It is also possible that the human strategy is not adaptive. Hopefully I will come across some papers in the near future that report better simulations that will shed some light on this subject.


-----
(Note that, as far as I can tell, being risk-seeking for losses should prevent people from buying insurance, yet people do. I'm not sure why this is, or how Kahneman's theory explains it.)




Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453-458.

2 comments:

Tom Stafford said...

Brilliant stuff!

Re: insurance. I think this is a facet of what's called 'regret aversion', i.e., being motivated to avoid future regrets, but this is only to give it a name --- the contradiction with prospect theory that you point out still holds.

Anonymous said...

There is actually a substantial insurance gap, i.e., most people buy less insurance than they need. The exceptions are those who have seen examples of the problem that the insurance was intended to protect against. E.g.:

http://www.ifaonline.co.uk/ifaonline/news/1345152/life-insurance-gbp2-5trn-growing-swiss-re