Let’s simplify the problem and imagine the following scenario:
A city is preparing for an outbreak of a contagious disease that is expected to kill 600 people if nothing is done. Two programs are proposed to deal with the disease. Experts estimate the consequences of these programs as follows:
Program A): There is a guarantee of saving 200 people.
Program B): There is a 33% chance that all 600 people will be saved, but also a 66% chance that no one will be saved.
Which of the two programs would you choose?
Difficult choice? The famous behavioral scientists Daniel Kahneman and Amos Tversky (the former of whom would later win the Nobel Prize in Economics) posed this question to a large group of participants in the early 1980s. Guess what… Almost three quarters (72%) of the participants chose program A, opting for the certainty of saving 200 lives over the risk that no one would be saved. Why is this remarkable? Because, in terms of expected outcome, the two options are identical.
The rational decision maker
Calculate with me:
In program A we have a 100% chance of saving 200 lives; in other words: 100% × 200 = 200.
In program B we have a one-in-three (33%) chance of saving 600 lives: 1/3 × 600 = 200.
So, the outcome of both programs is the same. But it feels different, right?
Economic models don’t have feelings. According to such a framework, the objective value of an option can be determined by multiplying the value of that option (for instance, the number of lives saved) by the probability of actually getting that value. If people were completely rational¹, the two options should feel no different.
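For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of that multiplication rule (my own illustration, using the numbers from the two programs above):

```python
# Expected value = probability of an outcome multiplied by its value.
prob_a, lives_a = 1.0, 200      # Program A: saving 200 lives is guaranteed
prob_b, lives_b = 1 / 3, 600    # Program B: one-in-three chance of saving all 600

ev_a = prob_a * lives_a         # 200.0 expected lives saved
ev_b = prob_b * lives_b         # 200.0 expected lives saved

print(ev_a, ev_b)               # 200.0 200.0 -- identical on paper
```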
But then why do nearly three quarters of us pick option A?
Risk aversion
The difference between the options is the presence or absence of risk. When two options have the same expected value, people often choose the certain one (the guarantee of saving 200 lives). They even make this choice when the certain option is worth less than the risky one.
Uncertainty in itself makes people reluctant to choose a risky option. This is called risk aversion (being afraid of risks).
So apparently decisions can be drastically influenced by uncertainty, even when the objective outcome has the same value – how irrational! But it gets worse…
The effect of framing
Think back to the situation at the start of this text, but now consider the following alternative programs:
Program C): There is a guarantee that 400 people will die.
Program D): There is a 66% chance that 600 people will die, but also a 33% chance that no one will die.
Which of the two programs would you choose?
As it turns out, in the study by Tversky and Kahneman, 78% of the people now chose program D, and thus picked the uncertain option. If you apply the calculation from before to this situation, you will notice that the expected outcomes are identical here too. In fact, the outcome of program A (200 lives saved) is exactly the same as the outcome of program C (400 deaths out of 600). Why are people now all of a sudden choosing the uncertain option?
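Before getting to that: the same sketch as before (again my own illustration), applied to the death framing, confirms the equivalence:

```python
# Expected deaths, using the same multiplication rule as before.
deaths_c = 1.0 * 400            # Program C: 400 deaths are guaranteed
deaths_d = (2 / 3) * 600        # Program D: two-in-three chance that all 600 die

print(deaths_c, deaths_d)       # 400.0 400.0
# 400 expected deaths out of 600 means 200 expected survivors,
# so C is equivalent to A, and D is equivalent to B.
```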
This has to do with so-called framing effects: in the first scenario (Programs A/B), the options were framed in terms of lives saved, so choosing to save 200 lives feels like winning. In the second scenario (Programs C/D), the options were framed in terms of deaths, so the certain option (400 deaths) feels like a loss, and people would rather take the risk, hoping that no lives will be lost.
Winning or losing?
These problems show that people do not act according to completely rational principles and that we are sensitive to context: we play it safe when outcomes are framed as gains, and take more risks when we feel we are about to lose something.
A way to protect yourself from such effects is to write the outcomes down objectively, for instance by doing the calculations shown above. That way you can see that the options may not be as different as you expected, and make a well-considered decision.
Note
1: What counts as “rational” behavior is a topic of discussion in itself, but that is beyond the scope of this blog.
Credits
Original language: Dutch
Author: Felix Klaassen
Buddy: Floortje Bouwkamp
Editor: Wessel Hieselaar
Translator: Jill Naaijen
Editor translation: Rebecca Calcott
Featured image courtesy of Skitterphoto via Pexels
Nice one, thanks for the blog post, very relevant these days 🙂 I agree that the framing effect in particular exposes irrationality in human decision-making. What I am a bit less sure about is whether options A and B (in both framings) are really equivalent to each other. Of course, they are equivalent when viewed through the lens of expected utility theory but I find this model a bit limited since it ignores the differences in risk between the two options.
Expected utility merely describes how good each choice is if it were repeated an infinite number of times. Indeed, if you choose options A and B an infinite number of times, they will both lead to 200 survivors on average. However, if you can choose only once, they perform very differently. Option B will be either much better or much worse in that case – it will never perform the same way as option A. Therefore, faced with the decision, I would ask myself questions such as: could I accept the unlucky outcome if I chose option B? And I do find that quite rational, given that the unlucky outcome is more likely (66%) than the lucky outcome (33%). So, once I choose option B, I need to be prepared for 600 dead people. If I can’t handle 600 dead people but I can handle 400 (e.g., because we need at least 100 people to survive as a group), then I’d say that option A is the more rational choice.
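To make that concrete, here is a quick toy simulation (just my own sketch, not from the post): repeated many times, option B does average out to the same 200 survivors, but any single run yields either all 600 or none.

```python
import random

random.seed(0)

def program_b():
    """One run of program B: a 1/3 chance that all 600 survive, otherwise 0."""
    return 600 if random.random() < 1 / 3 else 0

runs = [program_b() for _ in range(100_000)]
print(sum(runs) / len(runs))   # ~200 on average, matching program A's guarantee
print(runs[:10])               # but each single run is only ever 600 or 0
```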
It reminds me a bit of investment principles. Stocks have a higher expected return than bonds, but even academic experts don’t think that everybody should invest in stocks, since they are also riskier. Instead, common advice is to invest in stocks only if you are willing to stay invested for a long time, so that the ups and downs can average out. So, it seems to me that it is very rational to prefer non-risky options when there is no chance for the risk to “average out”.
Just some thoughts. Of course, I know that you are adopting expected utility theory here and that you also acknowledged that there are other models of rational behavior 🙂
Hi André,
First of all, apologies for the incredibly slow reply.
Thank you for your interesting thoughts. As you point out, the arguments in my blog post mainly hold up if you adopt the expected-utility framework, which, for the purposes of the blog, I did. You’re right that there are other (possibly more valid) frameworks that may better define what a ‘rational’ decision is, as your stock-investment example nicely illustrates. I suppose that discussing other frameworks of ‘rationality’ could be material for a future blog!
Felix