Bias Hunter

Basic Biases: The Framing Effect

28/9/2014

The framing effect is probably one of the best known – and, thanks to its generality, also one of the most interesting – biases, which makes it a natural topic for today. Let's start with a classic example from a classic paper:
Imagine that the United States is preparing for the outbreak of an unusual Asian disease that is expected to kill 600 people.  Two alternative programs to combat the disease have been proposed.  Assume that the exact scientific estimates of the consequences of the programs are as follows:

If Program A is adopted, 200 people will be saved.

If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Which of the two programs would you favor?
As a decision matrix, the situation looks like this:
[Figure: decision matrix for Programs A and B]
As it has been formulated, there is obviously no single correct answer to the question – the two options are statistically equal. What framing is about is that the way the situation is described influences our decision. If we formulate the question in terms of deaths instead (with the same cover story):

If Program C is adopted, 400 people will die.

If Program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.

[Figure: decision matrix for Programs C and D]
The formulations, as one can see from the tables, are equivalent. The surprise is that people made different choices depending on the formulation. In the first case, 72% chose Program A and 28% chose Program B. With the second formulation, however, only 22% chose Program C (equivalent to A) and 78% opted for Program D! If framing had no power over us, we would choose the same option in both cases. So it's not that choosing A or B per se would be irrational – it's that making a different choice merely because of the framing is.
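To see the equivalence concretely, here is a quick back-of-the-envelope check in Python (my own illustration, using the numbers from the problem above):

# Expected number of lives saved (out of the 600 at risk) per program.
programs = {
    "A: 200 saved for sure":            [(1.0, 200)],
    "B: 1/3 all saved, 2/3 none saved": [(1/3, 600), (2/3, 0)],
    "C: 400 die for sure":              [(1.0, 600 - 400)],
    "D: 1/3 none die, 2/3 all die":     [(1/3, 600 - 0), (2/3, 600 - 600)],
}

for name, outcomes in programs.items():
    expected_saved = sum(p * saved for p, saved in outcomes)
    print(f"Program {name} -> expected lives saved: {expected_saved:.0f}")
# Every program prints 200: all four descriptions are statistically identical.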

The classic example is not a very natural one, however – I certainly hope I never come across a similar situation! Thankfully, there are also more down-to-earth examples of framing. Suppose you are looking to buy some new dinnerware. Visiting a flea market, you find a nice set of 8 dinner plates, 8 soup bowls and 8 dessert plates. You reckon the set is worth about 32 dollars.
As you're just about to close the sale, the owner of the dinnerware suddenly exclaims: "Oh! I just remembered! I also have some tea cups and saucers for the set!" She adds 8 cups and 8 saucers to the set. Inspecting them, you notice that two of the cups and seven of the saucers are broken. How much are you willing to pay now?

Now, rationally, the set is of course worth more: after all, you get six intact teacups and one intact saucer on top of what you had before. At the very least it cannot be worth less – you could simply throw the extra pieces away (let's assume no costs are imposed on you by receiving or disposing of the broken pieces).
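To spell out the arithmetic (the per-piece value is my own assumption, simply splitting the 32 dollars evenly):

# The 24-piece base set was judged to be worth about 32 dollars.
base_pieces = 8 + 8 + 8             # dinner plates, soup bowls, dessert plates
value_per_piece = 32 / base_pieces  # ~1.33 dollars, assuming equal piece values

intact_cups = 8 - 2      # two of the eight added cups are broken
intact_saucers = 8 - 7   # seven of the eight added saucers are broken

extra_value = (intact_cups + intact_saucers) * value_per_piece
print(f"Intact extras: {intact_cups} cups, {intact_saucers} saucer")
print(f"Rational value: at least 32 + {extra_value:.2f} dollars")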

In fact, here is what happened in Hsee's (1998) experiment. Those who did joint evaluation, i.e., saw both sets (with and without the broken pieces), reasoned just as we did above: the set including the broken items was worth a little more. In contrast, those doing separate evaluation, i.e., seeing only one of the sets, considered the second set to be worth less! In their minds, they compared it to a completely intact set, thinking "oh, but this one has broken items". Those seeing the smaller but completely intact set reasoned "ah, it's all intact and therefore good". So a different frame generated a different evaluation of the very same intact pieces!

You could argue that the separate evaluators were doing their best – they didn't know that a similar set with additional pieces existed. And that is correct. However – and this is why framing is such a sneaky bias – real life consists mainly of separate evaluations. In a store, you only get to see the item with some strategically chosen comparison items next to it. When evaluating a business project, you're mostly stuck with the description the manager offers.

The only advice I can give about framing is that awareness matters. For example, I've come across situations at work where someone asks me to do a small thing, and I wonder whether I ought to do it now or later. What has helped me is recognizing that the simple now/later choice is just one decision frame. Often it's better to back up to a wider frame and ask myself what I should be focusing on in the first place. Sometimes it turns out that I ought to do something much more vital than the request. And on other occasions, when there are no other critical tasks, it's simply better to get it done right away.

So, even if I'm repeating myself a bit from last week: it's a good idea to think about the alternatives at hand – and then question them. Are these really the alternatives? Is there a wider frame with other options? And is the given description of the alternatives the only one – and the most relevant one?

So life is not exactly "What You See Is What You Get". It's more like "What You See Is What You Think You're Getting". Reminds me of this movie (and notice that Neo didn't really reflect much on the frame he was given):
[Image: Neo in The Matrix]

Two Keys for Better Decisions: Criteria and Alternatives

23/9/2014

Thomas came to see the new flat, climbing to the fifth floor through the cramped stairwell – no elevator. Ugh. What a trek. But as the estate agent showed him around the place, he was engulfed by light, and the smell of freshly baked bread drifted from the kitchen. No matter that this was 20 minutes further from work. Thomas was sold.
[Photo of a bright kitchen. Caption: Hell, I'd get this kitchen, too!]
What's wrong with this decision? Well, it looks like Thomas is basing his decision to buy an apartment on criteria that only became noteworthy at the apartment, not beforehand. Even worse, he ends up being carried away by the fresh smells – surely a trick from the estate agent's sleeve. I'd venture to say that Thomas hasn't made an exemplary decision here. What could he have done better?

An old adage works in decision analysis, too: think before you act. In the context of decisions, it means thinking about the problem itself first. In decisions, two key parameters largely define your success: the criteria and the alternatives.

Criteria are the dimensions along which you compare and evaluate the alternatives. For an apartment, common ones are size, price, location, and so on. The key is defining those criteria yourself. You don't have to be constrained by what other people think: your criteria are anything you care about. For example, one of Thomas's criteria could have been the amount of ambient light in the apartment, had he thought about it beforehand. Thinking about the criteria before the decision helps you stay on the premeditated path and not be drawn away by other enticing things. If you've given thought to the criteria in advance of seeing the alternatives, you're less likely to focus on salient but ultimately irrelevant ones (like the fresh smell above). It's like walking to work: you decide to go straight there, and you don't wander into shops even when you see that shiny new guitar in the window (also, your boss might not value your musical enthusiasm enough to make the detour a good idea).

Another thing about criteria: they don't necessarily have to be numeric. Sure, there are benefits to using numerical values, especially objective ones like size. But there is inherently nothing wrong with subjective criteria like the "feel" of an apartment, the comfort of a chair or the taste of a wine. After all, it's your decision we're talking about. The only thing that matters is that you can apply the criteria consistently, i.e., you can rate equally tasty wines as equal on taste. This is crucial, because otherwise you might be tempted to re-evaluate some criteria to end up with the "best alternative". The point of the evaluation is to determine the best option, not to "prove" the choice ex post facto.
[Photo. Caption: An example of a consistent evaluation – with two hands, no less!]
There's one other trick that's useful to remember: thinking about the alternatives. This might sound obvious, and often it is. For example, when buying a flat, most of us tend to spend countless hours on websites and with estate agents, looking at alternatives. However, that's not quite what I'm referring to. What's also important is conceptual alternative generation before the actual data-gathering phase – in concrete terms, thinking up conceptually possible alternatives that you would like. In my case, as we're thinking about apartments just now, it means the following: I enjoy living within biking distance of the center, so I'll mentally go through all the neighborhoods that meet that criterion.

The point is that your decision quality is bounded by the alternatives you've come up with: an alternative you didn't think of won't get picked. If you don't find good alternatives, you might consider them nonexistent and fall for the status quo bias. Enlarging the conceptual space of alternatives helps you see what's possible.

The major point being: you can improve decisions considerably through structured, reflective thinking. This is an idea that Ralph Keeney, an emeritus professor at Duke University, has championed for decades (for example, in this paper, or this book). Most decisions are not important enough to warrant a full-blown decision analysis with formal trade-offs. But thinking is almost free, and it has the potential to help a lot.
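To give a flavor of what writing down criteria and alternatives can look like, here is a minimal weighted-sum sketch in Python. The criteria, weights and scores are invented for illustration – Keeney's methods are much richer than this:

# Criteria and weights chosen *before* seeing any apartments.
weights = {"price": 0.4, "commute": 0.3, "light": 0.2, "size": 0.1}

# Each alternative is scored 0-10 on every criterion (higher is better).
apartments = {
    "5th floor, bright, no elevator": {"price": 6, "commute": 3, "light": 9, "size": 7},
    "2nd floor, close to work":       {"price": 5, "commute": 9, "light": 5, "size": 6},
}

def total_value(scores):
    """Weighted additive value over the premeditated criteria."""
    return sum(weights[c] * s for c, s in scores.items())

for name, scores in apartments.items():
    print(f"{name}: {total_value(scores):.1f}")
print("Best:", max(apartments, key=lambda a: total_value(apartments[a])))

With these made-up numbers, the flat near work wins (6.3 vs 5.8) – the fresh-bread flat loses once the premeditated criteria, not the viewing-day impressions, do the scoring.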

Intelligence Doesn't Protect from Biases

16/9/2014

Perhaps the most common misconception about biases is that they only happen to dumb people. This is – to be clear – false. I think there are a few reasons why the misconception persists.

Firstly, a lot of bias experiments seem really obvious after the correct answer has been revealed. This plays directly into our hindsight bias – also aptly named the knew-it-all-along effect – whereby hearing the answer makes us think "oh, I wouldn't have fallen into that trap". Well, as the data shows, you most likely would have.

A second reason is that the popular everyday conception of intelligence roughly implies "more intelligence = more good stuff". Unfortunately, this simplistic rule fails to work here. Intelligence, in scientific terms, is cognitive ability – computational power. With biases, lack of power is not the issue. The issue is that we don't see or notice how to use the power in the right way. It's as if someone were hammering nails with the handle end of the hammer: sure, we could say he needs a bigger hammer, but the reasonable solution is to turn the hammer around and use the right end.
[Image. Caption: Special Offer: The Hammer of Rationality]
Stanovich & Stanovich (2010, p. 220) summarize in their paper why intelligence does not help very much with rational thinking:

[--] while it is true that more intelligent individuals learn more things than less intelligent individuals, much knowledge (and many thinking dispositions) relevant to rationality are picked up rather late in life. Explicit teaching of this mindware is not uniform in the school curriculum at any level. That such principles are taught very inconsistently means that some intelligent people may fail to learn these important aspects of critical thinking. 

In the same paper, they also tabulate which biases and irrational dispositions are associated with intelligence, and which are not (Stanovich & Stanovich, 2010, p. 221):
[Table from Stanovich & Stanovich (2010, p. 221): biases with and without an association with intelligence]
Now, I'm not going to go through the list in more detail; the point is just to show that there are plenty of biases that have no relation to intelligence, and that for the rest the association is still quite weak (correlations of about .20–.35). A correlation of .35 means that intelligence explains at most about 12% of the variance (r² ≈ .12). In practice, such a low correlation means that intelligence is not a dominant factor: dumb people can be rational and intelligent people can be irrational.

Now, some might find this lack of association with intelligence a dystopian thought. If intelligence is of no use here, what can we do? To be absolutely clear, I'm not saying that we are doomed to suffer from these biases forever. Even though intelligence doesn't help, we can still help ourselves by being aware of the biases and by learning better reasoning strategies. Most biases arise when our System 1 heuristics get out of hand. What we need in those situations is better mindware, complemented by slower and more thorough reasoning.

Thankfully, that can be learned.

Basic Biases: Context-Dependent Preferences

9/9/2014

You wander around a store and see a nice-looking pair of speakers. You plug in your iPhone and test them out, rocking out to your favorite tunes. The speakers are enticing enough already, but you decide to test the next, slightly more expensive set just to be sure. Comparing the sounds, the more expensive set sounds so much clearer, with better bass punch, too… oh, it's just so much better! Walking out of the store with a pair of speakers way beyond your needs, you've just exhibited a prime example of context-dependent preferences.

In its simplicity, this bias may sound like an old truth. Sure, our preferences are changed by context – so what? Unfortunately, in that simplicity lies the problem: this bias can affect us in almost any situation involving comparisons. And in the modern information era – with comparisons just a few taps away – that's just about any situation. So what's the bias? Concisely put: choices are affected by changes to the choice set, for example by adding new, irrelevant alternatives. In effect, this can mean that whereas this time you preferred the high-grade speakers, adding a third, middle option might have pushed you to choose the cheapest, lowest-quality ones instead. I'll explain the theory with the help of a few images borrowed from Tversky and Simonson's paper.

[Figure: products X, Y and Z plotted on attribute 1 (affordability) and attribute 2 (quality)]
The figure above shows three products that are quite different. Product Z is high in quality (attribute 2) but unattractive due to its high price (i.e., low on "affordability", attribute 1). Product X, on the other hand, is very cheap but low in quality. Product Y is somewhere in between.

The worrying part of context-dependency is that our choices between options can be heavily influenced by adding or removing options. For example, if we start with products X and Z and then add Y, placing Y strategically can sway our choices considerably. If Y is placed as in the next figure, a large proportion of decision makers will tend to switch to preferring Z, despite having preferred X when the choice was between X and Z alone. Let's look at a figure that shows this better.
[Figure: the same attribute space, with Y placed close to Z]
The reason for the bias is that quality (attribute 2) now seems much more important after seeing that Y offers a lot of it, too. X, on the other hand, is still cheap but now looks much worse in terms of quality – after all, you don't want to get the lowest-quality option. Depending on the setup, this will lead either to picking Y (which is not a bias, since Y wasn't available initially) or to picking Z (which is a bias, if you preferred X when only X and Z were on offer). If you want to see the equations that clarify how the placement logic works, they are in Tversky and Simonson's paper. In addition, the same effect is very nicely explained, with a different example, here in Dan Ariely's lecture.
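For the curious, here is a toy rendering of the relative-advantage idea in Python. This is my simplified reading, not the paper's full model (which also includes a background-context component), and the attribute values are invented:

options = {
    "X": {"affordability": 9, "quality": 2},
    "Z": {"affordability": 2, "quality": 9},
    "Y": {"affordability": 3, "quality": 7},  # the strategically placed addition
}

def advantage(a, b):
    """Sum of a's attribute advantages over b (zero where b is better)."""
    return sum(max(a[k] - b[k], 0) for k in a)

def relative_value(name, choice_set):
    """Sum of an option's relative advantages against the rest of the set."""
    a = choice_set[name]
    total = 0.0
    for other, b in choice_set.items():
        if other != name:
            adv, dis = advantage(a, b), advantage(b, a)
            total += adv / (adv + dis)
    return total

binary = {k: v for k, v in options.items() if k in ("X", "Z")}
print({k: round(relative_value(k, binary), 2) for k in binary})    # X and Z tie
print({k: round(relative_value(k, options), 2) for k in options})  # Z pulls ahead

In the two-option set, X and Z come out even; adding Y makes Z the clear winner, because Z's advantages over the nearby Y are large relative to its disadvantages.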

The gist of the issue is this: context-dependency means that with some set of options we would choose X over Z, whereas a change in the option set – for example, adding Y – may nudge us to choose Z instead. What you see influences you heavily.

So what's the problem with preferring X in one situation and Z in another? Well, the problem is twofold. First, if our choices are affected by options we don't even pick in the end – options that should therefore be irrelevant – it can be argued that our sense of what we actually want is problematically limited. Admittedly, that is a big concern in its own right. But the bigger issue is that we often don't get to pick the options we see. This means our choices can be influenced by marketers and other people who have the power to set up the choice situation.

Thankfully, I think there's a remedy. Contextual choice works both ways, so you can use it to your advantage, too. When considering what you'd prefer, you can play out the situation by creating different alternatives – even irrelevant ones – and then reconsider your choice. This kind of thinking not only makes you less susceptible to choosing on a whim, since you consider things more carefully; thinking about other alternatives may also give you ideas about what's actually possible in the situation and what you actually value.


Why Must We Handle Uncertainty?

2/9/2014

There's a really odd comment I've sometimes heard about dealing with uncertainty. It goes along the lines of "oh you know, uncertainty is a problem now, but once we have good AI and algorithms, our systems will be much more accurate". I don't think this is a very good argument for discrediting decision methods that try to grapple with uncertainty.

Why do we need decision methods and procedures for dealing with such situations? Why not just produce certainty and base our decisions on that?

The annoying answer is that it simply costs too much. Reducing uncertainty is possible, but the more you reduce it, the more expensive the next reduction gets. To be exact, the marginal cost of reducing uncertainty rises.

Consider an example from industrial production. Let's say you have a production line that churns out really nice hiking boots. Unfortunately, there are occasional production errors, as your line manager kindly tells you. And there is uncertainty in his estimate: he is not sure how many faulty shoes each production batch will contain. To reduce the uncertainty, you can take all kinds of measures. For example, you can hire a team of employees to inspect a sample of the manufactured shoes and discard any faulty ones. However, this does not eliminate the uncertainty: after all, they cannot inspect every shoe. But wait, you can do more! To reduce uncertainty further, you hire even more inspectors, so that every single shoe gets inspected. Surely now there is no uncertainty left?

Well, unfortunately, there is. The inspectors are only human – they make errors, too. So every once in a while, while one of your beloved inspectors is thinking about the evening's football match, a faulty shoe escapes his gaze. Undaunted, you resolve to eliminate the uncertainty and fit the production line with an expensive machine-inspection system that double-checks every shoe the human inspectors have passed. Surely now every produced shoe is good to go? Most days they are, until a programming error in the machine causes a problem: a shoe in an unconventional orientation is actually faulty, yet passes undetected through the machine. In a fit, you eliminate the marketing department and use its funding to eliminate the uncertainty in production faults once and for all…
[Image. Caption: Uh oh, another unforeseen cause of faulty shoes!]
As the example shows, reducing uncertainty gets progressively more expensive with every round. The more you've already invested in it, the more you have to invest for a further reduction. What's even worse – and this argument borders on the philosophical – there is practically no such thing as the elimination of uncertainty. Whatever systems you come up with, there's always a way for something truly unforeseen to happen: a power failure incapacitates your inspection system, a burglar changes its settings, a meteor strikes at an inopportune time. The cause in itself is irrelevant; the point is that there's always something you didn't anticipate.
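To illustrate the rising marginal cost with some made-up numbers – each inspection layer here catches 90% of the faults still slipping through, but costs twice as much as the previous one:

# Stylized numbers: each layer removes 90% of the remaining faults,
# but costs twice as much as the last one.
fault_rate = 0.05     # faulty shoes per shoe, before any inspection
layer_cost = 10_000   # cost of the first inspection layer
total_cost = 0

for layer in range(1, 6):
    fault_rate *= 0.10           # 90% of the remaining faults caught
    total_cost += layer_cost
    print(f"layer {layer}: residual fault rate {fault_rate:.6f}, "
          f"cumulative cost {total_cost:,}")
    layer_cost *= 2              # the next layer is pricier

# The fault rate shrinks but never reaches zero, while the bill keeps doubling.

Each successive layer buys a smaller absolute reduction in faults at a higher price – exactly the rising marginal cost described above.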


The conclusion? There will always be some uncertainty.

And what's more: since we have limited funds, there's a practical limit to reducing uncertainty. Beyond that point, we must use methods that can cope with uncertainty, because there are no other alternatives anymore.

This inevitability of facing uncertainty is why we need decision makers equipped with proper methods. Decades of behavioral decision research show (more about this in later posts) that humans are really not very good intuitive statisticians. Once you have many variables with varying levels of uncertainty, there's practically no way to make good decisions on gut feeling and intuition alone. What we need are methods and frameworks that simplify and aggregate the information – though not by too much – into a form we can then feed to the decision maker for processing.
