Bias Hunter

Choice Overload May Not Exist

21/4/2015

Have you ever heard of the jam study by Iyengar & Lepper? If you haven’t, the study goes like this. Customers came into a grocery store and saw a tasting booth for jams. On one day, the customers saw 6 different flavors of jam, and on another day they saw 24. So we have two groups, and we’re interested in how their behavior differs with the size of the choice set. Both customer groups could freely sample the provided flavors, and after the tasting they got a discount coupon for jam. So which group bought jam more often? Intuitively, we would expect the second group to buy more, since they got more options. However, that isn’t what happened. The group with more options in fact bought less!

The study is by far the most famous example of choice overload: having more options making choice harder. The traditional explanation for the effect is that once we have too many options, we are so overwhelmed that we would rather give up altogether and not choose anything.

But now I have a crisis of faith. It’s looking like one of the most famous effects in behavioral decision making might not exist.

I came across a meta-analysis done by Scheibehenne, Greifeneder and Todd in 2010. The result of the meta-analysis? It looks very much possible that choice overload doesn’t exist. Across all the studies that have looked into it, the mean effect size is essentially zero. For me, this is big news, since I’ve used choice overload as an example in several lectures, and I’ve also mentioned it here before. And now they’re telling me that these studies could just be tracking noise. If we look at the funnel plot from their meta-analysis, the data seems to fit the null hypothesis of a grand mean of zero pretty well. But (and there’s always a but in science) there does seem to be a group of studies on the right side of the funnel, finding effects of around d > 0.2, which would indicate that choice overload exists. What could this mean?
[Figure: funnel plot of effect sizes from the meta-analysis]
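To make the funnel-plot reasoning concrete, here’s a minimal sketch of how a meta-analytic grand mean is computed as an inverse-variance weighted average of study effects. The effect sizes and standard errors below are made up for illustration; they are not the data from Scheibehenne et al.

    # Fixed-effect meta-analytic mean: more precise studies get more weight.
    # All numbers are hypothetical, NOT from Scheibehenne et al. (2010).
    studies = [
        # (effect size d, standard error)
        (0.25, 0.10),   # a study finding choice overload
        (-0.15, 0.12),  # a study finding the opposite ("more is better")
        (0.05, 0.08),
        (-0.02, 0.15),
    ]

    weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
    grand_mean = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    print(f"weighted grand mean: d = {grand_mean:.2f}")  # near zero -> no robust effect

If the true grand mean is zero, individual studies still scatter around it, and the small, noisy ones scatter the most; that is exactly the triangle shape a funnel plot makes visible.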
Well, Scheibehenne et al. did what any proficient meta-analyst would do, and regressed the effect sizes on the characteristics of the studies. Most of these had no effect, but a few meaningful findings emerged:
[Figure: meta-regression of effect sizes on study characteristics]
Main points:
  • the result being published in a journal is associated with a larger effect size
  • subjects having prior expertise with the options makes the effect smaller
(I’m ignoring the consumption variable here, because it’s basically driven by one study.)

The first is what we should expect: science definitely has a publication bias. If your study found a significant effect, it’s much more likely to get published. If your study showed no effect, the reviewers might find it uninteresting, relegating it to your desk drawer instead.

The second also supports general intuitions about the matter. For example, in the jam study, all of the options were commonly bought jams. If they had included lemon curd, for example, a lot of people would just have taken that without looking at the other options. This supports the idea that choice overload could exist especially in situations in which we are quite unfamiliar with the options. The finding that prior preferences decrease the choice overload effect is actually a good thing: it shows that the variation between studies is not driven just by random noise (Chernev, Böckenholt & Goodman, 2010).

So what’s the conclusion here: does choice overload exist or not? Scheibehenne et al. say that it certainly looks like choice overload is not a very robust phenomenon. However, the group of studies on the right side of the funnel complicates things. Those studies are probably partly due to publication bias, but it’s also possible that there are conditions that facilitate choice overload but were not captured in the meta-analysis.

I’m still reeling a bit from this finding. It certainly shows that popular science doesn’t catch up very fast: I’ve seen the jam study dozens of times in books, TED talks and presentations about decision making. It’s such a famous finding that I even spent a full post describing it. But I had never once seen the meta-analysis, despite it being almost five years old. What can I say? Whoops.

There’s still no grand truth about whether choice overload is real or not, but after this paper the effect certainly looks much shakier. Perhaps it exists, perhaps not. Time will tell, but for now we should stick to “the effect might exist”, instead of “this is a really strong behavioral effect that changes decision making everywhere”, which is how some authors have painted it.

Which Outside View? The Reference Class Problem

14/4/2015

One of the most sensible and applicable pieces of advice in the decision making literature is to take the outside view. Essentially, this means stepping outside your own frame and looking at statistical data on what has happened before in similar situations.

For example, suppose you’re planning to put together a new computer from parts you order online. You’ve ordered the parts, and feel that this time you know most of the common hiccups of building the machine. You estimate that the build will take you two weeks. However, in the past you’ve built three computers, and they took 3, 5 and 4 weeks, respectively. Once the parts came in later than expected, once you were at work too much to manage the build, and once you had some issues that needed resolving. But this time is different!

Now, the inside view says you feel confident that you’ve learnt from your mistakes. Therefore, estimating a shorter build time than your history suggests seems to make sense. The outside view, on the other hand, says that even if you have learnt something, there have always been hiccups of some kind, so hiccups are likely to happen again. Hence, the outside view would put your build time at around the average of your historical record.
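In code, the two estimates from this example look like this (a minimal sketch, using the numbers above):

    # Inside vs. outside view for the computer-build example.
    past_builds_weeks = [3, 5, 4]  # your three previous builds

    inside_view = 2  # "this time is different"
    outside_view = sum(past_builds_weeks) / len(past_builds_weeks)

    print(f"inside view:  {inside_view} weeks")
    print(f"outside view: {outside_view:.1f} weeks")  # 4.0 weeks, the historical mean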

In such a simple case it’s quite easy to see why taking the outside view is sensible, especially now that I’ve painted the inside view as a sense of “I’m better than before”. Unfortunately, the real world is not this clean, but much messier. In the real world, the question is not whether you should use the outside view (you should), but which one? The problem is that you’ve often got several options.

For example, suppose you were recently appointed as a project manager in a company, and you’ve now led projects there for a year. Two months ago, your team got a new integration specialist. Now you’re trying to estimate how much time it would take to install a new system for a very large corporate client. You’d like to use the outside view, but don’t know which one. What’s the reference class? All projects you’ve ever led? All projects you’ve led at this company? All projects with the new integration specialist? All projects for very large clients?

As we can see, picking the outside view to use is not easy. In fact, this problem, a deep philosophical problem in frequentist statistics, is known in statistics and philosophy as the reference class problem. All the possible reference classes in this example make some sense. The underlying problem is one of causality: you have incomplete knowledge of which attributes affect your success, and by how much. Does it matter that you have a new integration specialist? Are these projects very similar to the ones you did at your previous company? How much do projects differ by client size? If you could answer all these questions, you’d know which reference class to use. But if you knew the answers, you probably wouldn’t need the outside view in the first place! So what can you do?

A practical suggestion: use several reference classes, as in the sketch below. If the estimates from them differ by a lot, then the situation is genuinely difficult to estimate, but finding this out will hopefully improve your sense of what drives the project’s success. If the estimates don’t diverge, then it doesn’t really matter which outside view you pick, and you can be more confident in the estimate.
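Here is a minimal sketch of what that could look like; the reference classes and durations are made up for illustration:

    # Compare outside-view estimates from several plausible reference classes.
    # All numbers are hypothetical.
    reference_classes = {
        "all projects I've led":            [10, 14, 9, 12, 16],  # weeks
        "projects at this company":         [12, 16, 14],
        "projects with the new specialist": [9, 11],
        "projects for very large clients":  [18, 20],
    }

    estimates = {name: sum(xs) / len(xs) for name, xs in reference_classes.items()}
    for name, est in estimates.items():
        print(f"{name:35} -> {est:.1f} weeks")

    spread = max(estimates.values()) - min(estimates.values())
    print(f"spread between classes: {spread:.1f} weeks")
    # A large spread means the choice of reference class matters a lot,
    # and the estimate deserves wide error bars.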
