The study is by far the most famous example of choice overload: the idea that having more options makes choosing harder. The traditional explanation for the effect has been that once we have too many options, we feel so overwhelmed that we would rather give up altogether and not choose anything.
But now I have a crisis of faith. It's looking like one of the most famous effects in behavioral decision making might not exist.
I came across a meta-analysis by Scheibehenne, Greifeneder, and Todd from 2010. The result? It looks entirely possible that choice overload doesn't exist. Across all of the studies that have looked into it, the average effect size is essentially zero. For me, this is big news, since I've used choice overload as an example in several lectures, and I've also mentioned it here before. And now they're telling me that these studies could just be chasing noise.

If we look at the funnel plot from their meta-analysis, the data fits pretty well with the null hypothesis that the grand mean is zero. But (and there's always a but in science), there does seem to be a group of studies on the right side of the funnel finding an effect of around d > 0.2, which would indicate that choice overload exists. What could this mean?
- publication in a journal is associated with larger effect sizes
- subjects' expertise with the options makes the effect smaller
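To make the "grand mean" idea concrete, here is a minimal sketch of the inverse-variance weighted averaging that underlies a fixed-effect meta-analysis. The study numbers below are hypothetical, purely for illustration; they are not the actual effect sizes from the meta-analysis.

```python
# Fixed-effect meta-analysis: each study's effect size d is weighted by
# the inverse of its variance (1 / SE^2), so precise studies count more.
# NOTE: these (d, SE) pairs are made up for illustration.
studies = [
    (0.25, 0.10),   # small positive effect, fairly precise
    (-0.18, 0.12),  # small negative effect
    (0.05, 0.08),   # near-zero effect, most precise
    (-0.10, 0.15),  # small negative effect, least precise
]

weights = [1 / se**2 for _, se in studies]
grand_mean = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
print(f"grand mean d = {grand_mean:.3f}")  # close to zero for this mix
```

A funnel plot is just these same numbers drawn with d on the x-axis and precision (or SE) on the y-axis: with no true effect and no publication bias, studies should scatter symmetrically around zero, which is why an asymmetric cluster on the right side is suspicious.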
The first is what we should expect: science definitely has a publication bias. If your study found a significant effect, it's much more likely to get published. If your study showed no effect, the reviewers might find it uninteresting, relegating it to your desk drawer instead.
The second also fits general intuitions about the matter. For example, in the jam study, all of the options were commonly bought jams. If they had included lemon curd, for example, a lot of people would just grab it without considering the other options. This supports the idea that choice overload may exist especially in situations where we are unfamiliar with the options. The finding that prior preferences decrease the choice overload effect is actually a good thing: it shows that the variation between studies is not driven by random noise alone (Chernev, Böckenholt & Goodman, 2010).
So what's the conclusion here: does choice overload exist or not? Scheibehenne et al. say that choice overload certainly does not look like a very robust phenomenon. However, the group of studies on the right side of the funnel complicates things. They are probably partly due to publication bias, but it's certainly possible that there are conditions that facilitate choice overload but were not captured in the meta-analysis.
I’m still reeling a bit from this finding. It certainly shows that popular science doesn’t catch up very fast: I’ve seen the jam study dozens of times in books, TED talks and presentations about decision making. It's such a famous finding that I even spent a full post describing it. But I had never even once seen the meta-analysis, despite it being already almost five years old. What can I say, whoops?
There's still no grand truth about whether choice overload is real, but after this paper it certainly isn't looking as solid as it once did. Perhaps it exists, perhaps not. Time will tell, but for now we can stick to "the effect might exist" instead of "this is a really strong behavioral effect that changes decision making everywhere," which is how some authors have painted it.