Bias Hunter

Stairs vs. Elevators: Applying Behavioral Science

12/5/2015

So, last week I had the fantastic opportunity of participating in the #behavioralhack event, organized by Demos Helsinki and Granlund. The point of the seminar was to apply behavioral science, energy expertise and programming skills to reduce energy consumption in old office buildings. We formed five groups consisting of behavioral scholars, energy experts and coders. Our group focused on the old conundrum of how to get people to use the stairs more and the elevators less.

Our first observation was that, apart from just shutting down the elevators altogether, there is unlikely to be a one-size-fits-all magic bullet for this. On the other hand, we know from research that people are very susceptible to their environment. Running mostly on System 1, we tend to do whatever the environment makes easy. And, unfortunately, our environments support elevators much more than stairs.
Thinking about our own workplaces, we quickly discovered all sorts of features of the environment that support elevator use, but not stairs:

  1. The restaurant menu is in the elevator
  2. There’s a mirror (apparently many women use this to check their hair when arriving)
  3. The mats for wiping your feet direct you toward the elevator
  4. The staircase might smell, or be badly lit
  5. You can get stuck in the staircase if you forget your keycard

All these features make the elevator easier or more comfortable than the stairs. Given that the elevator has a comfort advantage from the start, it's small wonder people avoid the stairs!

All in all, our solution proposal was quite simply a collection of such small fixes. Since the point of the seminar was to look for cheap solutions, we proposed a sign pointing to the elevator and the stairs, with "encouraging" imagery to associate the stairs with better fitness. Fixing the above list so that the stairs also get a mirror and a menu would also cost almost nothing. In fact, the advantage can even be reversed: remove the mirror and the menu from the elevator, and replace them with a poster saying that taking the stairs every day for a year adds up to a few pounds of fat loss (it does).
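
As a rough sanity check on that poster claim, here is a small back-of-the-envelope calculation. Every number in it is an assumption chosen for illustration (floor height, trips per day, energy per flight), not something we measured at the event.

```python
# Back-of-the-envelope check of the poster claim.
# All constants below are illustrative assumptions.
FLIGHTS_PER_TRIP = 4          # e.g. an office on the 4th floor
TRIPS_PER_DAY = 3             # morning, lunch, one meeting
KCAL_PER_FLIGHT = 3.0         # rough extra energy for climbing one flight
WORKDAYS_PER_YEAR = 230
KCAL_PER_POUND_OF_FAT = 3500

kcal_per_year = FLIGHTS_PER_TRIP * TRIPS_PER_DAY * KCAL_PER_FLIGHT * WORKDAYS_PER_YEAR
pounds_per_year = kcal_per_year / KCAL_PER_POUND_OF_FAT
print(f"{kcal_per_year:.0f} kcal per year, roughly {pounds_per_year:.1f} lb of fat")
# -> 8280 kcal per year, roughly 2.4 lb of fat
```

With these assumptions the poster's "few pounds a year" is in the right ballpark; with fewer flights or trips, the number shrinks accordingly.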

For a heavier version of the solution, we noted that you could turn stairs vs. elevators into a company-wide competition, for example by tracking people in the hallways with Wi-Fi, Bluetooth and the like (a rough sketch of how the scoring could work is below). Additionally, stairways could have screens showing recent news, comics, funny pictures, or anything else that fits the company culture. On the other hand, we felt that most of the change can probably already be achieved with the cheap suggestions above, so we ended up presenting those as the main point.
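
To make the "heavier" version a bit more concrete, here is a minimal sketch of how such a competition could be scored, assuming a hypothetical log of stairwell beacon sightings (an employee id plus a timestamp). The data source, field names and cooldown logic are all illustrative assumptions, not something we actually built at the event.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical beacon sightings: (employee_id, timestamp) recorded whenever
# a phone is seen near the stairwell Bluetooth beacon.
sightings = [
    ("alice", datetime(2015, 12, 1, 8, 55)),
    ("alice", datetime(2015, 12, 1, 8, 56)),   # same climb, seen twice
    ("bob",   datetime(2015, 12, 1, 12, 10)),
    ("alice", datetime(2015, 12, 1, 16, 30)),
]

def count_stair_trips(sightings, cooldown=timedelta(minutes=5)):
    """Count stair trips per employee, ignoring repeat sightings within a
    short cooldown window so one climb isn't counted several times."""
    trips = Counter()
    last_seen = {}
    for employee, ts in sorted(sightings, key=lambda s: s[1]):
        if employee not in last_seen or ts - last_seen[employee] > cooldown:
            trips[employee] += 1
        last_seen[employee] = ts
    return trips

print(count_stair_trips(sightings).most_common())
# [('alice', 2), ('bob', 1)]  -> leaderboard for the screens in the stairway
```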

From a meta point of view, I really had a lot of fun! It was great to apply behavioral science to a common problem, and I was surprised by the amount and quality of the ideas we had. Combining people from different fields and backgrounds turned out to be a really good thing. I know it's a bit of a platitude, but I now genuinely appreciate that novices can produce big insights by asking even really basic questions, since they come without the theory-ladenness of academic expertise :) A fun and competent team made for a great evening!

Ethics of Nudging: The Freedom of Choice Argument Is Suspect

17/12/2014

Today’s post started from a question concerning the ethics of nudging. To be clear, I’ve always been of the opinion that nudging is a no-brainer: if you’re not decreasing choice options but just changing the default, nobody should object. After all, you can still choose as you wish, so what’s the problem? Well, there are problems involved, as it turns out.

But first, to sensibly talk about nudging, we need to define what we mean by a nudge. Specifically, what I mean (and what I’ve understood Thaler and Sunstein to mean in their book Nudge) is the following:

A nudge:

  • is a cue that drives behavior in a collectively beneficial direction
  • does not reduce freedom of choice
  • works through behavior, not just through incentives
Two stock examples are placing fruit where people pass it first, so that more of them pick fruit over candy, and the tax reminder letter that gets more late payers to pay up. The problem with the freedom of choice argument is the assumption that people choose rationally, according to their best interest. This is directly in conflict with another assumption of nudging, namely that people do not choose rationally. After all, if we didn't assume that, why would we bother with nudging in the first place? So it seems that nudging first assumes (quite correctly) imperfect rationality, but when people question its ethics, we suddenly assume perfect rationality. Something seems off here.

On the other hand, I don't think this is a knockdown argument against all nudges. The fruit example above seems ethical to me, since it doesn't really impose any extra costs on the decision maker. The tax letter, in contrast, is more difficult. Paying taxes is a direct cost to the person, compared to not paying them. On the other hand, if she doesn't pay her taxes, she'll probably have a lot of trouble with the authorities in the longer term, which ends up being even more costly. But can we use such a long-term argument? Where's the limit? How much better does the long-term benefit have to be for nudging to be justified?

A final point is that nudges aren't really independent of each other. If an organization started building all kinds of nudges on defaults and the status quo bias, at some point there would simply be too many for us to pay attention to. For example, the Behavioural Insights Team (BIT) in the UK once suggested that companies might enroll employees into plans that automatically donate a percentage of their paycheck to charity. Even though you could of course opt out, this is very suspect. Imagine if a company made tens of such default choices: at some point you'd probably be too tired to think everything through, so you'd just accept the defaults, which would cost you money. So even though charity is beneficial for society as a whole, I don't think it's justifiable to have a default option that donates to charities.

So, all in all, the freedom of choice argument that defenders of nudging often use (I'm one of them) doesn't really seem to be as strong as I thought before. With this problem in mind, I just want to wish everyone a perfectly Merry Christmas and a Happy New Year! Bias Hunter will be back in January!


Nudging Yourself to Better Choices

7/10/2014

Studying the different biases and human irrationality may at times look like a depressing task. After all, one is mostly finding out all the ways we screw up, all the ways we behave suboptimally and just make stupid decisions. Thankfully, the same findings can be used in the other direction: helping us make wiser, sounder decisions. This is usually called nudging, a term coined in Thaler and Sunstein's prize-winning book Nudge.

At the heart of nudging is the idea that we don't have unlimited amounts of free will and energy. We get lazy, tired and worn out, and sometimes we just don't pay attention. However, coercing people would be immoral. We all have the right to choose, no matter how bad the choice. That's why nudging focuses on the choice architecture. That means changing the decision situation so that people will in fact choose better, i.e. they are more likely to choose what they want in the long term, instead of succumbing to willpower or attention deficits in the immediate situation. It's like building hallways that make more sense and lead you more directly to where you want to go. You can still choose to go someplace else; getting where you (usually) want to go has just been made a little easier.
Picture: In need of a little nudging?
Thaler and Sunstein's book focuses on the implications of nudging for public policy. But in this post, I'll take a narrower perspective and just look at how you can nudge yourself to better decisions.

The main finding from the last decades is that we have two main ways of making choices. The first is System 1, which is fast, associative and unreflective. System 1 is the one we use most of the time, because it's easy and requires little effort. System 2, on the other hand, is slow, reflective, and requires a lot of effort. That's one big reason why we cannot use System 2 all the time. As it stands, System 1 is quite error-prone: with bad decision architecture, it can focus on the wrong cues and lead to really stupid choices. But with a good architecture, choosing is smooth sailing. Choosing with System 2, on the other hand, is tough and effortful, but should in most cases lead to a good choice.

This very rough and simplified theory leads to two main ways to nudge: improving the architecture for a better System 1 choice, or engaging System 2 for the choice. Both are legitimate and powerful options. Which to use – well, that depends on the context. Let’s look at some known examples:

The 20 second rule

You're at home, watching your favorite TV show with pleasure. As is often the case, you feel a slight twinge of hunger, a snacking hunger. What do you eat? Usually, at this point people go to the kitchen and get something that's within easy reach and doesn't need preparing, like chocolate or chips. What if the chips were on the top shelf? Would you still get them?
Picture: Still, it's just a nudge - when there's a will, there's a way...
That's the point of the 20 second rule: you're more likely to choose something requiring little effort. Just having the chips on the top shelf is likely to stop you from getting them, just like placing the scones out of reach at a meeting will heavily decrease their consumption. This is such a common tip that there are tons of examples: laying out your running gear for the morning, hiding the remote so you read books, or setting up a site blocker that requires a time-consuming task before you can open Facebook (a rough sketch of that last one is below). All these have the same aim: guiding your System 1 towards choices that you would, in a more energized and reflective mood, approve of as the better ones.
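
As an illustration of that site-blocker idea, here is a minimal sketch of the kind of friction such a tool adds: a small wrapper you would run instead of opening the site directly, which demands a short, boring typing task first. The site URL and the task are placeholders; real blockers work differently, but the nudge logic is the same.

```python
import random
import webbrowser

SITE = "https://www.facebook.com"  # placeholder for whichever site distracts you

def open_with_friction(url, digits=8):
    """Require a small, boring task before the site opens: retype a random
    number. The added effort is the nudge; System 1 often gives up first."""
    challenge = "".join(random.choice("0123456789") for _ in range(digits))
    answer = input(f"Type {challenge} to continue: ")
    if answer.strip() == challenge:
        webbrowser.open(url)
    else:
        print("No match - maybe you didn't want to go there after all.")

if __name__ == "__main__":
    open_with_friction(SITE)
```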

Default routines

A variation of the 20 second rule is to create default routines. That means creating patterns that are beneficial for you and that you will execute even when tired. For example, our PhD seminars have time and again told us to write in the morning, every day we come to work. For one thing, writing is important, and this pattern ensures I'll have time for it. For another thing (and I think this is even more important), having writing as a default routine ensures I'll start writing even when tired, confused or just "not feeling like it". But usually, once I get off the ground, I'll be in the mood.
Picture: Ready to write any moment now!
Another example is a guy from SF I once talked to. He had the habit of always cutting up about 500 g of vegetables when he got home from work. Having done that, it was easy to blend them into a smoothie or make a salad. And having them already cut up usually meant he ate them, too, since he wouldn't want to waste food. I thought this was ingenious!

Blocking easy cues

For engaging System 2, it can help to block the cues that System 1 would like to use. A well-known problem, for example, is the halo effect: perceiving one good attribute causes us to evaluate other attributes more highly, too. People tend to think that better-looking people are also more intelligent. If you're evaluating project proposals, you could hide the names of the proposers and evaluate the proposals purely on their own terms. Having the names visible might influence you in a bad way; after all, you wouldn't want to approve a project just because it was proposed by a colleague you like to play tennis with. Or, to remove the effect of visual design, have the proposals submitted on a template so they all look alike (a lot of foundations seem to do this). Making decisions based on nameless template proposals is going to be harder, but that's the point: you will necessarily have to focus on the content, since System 1 no longer has much to go on. And, being a diligent person, your System 2 choices will outperform your System 1 choices.
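
Here is a minimal sketch of that name-hiding step in code, assuming proposals arrive as simple records with a hypothetical proposer field. The idea is just to strip the identifying cue before the evaluator ever sees the content, so System 1 has little to latch onto.

```python
# Hypothetical proposal records; the field names are assumptions for illustration.
proposals = [
    {"id": "P-001", "proposer": "Tennis partner", "content": "Rebuild intranet search."},
    {"id": "P-002", "proposer": "Unknown colleague", "content": "Automate invoice checks."},
]

def blind(proposal, hidden_fields=("proposer",)):
    """Return a copy of the proposal with identifying fields removed,
    so the evaluation has to rest on the content alone."""
    return {k: v for k, v in proposal.items() if k not in hidden_fields}

for p in map(blind, proposals):
    print(p)   # e.g. {'id': 'P-001', 'content': 'Rebuild intranet search.'}
```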

So, as a wrap-up, here are the two main pathways to nudging towards better choices:

  1. Helping System 1 towards better options through better choice architecture
  2. Engaging System 2 by blocking System 1's easy cues

Which option to go for depends on the case. The more complex the decision at hand, the better option 2 is going to be. In contrast, the more often a choice situation recurs, the more sense it makes to handle it with System 1 and save energy.
