Bias Hunter

Ethics of Nudging: The Freedom of Choice Argument Is Suspect

17/12/2014


 
Today’s post started from a question concerning the ethics of nudging. To be clear, I’ve always been of the opinion that nudging is a no-brainer: if you’re not decreasing choice options but just changing the default, nobody should object. After all, you can still choose as you wish, so what’s the problem? Well, there are problems involved, as it turns out.

But first, to sensibly talk about nudging, we need to define what we mean by a nudge. Specifically, what I mean (and what I’ve understood Thaler and Sunstein to mean in their book Nudge) is the following:

A nudge:

  • is a cue that drives behavior in a collectively beneficial direction
  • does not reduce freedom of choice
  • is behavior-based, not just an incentive
The problem with the freedom of choice argument is the assumption that people choose rationally, in their own best interest. This is in direct conflict with another assumption of nudging, namely that people do not choose rationally. After all, if we didn't assume that, why would we nudge in the first place? So it seems that nudging first assumes (quite correctly) imperfect rationality, but when someone questions its ethics, we suddenly switch to assuming perfect rationality. Something seems off here.

On the other hand, I don't think this is a knockdown argument against all nudges. The fruit section example (steering cafeteria customers towards fruit by placing it within easy reach) seems ethical to me, since it doesn't really impose any extra costs on the decision maker (DM). The tax letter (a reminder nudging people to pay their taxes on time), in contrast, is more difficult. Paying taxes is a direct cost to the person, compared to not paying them. Then again, if she doesn't pay her taxes, she'll probably have a lot of trouble with the authorities in the longer term, which ends up being even more costly. But can we use such a long-term argument? Where's the limit? How much better does the long-term outcome have to be for a nudge to be justified?

A final point is that nudges aren't really independent. If an organization started building all kinds of nudges around defaults and the status quo bias, at some point there would simply be too many of them for us to pay attention to. For example, the Behavioural Insights Team (BIT) in the UK once suggested that companies might enroll employees into plans that automatically donate a percentage of their paycheck to charity. Even though you could of course opt out, this is very suspect. Imagine if a company made dozens of such default choices: at some point you'd probably be too tired to think things through, so you'd just accept the defaults, which would cost you money. So even though charity benefits society as a whole, I don't think a default option of donating to charities is justifiable.

So, all in all, the freedom of choice argument that defenders of nudging often use (I'm one, personally) doesn't really seem to be as strong as I thought before. With this problem in mind, I just want to wish everyone a perfectly Merry Christmas and a Happy New Year! Bias Hunter will be back in January!


Benefits of Decision Analysis

19/10/2014


 
Why is decision analysis a good idea in the first place? Why should we focus on making some decisions supported by careful modelling, data gathering and analysis? Here, I provide some arguments as to why decision analysis is beneficial. Of course, not all decisions benefit from it: some considerations are too unimportant to warrant much analysis, and some might be simple enough to not need it. But then again, many problems are important, or complex, or politically hot. For these problems, decision analysis can be especially beneficial.

Identification of the best alternative

The main point of decision analysis (DA) is of course to arrive at the best possible alternative, or at least a "good enough" one. This is essentially the focus of most discussions of DA, so I won't dwell on it further. How to determine the best feasible option is a very hard problem in its own right, deserving a book of its own.
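To make the comparison step concrete, here is a minimal sketch in Python of the simplest kind of model, a weighted-sum scoring of alternatives; the alternatives, criteria, weights and scores are all invented for illustration, and real DA models are of course much richer.

    # Toy weighted-sum comparison of alternatives.
    # All alternatives, criteria, weights and scores are invented for illustration.
    weights = {"cost": 0.5, "delivery_time": 0.3, "flexibility": 0.2}

    alternatives = {
        "New logistics center": {"cost": 4, "delivery_time": 9, "flexibility": 6},
        "Outsource deliveries": {"cost": 7, "delivery_time": 6, "flexibility": 5},
        "Keep current setup":   {"cost": 9, "delivery_time": 4, "flexibility": 4},
    }

    def overall(scores):
        # Weighted sum of 0-10 criterion scores.
        return sum(weights[c] * s for c, s in scores.items())

    for name, scores in alternatives.items():
        print(f"{name}: {overall(scores):.1f}")

    best = max(alternatives, key=lambda n: overall(alternatives[n]))
    print("Best alternative under these weights:", best)

Even a toy like this makes the trade-offs explicit: change the weights and you can see immediately which alternative comes out on top, and why.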

Identification of objectives

Book examples of decision analysis start from a defined problem, and the point is to somehow satisfactorily solve it. Reality, however, starts from a different point. The first problem in reality is defining the problem itself. In fact, as a few classic books in DA emphasize, formulating the problem is one of the hardest and most important steps of DA. Much of the benefit of DA comes from forcing us to formulate the problem carefully, and preventing us from pretending to solve complex dynamic issues by intuition alone.
[Image: The first step of decision analysis!]
Creation of new alternatives

Many descriptions of DA also assume that the alternatives are already there, and that the tricky part is comparing them. In actual circumstances, however, the decision maker and the people supporting him are commonly responsible for coming up with the alternatives, too. This is likewise critical for success, because an alternative you didn't think of will never be chosen, no matter how good it would have been. Duke University's DA guru, Professor Keeney, has emphasized this heavily.

Analysis of complex causal relationships

It goes without saying that many issues are complex and difficult to solve – that's why decision analysis is used, after all. A benefit of thinking the model through properly is that it can reveal some of our unvetted assumptions, even radically changing our perception of an issue. For example, I was once involved in a project setting up a new logistics center for a company. Their goal was to increase customer satisfaction through shorter delivery times. After careful analysis, it turned out that the new center wouldn't reduce delivery times by very much. So someone thought, "wow, delivery time must be really important to the customers to warrant this", and looked up the satisfaction survey data. Well, it turned out it wasn't very important: current deliveries were well within the limit customers considered satisfactory. In fact, it was clear from the surveys that to increase satisfaction they ought to be doing something else entirely, like improving customer service or product quality! It was an interesting finding, but it took some time to convince the directors that logistics really wasn't their problem.
[Image: A little analysis needed.]
Quantification of subjective knowledge

Somewhat related to the previous example, many analyses run up against the problem of uncertain or vague knowledge. Organizations in particular tend to be full of people who are very knowledgeable about the business and its environment, but whose knowledge isn't written down anywhere for, say, the development department to use. It tends to go something like this. First, the analyst finds out he needs some data on, say, the failure rate of delivery cars. He asks a Business Development Manager, who doesn't know anything and tells him to use some estimate. The analyst doesn't know anything either, so he ends up interviewing some delivery people, uncovering subjective, unquantified knowledge about the actual failure rate. There's nothing wrong with subjective knowledge; it's just of no use if the decision maker isn't aware of it! By uncovering and quantifying subjective knowledge in the organization, the analysis can also benefit the company greatly in the long term, since it now has even more knowledge to base future decisions on.
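As a rough sketch of how such subjective knowledge might be quantified (all numbers below are made up), the delivery people's gut feeling about the failure rate can be written down as a Beta prior and later combined with whatever hard data turns up:

    # Sketch: turning a subjective failure-rate estimate into a usable quantity.
    # All figures are hypothetical.

    # The delivery people's gut feeling: about 5% of deliveries fail, and they
    # are roughly as confident as 40 observed deliveries would make them.
    prior_strength = 40
    prior_failures = 0.05 * prior_strength   # "pseudo-failures"
    prior_successes = 0.95 * prior_strength  # "pseudo-successes"

    # Hard data collected later: 3 failures in 100 logged deliveries.
    observed_failures, observed_successes = 3, 97

    # Beta-binomial update: just add the counts together.
    alpha = prior_failures + observed_failures
    beta = prior_successes + observed_successes
    print(f"Estimated failure rate: {alpha / (alpha + beta):.3f}")

The point is not the particular formula but that the estimate is now explicit: it can be questioned, refined with data, and reused in the next decision.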

Creation of a common decision framework

Speaking of the future, one final benefit of DA is that it provides the decision maker with a decision framework, a model to replicate the next time a similar decision comes up. This is especially beneficial in organizations, which easily get stuck in meta-level issues: arguing about how to make decisions in the first place.

In the best case, DA can provide an almost ready-made framework to follow, so that managers can focus on actually making the decision. However, it's important to recognize that different decisions have different stakeholders and to take that into account. For example, a new logistics center may be mostly an issue of operational efficiency, but a new factory demands the inclusion of environmental and labour organizations. Simply reusing a previous DA framework does not guarantee a good fit with the new problem. But the DA frame can be something to start from, which can help reduce political conflicts between stakeholders. In fact, there is nothing to prevent using DA from different perspectives. DA has, for example, been used successfully in problems such as oil industry regulation and the move from a segregated schooling system to a racially integrated one. Both politically hot examples can be found in Edwards and von Winterfeldt's classic.

I guess if you wanted to summarize the benefits of DA in a sentence, it could look something like this: creating structure and helping to use it. In effect, DA helps us think better, because we are forced to consider things more thoroughly and explicitly. It's a method that helps us deal with uncertainty and still make a decision.

Nudging Yourself to Better Choices

7/10/2014


 
Studying the different biases and human irrationality can at times feel like a depressing task. After all, one is mostly finding out all the ways we screw up, behave suboptimally and just make stupid decisions. Thankfully, the same findings can be used in the other direction: to help us make wiser, sounder decisions. This is usually called nudging, a term coined in Thaler and Sunstein's prize-winning book Nudge.

At the heart of nudging is the idea that we don't have unlimited amounts of free will and energy. We get lazy, tired and worn out, and sometimes we just don't pay attention. However, coercing people would be immoral: we all have the right to choose, no matter how bad the choice. That's why nudging focuses on the choice architecture. This means changing the decision situation so that people will in fact choose better, i.e. they are more likely to choose what they want in the long term, instead of succumbing to willpower or attention deficits in the immediate situation. It's like building hallways that make more sense and lead you more directly to where you want to go. You can still choose to go someplace else; getting what you (usually) want has just been made a little easier.
[Image: In need of a little nudging?]
Thaler’s and Sunstein’s book focuses on the implications of nudging for public policy. But in this post, I’ll take a narrower perspective, just looking at how you can nudge yourself to better decisions.

The main finding of the last few decades is that we have two main ways of making choices. The first is System 1, which is fast, associative and unreflective. System 1 is the one we use most of the time, because it's easy and requires little effort. System 2, on the other hand, is slow, reflective and requires a lot of effort, which is one big reason we cannot use it all the time. As it stands, System 1 is quite error-prone: with bad decision architecture, it can focus on the wrong cues and lead to really stupid choices. But with a good architecture, choosing is smooth sailing. Choosing with System 2, in turn, is tough and effortful, but should in most cases lead to a good choice.

This very rough and simplified theory leads to two main ways to nudge: improving the architecture for a better System 1 choice, or engaging System 2 for the choice. Both are legitimate and powerful options. Which to use – well, that depends on the context. Let’s look at some known examples:

The 20 second rule

You're at home, watching your favorite TV show with pleasure. As is often the case, you feel a slight twinge of hunger – a snacking hunger. What do you eat? Usually, at this point people go to the kitchen and get something that's within easy reach and doesn't need preparing – like chocolate, or chips. What if the chips were on the top shelf? Would you still get them?
[Image: Still, it's just a nudge - when there's a will, there's a way...]
That's the point of the 20 second rule: you're more likely to choose something requiring little effort. Just having the chips on the top shelf is likely to stop you from getting them, just like placing the scones out of reach at a meeting will heavily decrease their consumption. This is such a common tip that there are tons of examples: laying out your running gear for the morning, hiding the remote control so you'll pick up a book instead, or setting up a site blocker that requires a time-consuming task before you can open Facebook. All of these have the same aim: guiding your System 1 towards the choices you would, in a more energized and reflective mood, approve of as the better ones.
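As an illustration of the site-blocker idea, here is a toy sketch in Python; the site and the arithmetic task are just placeholders, and a real blocker would of course live in the browser or the hosts file rather than in a script you run by hand.

    # Toy friction gate: demand a small effortful task before opening a
    # distracting site. The site and the task are placeholders.
    import random
    import webbrowser

    SITE = "https://www.facebook.com"

    a, b = random.randint(10, 99), random.randint(10, 99)
    answer = input(f"To open the site, what is {a} * {b}? ")

    if answer.strip() == str(a * b):
        webbrowser.open(SITE)
    else:
        print("Wrong answer - maybe you didn't want it that badly.")

Twenty seconds of mental arithmetic is often enough for System 2 to ask whether you really wanted to go there in the first place.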

Default routines

A variation of the 20 second rule is to create default routines: patterns that are beneficial for you and that you will execute even when tired. For example, our PhD seminars have time and again told us to write in the morning, every day we come to work. For one thing, writing is important, and this pattern ensures I'll have time for it. For another – and I think this is even more important – having writing as a default routine ensures I'll start writing even when I'm tired, confused or just "not feeling like it". But usually, once I get off the ground, I'll be in the mood.
[Image: Ready to write any moment now!]
Another example comes from a guy from SF I once talked to. He had a habit of always cutting up about 500 g of vegetables when he got home from work. Having done that, it was easy to blend them into a smoothie or make a salad. And having them already cut up usually meant he ate them, too, since he didn't want to waste food. I thought this was ingenious!

Blocking easy cues

To engage System 2, it can help to block the cues System 1 would like to use. A well-known problem, for example, is the halo effect: perceiving one good attribute causes us to evaluate other attributes more highly, too. People tend to think that better-looking people are also more intelligent, for instance. If you're evaluating project proposals, you could hide the names of the proposers and evaluate the proposals purely on their own terms. Having the names visible might influence you in a bad way – after all, you wouldn't want to approve a project just because it was proposed by a colleague you like to play tennis with, would you? Or, to remove the effect of visual design, have the proposals submitted on a template so they all look alike (a lot of foundations seem to do this). Making decisions based on nameless template proposals is going to be harder - but that's the point. You will necessarily have to focus on the content, since System 1 no longer has much to go on. And, being a diligent person, your System 2 choices will outperform the System 1 ones.

So, as a wrap-up, here are the two main pathways to nudging towards better choices:

1. Helping System 1 to better options by better choice architecture

2. Engaging System 2 by blocking System 1

Which option to go for depends on the case. The more complex the decision at hand, the better option 2 is going to be. In contrast, the more often a choice situation recurs, the more sense it makes to leave it to System 1 and save energy.
