Bias Hunter

Ethics of Nudging: The Freedom of Choice Argument Is Suspect

17/12/2014

Today’s post started from a question about the ethics of nudging. To be clear, I’ve always considered nudging a no-brainer: if you’re not removing choice options but just changing the default, nobody should object. After all, you can still choose as you wish, so what’s the problem? Well, as it turns out, there are problems.

But first, to sensibly talk about nudging, we need to define what we mean by a nudge. Specifically, what I mean (and what I’ve understood Thaler and Sunstein to mean in their book Nudge) is the following:

A nudge:

  • is a cue that steers behavior in a collectively beneficial direction
  • does not reduce freedom of choice
  • works through behavioral mechanisms, not just incentives
The problem with this argument is the assumption that people choose rationally, in their own best interest. This directly conflicts with another assumption behind nudging, namely that people do not choose rationally – after all, if we didn’t assume that, why would we nudge in the first place? So it seems that nudging first assumes (quite correctly) imperfect rationality, but when someone questions its ethics, we suddenly switch to assuming perfect rationality. Something is off here.

On the other hand, I don’t think this is a knockdown argument against all nudges. The fruit section example above seems ethical to me, since it doesn’t really impose any extra costs on the decision maker. The tax letter, in contrast, is more difficult. Paying taxes is a direct cost to the person, compared to not paying them. On the other hand, if she doesn’t pay her taxes, she’ll probably have a lot of trouble with the authorities in the long term, which ends up even more costly. But can we use such a long-term argument? Where’s the limit? How much better does the long-term outcome have to be for a nudge to be justified?

A final point is that nudges aren’t really independent. If an organization started building all kinds of nudges on defaults and the status quo bias, at some point there would simply be too many for us to pay attention to. For example, the Behavioural Insights Team (BIT) in the UK once suggested that companies might enroll employees into plans that automatically donate a percentage of their paycheck to charity. Even though you could of course opt out, this is very suspect. Imagine if a company made tens of such choices for you: at some point you’d probably be too tired to think each one through, so you’d just accept the defaults – which would cost you money. So even though charity benefits society as a whole, I don’t think a default option that donates to charities is justifiable.
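The compounding-defaults worry can be put in toy-model form. The sketch below (all numbers invented, not from any study) assumes the chance of mustering the attention to opt out decays with every additional default, so the accepted defaults pile up into a real cost:

```python
import random

def simulate_defaults(n_defaults, cost_per_default=50.0,
                      base_opt_out=0.9, fatigue=0.85, seed=0):
    # Toy model: each default costs money unless the employee opts out.
    # The chance of opting out decays geometrically with every extra
    # default (decision fatigue). All parameters are illustrative.
    rng = random.Random(seed)
    total_cost = 0.0
    p_opt_out = base_opt_out
    for _ in range(n_defaults):
        if rng.random() > p_opt_out:   # too tired this time: accept the default
            total_cost += cost_per_default
        p_opt_out *= fatigue           # each extra choice erodes attention
    return total_cost

# One default is easy to dodge; thirty defaults quietly add up.
print(simulate_defaults(1), simulate_defaults(30))
```

The exact numbers don’t matter; the point is that each individual default looks harmless, while the stack of them does not.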

So, all in all, the freedom of choice argument that defenders of nudging (myself included) often use doesn’t seem to be as strong as I thought. With that problem in mind, I just want to wish everyone a Merry Christmas and a Happy New Year! Bias Hunter will be back in January!



What You See Is What You Believe

8/12/2014

The old saying everybody has heard goes: “don’t believe everything you read”. Current research shows that while this is true, we also need a saying instructing us: “don’t believe everything you see – especially on a chart”.

In their recent article Blinded with science: Trivial graphs and formulas increase ad persuasiveness and belief in product efficacy, Tal and Wansink experiment with different ways of presenting information about medical drugs to consumers. They had two groups of people: one that saw just text, and one that also saw a chart. The graph was trivial: it added no new information at all. The control group saw a text saying that
A large pharmaceutical company has recently developed a new drug to boost peoples’ immune function. It reports that trials it conducted demonstrated a drop of forty percent (from eighty seven to forty seven percent) in occurrence of the common cold. It intends to market the new drug as soon as next winter, following FDA approval.
The experimental group saw the same text, followed by a graph:
[Bar chart restating the reported drop from 87% to 47%]
The result? Participants who saw the graph believed the drug to be more effective than the control group did.

As it stands, this result could simply be due to increased information retention: graphs make things easier to remember. Well, the authors thought the same, and replicated the study, this time checking retention 30 minutes afterwards. It turned out there was no difference in memory, yet the effect persisted. What could be the cause? Looking at the data, the authors found a significant interaction between belief in science and the chart effect – people with a stronger belief in science ended up more convinced by the graph! Uh oh, this is worrying for us science buffs.

In their third experiment, they changed the setting by replacing the bar chart with a chemical formula. Once again, a scientific-looking item increased the effectiveness rating. Unfortunately, the authors neglect to mention whether the interaction with belief in science was significant here (probably not, since it wasn’t reported).

It’s understandable that a chart would make a claim seem more scientific. After all, one of the hallmarks of pseudoscience is hiding all the results behind vague words – a chart is at least clear. Unfortunately, by choosing the right measures and axes, one can design compelling yet false claims with graphs quite easily. A chart does not ensure sound science, or sound data. I guess we’ll just have to stick to trying to evaluate the actual claims carefully. Don’t believe me? Well, here’s a chart to convince you:
[Chart omitted]
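On the “right measures and axes” point, a bit of toy arithmetic (values invented) shows how merely truncating the y-axis turns a modest difference into a dramatic-looking one:

```python
# Two made-up bar values that differ by only about 11 percent.
a, b = 47.0, 52.0

# On a y-axis starting at zero, bar b is only slightly taller than bar a:
full_axis_ratio = b / a                     # about 1.11

# Start the y-axis at 45 instead, and bar b towers over bar a:
truncated_axis_ratio = (b - 45) / (a - 45)  # 3.5

print(full_axis_ratio, truncated_axis_ratio)
```

Same data, same chart type – only the baseline changed, and the visual impression is roughly three times as strong.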

Measure Right and Cheap: Overcoming “we need more information”

1/12/2014

Ah, information. The One Thing that will solve all your problems and make all the hard decisions for you. Or so many keep thinking: if only I had more information… Of course, in many ways this is exactly right. More information does mean better decisions, as long as the information is – sorry for the pun – informative. Unfortunately, in many cases we either acquire the wrong information, or pass up the right kind of data, thinking it’s too costly.

Thinking about that, I have three hypotheses about why the feeling of “we need more information” persists:

  1. Even with information, hard decisions are still hard
  2. Information is of the wrong kind
  3. Thinking information costs too much

Even with information, hard decisions are still hard

This is really not very surprising, but there’s a common thread linking all hard decisions: they are hard. If they were easy, you wouldn’t be sitting there mulling over the problem. No, you’d be back at home, or out enjoying a run, or whatever. Decisions are hard for two main reasons: uncertainty and tradeoffs. Uncertainty makes decisions hard, but it can be mitigated with measurements. But what about those pesky cases where you can’t measure? Well, I’m going to say it flat out: there are no such cases. Sure, you can rarely get perfect certainty, but you can usually reduce uncertainty by a whole lot.

The second problem, tradeoffs, is the true culprit behind hard decisions. Often we face situations in which one option is more certain, but another has more potential profit. For example, when I run a race, I can start at a slower or a harder pace. The slower pace is safer: I’ll definitely finish. The harder starting pace, in contrast, is riskier: my potential finishing time is better, but I risk cramps and might not finish at all. Tradeoffs are annoying in the sense that there’s often nothing you can do about them; no measurement will save you. If you’re choosing between a cheap but ugly car and an expensive but fancier one, what could you measure? No, you’ll just have to make up your mind about what you value.
[Image: Iron Man, Hulk, or Spider-Man? Why not all three?]
Information is of the wrong kind

According to a management joke, there are two kinds of information: what we need, and what we have. I think there’s some truth in this. 
A fundamental problem with information is that not all things are equally straightforward to measure. It’s quite difficult to measure employee motivation, and much easier to measure the number of defective products in a batch. For this reason, a lot of companies end up measuring only the latter. It’s just so much easier, so shouldn’t we focus our efforts on that? Well, not necessarily. It’s not the cheapest measurements you ought to make, but the ones with the most impact. In his book How to Measure Anything, Douglas Hubbard recounts being shocked by companies’ measurements: many were measuring the easy things, and had left several variables with a large impact completely unquantified! As Hubbard explains (p. 111):
The highest-value measurements almost always are a bit of surprise to the client. Again and again, I found that clients used to spend a lot of time, effort, and money measuring things that just didn’t have a high information value while ignoring variables that could significantly affect real decisions.
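Information value has a standard formalization: the expected value of perfect information (EVPI), an upper bound on what any measurement of a variable could be worth. A minimal sketch in that spirit, with payoffs and probabilities invented for illustration:

```python
# Toy EVPI calculation. Decision: launch a product line or not; the
# payoff depends on whether demand turns out high or low.
# All numbers are invented for illustration.
p_high = 0.4   # prior chance that demand turns out high
payoffs = {
    "launch":      {"high": 500_000, "low": -200_000},
    "dont_launch": {"high": 0,       "low": 0},
}

def expected_value(action):
    return (p_high * payoffs[action]["high"]
            + (1 - p_high) * payoffs[action]["low"])

# Best achievable without any further measurement:
best_without_info = max(expected_value(a) for a in payoffs)

# With perfect information we could pick the best action in each state:
best_with_info = (p_high * max(payoffs[a]["high"] for a in payoffs)
                  + (1 - p_high) * max(payoffs[a]["low"] for a in payoffs))

# EVPI: the most any measurement of demand could possibly be worth.
evpi = best_with_info - best_without_info
print(evpi)
```

A variable whose EVPI is near zero isn’t worth measuring however cheap the measurement is – which is exactly the mismatch Hubbard describes.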
Thinking information costs too much

It’s an honest mistake to think that if you have a lot of uncertainty, you need a lot of information to help you. In fact, the relationship is exactly the inverse: the more uncertainty you have, the less information you need to improve the situation. If you’re Jon Snow, who famously knows nothing, just spending a moment looking around will improve things!
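One way to see this inverse relationship: the standard error of a simple sample mean shrinks as 1/√n, so the first few observations buy far more uncertainty reduction than the thousandth. A quick sketch:

```python
import math

def standard_error(n, sigma=1.0):
    # Standard error of a sample mean shrinks as 1 / sqrt(n)
    return sigma / math.sqrt(n)

# The first handful of observations removes most of the uncertainty...
early_gain = standard_error(1) - standard_error(10)
# ...while ten more observations on top of a thousand barely register.
late_gain = standard_error(1000) - standard_error(1010)
print(early_gain, late_gain)
```

The early gain here is thousands of times larger than the late one, even though both steps added the same number of observations.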

I think this mistake has to do with looking for perfect information. Sure, when you know little, the gap to perfect information is much larger. But the point is that if you know next to nothing, you get to pick the low-hanging fruit and improve the situation with very cheap pieces of information, while in a more advanced situation with less uncertainty, you’d need ever more complex and expensive measurements.

For example, many startups face the following question in the beginning: is there demand for our product? At the start, they know almost nothing. They probably feel good about the product, but that’s not much data. An expensive way of getting more would be to hire a market research firm and commission a study or two about demand, burning tens of thousands in the process. A cheaper way: call a few potential customers, or go to the market and set up a stand. You won’t have perfect information, but you’ll know a lot more than you did a moment ago! It’s good to see that the entrepreneurship literature has taken this to heart, and people like Eric Ries are teaching even bigger companies that more costly doesn’t always mean better. And even if it did, the expense might still be unnecessary. Simple measurements go a long way.
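The “call a few potential customers” measurement can be sketched as a beta-binomial update (the answers below are invented): a handful of yes/no calls already moves you from knowing next to nothing to a usable estimate.

```python
import math

# Flat Beta(1, 1) prior over the share of customers who would buy,
# updated with each invented yes/no answer from a customer call.

def update(alpha, beta, said_yes):
    # Beta-binomial conjugate update: a yes bumps alpha, a no bumps beta
    return (alpha + 1, beta) if said_yes else (alpha, beta + 1)

alpha, beta = 1, 1   # flat prior: we know next to nothing
answers = [True, False, True, True, False, True, True, False, True, True]

for said_yes in answers:
    alpha, beta = update(alpha, beta, said_yes)

mean = alpha / (alpha + beta)   # posterior mean of the demand share
variance = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(f"estimated demand share: {mean:.2f} +/- {2 * math.sqrt(variance):.2f}")
```

Ten phone calls and a few lines of arithmetic – nowhere near a market study, but a real reduction in uncertainty at almost no cost.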

