Bias Hunter

Pure Importance Says Nothing

26/5/2015

“Content of the job is more important than wage”

“Debt reduction trumps financial growth”

“Life is more important than money”

All three statements above are sentences I could well imagine someone uttering in an intelligent discussion. They also have one other thing in common: they’re pretty much meaningless.

It’s clear that we can – and often do – make such statements. In itself, there’s nothing wrong with saying that A is more important than B. For example, “the math exam is more important than the history exam” is a perfectly legit way of relating your lack of interest in what happened in the Thirty Years’ War. But when it comes to talking about what you want, and how you should distribute your resources, importance statements are meaningless without numbers.

The third case is perhaps the most common one. Presumably the idea is that we should never sacrifice human life for financial gain. Of course, that’s flat out wrong. Even if you agreed with it in principle, in practice you’re trading off human life for welfare all the time. When you go to work, you risk getting killed in an accident on the way, but you have a chance of getting paid. Buying things from the grocery store means someone has taken risks picking, packing and producing the items – if you really valued their health above all else, you’d grow your own potatoes. In healthcare, we recognize that some treatments are too expensive to offer – the money is better spent on other welfare-increasing things, like building roads. Life can be, and is, traded for welfare, i.e. money.
The first case also seems clear in intent: you want a meaningful job instead of becoming a cubicle hamster for a faceless corporation, no matter the wage differential. However, it’s surely not true that the content of the job is infinitely more important. Would you rather help the homeless for free, or be a hamster for 10 million per hour?

The problem with importance statements without numbers is that they hint at tradeoffs while grossly misrepresenting what we’re willing to accept. The examples above involve tradeoffs, and tradeoffs are impossible if one goal is always more important than another: that amounts to an infinite tradeoff rate, which would have you avoid a teeny-tiny probability of loss of life even at the cost of the GDP of the whole world. Doesn’t sound too reasonable, does it? In fact, Keeney (1992, p. 147) calls this lack of attention to range “the most common critical mistake”.

Naturally, we can always say that the examples are ridiculous: surely no one is thinking about such tradeoffs when they say life is more important than money; surely they’re thinking in terms of “sensible situations”. In a sense, I agree. Unfortunately, one person’s ridiculous example is another’s plausible one. If you don’t say anything about the range of life and money you’re talking about, I can’t know what you’re trying to say. It’s much easier to say it explicitly: life is more important than money, for amounts smaller than 1000 euros, say.

Even this gets us into problems: if I originally pose a choice problem involving 3000 euros and a chance of death, you’d be willing to make some kind of tradeoff, but if I subdivide it into three 1000-euro problems, suddenly human life always wins. If you think about utility functions, you can see how this quickly becomes a problem. Still, the situation is better than not having any ranges at all. Even better would be to assign a tradeoff ratio that’s high but not infinite.
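To make the contrast concrete, here’s a minimal sketch of my own (not from any of the quoted statements; the 5 million euros per unit of death risk is an arbitrary illustrative number). A lexicographic rule, where life always beats money, refuses every trade; a finite tradeoff rate lets a large payoff outweigh a tiny risk.

```python
# Each option is a pair: (probability of death, money gained in euros).

def lexicographic_prefers(option_a, option_b):
    """Prefer the option with lower death risk, no matter the money."""
    risk_a, money_a = option_a
    risk_b, money_b = option_b
    if risk_a != risk_b:
        return risk_a < risk_b
    return money_a > money_b

def finite_tradeoff_prefers(option_a, option_b, euros_per_unit_risk=5e6):
    """Prefer higher utility, valuing risk reduction at a finite rate."""
    def utility(option):
        risk, money = option
        return money - euros_per_unit_risk * risk
    return utility(option_a) > utility(option_b)

# A tiny extra risk (one in a million) versus a huge payoff:
safe = (0.0, 0)
risky = (1e-6, 1_000_000)

print(lexicographic_prefers(safe, risky))    # True: never trade life for money
print(finite_tradeoff_prefers(safe, risky))  # False: the million wins
```

With the finite rate, the one-in-a-million risk “costs” only 5 euros of utility, so the million-euro payoff wins; the lexicographic rule would reject it even for the GDP of the whole world.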

CRT and real-life decision making

19/5/2015

The Cognitive Reflection Test (CRT) is a really neat instrument: consisting of just three simple questions, it has been shown to be a good predictor of successful decision making in laboratory studies. For example, higher CRT scores have predicted success in probabilistic reasoning, avoidance of probabilistic biases and overconfidence, and more patient intertemporal choice. What’s more, the CRT explains success in these tasks over and above the contribution of intelligence, executive function or thinking dispositions.

I can’t properly emphasize how exciting the CRT is. Since it’s just three questions – making it really easy to administer – it has been lauded as one of the most important findings in decision making research in years. So far, it’s mostly been explored in the lab, but since its success in the lab is huge, and since lab decision making tends to correlate with real-life decision making, it should predict real-life outcomes as well, right?

Well, not exactly, according to a new paper by Juanchich, Dewberry, Sirota, and Narendran.

They had 401 participants – recruited via a marketing agency – fill out the CRT, a decision style measure, a personality measure, and a decision outcomes measure.

The Decision Outcome Inventory consists of 41 questions asking whether you’ve experienced a bad outcome, such as locking yourself out of a car, within the past 10 years. Each outcome is weighted by the percentage of people avoiding it, since more serious negative outcomes, like going to jail, are rarer. Finally, the weighted average is subtracted from 0, yielding a score range from -1 to 0, where a higher score means better decision making.
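As I read that description, the scoring logic goes roughly like this. The sketch below is my own reconstruction; the item names and avoidance rates are made up for illustration, not taken from the paper.

```python
def doi_score(experienced, avoidance_rates):
    """Return a score in [-1, 0]; higher (closer to 0) = better outcomes.

    experienced[i] is 1 if the respondent had bad outcome i, else 0.
    avoidance_rates[i] is the fraction of the sample who avoided outcome i,
    so rarer (more serious) outcomes carry more weight.
    """
    weighted = [e * w for e, w in zip(experienced, avoidance_rates)]
    return 0.0 - sum(weighted) / sum(avoidance_rates)

# Hypothetical rates: locked out of car, lost money on a loan, jail.
rates = [0.6, 0.9, 0.99]

# Someone who only locked themselves out of the car (a common outcome):
print(doi_score([1, 0, 0], rates))  # a mildly negative score
```

Experiencing every outcome yields -1, experiencing none yields 0, and a single common mishap moves the score only slightly below zero.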

When they regress the decision outcome measure on the CRT, the Big Five personality scores and the decision style scores, this is what they get:
[Regression table from the paper]
What we see here is that the only statistically significant predictors are Extraversion and Conscientiousness. The CRT shares variance with the other predictors and thus doesn’t reach significance.
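The shared-variance point can be illustrated with a toy version of this hierarchical regression on synthetic data. Everything below (coefficients, sample values) is invented; only the logic of comparing R² with and without the CRT mirrors the paper.

```python
import numpy as np

# Invented data: the CRT overlaps with conscientiousness,
# which is what actually drives the outcomes here.
rng = np.random.default_rng(0)
n = 401
conscientiousness = rng.normal(size=n)
crt = 0.3 * conscientiousness + rng.normal(size=n)
outcome = 0.4 * conscientiousness + 0.1 * crt + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 of an OLS regression of y on the predictors (plus intercept)."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_without = r_squared([conscientiousness], outcome)
r2_with = r_squared([conscientiousness, crt], outcome)
print(f"incremental R^2 of the CRT: {r2_with - r2_without:.3f}")
```

Because the CRT correlates with the personality predictor, its incremental R² once that predictor is in the model is much smaller than its raw predictive power alone, which is the pattern the paper reports.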

The main result: the CRT explains 2% of the variance in the outcome measure, but only 1% if the other measures are also included. In short, the CRT doesn’t really predict the decision outcomes. What’s going on?

Well, there are a few key candidates for the explanation:

1) The DOI itself might be argued to be problematic. It admittedly captures only half of good decision making: avoiding bad outcomes. Moreover, if you look at the questions, some of those bad outcomes can arise through calculated risk-taking. For example, lending money and not getting it back, or losing $1000 on stocks, can be the result of a gamble with very positive expected value – not of a bad decision. Other items, like “you had to spend $500 to fix a car you’d owned for less than half a year”, seem to penalize bad luck: it’s really not your fault if a recently bought car turns out to have a manufacturing defect.

2) Most lab measures of decision making have a lot to do with numeracy. However, the real-life outcomes in the DOI, like forgetting your keys or getting divorced, do not – perhaps they are more about self-control than numeracy. Since the CRT is strongly connected to numeracy, it could explain lab outcomes but not DOI outcomes.

3) More worryingly, it could be that lab studies and real-life outcomes are just not very well connected at all. I don’t think this is the reason, but some studies have failed to find an association between lab measures and real-life outcomes.


Of these explanations, the first is the best for the CRT: if the failure lies in the use of the DOI, the CRT itself is fine. The second is a little worrying: it suggests the CRT is maybe not a magic bullet after all – maybe it’s tapping numeracy rather than cognitive reflection. The third would be the worst. If lab studies don’t relate to real outcomes, that calls into question the whole culture of doing lab studies as we’re used to.

I don’t know enough to pass judgment on any of these causes, but at the moment I see it as a toss-up between options 1 and 2. The DOI as a measure is not my favourite: it seems to penalize for things I consider just plain bad luck. From the perspective of the DOI, sitting at home doing nothing would count as good decision making. Option 3 is definitely too strong a conclusion to draw from this paper, or even from a few papers. What I’d like to see is a good meta-analysis of lab–reality correlations, though I’m not sure one exists.

Stairs vs. Elevators: Applying Behavioral Science

12/5/2015

So, last week I had the fantastic opportunity of participating in the #behavioralhack event, organized by Demos Helsinki and Granlund. The point of the seminar was to apply behavioral science, energy expertise and programming skills to reduce energy consumption in old office buildings. We formed five groups consisting of behavioral scholars, energy experts and coders. Our group focused on the old conundrum of how to get people to use the stairs more, and the elevators less.

Our first observation was that – apart from shutting down the elevators altogether – there is unlikely to be a one-size-fits-all magic bullet here. On the other hand, we know from research that people are very susceptible to their environment. Running mostly on System 1, we tend to do whatever the environment makes easiest. And, unfortunately, our environments support elevator use much more than stair use.
Thinking about our own workplaces, we quickly discovered all sorts of features of the environment that support elevator use, but not stairs:

  1. The restaurant menu is in the elevator
  2. There’s a mirror (apparently many women use this to check their hair when arriving)
  3. The carpets for cleaning your feet direct you to the elevator
  4. The staircase might smell, or be badly lit
  5. You can get stuck in the staircase if you forget your keycard

All these features make the elevator easier or more comfortable than the stairs. Considering that the elevator has a comfort factor advantage from the start, small wonder people refrain from using the stairs!

All in all, our proposed solution was quite simply a collection of such small items. Since the point of the seminar was to look for cheap solutions, we proposed a sign pointing to the elevator and the stairs, with “encouraging” imagery to associate stairs with better fitness. Fixing the list above so that the stairs also have a mirror and a menu would also cost almost nothing. In fact, the advantage can even be reversed: remove the mirror etc. from the elevator, and replace them with a poster saying that walking up one flight of stairs a day for a year equals a few pounds of fat loss (it does).

For a heavier solution, we noted that you could turn stairs vs. elevators into a company-wide competition, for example by tracking people in the hallways with Wi-Fi, Bluetooth etc. Additionally, stairways could have screens showing recent news, comics, funny pictures, or anything that fits the company culture. Still, we figured that most of the change can probably be achieved with the cheap suggestions above, and so ended up presenting those as the main point.

From a meta point of view, I really had a lot of fun! It was great to apply behavioral science to a common problem – and I was surprised by the amount and quality of ideas we had. Combining people from different fields and backgrounds turned out to be a really good thing. I know it’s a kind of platitude, but I now truly appreciate how novices can create big insights by asking even really basic questions, since they come without the theory-ladenness of academic expertise :) A fun and competent team made for a great evening!
