
CRT and real-life decision making

19/5/2015

The Cognitive Reflection Test (CRT) is a really neat instrument: consisting of just three simple questions (the famous bat-and-ball problem among them), it has been shown to be a good predictor of successful decision making in laboratory studies. For example, higher CRT scores have predicted success in probabilistic reasoning, in avoiding probabilistic biases and overconfidence, and in intertemporal choice. What’s more, the CRT explains success in these tasks over and above the contribution of intelligence, executive function, or thinking dispositions.

I can’t properly emphasize how exciting the CRT is. Because it is just three questions – making it really easy to administer – it has been lauded as one of the most important findings in decision making research in years. So far it has mostly been explored in the lab, but since its success there has been so great, and since lab decision making tends to correlate with real-life decision making, it should predict real-life outcomes as well, right?

Well, not exactly, according to a new paper by Juanchich, Dewberry, Sirota, and Narendran.

They had 401 participants – recruited via a marketing agency – fill out the CRT, a decision style measure, a personality measure, and a decision outcomes measure.

The Decision Outcome Inventory (DOI) consists of 41 questions, each asking whether you’ve experienced a bad outcome – such as locking yourself out of a car – within the past 10 years. Each outcome is weighted by the proportion of people who avoided it, so that rarer and more serious negative outcomes, like going to jail, count for more. Finally, the weighted average is subtracted from 0, yielding a score range from -1 to 0, where a higher score means better decision making.
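
If I read the scoring rule right, it could be implemented roughly like this – a minimal sketch in Python, with made-up data and variable names, not the authors’ actual scoring code:

    # Minimal sketch of the DOI scoring rule as described above
    # (assumption: my reading of the procedure; data is hypothetical).
    import numpy as np

    def doi_scores(experienced):
        # experienced: participants x items 0/1 matrix, 1 = had the bad outcome
        weights = 1.0 - experienced.mean(axis=0)   # share of people who avoided each item
        weighted = experienced * weights           # rarer outcomes penalize more
        return 0.0 - weighted.mean(axis=1)         # range -1 (worst) to 0 (best)

    # Toy data: 3 participants, 4 outcome items
    X = np.array([[0, 0, 0, 0],
                  [1, 0, 1, 0],
                  [1, 1, 1, 1]])
    print(doi_scores(X))  # the all-zeros row scores 0, the all-ones row lowest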

When they regress the decision outcome measure on the CRT, the Big Five personality scores, and the decision style scores, this is what they get:
[Figure: regression table from the paper]
What we see here is that the only statistically significant predictors are Extraversion and Conscientiousness. The CRT shares variance with the other predictors and thus doesn’t reach significance on its own.

The main result: the CRT explains 2% of the variance in the outcome measure on its own, but adds only 1% once the other measures are included. In short, the CRT doesn’t really predict the decision outcomes. What’s going on?
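
For concreteness, this is the kind of incremental R-squared comparison being described – a hedged sketch with a hypothetical data file and column names, not the authors’ actual analysis code:

    # Sketch of the incremental-R^2 logic (file and column names are
    # hypothetical; the printed values are the paper's reported figures).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("decision_data.csv")  # hypothetical dataset

    crt_alone = smf.ols("doi ~ crt", data=df).fit()
    base = smf.ols("doi ~ extraversion + conscientiousness + openness"
                   " + agreeableness + neuroticism + decision_style",
                   data=df).fit()
    full = smf.ols("doi ~ crt + extraversion + conscientiousness + openness"
                   " + agreeableness + neuroticism + decision_style",
                   data=df).fit()

    print(crt_alone.rsquared)              # ~0.02: CRT alone
    print(full.rsquared - base.rsquared)   # ~0.01: what CRT adds over the rest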

Well, there are a few key candidate explanations:

1) The DOI itself might be argued to be problematic. Admittedly, it captures only half of good decision making: avoiding bad outcomes. Moreover, if you look at the questions, some of those bad outcomes can arise through calculated risk-taking. For example, lending money and not getting it back, or losing $1,000 on stocks, can be the result of a gamble that was clearly positive in expected value – not of a bad decision (see the sketch after this list). Additionally, other items, like having to spend $500 to fix a car you’d owned for less than half a year, seem to penalize bad luck: it’s really not your fault if a recently bought car turns out to have a manufacturing defect.

2) Most lab measures of decision making have a lot to do with numeracy. The real-life outcomes in the DOI, like forgetting your keys or getting divorced, do not – perhaps they are more about self-control than numeracy. One explanation could thus be that the CRT, being strongly connected to numeracy, explains lab outcomes but not DOI outcomes.

3) More worryingly, it could be that lab measures and real-life outcomes simply aren’t very well connected at all. I don’t think this is the reason, but some studies have failed to find an association between certain lab measures and real-life outcomes.
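
Returning to the first explanation: here is a toy calculation (my numbers, not the paper’s) of how a positive expected-value gamble can still produce one of the DOI’s bad outcomes:

    # Toy numbers: a bet with a 60% chance of winning $2,000
    # and a 40% chance of losing $1,000.
    p_win, gain = 0.60, 2000
    p_lose, loss = 0.40, 1000

    ev = p_win * gain - p_lose * loss
    print(ev)      # +800: positive expected value, so taking the bet is defensible
    print(p_lose)  # 0.40: yet this often it yields the DOI's
                   # "lost $1,000 on stocks" bad outcome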


Of these explanations, the first is the most favorable to the CRT: if the failure lies in the DOI itself, the CRT comes out fine. The second is a little worrying: it suggests the CRT is maybe not the magic bullet after all – maybe it’s tapping numeracy rather than cognitive reflection. The third would be the worst. If lab studies don’t relate to real outcomes, that calls into question the whole culture of doing lab studies as we’ve grown used to.

I don’t know enough to pass judgment on any of these causes, but at the moment I’m leaning towards a toss-up between options 1 and 2. The DOI as a measure is not my favourite: it seems to penalize things I consider just plain bad luck. From the perspective of the DOI, sitting at home doing nothing would count as good decision making. Option 3 is definitely too strong a conclusion to draw from this paper, or even from a few papers. What I’d like to see is a good meta-analysis of lab-reality correlations – though I’m not sure one exists.