I can’t properly emphasize how exciting the Cognitive Reflection Test (CRT) is. Since the CRT is just three questions – making it really easy to administer – it has been lauded as one of the most important findings in decision-making research for years. So far it has mostly been explored in the lab, but given its huge success there, and given that lab decision making tends to correlate with real-life decision making, it should predict real-life outcomes as well, right?
Well, not exactly, according to a new paper by Juanchich, Dewberry, Sirota, and Narendran.
They had 401 participants – recruited via a marketing agency – fill out the CRT, a decision style measure, a personality measure, and a decision outcomes measure.
The Decision Outcome Inventory (DOI) consists of 41 questions, each asking whether you’ve experienced a particular bad outcome, such as locking yourself out of a car, within the past 10 years. Each item is weighted by the percentage of people who avoided that outcome, since more serious negative outcomes, like going to jail, are more uncommon. Finally, the weighted average is subtracted from 0, yielding a score range from -1 to 0, where a higher score means better decision making.
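In code, that scoring rule reads roughly like the minimal sketch below. This is just my reading of the description above: the function name, the array layout, and the assumption that the weights are the raw avoidance proportions are mine, not the paper’s actual implementation.

```python
import numpy as np

def doi_score(experienced: np.ndarray, avoidance: np.ndarray) -> float:
    """Hypothetical DOI scoring, as described above.

    experienced: 0/1 vector over the 41 items (1 = bad outcome happened).
    avoidance:   proportion of respondents who avoided each outcome, so
                 rarer, more serious outcomes carry larger weights.
    Returns a score in [-1, 0]; higher (closer to 0) means better decisions.
    """
    # np.average normalizes the weights, so the weighted mean lies in [0, 1];
    # subtracting it from 0 gives the stated -1..0 range.
    return 0.0 - float(np.average(experienced, weights=avoidance))
```

Under this reading, a respondent who avoided all 41 outcomes scores 0, and one who experienced every single outcome scores -1.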
When they regress the decision outcome measure on the CRT, the Big Five personality scores, and the decision-style scores, the main result is this: the CRT explains 2% of the variance in the outcome measure on its own, but only 1% once the other measures are also included. In short, the CRT doesn’t really predict the decision outcomes. What’s going on?
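To see what “2% alone, 1% over the other measures” means operationally, here is a sketch of that kind of hierarchical regression comparison. Everything below is simulated – the effect sizes are made up – and only the structure (the DOI regressed on the CRT, Big Five, and decision styles with N = 401) mirrors the study; the sklearn-based approach is my choice, not the authors’.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated stand-in for the study's data (N = 401); all weights below
# are invented purely for illustration.
rng = np.random.default_rng(0)
n = 401
crt = rng.integers(0, 4, n).astype(float)   # CRT score: 0-3 items correct
big_five = rng.normal(size=(n, 5))          # five personality scores
styles = rng.normal(size=(n, 2))            # decision-style scores

# DOI scores weakly driven by all predictors plus noise (assumed weights).
doi = (0.05 * crt
       + big_five @ rng.normal(0.1, 0.05, size=5)
       + styles @ rng.normal(0.1, 0.05, size=2)
       + rng.normal(size=n))

def r2(X, y):
    """R-squared of an OLS fit of y on the columns of X."""
    return LinearRegression().fit(X, y).score(X, y)

r2_crt_alone = r2(crt.reshape(-1, 1), doi)
r2_others = r2(np.column_stack([big_five, styles]), doi)
r2_full = r2(np.column_stack([crt, big_five, styles]), doi)

print(f"R^2, CRT alone:             {r2_crt_alone:.3f}")
print(f"Delta R^2 added by the CRT: {r2_full - r2_others:.3f}")
```

The second printed number is the increment: how much predictive power the CRT adds once personality and decision styles are already in the model. In the paper, that increment is the unimpressive 1%.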
Well, there are a few key candidate explanations:
1) The DOI itself might be argued to be problematic. It admittedly captures only half of good decision making: avoiding bad outcomes. Moreover, if you look at the questions, some of those bad outcomes can arise through calculated risk-taking. For example, loaning money and not getting it back, or losing $1,000 on stocks, can be the results of a gamble that was clearly positive in expected value – not results of bad decisions (see the toy calculation after this list). Additionally, other items, like “you had to spend $500 to fix a car you’d owned for less than half a year”, seem to penalize bad luck: it’s hardly your fault if a recently bought car turns out to have a manufacturing defect.
2) Most lab measures of decision making have a lot to do with numeracy. The real-life outcomes in the DOI, like forgetting your keys or getting divorced, do not – perhaps they are more about self-control than numeracy. One explanation, then, is that the CRT is strongly tied to numeracy, and therefore predicts lab outcomes but not DOI outcomes.
3) More worryingly, it could be that lab measures and real-life outcomes simply aren’t very well connected at all. I don’t think this is the reason, but there have been some studies that failed to find an association between lab measures and real-life outcomes.
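To make the point in (1) concrete, here is a toy expected-value calculation. The probabilities and payoffs are made up by me; the point is only that a gamble which costs you $1,000 when it goes wrong – and which the DOI would count against you – can still have been a good call ex ante.

```python
# Toy calculation with assumed, made-up probabilities and payoffs.
p_win, gain = 0.6, 2_000.0   # upside: 60% chance of winning $2,000
p_lose, loss = 0.4, 1_000.0  # downside: 40% chance of the $1,000 loss the DOI counts

expected_value = p_win * gain - p_lose * loss
print(f"Expected value of the gamble: ${expected_value:,.0f}")  # -> $800
```

A positive expected value, yet four times out of ten the DOI records a bad decision outcome.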
Of these explanations, the first is the most favourable for the CRT: if the failure lies in the use of the DOI, then the CRT itself is off the hook. The second is a little worrying: it suggests the CRT is maybe not the magic bullet after all – maybe it’s tapping numeracy rather than cognitive reflection. Finally, the third would be the worst. If lab studies don’t relate to real outcomes, that of course calls into question the whole culture of doing lab studies as we’ve been used to.
I don’t know enough to pass judgment on any of these causes, but at the moment I’m leaning towards a toss-up between options 1 and 2. The DOI as a measure is not my favourite: it seems to penalize for things I consider just plain bad luck, and from its perspective, sitting at home doing nothing would count as good decision making. Option 3 is definitely too strong a conclusion to draw from this paper, or even from a few papers. What I’d like to see is a good meta-analysis of lab–reality correlations, though I’m not sure one exists.