Bias Hunter

Deeper Look at the Rationality Test

15/10/2015


 
Okay, so I promised to reveal my own results for last week's Rationality Test, and to take a deeper look at the questions while I'm at it. So here goes.

You might guess that – based on the fact that rationality is a big theme of this blog – I would receive the score "Rationalist". Well, you'd be half right. When I tried out the beta version of the test (with slightly different questions, I think), I got "Skeptic". Also, the more rationalist result on the second try was largely due to the fact that for some questions I knew what I was supposed to answer. I guess this shows there's still room for improvement. Anyway, what are you supposed to answer, and why? I'll go through the test question by question, providing short explanations and links to relevant themes or theories. At the end, I'll show how the questions relate to the skills and sub-categories.

Question-by-question analysis

1. LA movie
Well, this is just a classic case of sunk costs. You’ve invested time and money into something, but it’s not bringing the utility (or profit, or whatever) that you thought. If you abandon it, you abandon all chance of getting any utility out from the investment. However, as far as rationality is concerned, past costs are not relevant, since you can’t affect them anymore. The only thing that matters is the opportunity cost: should you stay in the movie, or rather do something else. If that something else brings more benefits, rationally you should be doing that.
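The decision rule can be sketched in a few lines of Python; the options and utilities below are made up for illustration:

```python
# Sunk-cost logic: the ticket price is a past cost, so only the utility of
# each remaining option should enter the decision.

def best_option(options, sunk_cost):
    """Pick the option with the highest future utility.

    `sunk_cost` is accepted only to make the point that it never
    enters the decision.
    """
    return max(options, key=options.get)

# Hypothetical utilities for the rest of the evening (the $15 ticket is sunk).
options = {"finish the movie": 2, "walk on the beach": 5, "go home and read": 4}
print(best_option(options, sunk_cost=15))  # -> walk on the beach
```

The point of the sketch is simply that `sunk_cost` appears nowhere in the comparison: only forward-looking utilities do.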

2. Music or job
This problem is a classic example of how to confuse your opponents in debates. You can see it in politics: suppose you're discussing school shootings with a hard-line conservative, who declares "Either we give guns to teachers, or we lower the minimum age to bear arms to 8!". Of course, you see right away that you're being presented with a false dichotomy: there are many other ways to prevent shootings – like banning all arms altogether. But to skew the debate and push you into accepting something you don't like, your opponent tries to lure you into the these-are-the-only-two-options trap.

3. Doughnut-making machine
Now, this question is basically just simple arithmetic. The trick is that the answer that immediately comes to mind is incorrect, i.e. a false System 1 response. What you need to do is mentally check that number, see that it is wrong, and use System 2 to produce the right answer. The question itself is a rephrasing of one of the questions in the classic Cognitive Reflection Test.
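The exact wording isn't quoted in the post, but assuming it mirrors the classic CRT "widgets" item, the arithmetic looks like this:

```python
# Assumed wording, mirroring the classic CRT item: "If it takes 5 machines
# 5 minutes to make 5 doughnuts, how long would it take 100 machines to
# make 100 doughnuts?"

def minutes_needed(machines, doughnuts):
    # 5 machines / 5 minutes / 5 doughnuts => each machine makes one
    # doughnut per 5 minutes, and the machines work in parallel.
    minutes_per_doughnut = 5
    return doughnuts * minutes_per_doughnut / machines

print(minutes_needed(100, 100))  # -> 5.0 (System 2), not the intuitive 100
```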

4. Fixating on improbable frightening possibilities
I'm a little puzzled by this question. Sure, I understand the point: if you're always fixating on really unlikely bad things, you're doing something wrong. Still, I find it hard to imagine anyone actually being like this!

5. The dead fireman
Now, the point of this question was to see how many possible causes you would think of before settling on one. The idea is, naturally enough, confirmation bias: we too often fix on a particular explanation and then immediately jump to looking for confirming evidence. In complex problems this is especially dangerous since, as we all know, if you torture the data enough with statistics, you can make it confess to anything.

6. Things take longer
Well, I presume this simple self-report question is just measuring your susceptibility to the planning fallacy.

7. Bacteria dish
This question has the same idea as the Doughnut-making machine. You get a System 1 answer, suppress it, and (hopefully) answer correctly with System 2. This question is also a rephrase of a question from the Cognitive Reflection Test.
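Again the wording isn't quoted, but assuming it mirrors the CRT's doubling ("lily pad") item, the System 2 answer is just one doubling back:

```python
# Assumed version of the item: a bacteria patch doubles in size every day
# and covers the whole dish on day 48. When did it cover half the dish?

full_coverage_day = 48
half_coverage_day = full_coverage_day - 1  # one doubling earlier, not 48 / 2

print(half_coverage_day)  # -> 47, not the intuitive System 1 answer of 24
```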

8. Refuting arguments
Being able to refute other people's arguments is a clear sign of rhetorical skill and logical thinking.

9. Budgeting time for the essay
This question checks for the planning fallacy. We're often far too optimistic about the time it takes to complete a project. For example, I'm still writing a paper I expected to have ready for submission in May! In this question, you were awarded full points for budgeting at least 3 weeks for the essay, i.e. the average of the previous essay completion times.

10. Learning from the past
This is a simple no-trick question that honestly asks whether you learn from your mistakes. I answered, honestly, that I often do, but sometimes I end up repeating mistakes.

11. BacoNation sales and the ad campaign
This checked your ability to use statistical reasoning. True enough, sales have risen compared to the previous month, but overall the sales have varied enough to make it plausible that the ad campaign had no effect. In fact, if you run pnorm(44.5, mean(data), sd(data)) in R, you get 0.12167, which implies it's plausible that the September figure comes from the same normal distribution as the earlier months. This makes an effect of the ads only somewhat likely.
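The same kind of check can be sketched with Python's standard library. The monthly figures below are invented (the post's 0.12167 comes from the test's actual data, which isn't listed), so only the method carries over:

```python
from statistics import NormalDist

# Hypothetical monthly sales figures for the earlier months.
sales = [38, 41, 35, 44, 39, 36, 42, 40]
september = 44.5

# Fit a normal distribution to the past months and ask how surprising
# a month at least as good as September would be under it.
dist = NormalDist.from_samples(sales)
tail = 1 - dist.cdf(september)
print(round(tail, 3))
```

If the tail probability isn't tiny, a September like this is plausible under business as usual, and the ad campaign gets little credit.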

12. Sitting in the train
So this is the first of the two questions that check how much you value your time. Of course, the point here is that you ought to be consistent. Unfortunately, there may be valid arguments for claiming that you value time on holiday and at home differently, due to differing opportunity costs. See question 20 below for more explanation.

13. Value of time
This question simply asks whether you find it easy or difficult to value your time. Unsurprisingly, the easier you find it the higher your points.

14. One or two meals
Would you rather have one meal now or two meals in a year? This is measuring the discounting of time. Assuming that you’re not starved of food, you presumably should discount meals in the same way as money, since money can obviously buy you meals. See question 21 below for a longer explanation.

15. Continue or quit
Another one of those self-report questions, this is basically asking whether you have fallen into the sunk cost trap.

16. 45 or 90
Here’s another question about time discounting, this time with money. The same assumptions hold as before: we’re assuming you are not in desperate need of money. If that holds, you should discount the same way over all time ranges.

17. Certainty of theory
Can a theory be certain? If you're a Bayesian (and why wouldn't you be, right?), you can never assign a theory 100% certainty (let's ignore tautologies and contradictions here). In a Bayesian framework, a prior of 1 means that no matter what evidence you observe, the theory can never be proven wrong, because that prior overrides any evidence for or against it.
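A two-line Bayes update makes the point concrete; the likelihoods here are invented for illustration:

```python
# With a prior of exactly 1, Bayes' rule returns 1 no matter how strongly
# the evidence favours the alternative.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Evidence 99x more likely if the theory is false:
print(posterior(1.0, 0.01, 0.99))   # -> 1.0, untouched by the evidence
print(posterior(0.95, 0.01, 0.99))  # drops sharply, as it should
```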

18. 100 vs 200
Another discounting question, this time with slightly different amounts of money. Once again, you should discount the same way and choose whatever you chose before. Note that we're also assuming the 100/200 amounts are close enough to the 45/90 decision – if the amounts were in the millions, that might change things considerably.

19. Big assignment
The big assignment vs. small assignments is just a self-report measure to investigate your planning skills.

20. Paying for a task
This question is a sister question to the one where you're sitting on the train. I presume the point is that your valuation of one hour should be the same in both questions. However, we can ask whether the situations really are the same. In one, you're on holiday, and sitting on a train in a new city can have positive value in itself. What's more, on holiday the opportunity costs are different: you're not really trading off time against working hours, because the point of the holiday mindset is precisely to set aside the possibility of work, so you can enjoy whatever you're doing – like sitting on a train in a new city. In this question, you're trying to avoid a task at home, where the opportunity costs of one hour may well be different. For example, if you have a job you can do from home, you could be working, or going out with friends, and so on.

21. 45 or 90
Well, this is of course part of the other time discounting questions. Here we have the same 45/90 amounts, but the time has been shifted for one year to the future. Again, you should choose whatever you chose before.

All these questions had a similar format:
A dollars in time t vs. B dollars in time t+T

If you're perfectly rational, you should discount the same way between the intervals [now, 3 months] and [1 year, 1 year 3 months]. The reason is quite simple: if you're willing to wait the extra three months when both payoffs are a year away, but not when the lower amount is immediate, you will end up reversing your decision as the dates approach. And if you already know you will change it, why not choose that option right away? Hence, you should be consistent. (If you really need an academic citation, here is a good place to start.)
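This consistency property is exactly what separates exponential from hyperbolic discounting. A small sketch, using the test's 45/90 amounts and time points but made-up discount parameters:

```python
# Exponential discounting is time-consistent; hyperbolic discounting
# (a common descriptive model of human choice) produces reversals.

def exponential(amount, months, monthly_factor=0.9):
    return amount * monthly_factor ** months

def hyperbolic(amount, months, k=0.5):
    return amount / (1 + k * months)

for name, discount in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    prefer_wait_now = discount(45, 0) < discount(90, 3)      # 45 now vs 90 in 3 months
    prefer_wait_later = discount(45, 12) < discount(90, 15)  # same gap, shifted a year out
    verdict = "consistent" if prefer_wait_now == prefer_wait_later else "preference reversal"
    print(name, verdict)  # exponential consistent; hyperbolic preference reversal
```

Under the hyperbolic curve, the agent prefers 90-in-three-months when both dates are a year away, but switches to 45-now once the smaller reward is immediate, which is the reversal the post describes.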
 
The score
 So how do these questions make up your score?
If you look at the URL of your results report, you probably see something like
https://www.guidedtrack.com/programs/1q59zh4/run?normalpoints=33&sunkcost=4&planning=3&explanationfreeze=3&probabilistic=4&rhetorical=4&analyzer=3&timemoney=4&intuition=14&future=14&numbers=16&evidence=14&csr=8&enjoyment=0&reportshare=0
You can use those parameters to look up your score by category.
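Since the scores are just query parameters, they can be read out of the report URL directly:

```python
from urllib.parse import urlparse, parse_qs

# The results URL from above, scores embedded as query parameters.
url = ("https://www.guidedtrack.com/programs/1q59zh4/run?normalpoints=33"
       "&sunkcost=4&planning=3&explanationfreeze=3&probabilistic=4"
       "&rhetorical=4&analyzer=3&timemoney=4&intuition=14&future=14"
       "&numbers=16&evidence=14&csr=8&enjoyment=0&reportshare=0")

# parse_qs returns lists of strings; take the first value of each and cast.
scores = {key: int(values[0]) for key, values in parse_qs(urlparse(url).query).items()}
print(scores["sunkcost"])  # -> 4
print(scores["numbers"])   # -> 16
```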
That’s all for this week, happy rationality (or rationalizing?) to all of you! :)

Test Your Rationality

6/10/2015


 
As a decision scholar, I'm a firm believer in the benefits of specialization. If someone is really good at something, it's often better to rely on them for that, and to focus your own efforts where you're personally most beneficial to others and society at large. Of course, this principle has to apply to all agents – including myself. With that in mind, I'm going to write a feature post about something a certain someone else does – and does much better than I do.

Enter Spencer Greenberg. I’ve talked to Spencer over email a couple of times, and he’s really a great and enthusiastic guy. But that’s not the point. The point is that he does a great service to the community by producing awesome tests, which you can use to educate yourself, your partner or anyone you come across about good decision making. What’s even better is that the tests are done with the right kind of mindset: they’re well backed up by actual, hard science. What this means is that the questions make sense – there’s none of that newspaper-clickbait “find your totem animal” kind of stuff. There’s proper, science-backed measuring. Even better, the tests have been written in a way anyone can understand. You don’t need to be a book-loving nerdy scholar to gain some insights!
Now, I’ve always wanted to bring something to the world community. And a while ago, I thought maybe I could produce some online tests about decision making. But after seeing these tests, I’ll just tip my hat and say that it’s been done way better than I ever could have! Congrats!
And now, enough of the babbling: go here to test yourself! (For comparison, a reflection of my results can be seen in next week’s post :)

CRT and real-life decision making

19/5/2015


 
The Cognitive Reflection Test (CRT) is a really neat instrument: consisting of just three simple questions, it has been shown to be a good predictor of successful decision making in laboratory studies. For example, higher CRT scores have predicted success in probabilistic reasoning, avoidance of probabilistic biases and overconfidence, and success in intertemporal choice. What's more, the CRT explains success in these tasks over and above the contributions of intelligence, executive function, and thinking dispositions.

I can't properly emphasize how exciting the CRT is. Especially since the CRT is just three questions – making it really easy to administer – it has been lauded as one of the most important findings in decision making research in years. So far it's mostly been explored in the lab, but since it succeeds so well there, and since lab decision making tends to correlate with real-life decision making, it should predict real-life outcomes as well, right?

Well, not exactly, according to a new paper by Juanchich, Dewberry, Sirota, and Narendran.

They had 401 participants – recruited via a marketing agency – fill out the CRT, a decision style measure, a personality measure, and a decision outcomes measure.

The Decision Outcome Inventory consists of 41 questions asking whether you've experienced a bad outcome, such as locking yourself out of a car, within the past 10 years. The scores are weighted by the percentage of people who avoided each outcome, since more serious negative outcomes, like going to jail, are rarer. Finally, the weighted average is subtracted from 0, yielding a score range from -1 to 0, where a higher score means better decision making.
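The scoring rule as described can be sketched in a few lines; the outcomes and avoidance rates below are made up, not the actual DOI weights:

```python
# DOI-style scoring sketch: each experienced bad outcome is weighted by the
# share of respondents who avoided it (rarer = more serious = heavier), and
# the weighted average is subtracted from 0.

avoidance_rates = {  # hypothetical share of people who avoided each outcome
    "locked out of car": 0.60,
    "lost money on a loan": 0.85,
    "jail": 0.99,
}
experienced = {  # one hypothetical respondent's answers
    "locked out of car": True,
    "lost money on a loan": False,
    "jail": False,
}

weighted = [avoidance_rates[o] * experienced[o] for o in avoidance_rates]
score = 0 - sum(weighted) / len(weighted)  # ranges from -1 (worst) to 0 (best)
print(round(score, 2))  # -> -0.2
```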

When they regress the decision outcome measure on the CRT, Big Five personality scores and the decision styles scores, this is what they get:
[Table: regression of decision outcomes on CRT, Big Five, and decision-style scores]
What we see here is that the only statistically significant predictors are Extraversion and Conscientiousness. CRT has common variance with the other predictors and thus doesn’t reach significance.

The main result: the CRT explains 2% of the variance in the outcome measure, but only 1% if the other measures are also included. In short, the CRT doesn’t really predict the decision outcomes. What’s going on?

Well, there are a few key candidates for the explanation:

1)      The DOI itself might be argued to be problematic. It admittedly captures only half of good decision making: avoiding bad outcomes. Moreover, if you look at the questions, some of those bad outcomes can arise through calculated risk-taking. For example, lending money and not getting it back, or losing $1,000 on stocks, can be the result of a gamble with a very positive expected value – not of a bad decision. Additionally, other items, like having to spend $500 to fix a car you'd owned for less than half a year, seem to penalize bad luck: it's really not your fault if a recently bought car has a manufacturing defect.

2)      Most lab measures of decision making have a lot to do with numeracy. However, the real-life outcomes in the DOI, like forgetting your keys or getting divorced, do not. Perhaps they are more about self-control than numeracy. One explanation thus could be that since the CRT is strongly connected to numeracy, it therefore explains lab outcomes but not the DOI outcomes. 

3)      More worryingly, it could be that lab studies and real-life outcomes are simply not very well connected at all. I don't think this is the reason, but there have been some studies that failed to find an association between lab measures and real-life outcomes.


Of these explanations, the first is the kindest to the CRT: if the failure lies in the DOI, then the CRT itself comes out fine. The second is a little worrying: it suggests the CRT may not be a magic bullet after all – maybe it's tapping numeracy rather than cognitive reflection. The third would be the worst. If lab studies don't relate to real outcomes, it calls into question the whole culture of doing lab studies as we've been used to.

I don't know enough to pass judgment on any of these causes, but at the moment I'm leaning towards a toss-up between options 1 and 2. The DOI as a measure is not my favourite: it seems to penalize things I consider just plain bad luck. From the perspective of the DOI, sitting at home doing nothing would count as good decision making. Option 3 is definitely too strong a conclusion to draw from this paper, or even from a few papers. What I'd like to see is a good meta-analysis of lab-reality correlations – though I'm not sure one exists.

Good Sources About Decision Making

3/2/2015


 
Everyone knows Daniel Kahneman's Thinking, Fast and Slow. But if you've already read it, or are otherwise familiar enough with the material for it to have low marginal benefit, what could you study to deepen your knowledge about decisions? Well, here are a few sources I've found beneficial. To find more, you can check out my resources page!

TED talks

In the modern world, we’re all busy. So if you don’t want to invest tens of hours into books, but just want a quick glimpse with some food for thought, there are of course a few good TED talks around. For example:

Sheena Iyengar: The Art of Choosing

Iyengar is so far the only well-known scholar discussing choice in a multicultural context. Do we all perceive alternatives similarly? Does more choice mean more happiness? With intriguing experiments, Iyengar shows that the answer is: it depends. It depends on the kind of culture you're from.

Gerd Gigerenzer: The Simple Heuristics that Make Us Smart

Gigerenzer is known as one of the main antagonists of Kahneman. In this talk, he discusses some heuristics and how in his opinion they’re more rational than the classical rationality which we often consider to be the optimal case.

Dan Ariely: Are we in control of our own decisions?

Dan Ariely is a ridiculously funny presenter. For that entertainment value alone, the talk is well worth watching. Additionally, he shows nicely how defaults influence our decisions, and how a complex choice case makes it harder to overcome the status quo bias.

Books

Even though TED talks are inspiring, nothing beats a proper book! With all their data and sources to dig deeper, any of these books is a good starting point for an inquiry into decisions.


Reid Hastie & Robyn Dawes: Rational Choice in an Uncertain World

For a long time, I was annoyed that there didn't seem to be a good, non-technical introduction to the field of decision making. Kahneman's book was too long and too focused on his own research. Then I came across this beauty. In just a little over 300 pages, Hastie & Dawes go through all the major findings in behavioral decision making, and also throw in a lesson or two about causality and values. Definitely worth a read if you haven't gotten into decision making before – and even if you have, because then you can skim some parts and concentrate on the nuggets most useful to you.

Jonathan Baron: Thinking and Deciding

Speaking of short books – this is not one of them. This is THE book in the field of decision making. A comprehensive volume of over 500 pages, it covers all the major topics: probability, rationality, normative theory, biases, descriptive theory, risk, moral judgment. Of course, there's much, much more to each of these topics, but as an overview this book does an excellent job. It's no secret that this book sits only a meter away from my desk; that's how often I tend to read it.

Keith Stanovich: The Robot’s Rebellion - Finding Meaning in the Age of Darwin

This book may be 10 years old, but it’s still relevant today. Stanovich describes beautifully the theory of cognitive science around decisions, Systems 1 and 2 and so on. He proceeds to connect this to Dawkinsian gene/meme theory, resulting in a guide to meaning in the scientific and Darwinian era.

Intelligence Doesn't Protect from Biases

16/9/2014


 
Perhaps one of the most common misconceptions about biases is that they only happen to dumb people. However, this is - to be clear - false. I think there are a few reasons why this misconception persists.

Firstly, a lot of bias experiments seem really obvious after the correct answer has been revealed. This plays directly into our hindsight bias – also aptly named the knew-it-all-along effect – in which the answer makes us think “oh, I wouldn’t have fallen into that trap”. Well, as the data shows, you most likely would have.

A second reason is that the popular everyday conception of intelligence implies, roughly, that "more intelligence = more good stuff". Unfortunately, this simplistic rule fails here. Intelligence, in scientific terms, is cognitive ability: computational power. With biases, lack of power is not the issue; the issue is that we don't see or notice how to use that power in the right way. It's as if someone were trying to hammer nails by hitting them with the handle end. Sure, we could give him a bigger hammer, but the reasonable solution is simply to turn the hammer around.
[Image: Special Offer – The Hammer of Rationality]
Stanovich & Stanovich (2010, p. 220) summarize in their paper why intelligence does not help much with rational thinking:

[--] while it is true that more intelligent individuals learn more things than less intelligent individuals, much knowledge (and many thinking dispositions) relevant to rationality are picked up rather late in life. Explicit teaching of this mindware is not uniform in the school curriculum at any level. That such principles are taught very inconsistently means that some intelligent people may fail to learn these important aspects of critical thinking. 

In their paper they also tabulate which biases or irrational dispositions have an association with intelligence, and which have not (Stanovich & Stanovich, 2010, p. 221):
[Table: biases and their associations with intelligence (Stanovich & Stanovich, 2010, p. 221)]
Now, I'm not going to go through the list in more detail; the point is just to show that there are plenty of biases with no relation to intelligence, and that even for the others the association is still quite low (.20–.35). In practice, such a low correlation means that intelligence is not a dominant factor: dumb people can be rational, and intelligent people can be irrational.

Now, some might find this lack of association with intelligence a dystopian thought: if intelligence is of no use here, what can we do? To be absolutely clear, I'm not saying that we are doomed to suffer from these biases forever. Even though intelligence does not help, we can still help ourselves by being aware of the biases and learning better reasoning strategies. Most biases arise when our System 1 heuristics get out of hand. What we need in those situations is better mindware, complemented by slower and more thorough reasoning.

Thankfully, that can be learned.
