Bias Hunter

Deeper Look at the Rationality Test

15/10/2015

Okay, so I promised to reveal my own results for last week’s rationality test, and also take a deeper look at the questions while I’m at it. So here goes.

You might guess that – based on the fact that rationality is a big theme of this blog – I would receive the score “Rationalist”. Well, you’d be half right. When I tried the beta version of the test (with slightly different questions, I think), I got “Skeptic”. Also, the more rationalist result on the second try owed a lot to the fact that for some questions I knew what I was supposed to answer. I guess this shows there’s still room for improvement. Anyway, what are you supposed to answer, and why? I’ll go through the test question by question, providing short explanations and links to relevant themes or theories. At the end, I’ll show how the questions relate to the skills and sub-categories.

Question-by-question analysis

1. LA movie
Well, this is just a classic case of sunk costs. You’ve invested time and money into something, but it’s not bringing the utility (or profit, or whatever) that you expected. If you abandon it, you abandon all chance of getting any utility out of the investment. However, as far as rationality is concerned, past costs are not relevant, since you can’t affect them anymore. The only thing that matters is the opportunity cost: should you stay in the movie, or would something else be a better use of your time? If that something else brings more benefits, rationally you should be doing that.

2. Music or job
This problem is a classic example of how to confuse your opponents in debates. You can see it in politics: suppose you’re talking about school shootings with a hard-line conservative who declares, “Either we give guns to teachers or we lower the right to bear arms to 8 years old!” Of course, you see right away that you’re being presented with a false dichotomy: there are many other ways to prevent shootings – like banning all arms altogether. But to skew the debate and put you in a position where you have to accept something you don’t like, your opponent tries to lure you into the these-are-the-only-two-options trap.

3. Doughnut-making machine
Now, this question is basically just simple arithmetic. However, the trick is that the answer that immediately comes to mind is incorrect, i.e., a false System 1 response. Instead, what you need to do is mentally check that number, see that it is wrong, and use System 2 to produce the right answer. The question itself is a rephrasing of one of the questions in the classic Cognitive Reflection Test.
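To make the mechanics concrete, here is the original bat-and-ball item that this question rephrases, checked in R (the doughnut version’s exact numbers aren’t reproduced here):

    # A bat and a ball cost $1.10 together; the bat costs $1.00 more
    # than the ball. System 1 blurts out "$0.10" for the ball.
    total <- 1.10
    diff  <- 1.00
    ball  <- (total - diff) / 2   # from ball + (ball + diff) == total
    ball          # 0.05 -- not the intuitive 0.10
    ball + diff   # the bat costs 1.05; together 1.10, as required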

4. Fixating on improbable frightening possibilities
I’m a little puzzled by this question. Sure, I understand the point: if you’re always fixating on really unlikely bad things, you’re doing something wrong. Still, I find it hard to see how anyone would actually be like this!

5. The dead fireman
Now, the point of this question was to see how many possible causes you would think of before deciding on one. The idea is, naturally enough, confirmation bias. Too often we think of one explanation and then immediately jump to looking for confirming evidence. In complex problems this is especially dangerous since, as we all know, if you torture the numbers long enough with statistics, you can make them confess to anything.

6. Things take longer
Well, I presume this simple self-report question just measures your susceptibility to the planning fallacy.

7. Bacteria dish
This question has the same idea as the doughnut-making machine: you get a System 1 answer, suppress it, and (hopefully) answer correctly with System 2. This question is also a rephrasing of one from the Cognitive Reflection Test.
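Again, the original CRT item makes the trick clear. A patch of lily pads doubles in size every day and covers the whole lake on day 48; System 1 wants to halve the days, but doubling means the lake was half covered just one day before the end. A quick check in R:

    coverage <- function(day) 2^(day - 48)   # full coverage (1) on day 48
    coverage(47)   # 0.5   -> half covered on day 47, the correct answer
    coverage(24)   # ~6e-8 -> the intuitive "24" is nowhere near half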

8. Refuting arguments
Being able to refute other people’s arguments is a clear sign of rhetorical skill and logical thinking.

9. Budgeting time for the essay
This question checks for the planning fallacy. Often, we’re way too optimistic about the time it takes to complete a project. For example, I’m still writing a paper I expected to be ready for submission in May! In this question, you were awarded full points for assigning at least 3 weeks for the essay, i.e., the average of the previous essay completion times.
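The remedy is the outside view: base the estimate on how long similar projects actually took, not on your plan. A minimal sketch in R, with made-up past durations that happen to average to the rewarded 3 weeks:

    past_essays <- c(2, 3, 4)   # hypothetical completion times, in weeks
    mean(past_essays)           # 3 -> budget at least this much time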

10. Learning from the past
This is a simple no-trick question that honestly asks whether you learn from your mistakes. I answered honestly that I often do, but sometimes end up repeating my mistakes.

11. BacoNation sales and the ad campaign
This checked your ability to use statistical reasoning. True enough, sales have risen compared to the previous month, but all in all the sales have varied enough to make it plausible that the ad campaign had no effect. In fact, if you compute pnorm(44.5, mean(data), sd(data)) in R, you get 0.12167, which implies that it’s plausible the September number comes from the same normal distribution as the earlier months. This makes an effect of the ads only somewhat likely.
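For the curious, here is roughly what that computation looks like with made-up numbers (the test’s actual sales data isn’t reproduced here):

    # Hypothetical pre-campaign monthly sales, in thousands
    sales <- c(38, 46, 40, 35, 43, 39, 44, 37)
    september <- 44.5
    # Chance of a month at least this good if nothing had changed,
    # assuming the old months come from one normal distribution
    1 - pnorm(september, mean = mean(sales), sd = sd(sales))
    # A tail probability around 0.1 means "same distribution" is
    # still plausible -- only weak evidence for the campaign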

12. Sitting in the train
So this is the first of the two questions that check how much you value your time. Of course, the point is that you ought to be consistent. Unfortunately, there may be valid arguments for valuing time on holiday and time at home differently, due to differing opportunity costs. See question 20 below for more explanation.

13. Value of time
This question simply asks whether you find it easy or difficult to put a value on your time. Unsurprisingly, the easier you find it, the higher your points.

14. One or two meals
Would you rather have one meal now or two meals in a year? This measures time discounting. Assuming you’re not starving, you should presumably discount meals the same way as money, since money can obviously buy you meals. See question 21 below for a longer explanation.

15. Continue or quit
Another of those self-report questions: this one basically asks whether you have fallen into the sunk cost trap.

16. 45 or 90
Here’s another question about time discounting, this time with money. The same assumptions hold as before: we’re assuming you are not in desperate need of money. If that holds, you should discount the same way over all time ranges.

17. Certainty of theory
Can a theory be certain? If you’re a Bayesian (and why wouldn’t you be, right?), you can never assign a theory 100% certainty (let’s ignore tautologies and contradictions here). Doing so would mean that no matter what evidence you observe, the theory can never be proven wrong, because a prior of 1 is immune to any evidence for or against it.
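A two-line Bayes update shows why. With any prior short of 1 the evidence moves you; with a prior of exactly 1 it never does:

    # Posterior P(H|E) by Bayes' rule
    bayes_update <- function(prior, p_e_given_h, p_e_given_not_h) {
      prior * p_e_given_h /
        (prior * p_e_given_h + (1 - prior) * p_e_given_not_h)
    }
    # Evidence 99 times more likely if the theory is false:
    bayes_update(0.90, 0.01, 0.99)   # ~0.083 -- a confident prior caves
    bayes_update(1.00, 0.01, 0.99)   # 1 -- certainty ignores all evidence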

18. 100 vs 200
Another discounting question, this time with slightly different amounts of money. Once again, you should discount the same way and choose whatever you chose before. Note that we’re also assuming the 100/200 amounts are close enough to the 45/90 decision – if the amounts were in the millions, that might change things a lot.

19. Big assignment
The big assignment vs. small assignments is just a self-report measure to investigate your planning skills.

20. Paying for a task
This question is a sister to the one where you’re sitting on the train. I presume the point is that your valuation of one hour should be the same in both questions. However, we can question whether the situations really are the same. In one, you’re on holiday, and sitting on a train in a new city can have positive value in itself. What’s more, on holiday the opportunity costs are different. I’m not really trading off time against working hours, because the point of the holiday mindset is precisely to set aside the possibility of work, so I can enjoy whatever I’m doing – like sitting on a train in a new city. In this question, you’re trying to avoid a task at home, where the opportunity costs of one hour may well be different than when you’re on holiday. For example, if you have a job you can do from home, you could be working, or going out with friends, and so on.

21. 45 or 90
Well, this is of course part of the other time discounting questions. Here we have the same 45/90 amounts, but the time has been shifted one year into the future. Again, you should choose whatever you chose before.

All these questions had the same format:
A dollars at time t vs. B dollars at time t+T

If you’re perfectly rational, you should discount the same way between the ranges [now, 3 months] and [1 year, 1 year 3 months]. The reason is quite simple: if you’re willing to wait the extra three months when both payments are a year away, but not when the smaller amount is available immediately, you will end up reversing your decision as time passes. And if you already know you will change it, why not choose that option right away? Hence, you should be consistent. (If you really need an academic citation, here is a good place to start.)
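Here’s a small illustration of that consistency point in R. The discount parameters are invented; the structure is what matters. An exponential discounter applies the same factor per month, so the comparison between A at t and B at t+T depends only on T. A hyperbolic discounter (which is closer to how people actually behave) can flip as t grows:

    exp_val <- function(amount, t, delta = 0.98) amount * delta^t  # t in months
    exp_val(45, 0)  > exp_val(90, 3)    # FALSE: wait for the 90
    exp_val(45, 12) > exp_val(90, 15)   # FALSE: same choice a year out

    hyp_val <- function(amount, t, k = 0.5) amount / (1 + k * t)
    hyp_val(45, 0)  > hyp_val(90, 3)    # TRUE: grab the 45 when it's immediate
    hyp_val(45, 12) > hyp_val(90, 15)   # FALSE: happy to wait when both are far off

That flip is exactly the preference reversal the test is probing for.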
 
The score

So how do these questions make up your score?
If you look at the URL of your results report, you probably see something like
https://www.guidedtrack.com/programs/1q59zh4/run?normalpoints=33&sunkcost=4&planning=3&explanationfreeze=3&probabilistic=4&rhetorical=4&analyzer=3&timemoney=4&intuition=14&future=14&numbers=16&evidence=14&csr=8&enjoyment=0&reportshare=0
You can use that to look at your score by category, for example in my case:
[Image: my scores by category]
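If you want to pull those numbers apart yourself, here’s a quick-and-dirty way in base R (URL abbreviated here):

    url <- "https://www.guidedtrack.com/programs/1q59zh4/run?normalpoints=33&sunkcost=4&planning=3"
    query <- sub(".*\\?", "", url)   # drop everything up to the '?'
    pairs <- strsplit(strsplit(query, "&", fixed = TRUE)[[1]], "=", fixed = TRUE)
    scores <- setNames(as.numeric(vapply(pairs, `[`, "", 2)),
                       vapply(pairs, `[`, "", 1))
    scores   # named vector of category scores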
That’s all for this week, happy rationality (or rationalizing?) to all of you! :)

Test Your Rationality

6/10/2015

As a decision scholar, I’m a firm believer in the benefits of specialization. If someone is really good at something, it’s often better to rely on them on that issue, and to focus your own efforts where you’re the most beneficial to others and society at large. Of course, this principle has to apply across all agents – including myself. With that in mind, I’m going to write a feature post about something a certain someone else does – and does much better than I do.

Enter Spencer Greenberg. I’ve talked to Spencer over email a couple of times, and he’s really a great and enthusiastic guy. But that’s not the point. The point is that he does a great service to the community by producing awesome tests, which you can use to educate yourself, your partner, or anyone you come across about good decision making. What’s even better is that the tests are made with the right kind of mindset: they’re well backed up by actual, hard science. The questions make sense – there’s none of that newspaper-clickbait “find your totem animal” kind of stuff. There’s proper, science-backed measuring. Even better, the tests are written in a way anyone can understand. You don’t need to be a book-loving nerdy scholar to gain some insights!

Now, I’ve always wanted to give something back to the community, and a while ago I thought maybe I could produce some online tests about decision making myself. But after seeing these tests, I’ll just tip my hat and say that it’s been done way better than I ever could have! Congrats!
And now, enough of the babbling: go here to test yourself! (For comparison, a reflection of my results can be seen in next week’s post :)

Failures of System 2 in a New Place – in Mannheim

24/9/2015

Well, hey there! If you’re reading this – thanks for still following this pipeline! :)

It’s been a little hectic at this end of the Web. Between two conferences, four paper drafts (no, they’re still not finished), getting married, and moving to Germany, there’s been a certain lack of time for this project! Now, however, things are settling back to more or less normal, which – thank goodness – means I can pick this blog up again.

If you know me IRL you might have heard already, but anyway: I’m fortunate to be spending the next 12 months at the University of Mannheim as a visiting PhD student. Needless to say, I’m very excited! The Department of Banking and Finance seems great, and full of awfully nice people. Also, taking a walk around the main building is certainly awe-inspiring:
[Image: the university’s main building]
A random finding of the last weeks is that going offline is not necessarily bad for productivity. I travel to Mannheim daily by train, which takes about 1.5 hours, depending on how many Gleisstörungen (track disruptions) or other delays Deutsche Bahn happens to throw my way. Since I don’t have a German mobile yet, I’m on the train without a Web connection. Originally, I thought this was going to be a problem: it’s hard to program anything when you can’t read Stack Overflow, and it’s hard to read articles when you can’t look up anyone’s comments on the paper. But, as I found out, it’s also hard to pretend you’re working when you can’t waste time on Facebook! So far, the daily train rides have worked well for my productivity, resulting in a lot of concentrated reading, data analysis – and this text!

On the other hand, I remember reading somewhere that pretty much any change in the environment increases productivity at first, but the effect wears off in a few weeks. Well, I guess we’ll see about that soon enough!

Another interesting thing is how going abroad shows the importance of System 1. I hadn’t really remembered just how much System 1 is the guiding light of everyday life. I mean, when you buy something from the grocery store, it’s mostly the same stuff as before. When you take a bus, it’s the same bus. When you walk to the gym, it’s where it’s always been and the equipment is exactly the same. All of this changes when you move abroad.

When I wanted to buy cream for a sauce, I had to spend 10 minutes looking at different packages, trying in vain to determine which kind of cream each one held. At the gym, I spent a lot of time looking for the right equipment. And when I take a bus or train anywhere, it takes half an hour to plan the trip – especially since I didn’t have mobile Internet until just a few days ago. Everything for which I could normally have relied on System 1 is now the business of System 2 instead. So instead of lazily cruising through my day, I spend an inordinate amount of time having to think things through. Having to weigh options and choose carefully. Having to look for information, since there’s no schema in my head.

If you like, you could say this shows that the fully rational model is not a good model to strive for. In a certain sense, you could be right. On the other hand, all of these everyday choices are very small ones. So from the perspective of a meta-choice strategy, it makes total sense to relegate them to System 1. It really doesn’t matter very much which cream I buy, so it’s a good heuristic to just buy what I’ve tried before and know will work out. I really don’t want to spend my limited mental resources on rationally comparing the different cream packets. Because – as I’ve seen here – doing that will tire you out really quickly. Better to rely on heuristics.

CRT and real-life decision making

19/5/2015

The Cognitive Reflection Test (CRT) is a really neat instrument: consisting of just three simple questions, it has been shown to be a good predictor of successful decision making in laboratory studies. For example, higher CRT scores have predicted success in probabilistic reasoning, avoidance of probabilistic biases and overconfidence, and better intertemporal choices. What’s more, the CRT explains success in these tasks over and above the contributions of intelligence, executive function, and thinking dispositions.

I can’t properly emphasize how exciting the CRT is. Especially since it’s just three questions – making it really easy to administer – it has been lauded as one of the most important findings in decision making research in years. So far, it’s mostly been explored in the lab; but since its success in the lab is huge, and since lab decision making tends to correlate with real-life decision making, it should predict real-life outcomes as well, right?

Well, not exactly, according to a new paper by Juanchich, Dewberry, Sirota, and Narendran.

They had 401 participants – recruited via a marketing agency – fill out the CRT, a decision style measure, a personality measure, and a decision outcomes measure.

The Decision Outcome Inventory consists of 41 questions asking whether you’ve experienced a bad outcome, such as locking yourself out of a car, within the past 10 years. Each item is weighted by the percentage of people who avoided that outcome, since more serious negative outcomes, like going to jail, are rarer. Finally, the weighted average is subtracted from 0, yielding a score range from -1 to 0, where a higher score means better decision making.
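As I read it, the scoring works roughly like this toy version in R (the paper’s exact items and weights will differ):

    experienced <- c(1, 0, 0, 1, 0)                 # 1 = suffered this outcome
    avoid_rate  <- c(0.40, 0.90, 0.99, 0.60, 0.80)  # share of people who avoided it
    0 - weighted.mean(experienced, w = avoid_rate)  # in [-1, 0]; nearer 0 is better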

When they regress the decision outcome measure on the CRT, Big Five personality scores and the decision styles scores, this is what they get:
[Image: regression results]
What we see here is that the only statistically significant predictors are Extraversion and Conscientiousness. The CRT shares variance with the other predictors and thus doesn’t reach significance.

The main result: the CRT explains 2% of the variance in the outcome measure, but only 1% if the other measures are also included. In short, the CRT doesn’t really predict the decision outcomes. What’s going on?

Well, there are a few key candidates for the explanation:

1) The DOI itself might be argued to be problematic. Admittedly, it captures only half of good decision making: avoiding bad outcomes. Moreover, if you look at the questions, some of those bad outcomes can arise through calculated risk-taking. For example, lending money and not getting it back, or losing $1,000 on stocks, can be the result of a gamble with positive expected value – not of a bad decision. Other items, like “you had to spend $500 to fix a car you’d owned for less than half a year”, seem to penalize bad luck: it’s really not your fault if a recently bought car turns out to have a manufacturing defect.

2) Most lab measures of decision making have a lot to do with numeracy. However, the real-life outcomes in the DOI, like forgetting your keys or getting divorced, do not – perhaps they are more about self-control than numeracy. One explanation could thus be that the CRT, being strongly connected to numeracy, explains lab outcomes but not DOI outcomes.

3) More worryingly, it could be that lab studies and real-life outcomes are just not very well connected at all. I don’t think this is the reason, but there have been some studies that failed to find an association between certain lab measures and real-life outcomes.


Of these explanations, the first is the kindest to the CRT: if the failure lies in the DOI, then the CRT itself is fine. The second is a little worrying: it suggests the CRT is maybe not a magic bullet after all – maybe it’s tapping numeracy rather than cognitive reflection. The third would be the worst. If lab studies don’t relate to real outcomes, that calls into question the whole practice of doing lab studies as we have been.

I don’t know enough to pass judgment on any of these causes, but at the moment I’m leaning towards a toss-up between options 1 and 2. The DOI as a measure is not my favourite: it seems to penalize things I consider just plain bad luck. From the perspective of the DOI, sitting at home doing nothing would be good decision making. Option 3 is definitely too strong a conclusion to draw from this paper, or even from a few papers. What I’d like to see is a good meta-analysis of lab-reality correlations – though I’m not sure one exists.

Discussing Rationality

3/3/2015

I have a confession to make: I’m having a fight. Well, not a physical one, but an intellectual one, with John Kay’s book Obliquity. It seems to me that we have some differences in our views about rationality.

Kay writes that he used to run an economic consultancy business, and they would sell models to corporations. What he realized later on was that nobody was actually using the models for making decisions, but only for rationalizing them after they were made. So far so good – I can totally believe that happening. But now for the disagreeable part:
They have encouraged economists and other social scientists to begin the process of looking at what people actually do rather than imposing on them models of how economists think people should behave. One popular book adopts the title Predictably Irrational. But this title reflects the same mistake that my colleagues and I made when we privately disparaged our clients for their stupidity. If people are predictably irrational, perhaps they are not irrational at all: perhaps the fault lies not with the world, but with our concept of rationality.
- Obliquity, preface 
OK, so I’ve got a few things to complain about. First of all, it’s obvious we disagree about rationality. Kay thinks that if you’re predictably irrational, then maybe the label of irrationality is misplaced. I think that if you’re predictably irrational, that’s a double-edged sword. The bad news is that predictability means you’re irrational in many instances – the errors are not random. But predictability also means we can look for remedies: if irrationality is not just random error, we can search for cures. The second thing I seem to disagree about – based on this snippet – is the cause of irrationality. For Kay, it’s stupidity. For me, it’s a failure of our cognitive system.

Regarding Kay’s conception of rationality, my first response was whaaat?! Unfortunately, that’s really not a very good counterargument. So what’s the deal? In my view, rationality means maximizing your welfare or utility, looked at from a very long-term and immaterial perspective. This means that things like helping out a friend or giving money to charity are fine. Even gift-giving is fine, because you can assign value to the act of trying to guess your friend’s preferences. After all, this seems to me to be a big part of gift-giving: when we get a gift that shows insight into our persona, we’re extremely satisfied.

Since Kay refers explicitly to Dan Ariely’s Predictably Irrational, it seems sensible to look at a few of the cases of (purported) irrationality it portrays. Here are a few examples I found in there:

  1. We overvalue free products, choosing them even though a non-free option has better value for money (chapter 3)
  2. We cannot predict our preferences in a hot emotional state from a cold one (chapter 6)
  3. We value our possessions higher than other people do, and so try to overcharge when selling them (chapter 8)
  4. Nice ambience, brand names, etc. make things taste better, but we can’t recognize this as the cause (chapter 10)
  5. We used to perform surgery on osteoarthritis of the knee – until it turned out a sham surgery had the same effect

If Kay wants to say that these cases are alright – that this is perfectly rational behavior – then I don’t really know what one could say to that. With the exception of point 3, I think all the cases are obvious irrationalities. The third point is a bit more complex, since in some cases the endowment effect might be driven by strategic behavior, i.e., trying to get the maximum selling price. However, the effect also appears in experiments where items are handed out at random, with a payout structure that ensures participants should ask for their utility-maximizing selling price. But I digress. The point is that if Kay wants to say these examples are okay, then we have a serious disagreement. I firmly believe we’d be better off without these errors and biases. What we can do about them is a different problem entirely – but Kay seems to be arguing that they are in principle alright.

The second disagreement, as noted above, is about the causes of such behaviors. Kay says they chided their clients’ ‘stupidity’ for not using the models of rational behavior. Well, I think most errors arise from using System 1 instead of System 2. Our resources are limited, and more often than not we pay inadequate attention to what is going on. This makes irrationality not a problem of stupidity but a failure of our cognitive system. True, intelligence is correlated with performance on some rational decision making tasks, but for others there is no correlation (Stanovich & West, 2000). It’s patently clear that intelligence alone will not save you from biases. And that’s why calling irrational people stupid is – for want of a more fitting word – stupid.

OK, so not a strong start for the book, from my perspective, but I’m definitely looking forward to what Kay has to say in the later chapters. There’s still a tiny droplet of hope in me that he’s just written the preface unclearly, and that he’s really advocating for better decisions. Then again, there’s the possibility that he’s simply saying weird things. I guess we’ll find out soon enough.

Good Sources About Decision Making

3/2/2015

Everyone knows Daniel Kahneman’s Thinking, Fast and Slow. But if you’ve already read it, or are otherwise familiar enough with it for it to have low marginal benefit, what could you study to deepen your knowledge of decision making? Here are a few sources I’ve found beneficial. To find more, you can check out my resources page!

TED talks

In the modern world, we’re all busy. So if you don’t want to invest tens of hours into books, but just want a quick glimpse with some food for thought, there are of course a few good TED talks around. For example:

Sheena Iyengar: The Art of Choosing

The only well-known scholar so far to discuss choice in a multicultural context. Do we all perceive alternatives similarly? Does more choice mean more happiness? With intriguing experiments, Iyengar shows that the answer is: it depends. It depends on the kind of culture you’re from.

Gerd Gigerenzer: The Simple Heuristics that Make Us Smart

Gigerenzer is known as one of Kahneman’s main antagonists. In this talk, he discusses some heuristics and argues that they can be more rational than the classical rationality we often consider optimal.

Dan Ariely: Are we in control of our own decisions?

Dan Ariely is a ridiculously funny presenter. For that entertainment value alone, the talk is well worth watching. Additionally, he shows nicely how defaults influence our decisions, and how a complex choice case makes it harder to overcome the status quo bias.

Books

Even though TED talks are inspiring, nothing beats a proper book! With all their data and sources to dig deeper, any of these books is a good starting point for an inquiry into decisions.


Reid Hastie & Robyn Dawes: Rational Choice in an Uncertain World

For a long time, I was annoyed that there didn’t seem to be a good, non-technical introduction to the field of decision making. Kahneman’s book was too long and too focused on his own research. Then I came across this beauty. In just a little over 300 pages, Hastie & Dawes go through all the major findings in behavioral decision making, and also throw in a lesson or two about causality and values. Definitely worth a read if you haven’t gotten into decision making before. And even if you have, you’ll be able to skim some parts and concentrate on the nuggets most useful to you.

Jonathan Baron: Thinking and Deciding

Speaking of short books – this is not one of them. This is THE book in the field of decision making. A comprehensive volume of over 500 pages, it covers all the major topics: probability, rationality, normative theory, biases, descriptive theory, risk, moral judgment. Of course, there’s much, much more to each of these topics, but as an overview this book does an excellent job. It’s no secret that it sits only a meter away from my desk – that’s how often I tend to read it.

Keith Stanovich: The Robot’s Rebellion - Finding Meaning in the Age of Darwin

This book may be 10 years old, but it’s still relevant today. Stanovich beautifully describes the cognitive science around decisions – Systems 1 and 2 and so on – and proceeds to connect it to Dawkinsian gene/meme theory, resulting in a guide to meaning in the scientific and Darwinian era.

Rationality is Cumulative

4/11/2014

One of the dismissals sometimes offered against rationality is that we don’t need to make optimal decisions, because most decisions don’t really matter that much. So what if you don’t pick the optimal job – if your pick is a job that’s “good enough”, well, that’s probably good enough. The difference between rationality and near-optimality, the argument goes, is not big enough to warrant all this fuss.

Admittedly, in many cases this argument is a good one. It probably makes no sense to spend a lot of energy optimizing the choice of lunch. As for myself, I’ll just pick something that’s tasty enough and healthy enough, without worrying whether there might be a better alternative. In fact, there is considerable evidence that this kind of behavior is better in the long run, when small choices are being considered. For a thorough argument, see for example Barry Schwartz’s book The Paradox of Choice.

But the argument has a fatal flaw, which renders it unsuitable for its assumed role of a magic bullet: it assumes that all decisions are independent of each other. If that were true, the differences between rational and merely good-enough choices – assuming they’re small – wouldn’t add up to much. A sum of small differences is still a small number. Unfortunately, the independence assumption is not warranted. Consider, for example, choosing a job. This choice plainly has a heavy effect on your future choices concerning family life, work-life balance, and career development. An optimal choice in this respect maximizes the opportunities you have in the future, and enables better decisions then.
[Image: The forked path to success?]
That’s why the differences matter. Rational decision making achieves a slightly better outcome than other kinds of decision making. And since decisions lead to other decisions, they aren’t just independent problems. No, it’s more like compound interest: doing better now can make a difference of several orders of magnitude later. As Robert Nozick writes:
Rationality has a cumulative force. A given rational decision may not be very much better than a less rational one, yet it leads to a new decision situation somewhat different from that the other would have produced; and in this new decision situation a further rational action produces its results and leads to still another decision situation. Over extended time, small differences in rationality compound to produce very different results.
– Robert Nozick (1993): The Nature of Rationality, p. 175
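To see the compound-interest analogy in numbers (the 1% edge is invented purely for illustration):

    edge <- 1.01            # each decision turns out 1% better
    edge^c(10, 100, 1000)   # ~1.10, ~2.7, ~21000 -- small edges compound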

This idea makes it clear why rationality is important in smaller decisions, too. Firstly, it generates long-term benefits due to the cumulative effect. Secondly, cultivating rationality whenever possible makes it a habit, a stable way of behaving, which also ensures beneficial outcomes over the long run. And habits – if one believes modern psychology – are what we usually resort to. As I’ve already argued in relation to weight loss, we have limited willpower. After a hard day at work, we usually don’t have the energy left to think very rationally. But – and this is crucial – if we’ve cultivated a habit of rationality, good decisions will be much easier to make. And the more we follow the habit, the easier it becomes.

This habit perspective also gives some hope. I know from the literature – and even more from personal experience – that we cannot change all our ways at once. Trying to eat better, exercise more, Facebook less, and read more books all at the same time is a plan doomed from the start. There’s just too much to remember, too much to focus on. The same goes for better thinking: it makes no sense to expect that we’ll optimize all our decisions as soon as we decide to try. No – we’ll still get tired, lose our focus, and run into tons of other things that prevent good, reflective decisions. But by cultivating the habit of rationality, we move slowly but surely towards better decision making – one decision at a time. And after a while, we’ll hopefully notice that the habit has become almost automatic, and making good decisions is not so hard anymore.

And that’s when we’ll really start reaping the benefits from the fact that rationality is cumulative.

Intelligence Doesn't Protect from Biases

16/9/2014

Perhaps the most common misconception about biases is that they only happen to dumb people. However, this is – to be clear – false. I think there are a few reasons why the misconception persists.

Firstly, a lot of bias experiments seem really obvious after the correct answer has been revealed. This plays directly into our hindsight bias – also aptly named the knew-it-all-along effect – in which the answer makes us think “oh, I wouldn’t have fallen into that trap”. Well, as the data shows, you most likely would have.

A second reason is that the popular everyday conception of intelligence implies, roughly, that more intelligence = more good stuff. Unfortunately, this simplistic rule fails here. Intelligence, in scientific terms, is cognitive ability: computational power. With biases, lack of power is not the issue. The issue is that we don’t see or notice how to use that power in the right way. It’s as if someone were trying to hammer nails with the handle end. Sure, we could say he needs a bigger hammer, but a more reasonable solution would be to turn the hammer around.
[Image: Special Offer: The Hammer of Rationality]
Stanovich & Stanovich (2010, p. 220) summarize in their paper why intelligence does not help much with rational thinking:

[--] while it is true that more intelligent individuals learn more things than less intelligent individuals, much knowledge (and many thinking dispositions) relevant to rationality are picked up rather late in life. Explicit teaching of this mindware is not uniform in the school curriculum at any level. That such principles are taught very inconsistently means that some intelligent people may fail to learn these important aspects of critical thinking. 

In their paper they also tabulate which biases or irrational dispositions are associated with intelligence, and which are not (Stanovich & Stanovich, 2010, p. 221):
[Image: table of biases and their associations with intelligence]
Now, I’m not going to go through the list in more detail; the point is just to show that there are tons of biases that have no relation to intelligence, and that for the rest the association is still quite low (.20–.35). In practice, such a low correlation means that intelligence is not a dominant factor: dumb people can be rational, and intelligent people can be irrational.
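To put that in perspective: shared variance is the square of the correlation, so even the biases that do correlate with intelligence share only a few percent of their variance with it:

    r <- c(0.20, 0.35)
    r^2   # 0.04 and 0.1225 -- 4% to 12% of variance explained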

Now, some might find the lack of association with intelligence a dystopian thought: if intelligence is of no use here, what can we do? To be absolutely clear, I’m not saying we are doomed to suffer from these biases forever. Even though intelligence does not help, we can still help ourselves by being aware of the biases and learning better reasoning strategies. Most biases arise from our System 1 heuristics getting out of hand. What we need in those situations is better mindware, complemented by slower and more thorough reasoning.

Thankfully, that can be learned.
