Bias Hunter

Yes for more open data in politics

31/3/2015

Since we have an election coming up, last weekend I found myself thinking about politics (prompted by some good discussions with friends). How could one improve political decision making?

In modern politics, the parliament is a lot like a black box. Every fourth year, we vote in the parliamentary election and select 200 new (or the same old) faces to make decisions on national matters. The problem with this is that we voters really don’t know much about what is going on in there. Sure, the media covers some issues, but more often than not the reporting isn’t very good. Reporting just the outcome of a vote tells us little beyond the result itself. And that is not enough.

If we consider what a good decision is like, it’s pretty clear that it has to be based on something. Sure, you can sometimes get it right by just guessing – but nobody would call that a good decision strategy. A good decision has to be based on evidence. Are politicians using evidence properly, then? At the moment, nobody knows. To really evaluate politicians, we need more than just their voting patterns. We need some inkling of why they chose the way they did. It’s not enough to say “down with cars!” One should at least provide some justification, like “down with cars, because they hurt our health!” In an even better situation, we would see deeply researched judgments, really explaining how and why the policy voted for is the best way to reach an important goal. For example, someone might say “down with cars, because they hurt health so much that their overall impact on wellbeing is negative”.

Unfortunately, we hardly ever see this kind of reasoning. That’s why I think we need more open decision making, especially regarding the analyses and data that our political decision makers use. If we have more open data, we also have more minds looking at it and evaluating whether the inferences politicians have drawn from it are really correct. Publishing at least a substantial share of the data used in the political decision-making process would invite all kinds of NGOs, interest groups and analysts to really comb through it. At the moment, none of that happens regularly. When we do get access to data, it often concerns decisions that have already been made! For obvious reasons, that’s not very helpful. I think it’s fair to say that there’s room for improvement, and for moving towards open, evidence-based policy making.

The point here is not that politicians are especially stupid or untrustworthy. No, the point is that they’re people just like you and me (albeit with more support staff). And just like you and me, they make mistakes. That’s why you sanity check your recommendations with your boss before sending them to the client. That’s why I get feedback from my supervisor before I send my paper to a journal. We’re fallible human beings, all of us. But with more people looking at the same things, we can average out many biases and avoid obvious mistakes. To do that in politics, we need more open data.


Decisions as Identity-Based Actions

24/3/2015

This semester I had the exciting chance of teaching half of our department’s Behavioral Decision Making course. What has especially gotten me thinking is James March’s book chapter Understanding How Decisions Happen in Organizations, from the book Organizational Decision Making, edited by Zur Shapira. Similar points can be found in March’s paper How Decisions Happen in Organizations.

In the chapter, March presents all kinds of questions and comments directed at organizational scholars. Basically, his main point is that the normative expected utility theory is perhaps not a very good model for describing organizational decisions. No surprises there – modelling organizational politicking through utilities is pretty darn difficult. What did catch my eye was that March has a pretty nice description of how some organizations do muddle through.

This idea concerns decisions as identity-based actions. The starting point is that each member of an organization occasionally faces decision situations. These are then implicitly classified into different classes: for example, HR decisions, decisions at strategy meetings, decisions after lunch, and so on. The classification depends on the person, of course. What’s key is the next two steps: these classes are associated with different identities, which then form the basis of decisions through matching. This way, the decision gets made by basing the choice on rules and the role, not just on the options at hand.
So a manager may adopt a different identity when making strategy decisions than when thinking of who to hire for his team. The decision is not based on a logic of consequence, but rather on a logic of appropriateness – we do what we ought to be doing in our role. “Actions reflect images of proper behavior, and human decision makers routinely ignore their own fully conscious preferences. They act not on the basis of subjective consequences and preferences but on the basis of rules, routines, procedures, practices, identities, and roles” (March, 1997). So rather than starting from the normative question of what they want and which option would best bring it about, decision makers are implicitly asking “Who am I? What should a person in my position be doing?”
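To make the matching mechanism concrete, here’s a minimal sketch in Python – my own toy example, not March’s formalization, with all the classes, identities and rules invented for illustration:

    # Toy model of identity-based decision making: classify the situation,
    # adopt the matching identity, and let that identity's rule pick the
    # action. Note there is no weighing of personal preferences anywhere.

    SITUATION_CLASS = {
        "who to hire": "HR decision",
        "which market to enter": "strategy decision",
    }

    IDENTITY_FOR_CLASS = {
        "HR decision": "team builder",
        "strategy decision": "bold strategist",
    }

    RULES_FOR_IDENTITY = {
        # The team builder's rule: avoid risky hires.
        "team builder": lambda options: min(options, key=lambda o: o["risk"]),
        # The bold strategist's rule: chase the biggest upside.
        "bold strategist": lambda options: max(options, key=lambda o: o["upside"]),
    }

    def decide(situation, options):
        """Decide by matching, not by consequences: situation -> class ->
        identity -> rule, applied to whatever options are at hand."""
        decision_class = SITUATION_CLASS[situation]
        identity = IDENTITY_FOR_CLASS[decision_class]
        return RULES_FOR_IDENTITY[identity](options)

    options = [{"name": "A", "risk": 2, "upside": 3},
               {"name": "B", "risk": 8, "upside": 9}]
    print(decide("who to hire", options))            # picks low-risk A
    print(decide("which market to enter", options))  # picks high-upside B

The same options get a different answer depending on which identity the situation evokes – which is exactly the point of the logic of appropriateness.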

I feel that this kind of rule-based or identity-based behavior is a two-edged sword. On the one hand, it offers clear cognitive benefits. A clear identity for a certain class of decisions saves you the trouble of meta-choice: you don’t have to decide how to decide. When the rules coming from that identity are adequate and generally lead to good outcomes, it’s easy to just follow them and get on with it. On the other hand, the drawbacks are equally clear. Too much reliance on the rules leads to the “I don’t know, I just work here” phenomenon, in which people sink too deep into their roles, forgetting that they actually have a mind of their own.

Which way is better, then? Identity-based decisions, or controlled individual actions? Well, I guess the answer looks like the classic academic’s answer: it depends. It depends on the organization and the manner of action: how standardized are the problems that people face, is it necessary to find the best option or is satisficing enough, and so on. Of course, it also depends on the capabilities of the people involved: do they have the necessary skills and mindset to cope if left without guiding rules and identities? Or do they need a little bit more support for their decisions? Questions like these are certainly not easy, but every manager should be asking them one way or another.

Basic Biases: The Availability Heuristic

17/3/2015

The availability heuristic is a bias that arises when we confuse probability with ease of recall. This means that without noticing it, we are actually answering a completely different question than the original one. Instead of answering “how likely is this?” we answer “how easily did this come to mind?”. If our experiences of the world were uniformly and randomly distributed – covering all possibilities – then, and only then, would ease of recall be the same as probability. Of course, this is not the case. With modern media, private experience is not the only source of our thoughts; we read newspapers and blogs, watch TV, and consume all kinds of media that tell us what did, could have, or should have happened.

My favourite example of the availability heuristic is related to travelling. Imagine that you’re finally getting to that long-awaited holiday on a paradise island. Your friend drops you off at the airport. As you gather your suitcase and are just about to leave, your friend shouts to you “Have a safe flight!” You say thanks, and proceed to check-in.
What makes this a good illustration of the availability heuristic is that you – the one getting on the plane – are the one being reminded to have a safe trip. In fact, if you look at the numbers, driving a car is much more likely to be fatal than flying! The discrepancy is huge: statistical estimates say it’s around two to five times more likely to have an accident on the way home from the airport than on the flight, though the exact numbers depend a lot on assumptions about who’s driving where. So when we consider the safety of the car versus the plane, it’s very easy to remember examples of planes crashing, or even disappearing altogether, whereas car accidents are so common that they rarely break the national news barrier.

So it’s not just that availability is a poor guide to probability. In the case of mass media, availability is actually inversely proportional to the probability! After all, papers want to report new, exciting things, and not just car accidents that happen every day. This essentially means that “oh, I saw an article about this in the paper” is not a good guide to the world of things to come.
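As a toy demonstration of that inverse relationship – my own simulation, with all the numbers invented for illustration – suppose the media covers an event type roughly in inverse proportion to how often it occurs. A judge who ranks risks by ease of recall (that is, by coverage) then gets the ordering exactly backwards:

    # Toy simulation: recall-based risk estimates vs. true frequencies,
    # when media coverage is inversely proportional to how common an
    # event is. All numbers are made up for illustration.

    true_rates = {"car accident": 1000, "plane crash": 10}  # events per year

    # Rarity drives newsworthiness: coverage ~ 1 / frequency.
    coverage = {event: 10_000 / rate for event, rate in true_rates.items()}

    # Ranking by ease of recall follows coverage, not frequency.
    recall_ranking = sorted(coverage, key=coverage.get, reverse=True)
    true_ranking = sorted(true_rates, key=true_rates.get, reverse=True)

    print("Feels most likely (by recall):", recall_ranking[0])  # plane crash
    print("Actually most frequent:", true_ranking[0])           # car accident

The crossed wires are built right into the setup: the more newsworthy an event, the rarer it is – and the more available it becomes in memory.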

If you’re a nitpicker (I know I am, so there’s no shame in admitting it), you might say that wishing someone a safe trip is really not a probability claim. When you say “have a safe trip”, you’re not trying to state that “I believe your mode of travel is statistically more likely to result in death or injury, and I aim to prevent a part of that by this utterance”. No, of course not. Even an economist wouldn’t claim such a thing! It’s more a way of wishing your friend well and hoping for a good trip. But still, I find it funny that we use the word “safe” here, in exactly the wrong place.

How Rejection Levels Can Help You

10/3/2015

A concept that comes up pretty often in decision research is that of aspiration levels. These reflect preference targets: the attribute levels that the decision maker would like to have in an ideal situation. The idea behind the concept is that such levels can guide both the decision maker and the analyst to the relevant portions of the alternative space – it’s better to search close to the optimal levels.

Now that’s nice and all, but for practical purposes I think an inverse concept is perhaps even more useful. By inverse I mean rejection levels. Or, as I like to call them, what-the-hell-I’m-absolutely-not-willing-to-accept-that levels. The idea is simple enough: rejection levels signify the worst attribute levels you’re willing to accept. An alternative with a value worse than that gets discarded immediately, and you look elsewhere.

The benefit is that if you have many alternatives, rejection levels can be used to shrink the search space very fast. Imagine you’re buying a bike, and there are two criteria: cost and quality. You probably have some aspiration levels – the ideal bike. That’s reflected in the upper left corner (low price, terrific quality). But that only tells us where the best alternative would lie – and unfortunately, that alternative very likely doesn’t exist. Looking at the picture below, it’s clear there’s still a lot of search space remaining.
[Figure: bike alternatives plotted by price and quality, with aspiration levels marking the ideal upper left corner]
On the other hand, the rejection levels immediately close off a large portion of the graph. You’re not willing to pay more than 1500 euros for any bike, nor are you ready to accept a bike with a quality rating of less than five. The picture shows how much effort you can save with the rejection levels – many options are ruled out just by setting the levels.
[Figure: the same plot with rejection levels drawn in, closing off every bike over 1500 euros or below a quality rating of five]
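In code, this screening step is just a pair of threshold tests applied before any closer comparison of the options. Here’s a minimal Python sketch – the bikes and the exact thresholds are made up for the example, matching the numbers above:

    # Screening alternatives with rejection levels: anything violating a
    # threshold is discarded before the remaining options are compared.
    # The bike data below is invented for illustration.

    MAX_PRICE = 1500   # euros: not willing to pay more than this
    MIN_QUALITY = 5    # rating: not willing to accept less than this

    bikes = [
        {"name": "Roadster", "price": 1200, "quality": 7},
        {"name": "Carbonite", "price": 2400, "quality": 9},   # too expensive
        {"name": "Budgetline", "price": 600, "quality": 3},   # too flimsy
        {"name": "Commuter", "price": 900, "quality": 6},
    ]

    def passes_rejection_levels(bike):
        """True if the bike survives both rejection levels."""
        return bike["price"] <= MAX_PRICE and bike["quality"] >= MIN_QUALITY

    shortlist = [b for b in bikes if passes_rejection_levels(b)]
    print([b["name"] for b in shortlist])  # ['Roadster', 'Commuter']

Note that the thresholds are hard cutoffs, not trade-offs: a superb bike at 2400 euros is out, no matter how good it is. That’s exactly what makes the rule fast – and what makes it important to set the levels before you see the options.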
The trick with rejection levels is that you need to set them before looking at the options. A bike can be bought without much trouble, but with any more complex decision, trouble arises. House buying, for example, is difficult enough in itself. And what marketers know is that if a house makes a good first impression, you’re likely to start coming up with reasons why that house is just so lovely, convenient, and so on. As a result, people tend to exceed their budget after falling hard for a single house.

To avoid this, rejection levels are a great technique. If the price goes above the rejection level, you can confidently say thanks, but no thanks, and just move on. Making the rejection decisions with a rule you’ve committed to beforehand is much, much easier than mulling over each and every option you come across.

Discussing Rationality

3/3/2015

I have a confession to make: I’m having a fight. Well, not a physical one, but an intellectual one, with John Kay’s book Obliquity. It seems to me that we have some differences in our views about rationality.

Kay writes that he used to run an economic consultancy business, and they would sell models to corporations. What he realized later on was that nobody was actually using the models for making decisions, but only for rationalizing them after they were made. So far so good – I can totally believe that happening. But now for the disagreeable part:
They have encouraged economists and other social scientists to begin the process of looking at what people actually do rather than imposing on them models of how economists think people should behave. One popular book adopts the title Predictably Irrational. But this title reflects the same mistake that my colleagues and I made when we privately disparaged our clients for their stupidity. If people are predictably irrational, perhaps they are not irrational at all: perhaps the fault lies not with the world, but with our concept of rationality.
- Obliquity, preface 
Ok, so I’ve got a few things to complain about. First of all, it’s obvious we disagree about rationality. Kay thinks that if you’re predictably irrational, then maybe the label of irrationality is misplaced. I think that if you’re predictably irrational, that’s a two-edged sword. The bad news is that predictability means you’re irrational in many instances – these are not random errors. But predictability also entails that we can look for remedies – if irrationality is not just random error, we can search for cures. The second thing I seem to disagree about – based on this snippet – is the causes of irrationality. For Kay, it’s stupidity. For me, it’s a failure of our cognitive system.

Regarding Kay’s conception of rationality, my first response was whaaat?! Unfortunately, that’s really not a very good counterargument. So what’s the deal? In my view, rationality means maximizing your welfare or utility, looked at from a very long-term and immaterial perspective. This means that things like helping out your friend are fine, and giving money to charity is fine. Even gift-giving is fine, because you can assign value to the act of trying to guess your friend’s preferences. After all, this seems to me to be a big part of gift-giving: when we get a gift that shows insight into our persona, we’re extremely satisfied.

Since Kay is referring explicitly to Dan Ariely’s Predictably Irrational, it makes sense to look at a few of the cases of (purported) irrationality it portrays. Here are a few examples I found in there:

  1. We overvalue free products, choosing them even though a non-free option has better value for money (chapter 3)
  2. We cannot predict our preferences in a hot emotional state from a cold one (chapter 6)
  3. We value our possessions more highly than other people do, and so try to overcharge when selling them (chapter 8)
  4. Nice ambience, brand names, etc. make things taste better, but we can’t recognize this as the cause (chapter 10)
  5. We used to perform surgery on osteoarthritis of the knee – later it turned out a sham surgery had the same effect

If Kay wants to say that these cases are alright, that this is perfectly rational behavior, then I don’t really know what one could say to that. With the exception of point 3, I think all the cases are obvious irrationalities. The third point is a little more complex, since in some cases the endowment effect might be driven by strategic behavior, i.e. trying to get the maximum selling price. However, the evidence also includes cases where people are given items at random, with a payout structure that ensures they should ask for their utility-maximizing selling price. But I digress. The point is that if Kay wants to say these examples are okay, then we have a serious disagreement. I firmly believe we’d be better off without these errors and biases. Of course, what we can do about them is a totally different problem – but it seems that Kay is arguing that they are in principle alright.

The second disagreement, as noted above, is about the causes of such behaviors. Kay says they chided their clients’ ‘stupidity’ for not using the models of rational behavior. Well, I think that most errors arise from us using System 1 instead of System 2. Our resources are limited, and more often than not we’re paying inadequate attention to what is going on. This makes irrationality not a problem of stupidity, but a failure of our cognitive system. Granted, intelligence is correlated with performance on some rational decision-making tasks, but for other tasks there is no correlation (Stanovich & West, 2000). It’s patently clear that intelligence alone will not save you from biases. And that’s why calling irrational people stupid is – for want of a more fitting word – stupid.

Ok, so not a strong start for the book from my perspective, but I’m definitely looking forward to what Kay has to say in later chapters. There’s still a tiny droplet of hope in me that he’s just written the preface unclearly, and that he’s really advocating for better decisions. But there’s also the possibility that he’s just saying weird things. I guess we’ll find out soon enough.
