Bias Hunter

Nonlinear life, linear emotions

29/9/2015

3 Comments

 
We are the result of thousands of years of evolution. And as we all know, modern life didn't exist back then, when evolution was pulling the strings and shaping our physical and psychological makeup. This is a problem. One only needs to consider the obesity crisis, or our limited intuition for statistics, to realize that we're very far from being optimized for our current environment.

One particular example is the nonlinearity of many professions. Take a writer. A writer spends hour after hour working on a new manuscript with very limited feedback. What feedback he does get comes essentially from friends, who have either willingly or through coercion agreed to read the book. Or, if the writer is at least moderately successful, some feedback might even come from a professional editor. But now consider the income of writers. It is highly nonlinear: some writers – like J.K. Rowling – have their income counted in the millions. Most, however, make do with a few bucks here and there (or have a "proper" day job and write at night).

Now, if you ask a writer whether their work is "going well", or something similar, what could they say? I'm pretty sure they actually have very little idea how it is going. Pages appear (and then disappear through editing). But the connection to the actual payoff is tenuous at best. Writing today means the book may come out next year – or in ten years. Furthermore, there is little common knowledge about what makes a book good, or an author successful.

The key observation here is that the writer's life is a nonlinear one. You can't tell progress from walking backwards, because they look exactly the same. Of course, this is not true of just writers. In fact, it holds for almost any creative profession: artists, scientists, designers, maybe even business strategists. They're all living in the same nonlinear worlds: some people earn thousands of times more than others, and there are very few signs that a result is good – other than its popularity.
The actual problem in relation to emotions is that our emotions love linearity. We love to see progress, and we'd like to see it every day. I presume this is why many creative professionals like renovating, knitting, or other things you do with your hands. Once we move from creating ideas to creating physical items, we enter the linear world. When you renovate a room, there are only so many floorboards to replace – hence linear progress.
When we don’t get that linear emotional sense of achievement, we become skeptical of our work and progress. For some, it may even become bad enough to get depressed. For others, I think it's just a big rollercoaster. Sometimes, you're over the moon about what's happening - and sometimes, you're having that angry "this isn't fucking working" moment.

Fortunately, I think there are ways around the problem. You can create an – admittedly somewhat false – sense of linear progress. By thinking of actions that you should constantly be doing to improve yourself, you can construct a sense of moving linearly forward. For example, my goal every workday is to do two things: 1) write at least half a page, and 2) read at least one article. Of course, these do not have truly linear payoffs: one day's writing may be the turning point of a good publication – or just a lot of nonsense. Likewise, one article may be much more vital to me than another.

However, the point is that mentally ticking off these boxes (or physically, in Habitica) creates an illusion of linear progress. It is an illusion, like I said above. But, crucially, it helps create emotional value, because I get a sense of accomplishment from it every day. And even if it's not true progress, that's ok, because both actions are important enough for a scientist that they're never a waste of time.

Failures of System 2 in a New Place – in Mannheim

24/9/2015

0 Comments

 
Well, hey there! If you're reading this – thanks for still following this pipeline :)

It’s been a little hectic at this end of the Web. Between two conferences, four paper drafts (no, they’re still not finished), getting married, and moving to Germany there’s been a certain lack of time for this project! Now, however, things are settling back to more or less normal, which thank goodness means I can pick this blog up again.

If you know me IRL, you might have heard, but anyway: I'm fortunate to be spending the next 12 months at the University of Mannheim as a visiting PhD student. Needless to say, I'm very excited! The Department of Banking and Finance seems great, and full of awfully nice people. Also, taking a walk around the main building is certainly awe-inspiring:
[Photo: the main building of the University of Mannheim]
A random finding of the last weeks is that going offline is not necessarily bad for productivity. I travel to Mannheim daily by train, which takes about 1.5 hours, depending on how many Gleisstörungen (track disruptions) or other delays Deutsche Bahn happens to throw my way. Anyway, since I don't have a German mobile yet, I'm on the train without a Web connection. Originally, I thought this was going to be a problem: it's hard to program anything when you can't read Stack Overflow, it's hard to read articles when you can't tap into anyone's comments on the paper, and so on. But, as I found out, it's also hard to pretend you're working when you can't waste time on Facebook! So far, the daily train rides have worked well for my productivity, resulting in a lot of concentrated reading, data analysis – and this text!

On the other hand, I remember reading somewhere that pretty much any change in the environment increases productivity at first, but the effect wears off in a few weeks. Well, I guess we’ll see about that soon enough!

Another interesting thing is how going abroad shows the importance of System 1. I hadn't really remembered just how much System 1 is the guiding light of everyday life. I mean, when you buy something from the grocery store, it's mostly the same stuff as before. When you take a bus, it's the same bus. When you walk to the gym, it's where it's always been, and the equipment is exactly the same. All of this changes when you move abroad.

When I wanted to buy cream for a sauce, I had to spend 10 minutes looking at different packages, trying in vain to determine which kind of cream each one held. When I was at the gym, I spent a lot of time looking for the right equipment. And when I take a bus or train anywhere, it takes half an hour to plan the trip – especially since I didn't have mobile Internet until just a few days ago. Everywhere I could normally have relied on System 1 is now the business of System 2 instead. So instead of lazily cruising through my day, I spend an inordinate amount of time having to think things through. Having to weigh options and choose carefully. Having to look for information, since there's no schema in my head.

If you like, you could say that this shows how the ultimate rational model is not a good model to strive for. In a certain sense, you could be right. On the other hand, all of the choices in my everyday life are very small ones. So from the perspective of a meta-choice strategy, it totally makes sense to relegate them to System 1. It really doesn't matter much which cream I buy, so it's a good heuristic to buy what I've tried before and know will work out. I really don't want to spend my limited mental capacity on rationally comparing cream packets. Because – as I've seen here – doing that will just tire you out really quickly. Better to rely on heuristics.

Don’t allow labels to define your actions

25/6/2015

0 Comments

 
Human minds are quite complex, but they also need simplifications. One of the most common ways of simplifying things is to label them. You know, like "oh, that's someone from political studies, I bet she is [insert your favorite trope about politics students]". It's a really easy way of making the world simpler to observe. Just connect the labels you encounter often enough to the attributes and actions people in those categories typically exhibit: leftists bashing rich people, Christian conservatives bashing gays, scientists not doing anything with actual impact, and so on.

Of course, this is also a really bad thing to do. In the extreme, by allowing the labels to completely take over, we become racists, fascists, communists, or just generally inconsiderate people who define others through their labels. This is a bad thing in itself, and we should naturally avoid it. However, so much has been said about this that I don't think I can bring anything new into the picture. That's why I want to flip the concept over and talk about discriminating against yourself.

Just as we apply labels to other people, we also apply them to ourselves. For example, depending on the social environment and my past few weeks of successes – or lack thereof – I tend to perceive myself as a friend, scientist, runner, philosopher, blogger, reader, boyfriend, family member, or whatever is important in the current context. But they're all labels.
Labels are a bit like multiple personalities, just not as strong.
The labels are useful in helping me have a sense of self in the context, allowing me to focus on a certain part of myself. Who am I in relation to these people? Why am I here? We hardly ever ask these questions, because we answer them implicitly by applying labels. But there's a crucial question that should not be answered with a label: what should I do?

It's all too easy to fall into the trap of thinking "well, I'm the junior in this group, so I should carry the biggest burden here, working long into the night" or "well, I'm a professional banker, so I can't really take painting seriously". When we apply labels to ourselves too indiscriminately and without thinking, we end up constraining ourselves. Instead of spending evenings learning Javascript, we may apply the label of a humanist and discard programming as something humanists just don't do (which is false, by the way).

It's true that Javascript may not be the most important thing for, say, an art scholar. But that's no reason to discard it offhand on simple label identity. Instead, it would be better to evaluate things on their own merits. Sure, if I can't see programming as useful or interesting, I'm probably better off doing something else instead. But if I have a strong interest in it – but no obvious use for it – it might make sense to give it a try. Who knows what might come of it? In fact, combinations of different fields are becoming ever more valuable these days.

The point here is very simple: don't let labels define your actions. Stopping that from happening – that's the hard part. It's difficult to notice the implicit labels, because we apply them so unconsciously. But if you ask yourself "why am I doing this?" and find yourself answering "because that's who I am", that's a definite warning sign.

Pure Importance Says Nothing

26/5/2015

0 Comments

 
“Content of the job is more important than wage”

“Debt reduction trumps financial growth”

“Life is more important than money”

All three statements above are sentences I could well imagine someone uttering in an intelligent discussion. They also have one other thing in common: they're pretty much meaningless.

It's clear that we can – and often do – make such statements. In itself, there's nothing wrong with saying that A is more important than B. For example, "the math exam is more important than the history exam" is a perfectly legitimate way of expressing your lack of interest in what happened in the Thirty Years' War. But when it comes to talking about what you want, and how you should distribute your resources, importance statements are meaningless without numbers.

The third case is perhaps the most common one. Presumably the idea is to say that we should never sacrifice human life for financial gain. Of course, that's flat-out wrong. Even if you agreed with it in principle, in practice you're trading off human life for welfare all the time. When you go to work, you risk getting killed in an accident on the way, but have a chance of getting paid. Buying things from the grocery store means someone has risked themselves picking, packing and producing the items – if you really valued their health, you'd grow your own potatoes. In healthcare, we recognize that some treatments are too expensive to offer – the money is better used for other welfare-increasing things, like building roads. Life can be, and is, traded for welfare, i.e. money.
The first case also seems clear in intent: you want a meaningful job, instead of becoming a cubicle hamster for a faceless corporation, no matter the wage differential. However, it's surely not true that the meaning of the job is infinitely more important. Would you rather help the homeless for free, or be a hamster for 10 million per hour?

The problem with importance statements without numbers is that they hint at tradeoffs while grossly misrepresenting what we're willing to accept. The example choices involve tradeoffs, and tradeoffs are impossible if one goal is always more important than another. That implies an infinite tradeoff rate, which forces you to favor avoiding a teeny-tiny probability of loss of life over the GDP of the whole world. Doesn't sound too reasonable, does it? In fact, Keeney (1992, p. 147) calls the lack of attention to range "the most common critical mistake".

Naturally, we can always say that the examples are ridiculous – surely no one is thinking about such tradeoffs when they say life is more important than money; surely they're thinking in terms of "sensible situations". In a sense, I agree. Unfortunately, one person's ridiculous example is another's plausible one. If you don't say anything about the range of life and money you're talking about, I can't know what you're trying to say. It's much easier to just say it explicitly: life is more important than money, for amounts smaller than 1000 euros, say.

Even this gets us into problems. If I originally face a choice problem involving 3000 euros and a chance of death, you'd be willing to make some kind of tradeoff. But if I subdivide the issue into three 1000-euro problems, suddenly human life always wins. If you think about utility functions, you can see how this quickly becomes a problem. Still, the situation is better than having no ranges at all. Even better would be to assign a tradeoff ratio that's high but not infinite.
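Just to make the difference concrete, here's a little Python sketch (all numbers are made up for illustration, including the 10-million-euro value of life) contrasting the "life always wins" rule with a high but finite tradeoff ratio:

```python
# Toy comparison (hypothetical numbers): an infinite tradeoff rate
# ("life always beats money") versus a large-but-finite ratio.

def lexicographic_prefers_safe(p_death, money_gain):
    """Infinite tradeoff rate: any nonzero death risk loses,
    no matter how large the monetary gain."""
    return p_death > 0

def finite_tradeoff_prefers_safe(p_death, money_gain, value_of_life=10_000_000):
    """Finite tradeoff ratio (a hypothetical 10M euros per life):
    weigh the expected loss of life against the gain."""
    return p_death * value_of_life > money_gain

# A one-in-a-million death risk against roughly the GDP of the world:
p, gain = 1e-6, 80e12
print(lexicographic_prefers_safe(p, gain))    # True: refuses the trade
print(finite_tradeoff_prefers_safe(p, gain))  # False: accepts the trade
```

The lexicographic rule refuses even this trade, which is exactly the unreasonable behavior described above; the finite ratio accepts it, yet still refuses genuinely dangerous gambles.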

CRT and real-life decision making

19/5/2015

0 Comments

 
The Cognitive Reflection Test (CRT) is a really neat instrument: consisting of just three simple questions, it has been shown to be a good predictor of successful decision making in laboratory studies. For example, higher CRT scores have predicted success in probabilistic reasoning, in avoiding probabilistic biases and overconfidence, and in intertemporal choice. What's more, the CRT explains success in these tasks over and above the contribution of intelligence, executive function, or thinking dispositions.

I can't properly emphasize how exciting the CRT is. Especially since it is just three questions – making it really easy to administer – it has been lauded as one of the most important findings in decision making research in years. So far, it's mostly been explored in the lab. But since its success in the lab is huge, and since lab decision making tends to correlate with real-life decision making, it should predict real-life outcomes as well, right?

Well, not exactly, according to a new paper by Juanchich, Dewberry, Sirota, and Narendran.

They had 401 participants – recruited via a marketing agency – fill out the CRT, a decision style measure, a personality measure, and a decision outcomes measure.

The Decision Outcome Inventory consists of 41 questions, each asking whether you've experienced a bad outcome – such as locking yourself out of a car – within the past 10 years. Each outcome is weighted by the percentage of people who avoided it, since more serious negative outcomes, like going to jail, are less common. Finally, the weighted average is subtracted from 0, yielding a score range from -1 to 0, where a higher score means better decision making.
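If that scoring scheme sounds convoluted, here's a small Python sketch of how I read it. The item names and avoidance rates below are made up for illustration, not taken from the paper:

```python
# Hypothetical sketch of the DOI scoring described above: each experienced
# bad outcome is weighted by the share of people who avoided it, so rarer
# (more serious) outcomes count more. Items and rates are invented.

def doi_score(experienced, avoidance_rates):
    """experienced: dict item -> 1 if the bad outcome happened, else 0.
    avoidance_rates: dict item -> fraction of the sample who avoided it.
    Returns a score in [-1, 0]; closer to 0 means better decision making."""
    weights = [avoidance_rates[item] * experienced[item] for item in experienced]
    return 0 - sum(weights) / len(weights)

rates = {"locked_out_of_car": 0.60, "jail": 0.99, "lost_money_on_stocks": 0.80}
person = {"locked_out_of_car": 1, "jail": 0, "lost_money_on_stocks": 1}
print(round(doi_score(person, rates), 3))  # → -0.467
```

Note how experiencing the rare outcome (jail, avoided by 99% of people) would drag the score down much more than the common ones – that's the whole point of the weighting.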

When they regress the decision outcome measure on the CRT, Big Five personality scores and the decision styles scores, this is what they get:
[Figure: regression of decision outcomes on the CRT, personality and decision styles]
What we see here is that the only statistically significant predictors are Extraversion and Conscientiousness. The CRT shares variance with the other predictors and thus doesn't reach significance.

The main result: the CRT explains 2% of the variance in the outcome measure, but only 1% if the other measures are also included. In short, the CRT doesn’t really predict the decision outcomes. What’s going on?

Well, there are a few key candidates for the explanation:

1) The DOI itself might be argued to be problematic. Admittedly, it captures only half of good decision making: avoiding bad outcomes. And if you look at the questions, some of those bad outcomes can arise through calculated risk-taking. For example, lending money and not getting it back, or losing $1000 on stocks, can result from a gamble that was very positive in expected value – not from bad decisions. Additionally, other items, like having to spend $500 to fix a car you'd owned for less than half a year, seem to penalize bad luck: it's really not your fault if a new car has a manufacturing defect.

2) Most lab measures of decision making have a lot to do with numeracy. However, the real-life outcomes in the DOI, like forgetting your keys or getting divorced, do not. Perhaps they are more about self-control than numeracy. One explanation could thus be that the CRT, being strongly connected to numeracy, explains lab outcomes but not DOI outcomes.

3) More worryingly, it could be that lab studies and real-life outcomes are simply not very well connected at all. I don't think this is the reason, but some studies have failed to find an association between lab measures and real-life outcomes.


Of these explanations, the first is the best for the CRT: if the failure lies in the use of the DOI, then the CRT itself is fine. The second is a little worrying: it tells us that the CRT is maybe not the magic bullet after all – maybe it's tapping numeracy, not cognitive reflection. The third would be the worst. If lab studies don't relate to real outcomes, it calls into question the whole culture of doing lab studies as we're used to.

I don't know enough to pass judgment on any of these causes, but at the moment I'm leaning towards a toss-up between options 1 and 2. The DOI as a measure is not my favourite: it seems to penalize things I consider just plain bad luck. From the perspective of the DOI, sitting at home doing nothing would count as good decision making. Option 3 is definitely too strong a conclusion to draw from this paper, or even from a few papers. What I'd like to see is a good meta-analysis of lab–reality correlations, though I'm not sure one exists.

Stairs vs. Elevators: Applying Behavioral Science

12/5/2015

0 Comments

 
So, last week I had the fantastic opportunity to participate in the #behavioralhack event, organized by Demos Helsinki and Granlund. The point of the seminar was to apply behavioral science, energy expertise and programming skills to reduce energy consumption in old office buildings. We formed five groups consisting of behavioral scholars, energy experts and coders. Our group focused on the old conundrum of how to get people to use the stairs more, and the elevators less.

Our first observation was that – apart from shutting down the elevators altogether – there is unlikely to be a one-size-fits-all magic bullet here. On the other hand, we know from research that people are very susceptible to their environment. Running mostly on System 1, we tend to do what fits the environment. And, unfortunately, our environments support elevators much more than stairs.
Thinking about our own workplaces, we quickly discovered all sorts of features of the environment that support elevator use, but not stairs:

  1. The restaurant menu is in the elevator
  2. There’s a mirror (apparently many women use this to check their hair when arriving)
  3. The carpets for cleaning your feet direct you to the elevator
  4. The staircase might smell, or be badly lit
  5. You can get stuck in the staircase if you forget your keycard

All these features make the elevator easier or more comfortable than the stairs. Considering that the elevator has a comfort factor advantage from the start, small wonder people refrain from using the stairs!

All in all, our proposed solution was quite simply a collection of such small fixes. Since the point of the seminar was to look for cheap solutions, we proposed a sign pointing to the elevator and the stairs, with "encouraging" imagery to associate stairs with better fitness. Fixing the list above so that the stairs also feature a mirror and a menu would also cost almost nothing. In fact, the advantage can even be reversed: remove the mirror etc. from the elevator, and replace them with a poster saying that walking one flight of stairs every day for a year equals a few pounds of fat loss (it does).

For a heavier version of the solution, we noted that you could make stairs vs. elevators a company-wide competition, for example by tracking people in the hallways with wifi, Bluetooth, etc. Additionally, stairways could have screens showing recent news, comics, funny pictures, or anything that fits the company culture. On the other hand, most of the change can probably already be achieved with the cheap suggestions above, so we ended up presenting those as the main point.

From a meta point of view, I had a lot of fun! It was great to apply behavioral science to a common problem – and I was surprised by the amount and quality of ideas we had. Combining people from different fields and backgrounds turned out to be a really good thing. I know it's a kind of platitude, but I now really appreciate how novices can create big insights by asking even really basic questions, since they come without the theory-ladenness of academic expertise :) I have to say that a fun and competent team made for a great evening!

Choice Overload May Not Exist

21/4/2015

2 Comments

 
Have you ever heard of the jam study by Iyengar & Lepper? If you haven't, it goes like this. Customers came into a grocery store and saw a tasting booth for jams. On one day, the booth offered 6 different flavors of jam; on another day, 24. So we have two groups, and we're interested in how their behavior differs with the size of the choice set. Both groups could freely sample the flavors, and after the tasting they got a discount coupon for jams. So which group bought jam more often? Intuitively, we might expect the second group to buy more, since they had more options. But this isn't what happened. The group that got more options in fact bought less!

The study is by far the most famous example of choice overload: more options making choice harder. The traditional explanation for the effect is that once we have too many options, we are so overwhelmed that we'd rather give up altogether, and don't choose anything.

But now, I have a crisis of faith. It’s looking like one of the most famous effects of behavioral decision making might not exist.

I came across a meta-analysis done by Scheibehenne, Greifeneder and Todd in 2010. The result? It looks very much possible that choice overload doesn't exist. Across all the studies that have looked into it, the average effect size is essentially zero. For me, this is big news, since I've used choice overload as an example in several lectures, and I've also mentioned it here before. And now they're telling me these studies could just be chasing noise. If we look at the funnel plot from their meta-analysis, the data seems to fit pretty well with the null hypothesis that the grand mean is zero. But (and there's always a but in science), there does seem to be a group of studies on the right side of the funnel, finding effects of around d > 0.2, which would indicate the existence of choice overload. What could this mean?
[Figure: funnel plot from the meta-analysis]
Well, Scheibehenne et al. did what any proficient meta-analyst would do, and regressed the effect sizes on the characteristics of the studies. Most characteristics had no effect, but a few meaningful ones emerged:
[Figure: meta-regression results]
Main points:
  • the result being published in a journal makes the effect size larger
  • subjects having expertise makes the effect smaller
(I’m ignoring the consumption variable here because that’s basically driven by one study)

The first is what we should expect: science definitely has a publication bias. If your study found a significant effect, it's much more likely to get published. If your study showed no effect, the reviewers might find it uninteresting, demoting it to your desk drawer instead.
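Just for fun, here's a little Python simulation of that desk-drawer effect (all numbers hypothetical): even when the true effect is exactly zero, filtering for "significant" positive results leaves a published literature with a clearly positive average effect.

```python
# Toy simulation of publication bias (hypothetical numbers): the true
# effect is zero, but only studies reaching p < .05 in the positive
# direction make it into journals.
import math
import random
import statistics

random.seed(1)
n = 50                   # per-group sample size of each simulated study
se = math.sqrt(2 / n)    # rough standard error of Cohen's d when d = 0

# 10,000 studies of a true null effect:
observed = [random.gauss(0.0, se) for _ in range(10_000)]
# ...but only "significant" positive results get published:
published = [d for d in observed if d / se > 1.96]

print(round(statistics.mean(observed), 2))   # hovers around zero
print(round(statistics.mean(published), 2))  # clearly above zero
```

The published studies here are exactly the kind of cluster you'd see on the right side of a funnel plot, despite there being nothing to find.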

The second finding also seems to support general intuitions about the matter. For example, in the jam study, the flavors on offer were relatively exotic ones, not everyday favorites. If something familiar like lemon curd had been included, a lot of people would just have grabbed it without looking at the other options. This supports the idea that choice overload may exist especially in situations where we are quite unfamiliar with the options. The finding that prior preferences decrease the choice overload effect is actually a good thing: it shows that the variation between studies is not driven by pure random noise (Chernev, Böckenholt & Goodman, 2010).

So what's the conclusion: does choice overload exist or not? Scheibehenne et al. say that choice overload certainly doesn't look like a very robust phenomenon. However, the group of studies on the right side of the funnel complicates things. They are probably partly due to publication bias, but it's certainly possible that there are conditions that facilitate choice overload but were not captured in the meta-analysis.

I'm still reeling a bit from this finding. It certainly shows that popular science doesn't catch up very fast: I've seen the jam study dozens of times in books, TED talks and presentations about decision making. It's such a famous finding that I even spent a full post describing it. But I had never once seen the meta-analysis, despite it being almost five years old. What can I say – whoops?

There's still no grand truth about whether choice overload is real or not, but it's certainly not looking as solid after this paper. Perhaps it exists, perhaps not. Time will tell, but for now we should stick to "the effect might exist", instead of "this is a really strong behavioral effect that changes decision making everywhere", as some authors have painted it.

Which Outside View? The Reference Class Problem

14/4/2015

0 Comments

 
One of the most sensible and applicable pieces of advice in the decision making literature is to take the outside view. Essentially, this means getting outside your own frame and looking at the statistical data of what has happened before.

For example, suppose you're planning to put together a new computer from parts you ordered online. You've ordered the parts, and feel that this time you know most of the common hiccups of building the machine. You estimate that it will take you two weeks to complete. However, in the past you've built three computers – and they took 3, 5 and 4 weeks, respectively. Once the parts arrived later than expected, once you were at work too much to manage the build, and once you had some issues that needed resolving. But this time is different!

Now, the inside view says you feel confident you've learnt from your mistakes. Therefore, estimating a shorter build time than your history suggests seems to make sense. The outside view, on the other hand, says that even if you have learnt something, there have always been hiccups of some kind – so they are likely to happen again. Hence, the outside view would put your build time around the average of your historical record.

In such a simple case it's quite easy to see why taking the outside view is sensible, especially now that I've painted the inside view as a sense of "I'm better than before". Unfortunately, the real world is not this clean, but much messier. In the real world, the question is not whether you should use the outside view (you should), but which one? The problem is that you've often got several options.

For example, suppose you were recently appointed project manager in a company, and you've led projects there for a year now. Two months ago, your team got a new integration specialist. Now you're trying to estimate how much time it would take to install a new system for a very large corporate client. You'd like to use the outside view, but don't know which one. What's the reference class? All projects you've ever led? All projects you've led in this company? All projects with the new integration specialist? All projects for very large clients?

As we can see, picking the outside view to use is not easy. In fact, this problem – a deep philosophical problem in frequentist statistics – is known as the reference class problem. All the possible reference classes in this example make some sense. The underlying problem is one of causality: you have incomplete knowledge of which attributes impact your success, and by how much. Does it matter that you have a new integration specialist? Are these projects similar to ones you did at your previous company? How much do projects differ by client size? If you could answer all these questions, you'd know which reference class to use. But if you knew the answers, you probably wouldn't need the outside view in the first place! So what can you do?

A practical suggestion: use several reference classes. If the resulting estimates differ by a lot, then the situation is genuinely difficult to estimate – but finding this out hopefully improves your sense of what drives the project's success. If the estimates don't diverge, then it doesn't really matter which outside view you pick, and you can be more confident in the estimate.
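To illustrate the suggestion, here's a quick Python sketch. All the reference classes and past durations below are invented for the project-manager example:

```python
# Sketch of using several reference classes at once: compute an estimate
# from each, then check how much they disagree. All numbers hypothetical.
import statistics

# Hypothetical past project durations (weeks), grouped by reference class:
reference_classes = {
    "all projects I've led":        [10, 14, 9, 16, 12, 11],
    "projects at this company":     [9, 12, 11],
    "projects with new specialist": [14, 16],
    "projects for large clients":   [16, 12, 14],
}

estimates = {name: statistics.mean(xs) for name, xs in reference_classes.items()}
for name, est in estimates.items():
    print(f"{name}: {est:.1f} weeks")

spread = max(estimates.values()) - min(estimates.values())
print(f"spread between reference classes: {spread:.1f} weeks")
```

A large spread is itself informative: it tells you the choice of reference class really matters here, and points at which attribute (the new specialist? the client size?) is driving the disagreement.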

Yes for more open data in politics

31/3/2015

0 Comments

 
With an election coming up, last weekend I found myself thinking about politics (prompted by some good discussions with friends). How could one improve political decision making?

In modern politics, the parliament is a lot like a black box. Every fourth year, we vote in the parliamentary election and select 200 new (or the same old) faces to make decisions on national matters. The problem is that we voters really don't know much about what goes on in there. Sure, the media covers some issues, but more often than not the reporting isn't very good. Reporting just the outcome of a vote doesn't tell much about anything besides the result. And that is not enough.

If we consider what a good decision is like, it's pretty clear that it has to be based on something. Sure, you can sometimes get it right by just guessing – but nobody would call that a good decision strategy. A good decision has to be based on evidence. Are politicians using evidence properly, then? At the moment, nobody knows. To really evaluate politicians, we need more than their voting patterns. We need some inkling of why they chose the way they did. It's not enough to say "down with cars!" One should at least provide some justification, like "down with cars, because they hurt our health!" Even better, we would see deeply researched judgments, really explaining how and why the policy voted for is the best way to reach an important goal. For example, someone might say "down with cars, because they hurt health so much that we could say their net impact on wellbeing is negative".

Unfortunately, we hardly ever see this kind of reasoning. That's why I think we need more open decision making, especially regarding the analyses and data that our political decision makers use. If we have more open data, we also have more minds looking at it and evaluating whether the inferences politicians have drawn from it are really correct. Publishing at least a substantial minority of the data used in the political decision making process would invite all kinds of NGOs, interest groups and analysts to really comb through it. At the moment, none of that is happening regularly. When we do have access to data, it's often related to decisions that have already been made! For obvious reasons, that's not very helpful. I think it's fair to say that there's room for improvement, and for moving towards more open, evidence-based policy generation.

The point here is not that politicians would be especially stupid or untrustworthy. No, the point is that they're people just like you and me (albeit with more support staff). And just like you and me, they make mistakes. That's why you sanity check your recommendations with your boss before sending them to the client. That's why I get feedback from my supervisor before I send my paper to a journal. We're fallible human beings, all of us. But with more people looking at the same things, we can average out many biases and avoid obvious mistakes. To do that in politics, we need more open data.
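The averaging point can be made concrete with a tiny simulation. This is a toy model with invented numbers, and it comes with a caveat: averaging cancels *idiosyncratic* errors (each reviewer misjudging in their own direction), not a bias that everyone shares. Under that assumption, the expected error of the average judgment shrinks roughly as 1/sqrt(n).

```python
# Toy simulation: many independent reviewers, each with their own
# random error, judge the same quantity. Averaging their judgments
# lands much closer to the truth than any single judgment typically does.
# All parameters (true value 100, error spread 20) are invented.
import random

def mean_abs_error(n_reviewers, true_value=100.0, error_sd=20.0,
                   trials=500, seed=0):
    """Average absolute error of the mean judgment of n reviewers."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        avg = sum(true_value + rng.gauss(0, error_sd)
                  for _ in range(n_reviewers)) / n_reviewers
        total += abs(avg - true_value)
    return total / trials

solo_error = mean_abs_error(1)    # one reviewer
crowd_error = mean_abs_error(50)  # fifty independent reviewers
```

With fifty reviewers the average judgment is reliably far closer to the true value than a lone reviewer's guess – which is the case for letting NGOs, interest groups and analysts look at the same data the politicians saw.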


Decisions as Identity-Based Actions

24/3/2015


 
This semester I had the exciting chance of teaching half of our department's Behavioral Decision Making course. What has especially gotten me thinking is James March's book chapter Understanding How Decisions Happen in Organizations, from the book Organizational Decision Making, edited by Zur Shapira. Similar points can be found in March's paper How Decisions Happen in Organizations.

In the chapter, March presents all kinds of questions and comments directed at organizational scholars. Basically, his main point is that the normative expected utility theory is perhaps not a very good model for describing organizational decisions. No surprises there – modelling organizational politicking through utilities is pretty darn difficult. What did catch my eye was that March has a pretty nice description of how some organizations do muddle through.

This idea concerns decisions as identity-based actions. The starting point is that each member of an organization occasionally faces decision situations. These are then implicitly classified into different classes: for example, HR decisions, decisions at strategy meetings, decisions after lunch, and so on. The classification depends on the person, of course. What's key are the next two steps: these classes are associated with different identities, which then become the basis of decisions through matching. This way, the decision gets made by basing the choice on rules and the role, not just on the options at hand.
So the manager may adopt a different identity when making strategy decisions than when thinking of whom to hire for his team. The decision is not based on a logic of consequence, but rather on a logic of appropriateness – we do what we ought to be doing in our role. "Actions reflect images of proper behavior, and human decision makers routinely ignore their own fully conscious preferences. They act not on the basis of subjective consequences and preferences but on the basis of rules, routines, procedures, practices, identities, and roles" (March, 1997). So rather than starting from the normative question of what they want, and which consequences are then appropriate, decision makers are implicitly asking: "Who am I? What should a person in my position be doing?"
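March's classify-then-match mechanism is simple enough to caricature in code. The sketch below is a toy model of my own (the situation classes, identities and rules are all invented examples): a situation is first classified, the class evokes an identity, and the identity's rules of appropriateness produce the decision – at no point does the decision maker consult their own preferences over outcomes.

```python
# Toy sketch of the logic of appropriateness: class -> identity -> rule.
# All classes, identities and rules below are invented illustrations.

# Step 1: situation classes evoke identities (depends on the person).
IDENTITY_FOR_CLASS = {
    "hiring": "team builder",
    "strategy meeting": "company strategist",
}

# Step 2: each identity carries its rules of proper behavior.
RULES_FOR_IDENTITY = {
    "team builder": "favor candidates who complement the team",
    "company strategist": "protect the long-term position",
}

def decide(situation_class):
    """Match situation to identity, then act on the identity's rule."""
    identity = IDENTITY_FOR_CLASS[situation_class]
    return RULES_FOR_IDENTITY[identity]
```

Note what is *absent*: there is no comparison of options or consequences anywhere, which is exactly March's contrast with expected utility theory.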

I feel that this kind of rule-based or identity-based behavior is a two-edged sword. On the one hand, it offers clear cognitive benefits. A clear identity for a certain class of decisions saves you the trouble of meta-choice: you don’t have to decide how to decide. When the rules coming from that identity are adequate and lead generally to good outcomes, it’s an easy procedure to just follow them and get on with it. On the other hand, the drawbacks are equally clear. Too much reliance leads to the “I don’t know, I just work here” phenomenon, in which people get in too deep in their roles, forgetting that they actually have a mind of their own.

Which way is better, then? Identity-based decisions, or controlled individual actions? Well, I guess the answer looks like the classic academic's answer: it depends. It depends on the organization and the manner of action: how standardized are the problems that people face, is it necessary to find the best choice option or is satisficing enough, and so on. Of course, it also depends on the capabilities of the people involved: do they have the necessary skills and mindset to handle it if left without guiding rules and identities? Or do they need a little bit more support for their decisions? Questions like these are certainly not easy, but every manager should be asking them one way or another.