Bias Hunter

Basic Biases: The Planning Fallacy

27/10/2014

I figured it would take a bit more than an hour to write up this post – in reality, it took over two hours. This is probably an experience many people share, whether from school, work, or personal life: everything takes much longer than expected. Even though I’ve written countless essays, blog posts and papers – and they’ve commonly taken longer than estimated – I still didn’t see this coming. There’s a term in decision research for this phenomenon: the planning fallacy. In short, it means we are quite idiotic planners: we never see the delays and problems coming.
The planning fallacy is ubiquitous to the point of being scary. As Jonathan Baron recounts in his excellent book Thinking and Deciding (pp. 387-388):
Projects of all sorts, from massive works of construction to students’ papers to professors’ textbooks, are rarely completed in the amount of time originally estimated. Cost overruns are much more common than underruns (to the point where my spelling checker thinks that only the former is an English word). In 1957, for example, the Sydney (Australia) Opera House was predicted to cost $17 million and to be completed in 1963. A scaled-down version opened in 1973, at a cost of $102 million! 
When it comes to predicting completion times, there seems to be no such thing as learning from history. Every project we begin is treated as a single instance, not as belonging to a category of similar projects.

In fact, herein lies one of the keys to overcoming the problem. Relating a single project to a reference class of similar projects encourages statistical, data-driven thinking and reveals the likely delays. In an experiment (Buehler et al., 1994), students were much better at predicting when they would complete assignments when they were told to think about past assignments and connect them to the present one.
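To make the reference-class idea concrete, here is a minimal sketch in Python. The past-project numbers are entirely hypothetical; the point is simply that a new "inside view" estimate can be corrected by the historical ratio of actual to estimated duration.

```python
# Reference-class adjustment: correct a new estimate using the
# historical ratio of actual to estimated completion time.
# All numbers below are hypothetical.

past_projects = [
    # (estimated_days, actual_days)
    (5, 9),
    (10, 14),
    (3, 7),
    (20, 31),
]

# Mean overrun factor across the reference class.
overrun = sum(actual / estimated
              for estimated, actual in past_projects) / len(past_projects)

new_estimate_days = 8  # the optimistic inside-view estimate
adjusted = new_estimate_days * overrun
print(f"Inside view: {new_estimate_days} days, "
      f"outside view: {adjusted:.1f} days (overrun factor {overrun:.2f})")
```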

In my opinion, there are two kinds of errors underlying the planning fallacy:

1) Thinking we will have more time or energy in the future
2) Thinking we have prevented the errors that have happened before

The first case is especially relevant to personal projects. At any given time, several projects are demanding our attention. When we predict the future, we tend to believe we will have more free time then than we do now. Since all those meetings, unpredictable surprises, and events we just want to attend are not yet in view, the future looks promisingly empty. However, once it arrives, it is very likely to look much like the present: the Sunday afternoon I was supposed to spend fixing my bike and catching up on reading was instead spent at a delightful brunch we had agreed on just a few days in advance. And so on. It never turns out like you planned.
[Image: Plan vs. reality]
In the second case – and I think this might be more relevant in complex projects – we believe that we have in fact learned from history, that we’ve plugged the gaps that proved our downfall in previous projects. Unfortunately, this is exactly what we thought the previous time, too! Surprising problems in a project are surprising precisely because they are not the same errors as before. Just around the corner, as before, is something we didn’t think of, something that catches us off guard. If only we remembered that it’s always been like this…

Of course, I’m deliberately painting a gloomy picture here. We do learn from our mistakes and tend not to repeat them. But the point is that there will probably always be new mistakes – and this ought to be reflected in our planning. Nobody is better off when projects almost certainly overrun their deadlines. For example, by some estimates only 1% of military purchases are delivered on time and on budget (Griffin & Buehler, 1999)! There’s certainly room for improvement.

And that improvement is possible. According to Buehler et al. (1994), outside observers are less susceptible to the planning fallacy than the actors themselves. In simple terms: those not invested in the plan are more likely to take a conservative view of the completion time. Unfortunately, they are also likely to err in the opposite direction, giving overly pessimistic completion times! So a sensible approach is to estimate the actual time by averaging your own estimate and an outsider’s estimate. This is hardly optimal, but it’s better than relying on a single biased estimate.

So the recipe for better plans seems quite straightforward:

- stopping to think
- considering past projects and their results
- getting an outsider’s opinion and aggregating the estimates (see the sketch below)
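As a toy illustration of that last step, here is what aggregating the two biased estimates might look like; the numbers and the simple unweighted mean are assumptions, not a validated recipe.

```python
def aggregate(estimates: list[float]) -> float:
    """Unweighted mean of independent estimates."""
    return sum(estimates) / len(estimates)

my_estimate = 8.0         # days; insiders tend to be optimistic
outsider_estimate = 15.0  # days; outsiders tend to be pessimistic

print(f"Combined estimate: {aggregate([my_estimate, outsider_estimate]):.1f} days")
```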

Benefits of Decision Analysis

19/10/2014

Why is decision analysis a good idea in the first place? Why should we focus on making some decisions supported by careful modelling, data gathering and analysis? Here, I provide some arguments as to why decision analysis is beneficial. Of course, not all decisions benefit from it: some considerations are too unimportant to warrant much analysis, and some might be simple enough to not need it. But then again, many problems are important, or complex, or politically hot. For these problems, decision analysis can be especially beneficial.

Identification of the best alternative

The main point of decision analysis (DA) is of course to arrive at the best possible alternative, or at least a “good enough” one. This is the focus of most discussions of DA, so I won’t dwell on it here. How to determine the best feasible option is a very hard problem in its own right, deserving a book of its own.
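Still, to give a flavor of the simplest case, here is a hedged sketch of a weighted-sum comparison in Python. The alternatives, criteria, scores and weights are all invented for illustration; real DA involves much more careful elicitation of each of them.

```python
# A minimal weighted-sum comparison of alternatives.
# Criteria weights and the 0-10 scores are hypothetical.

weights = {"cost": 0.5, "quality": 0.3, "speed": 0.2}

alternatives = {
    "Option A": {"cost": 7, "quality": 5, "speed": 9},
    "Option B": {"cost": 4, "quality": 9, "speed": 6},
    "Option C": {"cost": 8, "quality": 6, "speed": 4},
}

def overall_value(scores: dict) -> float:
    """Weighted sum of criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank alternatives from best to worst by overall value.
for name in sorted(alternatives,
                   key=lambda a: overall_value(alternatives[a]),
                   reverse=True):
    print(f"{name}: {overall_value(alternatives[name]):.2f}")
```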

Identification of objectives

Textbook examples of decision analysis start from a defined problem, and the point is to solve it satisfactorily. Reality, however, starts from a different point: the first problem is defining the problem itself. In fact, as a few classic books in DA emphasize, formulating the problem is one of the hardest and most important steps of DA. Much of the benefit of DA comes from forcing us to formulate the problem carefully, preventing us from pretending to solve complex, dynamic issues by intuition alone.
[Image: The first step of decision analysis!]
Creation of new alternatives

Many descriptions of DA also assume that the alternatives are already there, and that the tricky part is comparing them. Unfortunately, in actual circumstances the decision maker or their staff are commonly responsible for coming up with the alternatives, too. This is likewise critical for success, because an alternative nobody thought of can never be chosen – no matter how good it would have been. Duke University’s DA guru Professor Keeney has emphasized this heavily.

Analysis of complex causal relationships

It goes without saying that many issues are complex and difficult to solve – that’s why decision analysis is used, after all. A benefit of thinking the model through properly is that it can reveal some of our unvetted assumptions, even radically changing our perception of an issue. For example, I was once involved in a project setting up a new logistics center for a company. Their goal was to increase customer satisfaction through shorter delivery times. After careful analysis, it turned out that the new center wouldn’t reduce delivery times by very much. So someone thought, “wow, delivery time must be really important to the customers to warrant this,” and looked up the satisfaction survey data. Well, it turned out it wasn’t very important: current deliveries were well within the limit customers defined as satisfactory. In fact, it was clear from the surveys that to increase satisfaction they ought to be doing something else entirely, like improving customer service or product quality! It was an interesting finding, but it took some time to convince the directors that logistics really wasn’t their problem.
[Image: A little analysis needed.]
Quantification of subjective knowledge

Somewhat related to the previous example, many analyses run up against the problem of uncertain or vague knowledge. Organizations especially tend to be full of people who are very knowledgeable about the business and its environment, but whose knowledge isn’t recorded anywhere for others to use. It often goes something like this. First, the analyst finds out he needs some data on, say, the failure rate of delivery cars. The analyst asks a Business Development Manager, who doesn’t know and tells the analyst to use some estimate. The analyst doesn’t know either, so he ends up interviewing some delivery people, uncovering subjective, unquantified knowledge about the actual failure rate. There’s nothing wrong with subjective knowledge – it’s just of no use if the decision maker isn’t aware of it! By uncovering and quantifying subjective knowledge in the organization, the analysis can benefit the company in the long term, too, since there is now more knowledge to base future decisions on.
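One common way to quantify such knowledge is to elicit a lowest-plausible, most-likely, and highest-plausible value and treat the result as a probability distribution. A minimal sketch, with made-up delivery failure numbers:

```python
import random

# Elicited from delivery staff (hypothetical numbers):
# lowest plausible, most likely, and highest plausible failure rate.
low, mode, high = 0.01, 0.03, 0.10

# Monte Carlo sampling from a triangular distribution
# built from the elicited values.
random.seed(42)
samples = [random.triangular(low, high, mode) for _ in range(10_000)]

mean_rate = sum(samples) / len(samples)
p_above_5pct = sum(s > 0.05 for s in samples) / len(samples)
print(f"Mean failure rate: {mean_rate:.3f}")
print(f"P(failure rate > 5%): {p_above_5pct:.2f}")
```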

Creation of a common decision framework

Speaking of the future, one final benefit of DA is that it provides the decision maker with a decision framework – a model to reuse the next time a similar decision comes up. This is especially beneficial in organizations, which easily get stuck in meta-level issues: arguing about how to make decisions in the first place.
In the best case, DA can provide an almost ready-made framework to follow, so that managers can focus on actually making the decision. However, it’s important to recognize that different decisions have different stakeholders, and to take that into account. For example, a new logistics center may be an issue mostly about operational efficiency, but a new factory demands the inclusion of environmental and labour organizations. Simply reusing a previous DA framework does not ensure it’s a good fit for the new problem. But the DA frame can be something to start from, which can help reduce political conflicts between stakeholders. In fact, there’s nothing to prevent using DA from different perspectives. For example, DA has been used successfully in problems such as oil industry regulation, or moving from a segregated schooling system to a racially integrated one. Both politically hot examples can be found in Edwards and von Winterfeldt’s classic.

If you wanted to summarize the benefits of DA in a sentence, it could look something like this: creating structure and helping to use it. In effect, DA helps us think better by forcing us to consider things more thoroughly and explicitly. It’s a method that helps us deal with uncertainty and still make a decision.

More Alternatives = Choice Overload

13/10/2014

Often when you’re roaming the aisles of a supermarket, you may come across a marketing display for some new product, with a tasting included. What follows the tasting is an invitation to buy (at a special price!), choosing from a selection of the brand’s products. With some brands the selection might be 20 flavors, with others just one.

The surprising finding: giving people more alternatives is not necessarily a good idea.

In a study by Iyengar and Lepper (2000), two groups of participants were given the opportunity to taste jams in a gourmet food store. One group could sample from six flavors, while the other sampled from 24 (including the six the first group had). The result: people sampling from the smaller set were more likely to buy.
Classical economics would presuppose that more choice options can never be worse. Suppose you are a person who would choose the blackcurrant jam out of the six alternatives. If you’re given 18 other jams to sample, sure, you might find one you like even better – but you could always still pick the blackcurrant. As the experiment shows, however, this isn’t the way it works. Since fewer people bought jam in the second condition, there must be people who would have made a choice from the small set, but who were overwhelmed by the larger set and chose the status quo of no jam.

Emphatically, what I’m not saying is that ever fewer alternatives is always better. Who would want to eat in a restaurant with only one dish? No, the point is that there’s a sweet spot: too few and too many alternatives are both detrimental. With too few alternatives, many customers will not find a product to their liking – thus reducing sales. With too many, customers suffer from choice overload and give up altogether. Or, even if they manage to pick something from the options, they will be less satisfied with their choice (Iyengar & Lepper, 2000).

What that sweet spot is, well, that depends on who is making the decisions and in what context. For example, a single mom with two kids in a grocery store won’t have a lot of extra energy to consider 24 different jams. On the other hand, an executive building a new factory may well have a system to evaluate tens or even hundreds of locations.

Like a writer has to think about his audience, so a marketer must think of the decision makers. Are they likely to have supporting software or assistants aiding in the decision? Will they choose the product after a careful analysis, or on a whim after drinks?

In terms of marketing advice, I think the research points to a clear conclusion. Whether you’re in B2C or B2B business, keep your assortment concise. Too many alternatives only lead to choice overload. If your website lists all 20 different parameters I can change about the consulting project, I’m likely to just get confused and choose the status quo: no business with you. It’s wiser to make a compelling case for the few main project types you do; we can haggle over the finer details later.

Nudging Yourself to Better Choices

7/10/2014

Studying the different biases and human irrationality may at times look like a depressing task. After all, one is mostly finding out all the ways we screw up, all the ways we behave suboptimally and just make stupid decisions. Thankfully, the same findings can be used in the other direction – helping us make wiser, sounder decisions. This is usually called nudging, a term coined in Thaler and Sunstein’s prize-winning book Nudge.

At the heart of nudging is the idea that we don’t have unlimited amounts of willpower and energy. We get lazy, tired, and worn out, and sometimes we just don’t pay attention. However, coercing people would be immoral – we all have the right to choose, no matter how bad the choice. That’s why nudging focuses on the choice architecture: changing the decision situation so that people will in fact choose better, i.e. be more likely to choose what they want in the long term instead of succumbing to willpower or attention deficits in the immediate situation. It’s like building hallways that make more sense and lead you more directly to where you want to go. You can still choose to go someplace else; getting what you (usually) want has just been made a little easier.
[Image: In need of a little nudging?]
Thaler and Sunstein’s book focuses on the implications of nudging for public policy. But in this post, I’ll take a narrower perspective and look at how you can nudge yourself to better decisions.

The main finding of the last few decades is that we have two main ways of making choices. The first is System 1: fast, associative, and unreflective. System 1 is the one we use most of the time, because it’s easy and requires little effort. System 2, on the other hand, is slow, reflective, and effortful – one big reason why we cannot use it all the time. As it stands, System 1 is quite error-prone: with bad decision architecture, it can focus on the wrong cues and lead to really stupid choices. But with a good architecture, choosing is smooth sailing. Choosing with System 2 is tough and effortful, but should in most cases lead to a good choice.

This very rough and simplified theory leads to two main ways to nudge: improving the architecture for a better System 1 choice, or engaging System 2 for the choice. Both are legitimate and powerful options. Which to use – well, that depends on the context. Let’s look at some known examples:

The 20 second rule

You’re at home, happily watching your favorite TV show. As is often the case, you feel a slight twinge of hunger – a snacking hunger. What do you eat? Usually, at this point people go to the kitchen and get something that’s within easy reach and doesn’t need preparing – like chocolate, or chips. But what if the chips were on the top shelf? Would you still get them?
[Image: Still, it's just a nudge - when there's a will, there's a way...]
That’s the point of the 20 second rule: you’re more likely to choose something requiring little effort. Just having the chips on the top shelf makes you less likely to get them, just like placing the scones out of reach at a meeting will sharply decrease their consumption. This is such a common tip that there are tons of examples: laying out your running gear for the morning, hiding the remote so you’ll read books, or setting up a site blocker that requires completing a time-consuming task before Facebook will load. All of these have the same aim: guiding your System 1 towards the choices you would – in a more energized and reflective mood – approve as the better ones.

Default routines

A variation of the 20 second rule is to create default routines: patterns that are beneficial for you and that you will execute even when tired. For example, in our PhD seminars we have time and again been told to write in the morning, every day we come to work. For one thing, writing is important, and this pattern ensures I’ll have time for it. For another – and I think this is even more important – having writing as a default routine ensures I’ll start writing even when tired, confused, or just “not feeling like it”. Usually, once I get off the ground, I’ll be in the mood.
[Image: Ready to write any moment now!]
Another example is a guy from SF I once talked to. He had a habit of cutting up about 500 g of vegetables whenever he arrived home from work. Having done that, it was easy to blend them into a smoothie or make a salad. And having them already cut up usually meant he ate them, too, since he wouldn’t want to waste food. I thought this was ingenious!

Blocking easy cues

For engaging System 2, it can help to block the cues that System 1 would like to use. For example, a known problem is the halo effect: perceiving one good attribute causes us to evaluate other attributes more highly, too. People tend to think better-looking people are also more intelligent, for instance. If you’re evaluating project proposals, you could hide the names of the proposers and evaluate the proposals on their own terms. Having the names visible might influence you in a bad way – after all, you wouldn’t want to approve a project just because it was proposed by a colleague you like to play tennis with? Or, to remove the effect of visual design, have the proposals submitted on a template so they all look alike (a lot of foundations seem to do this). Making decisions based on nameless template proposals is going to be harder – but that’s the point. You will necessarily have to focus on the content, since System 1 no longer has much to go on. And if you evaluate diligently, your System 2 choices will outperform the System 1 ones.
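As a trivial illustration of cue-blocking, here is a sketch of how a reviewer's tooling might strip names and ordering before evaluation. The proposal fields and contents are invented for the example:

```python
import random

# Proposals with identity cues attached; contents are hypothetical.
proposals = [
    {"proposer": "A. Colleague", "summary": "Expand the east warehouse."},
    {"proposer": "B. Stranger", "summary": "Automate order picking."},
]

def blind(proposal: dict) -> dict:
    """Return a copy with identity cues stripped."""
    return {k: v for k, v in proposal.items() if k != "proposer"}

blinded = [blind(p) for p in proposals]
random.shuffle(blinded)  # also remove ordering cues

for anon_id, p in enumerate(blinded, start=1):
    print(f"Proposal #{anon_id}: {p['summary']}")
```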

So, as a wrap-up, here are the two main pathways to nudging towards better choices:

1. Helping System 1 towards better options through better choice architecture

2. Engaging System 2 by blocking System 1’s easy cues

Which option to go for depends on the case. The more complex the decision at hand, the better option 2 is going to be. Conversely, the more often a choice situation recurs, the more sense it makes to use System 1 and save energy.
