Bias Hunter

The Nonlinear Life as a Random Walk

27/11/2015

The past two months, I've been completing the University of Michigan's fantastic Model Thinking course, available for free on Coursera. There's so much to love about the modern world: you can learn interesting things through quality teaching, no matter where you are (well, you need wifi), no matter when. And it doesn't cost a cent!

Anyway, the course had a section about random walks, and it got me thinking. A while back I wrote about how the nonlinear life and our linear emotions aren't exactly optimally suited to each other. Your brain craves signs of progress, so it can reward you with a burst of feel-good chemicals. Unfortunately, the nonlinear life doesn't work like that. Often, you can spend days or weeks slaving away at the office/studio/wherever, not really moving forward – or even taking two steps back for each step forward. Despite the hours you put in, the article/thesis/design never seems to be finished, making you question whether you're really cut out for this kind of job. Perhaps you'd do the world a favor by setting your sights lower and working as a sales clerk instead.

Now, while watching one of the course lectures, I suddenly realized that creative nonlinear work is exactly a random walk! I don't claim this is a unique insight or anything – I'm sure many of you have realized it before. But for the fun of it, it might be a nice exercise to show with a random walk model how the nonlinear life functions. At least in my own case, models often help me see the bigger picture and forget about the short-term noise. And who knows, maybe this will help to quell those linear emotions, too.

So, a random walk is very simple. In this case, let's assume we have a project with a goal we're trying to reach. Arbitrarily, let's say that completion means reaching a threshold of 100 points. Of course, these numbers are completely make-believe, pulled from my magical hat. Let's further assume that in each unit of time – say 1 unit equals 1 day – there are three possibilities: we make progress, stay where we are, or take steps backward. In my personal experience, this is an OK model of work: sometimes you're actually making progress, and things move smoothly. Sometimes, though, you're actually hurting your project, for example by programming bugs into the software that need to be fixed later on (this just happened to me two weeks ago). Most often, though, you're trying your best, but nothing seems to work. Maybe you're stuck in a dead end with your idea and need to change tack. Maybe you're burdened with silly tasks that have nothing to do with the project. Well, I'm sure we all have these kinds of days.
So let's again use my magical hat and pull out some probabilities for these options. Let's say you have a 5% chance of making a great jump forward (10 points), a 25% chance of making 3 points of progress, a 55% chance of getting stuck (0 points), a 10% chance of making a mistake (-2 points), and a 5% chance of doing serious damage (-6 points). Now we just simulate this across time and get a graph showing your cumulative progress towards the goal (yes, I'm doing this in Excel):
[Figure: one simulated run of cumulative progress towards the 100-point goal]
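(For those who'd rather script this than wrestle with Excel, here's a minimal sketch in Python. The step sizes, probabilities, goal, and 150-period horizon are the ones above; everything else – the function name, the seed – is just illustration.)

```python
import random
from itertools import accumulate

# Step sizes and their probabilities, pulled from the magical hat:
# +10 (5%), +3 (25%), 0 (55%), -2 (10%), -6 (5%).
STEPS = [10, 3, 0, -2, -6]
WEIGHTS = [0.05, 0.25, 0.55, 0.10, 0.05]

GOAL = 100      # the project is "done" at 100 points
PERIODS = 150   # simulation horizon, as in the graph above

def simulate_steps(periods=PERIODS, seed=None):
    """Draw one random step (a day's worth of progress) per period."""
    rng = random.Random(seed)
    return [rng.choices(STEPS, weights=WEIGHTS)[0] for _ in range(periods)]

steps = simulate_steps(seed=1)
progress = list(accumulate(steps))  # cumulative progress, i.e. the graph
print(f"reached the goal: {max(progress) >= GOAL}, final level: {progress[-1]}")
# Sanity check: the expected step is 0.05*10 + 0.25*3 - 0.10*2 - 0.05*6
# = 0.75 points/day, so a typical run needs about 100/0.75 ≈ 133 days --
# but individual runs vary wildly around that, as the post argues.
```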
So, in the graph there are several stretches where progress just goes downhill, or plateaus for long periods. Even though the numbers are made up, I feel the graph is actually a pretty decent example of how nonlinear work often feels. However, there's still an additional complication: the emotions.

Suppose our emotions work as follows. If you're making progress, you feel good – mostly irrespective of how much progress you're making. Suppose the same holds for setbacks: it hurts, but it hurts almost as much to hunt a bug for two hours as for the full day. Finally, I'll assume that if you're not moving anywhere, you inherit the feeling from the day before. Now, I realize this is probably not how emotions really work (we're often annoyed by our administrative duties, for example). But on the other hand, when I've spent a day at a dull seminar, I seem to find myself looking back a bit to evaluate the progress. The "inherit from t-1" rule tries to describe this: I feel good if the recent past has been good, and annoyed if it wasn't successful. Why just t-1 and not the actual progress level? Well, I've also found that it's really hard to evaluate how far along the project actually is, which makes that option unrealistic. And when looking back, our memories are much stronger for the immediate past than for the distant past. In short, I'm modeling short-sightedness here. The actual progress-emotions payoff table looks like this:
Progress in period t      Emotion in period t
positive (+3 or +10)      good
zero                      same as in period t-1
negative (-2 or -6)       bad
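In code, the payoff table is only a few lines on top of the walk sketched earlier (again just a sketch; the day-zero mood is my own arbitrary assumption):

```python
def emotions_from_steps(steps, start=-1):
    """Map each period's step to an emotion: +1 (good) or -1 (bad).

    Progress feels good and setbacks feel bad, regardless of size;
    a zero-progress day inherits the feeling from the day before.
    """
    feelings, mood = [], start  # start=-1: arbitrarily assume a glum day 0
    for step in steps:
        if step > 0:
            mood = 1
        elif step < 0:
            mood = -1
        # step == 0: mood carries over from t-1
        feelings.append(mood)
    return feelings

feelings = emotions_from_steps(steps)
print(f"{feelings.count(1)} good periods, {feelings.count(-1)} bad periods")
```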
So with these assumptions, we get the following graph portraying emotions:
[Figure: simulated emotional state (good/bad) across the 150 periods]
Now this is pretty interesting! You can see 1) that there's a lot of fluctuation back and forth, and 2) that there are still "runs", i.e. the same emotional state tends to linger for a while. If you run the numbers, this particular string of successes and failures gives 99 positive time periods and 51 negative ones, out of the 150 periods I ran the simulation for. I think the graph is quite a good summary of how the nonlinear life often feels: you love your job, but you're not above hating it when things are not going well.

A final word of warning: this was of course just one simulated outcome. With the exact same parameters, you can get projects that never finish, that run into negative progress, that finish in less than 30 periods, and so on. Those runs are not as nice for presentation purposes, but they capture the great amount of uncertainty in a nonlinear project. Sometimes it just falls apart, and after 50 periods you're back exactly where you started. Or a project you thought would take 6 weeks takes 16 instead. Well, I'm sure everyone has had these experiences.

You Are Irrational, I Am Not

29/10/2015

For the past month or so I've been reading Taleb's The Black Swan, now for the second time. I'm very much impressed by his ideas, and by the forceful, in-your-face way he writes. It's certainly no surprise that the book has captivated the minds of traders, businesspeople, and other practitioners. The book is extremely good – good enough to recommend as a decision-making resource. Taleb identifies a cluster of biases (or, more exactly, puts together research from other people to paint the picture), producing a sobering image of just how pervasive the neglect of Black Swans is in our society. And he's a hilariously funny writer to boot.

But.

Unfortunately, Taleb – like everyone else – falls into the same trap we all do. He's very adept at poking other people about their biases, but he completely misses some blind spots of his own. Now, this is not evident in The Black Swan itself – the book is very well conceptualized and a rare gem in its clarity about what it is and what it isn't. The problem only becomes apparent in the follow-up, the monstrous volume Antifragile. When reading it a few years ago, I remember being appalled – no, outraged – by Taleb's lack of critical thought towards his own framework. In the book, one gets the feeling that the barbell strategy is everywhere, and explains everything from financial stability to nutrition to child education. For example, he says:
I am personally completely paranoid about certain risks, then very aggressive with others. The rules are: no smoking, no sugar (particularly fructose), no motorcycles, no bicycles in town [--]. Outside of these I can take all manner of professional and personal risks, particularly those in which there is no risk of terminal injury. (p. 278)
I don’t know about you, but I really find it hard to derive “no biking” from the barbell strategy.

OK, back to seeking out irrationality. Taleb certainly does recognize that ideas can have positive and negative effects. Regarding maths, at one point he says:
[Michael Atiyah] enumerated applications in which mathematics turned out to be useful for society and modern life [--]. Fine. But what about areas where mathematics led us to disaster (as in, say, economics or finance, where it blew up the system)? (p. 454)
My instant thought when reading the above paragraph was: “well, what about the areas where Taleb’s thinking totally blows us up?”

Now, the point is not to pick on Taleb personally. I really love his earlier writing. I'm just following his example, and taking a good, personified example of a train of thought going off track. He did the same in The Black Swan, for example by picking on Merton as an example of designing models on wrong assumptions – and, in a wider perspective, of models where mathematics steps outside reality. In my case, I'm using Taleb as an example of the ever-present danger of critiquing other people's irrationality while forgetting to look out for your own.

Now, the fact that we are better at criticizing others than ourselves is not exactly new. After all, even the Bible (I would never have guessed I'd be referencing that on this blog!) says: "Why do you see the speck that is in your brother's eye, but do not notice the log that is in your own eye?"
In fact, in an interview in 2011, Kahneman said something related:
I have been studying this for years and my intuitions are no better than they were. But I'm fairly good at recognising situations in which I, or somebody else, is likely to make a mistake - though I'm better when I think about other people than when I think about myself. My suggestion is that organisations are more likely than individuals to find this kind of thinking useful.
If I interpret this loosely, it seems to be saying the same thing as the Bible quote – just in reverse! Kahneman seems to think – and I definitely concur – that seeing your own mistakes is damn difficult, but seeing others' blunders is easier. Hence, it makes sense for organizations to try to form a culture where it's OK to say that someone has a flaw in their thinking – a culture that prevents you from explaining absolutely everything with your pet theory.

Nonlinear Life, Linear Emotions

29/9/2015

We are the result of thousands of years of evolution. And as we all know, modern life didn't really exist back when evolution was pulling the strings and picking our physical and psychological makeup. This is a problem. One only needs to consider the obesity crisis, or our limited ability to understand statistics, to realize that we're very far from being optimized for our current environment.

One particular example is the nonlinearity of many professions. Take a writer. A writer spends hour after hour working on a new manuscript with very limited feedback. The feedback he does get comes essentially from friends, who have agreed – willingly or through coercion – to read the book. Or, if the writer is at least moderately successful, some feedback might even come from a professional editor. But now consider the income of writers. It is highly nonlinear: some writers – like J.K. Rowling – have their income counted in the millions. Most, however, make do with a few bucks here and there (or have a "proper" day job and write at night).

Now, if you ask a writer whether their work is "going well", or something similar, what could they say? I'm pretty sure they actually have very little idea how it is going. Pages appear (and then disappear through editing). But the connection to the actual payoff is tenuous at best. Writing today means the book may come out next year – or in 10 years. Furthermore, there is little common knowledge about what makes a book good, or an author successful.

The key point here is that the writer's life is a nonlinear one. You can't tell progress from walking backwards, because they look exactly the same. Of course, this is not true of just writers. It holds for almost any creative profession: artists, scientists, designers, maybe even business strategists. They're all living in the same nonlinear world: some people earn thousands of times more than others, and there are very few signs that a result is good – other than its popularity.

The actual problem in relation to emotions is that our emotions love linearity. We love to see progress, and we'd like to see it every day. I presume this is why many creative professionals like renovating, knitting, or other things you do with your hands. Once we move from creating ideas to creating physical items, we enter the linear world. When you renovate a room, there are only so many floorboards to replace – hence, linear progress.
When we don't get that linear emotional sense of achievement, we become skeptical of our work and progress. For some, it may even get bad enough to cause depression. For others, I think it's just a big rollercoaster: sometimes you're over the moon about what's happening, and sometimes you're having that angry "this isn't fucking working" moment.

Fortunately, I think there are ways around the problem. You can create an – admittedly somewhat false – sense of linear progress. By thinking of actions you should constantly be doing to improve yourself, you can construct a sense of moving linearly forward. For example, I have a goal of doing two things every workday: 1) write at least half a page, and 2) read at least one article. Of course, these do not have truly linear payoffs: one day's writing may be the turning point of a good publication – or just a lot of nonsense. Likewise, one article may be much more vital for me than another.

However, the point is that mentally ticking off these boxes (or physically, in Habitica) creates an illusion of linear progress. This is false, like I said above. But, crucially, it helps to create emotional value, because I'm getting a sense of accomplishment from it every day. And even if it's not true progress, that's OK, because both of these actions are important enough for a scientist that they're never a waste of time.

Which Outside View? The Reference Class Problem

14/4/2015

One of the most sensible and applicable pieces of advice in the decision-making literature is to take the outside view. Essentially, this means stepping outside your own frame and looking at statistical data on what has happened before.

For example, suppose you're planning to put together a new computer from parts you order online. You've ordered the parts, and feel that this time you know most of the common hiccups of building a machine. You estimate it will take you two weeks to complete. However, in the past you've built three computers – and they took 3, 5, and 4 weeks, respectively. Once the parts came in later than expected, once you were at work too much to manage the build, and once you had some issues that needed resolving. But this time is different!

Now, the inside view says you feel confident you've learnt from your mistakes; therefore, estimating less build time than in the past seems to make sense. The outside view, on the other hand, says that even if you have learnt something, there have always been hiccups of some kind – so they're likely to happen again. Hence, the outside view would put your build time at around the average of your historical record.

In such a simple case it's quite easy to see why taking the outside view is sensible, especially now that I've painted the inside view as a sense of "I'm better than before". Unfortunately, the real world is not this clean, but much messier. In the real world, the question is not whether you should use the outside view (you should), but which one? The problem is that you've often got several options.

For example, suppose you were recently appointed as a project manager in a company, and you've led projects there for a year now. Two months ago, your team got a new integration specialist. Now you're trying to estimate how much time it would take to install a new system for a very large corporate client. You'd like to use the outside view, but don't know which one. What's the reference class? All projects you've ever led? All projects you've led in this company? All projects with the new integration specialist? All projects for a very large client?

As we can see, picking the outside view to use is not easy. In fact, this problem – a deep philosophical problem in frequentist statistics – is known as the reference class problem. All the possible reference classes in this example make some sense. The problem is one of causality: you have incomplete knowledge about which attributes affect your success, and by how much. Does it matter that you have a new integration specialist? Are these projects very similar to ones you did at your previous company? How much do projects differ by client size? If you could answer all these questions, you'd know which reference class to use. But if you knew the answers, you probably wouldn't need the outside view in the first place! So what can you do?

A practical suggestion: use several reference classes. If the estimates they give differ by a lot, then the situation is genuinely difficult to estimate – but hopefully finding this out improves your sense of what drives the project's success. If the estimates don't diverge, then it doesn't really matter which outside view you pick, and you can be more confident in the estimate.
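As a toy illustration of this suggestion – all the numbers below are invented – you could compare the averages from a few candidate classes and look at how much they disagree:

```python
# Invented past project durations (in weeks), grouped into the candidate
# reference classes from the example above.
reference_classes = {
    "all projects I've led":            [8, 12, 9, 15, 11, 7],
    "projects at this company":         [12, 15, 11],
    "projects with the new specialist": [11, 15],
    "projects for very large clients":  [15, 14],
}

estimates = {name: sum(xs) / len(xs) for name, xs in reference_classes.items()}
for name, estimate in estimates.items():
    print(f"{name}: {estimate:.1f} weeks")

spread = max(estimates.values()) - min(estimates.values())
print(f"spread between classes: {spread:.1f} weeks")
# A wide spread means the choice of reference class really matters --
# and that it's worth thinking about what actually drives duration.
```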

Decisions People Face: Results of the Survey

17/2/2015

So, last week Tuomas Lahtinen and I ran a survey about the difficult decisions people face in the coming year. We got a total of 22 responses. What follows is some analysis of those responses. Yesterday, Tuomas already did a fantastic job of looking at the open-question responses and categorizing the results. You can see from the figure below (copied from Tuomas's analysis) that most decisions are related to one's career. Given that we promoted the questionnaire on Facebook, and that we and our friends are just at the age of finishing our studies and entering (or having just entered) working life, this is hardly a surprise.

Since Tuomas already took a good look at the responses, I’m not going to repeat that. Instead, I’ll do what any analyst always does: wrangle the data for any other useful nuggets of information. So if you’re interested in the responses to the open questions, I direct you to Tuomas’s analysis.
[Figure: decision categories in the survey responses, from Tuomas's analysis]
Okay, so with 22 responses the data is of course not exactly of scientific quality, but let's look at some averages nevertheless. Below are the averages for the questions "How well have you figured out the objectives/alternatives/consequences/when to make the decision", by category of the problem. To make the data a little more robust, I've combined Education under Career, and also moved one of the three questions under Other to Career.
[Table 1: average self-assessed knowledge of objectives, alternatives, consequences, and decision timing, by category]
The finding here seems to be that alternatives are better known than objectives, consequences, or timing. I guess this makes sense, since finding alternatives is often mostly a matter of browsing the internet to see what's available. On the other hand, shouldn't objectives be even easier? After all, to find your objectives, you just need to look inside your own thinking and figure out what you value. Well, it seems that is not the easiest part of the problem.

What I find interesting is the comparison of career and family. Career alternatives and decision timing seem to be relatively well known, but objectives not so much. With family, the situation is the exact opposite: objectives are clear, but consequences and decision timing are very vague. This is probably because many family decisions can still be put off for several years, whereas many career choices at this point demand a choice by a certain date.

What other interesting patterns can we find? Well, here’s one:
[Figure: correlation matrix of the four knowledge questions]
What the correlation matrix shows is that knowing when the decision should be made seems to go together with better knowledge of alternatives and consequences. Of course, with this data I can't know which way the causality runs – but given that people are prone to procrastination, I believe it's at least plausible on its face. Deadlines make us focus on the problem, which quite likely helps us figure out what kinds of alternatives and consequences there are.
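If you want to compute this kind of matrix yourself, it's a one-liner in pandas. The data below is made-up stand-in data, not our actual responses, and the rating scale is invented:

```python
import pandas as pd

# Invented stand-in for the survey responses: each row is one
# respondent's self-rating of how well they know each aspect
# of their decision (the scale here is made up).
df = pd.DataFrame({
    "objectives":   [4, 2, 5, 3, 6, 2, 4],
    "alternatives": [5, 4, 6, 4, 6, 3, 5],
    "consequences": [3, 2, 4, 3, 5, 2, 3],
    "timing":       [4, 3, 6, 3, 6, 2, 4],
})

# Pearson correlation matrix of the four questions
print(df.corr().round(2))
```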

All in all, I'm surprised how low the values in Table 1 are, especially since the measure was a self-report. It looks to me as if respondents are comparing themselves to a perfect world, which is a tad unfair. We will never have perfect information, and uncertainty is something we'll just have to tolerate to a degree. Of course, when there are information-laden variables that can help you, you ought to measure them. But even after all the measurements you could do, there's still going to be uncertainty. So what's one to do? Well, I recommend heeding the advice of Reid Hastie and Robyn Dawes:

“[--] our advice is to strive for systematic external representations of the judgment and decision situations you encounter: Think graphically, symbolically, and distributionally. If we can make ourselves think analytically, and take the time to acquire the correct intellectual tools, we have the capability to think rationally.” – Rational Choice in an Uncertain World, p. 334

So, in short: recognize that you can't have it all. Decide what it is that you want, and then apply focused, analytical thinking to reach it.

Measure Right and Cheap: Overcoming “we need more information”

1/12/2014

Ah, information. The One Thing that will solve all your problems and make all the hard decisions for you. Or so many keep thinking: if only I had more information… Of course, in many ways, this is exactly right. More information does equal better decisions, as long as the information is – sorry for the pun – informative. Unfortunately, in many cases we either acquire the wrong information, or pass up getting the right kind of data, thinking it's too costly.

Thinking about that, I have a hypothesis about why the feeling “we need more information” persists:

  1. Even with information, hard decisions are still hard
  2. Information is of the wrong kind
  3. Thinking information costs too much

Even with information, hard decisions are still hard

This is really not very surprising, but there's a common thread linking all hard decisions: they are hard. If they were easy, you wouldn't be sitting there thinking about the problem. No, you'd be back home, or enjoying a run, or whatever. Decisions are hard for two main reasons: uncertainty and tradeoffs. Uncertainty makes decisions hard, but it can be mitigated with measurements. But what about those pesky cases where you can't measure? Well, I'm going to say it flat out: there are no such cases. Sure, you can rarely get perfect certainty, but you can usually reduce uncertainty by a whole lot.

The second problem, tradeoffs, is the true culprit behind hard decisions. Often we face situations in which one option is more certain, but another has more potential profit. For example, when I run a race, I can start at a slower pace or a harder one. The slower pace is safer: I'll definitely finish. The hard starting pace, in contrast, is riskier: my reachable finishing time is better, but I run the risk of cramps and might not finish at all. Tradeoffs are annoying in the sense that there's often nothing you can do about them; no measurement will save you. If you're choosing between a cheap but ugly car and an expensive but fancier one, what could you measure? No, you'll just have to make up your mind about what you value.
[Image: Iron Man, Hulk, or Spider-Man? Why not all three?]
Information is of the wrong kind

According to a management joke, there are two kinds of information: what we need, and what we have. I think there’s some truth in this. 
A fundamental problem with information is that not all things are equally straightforward to measure. It's quite difficult to measure employee motivation, and a lot easier to measure the number of defective products in a batch. For this reason, a lot of companies end up measuring just the latter. It's just so much easier – so shouldn't we focus our efforts there? Well, not necessarily. It's not the cheapest measurements you ought to make, but the ones with the most impact. In his book How to Measure Anything, Doug Hubbard tells how he was shocked by companies' measurements: many were measuring the easy things, and had left several variables with a large impact completely unquantified! As Hubbard explains (p. 111):
The highest-value measurements almost always are a bit of surprise to the client. Again and again, I found that clients used to spend a lot of time, effort, and money measuring things that just didn’t have a high information value while ignoring variables that could significantly affect real decisions.
Thinking information costs too much

It's an honest mistake to think that if you have a lot of uncertainty, you need a lot of information to help you. In fact, the relationship is exactly the inverse: the more uncertainty you have, the less information you need to improve the situation. If you're Jon Snow, just spending a moment looking around will improve things!

I think this mistake has to do with looking for perfect information. Sure, the gap to perfect information is much larger here. But the point is that if you know next to nothing, you get to pick the low-hanging fruit and improve the situation with very cheap pieces of information, while in a more advanced situation with less uncertainty, you'd need ever more complex and expensive measurements.

For example, many startups face the following question at the beginning: is there demand for our product? At the start, they know almost nothing. They probably feel good about the product, but that's not really much data. An expensive way of getting data would be to hire a market research firm and do a study or two about demand, burning tens of thousands in the process. A cheaper way: call a few potential customers, or go to the market and set up a stand. You won't have perfect information, but you'll know a lot more than you did just a while ago! It's good to see that the entrepreneurship literature has taken this to heart, and people like Eric Ries are teaching even bigger companies that more costly doesn't always equal better. And even when it does, it may still be unnecessary. Simple measurements go a long way.
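To make the startup example concrete, here's a small Bayesian sketch (the numbers are invented). Start from knowing nothing about the fraction of customers who would buy, then call five of them:

```python
from scipy.stats import beta

# "Knowing next to nothing": a uniform prior over the fraction of
# customers who would buy, i.e. Beta(1, 1).
low, high = beta.interval(0.90, 1, 1)
print(f"90% interval before any calls: {low:.2f}-{high:.2f}")

# Call five potential customers; suppose two say they'd buy.
yes, no = 2, 3
low, high = beta.interval(0.90, 1 + yes, 1 + no)
print(f"90% interval after five calls: {low:.2f}-{high:.2f}")
# A handful of phone calls already narrows the range substantially --
# the less you know to begin with, the cheaper the first improvements.
```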

Benefits of Decision Analysis

19/10/2014

Why is decision analysis a good idea in the first place? Why should we focus on making some decisions supported by careful modelling, data gathering and analysis? Here, I provide some arguments as to why decision analysis is beneficial. Of course, not all decisions benefit from it: some considerations are too unimportant to warrant much analysis, and some might be simple enough to not need it. But then again, many problems are important, or complex, or politically hot. For these problems, decision analysis can be especially beneficial.

Identification of the best alternative

The main point of decision analysis (DA) is of course to arrive at the best possible alternative, or at least a "good enough" one. This is essentially the focus of most discussions of DA, so I won't dwell on it further. How to determine the best feasible option is a very hard problem in its own right, deserving a book of its own.

Identification of objectives

Book examples of decision analysis start from a defined problem, and the point is to somehow satisfactorily solve it. Reality, however, starts from a different point. The first problem in reality is defining the problem itself. In fact, as a few classic books in DA emphasize, formulating the problem is one of the hardest and most important steps of DA. Much of the benefit of DA comes from forcing us to formulate the problem carefully, and preventing us from pretending to solve complex dynamic issues by intuition alone.
[Image: The first step of decision analysis!]
Creation of new alternatives

Many descriptions of DA also assume that the alternatives are already there, and that the tricky part is comparing them. Unfortunately, in actual circumstances the decision maker or his supporters are commonly responsible for coming up with the alternatives, too. This is likewise critical for success, because an alternative you didn't think of can never be chosen – no matter how good it would have been. Duke University's DA guru, Professor Keeney, has emphasized this heavily.

Analysis of complex causal relationships

It goes without saying that many issues are complex and difficult to solve – that's why decision analysis is used, after all. A benefit of thinking the model through properly is that it can reveal some of our unexamined assumptions, even radically changing our perception of an issue. For example, I was once involved in a project setting up a new logistics center for a company. The goal was to increase customer satisfaction through shorter delivery times. After careful analysis, it turned out that the new center wouldn't reduce delivery times by very much. So someone thought, "wow, delivery time must be really important to the customers to warrant this", and looked up the satisfaction survey data. Well, it turned out it wasn't very important: current deliveries were well within the limits customers defined as satisfactory. In fact, it was clear from the surveys that to increase satisfaction, they ought to be doing something else entirely, like improving customer service or product quality! It sure was an interesting finding, but it took some time to convince the directors that logistics really wasn't their problem.
[Image: A little analysis needed.]
Quantification of subjective knowledge

Somewhat related to the previous example, many analyses run into the problem of uncertain or vague knowledge. Organizations especially tend to be full of people who are very knowledgeable about the business and its environment, but whose knowledge isn't recorded anywhere for, say, the development department to use. It seems to go something like this. First, the analyst finds out he needs some data on, say, the failure rate of delivery cars. The analyst asks a business development manager, who doesn't know and tells the analyst to use some estimate. The analyst doesn't know either, so he ends up interviewing some delivery people, uncovering subjective, unquantified knowledge about the actual failure rate. There's nothing wrong with subjective knowledge – it's just of no use if the decision maker isn't aware of it! By uncovering and quantifying subjective knowledge in the organization, the analysis can benefit the company in the long term too, since they now have even more knowledge to base future decisions on.

Creation of a common decision framework

Speaking of the future, one final benefit of DA is that it provides the decision maker with a decision framework – a model to reuse the next time a similar decision comes up. This is especially beneficial in organizations, which often get stuck on meta-level issues: arguing about how to make decisions in the first place.
In the best case, DA can provide an almost ready-made framework to follow, so that managers can focus on actually making the decision. However, it's important to recognize that different decisions have different stakeholders, and to take that into account. For example, a new logistics center may be mostly an issue of operational efficiency, but a new factory demands the inclusion of environmental and labour organizations. Just reusing a previous DA framework does not ensure a good fit with the new problem. But the DA frame can be something to start from, which can help reduce political conflicts between stakeholders. In fact, there's nothing to prevent using DA from different perspectives. For example, DA has been used successfully on problems such as oil industry regulation, or moving from a segregated schooling system to a racially integrated one. Both politically hot examples can be found in Edwards' and von Winterfeldt's classic.

I guess if you wanted to summarize the benefits of DA in a sentence, it could be something like this: creating structure and helping to use it. What DA does, in fact, is help us think better by forcing us to consider things more thoroughly and explicitly. It's a method that helps us deal with uncertainty and still make a decision.

Why Must We Handle Uncertainty?

2/9/2014

There's a really odd comment I've sometimes heard about dealing with uncertainty. It goes somewhat along the lines of "oh you know, uncertainty is a problem now, but once we have good AI and algorithms, our systems will be much more accurate". I don't think this is a very good argument for discrediting decision methods that try to grapple with uncertainty.

Why is it that we need decision methods and procedures for dealing with such situations? Why not just produce certainty and base our decisions on that?

The annoying answer is that it simply costs too much. Reducing uncertainty is possible, but the more you reduce it, the more expensive further reductions get. To be exact, we can say that the marginal cost of reducing uncertainty rises.

Consider an example from industrial production. Let's say you have a production line that churns out really nice hiking boots. Unfortunately, there are production errors every once in a while, as your line manager kindly tells you. But there is a level of uncertainty in his estimate: he is not sure how many faulty shoes will be produced in each batch. To reduce the uncertainty, you can take all kinds of measures. For example, you can hire a team of employees to inspect some of the manufactured shoes and discard any faulty ones. However, this does not eliminate uncertainty: after all, they cannot inspect every shoe. But wait, you can do more! To reduce uncertainty further, you hire even more inspectors, so that every single shoe gets inspected. Surely now there is no uncertainty left?

Well, unfortunately, there is. The inspectors are only human – they make errors too. So every once in a while, while one of your beloved inspectors is thinking about the evening's football match, a faulty shoe escapes his gaze. Undaunted, you resolve to eliminate the uncertainty and fit the production line with an expensive machine inspection system, which double-checks every shoe that passes the human inspectors. Surely now every shoe that ships is good to go? Most days they are – until a programming error in the machine causes a problem: a shoe in an unconventional orientation is faulty, yet passes undetected through the machine. In a fit, you eliminate the marketing department and use their funding to eliminate the uncertainty in production faults once and for all…
[Image: Uh oh, another unforeseen cause of faulty shoes!]
As the example shows, reducing uncertainty gets progressively more expensive with every round. The more you've already invested in it, the more you have to invest for a further reduction. What's even worse – and this argument borders on the philosophical – there is practically no such thing as the elimination of uncertainty. Whatever systems you come up with, there's always a way for something truly unforeseen to happen: a power failure incapacitates your inspection system, a burglar changes its settings, a meteor strikes at an inopportune time. The cause itself is irrelevant. The point is that there's always something you didn't anticipate.
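To put rough numbers on this – all of them invented – suppose each inspection layer catches 90% of the faults that survive the previous layers, and costs twice as much as the one before:

```python
# Invented numbers: the line starts at a 5% fault rate; each inspection
# layer catches 90% of the faults that survived the previous layers,
# and costs twice as much as the layer before it.
fault_rate = 0.05
layer_cost, total_cost = 10_000, 0

for layer in range(1, 6):
    fault_rate *= 0.10          # 90% of the remaining faults get caught
    total_cost += layer_cost
    layer_cost *= 2
    print(f"layer {layer}: fault rate {fault_rate:.7f}, "
          f"cumulative cost {total_cost:,}")

# Each layer buys a smaller absolute reduction in faults at a higher
# price -- rising marginal cost -- and the rate never hits exactly zero.
```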


The conclusion? There will always be some uncertainty.

And what's more: since we have limited funds, there's a practical limit to reducing uncertainty. Beyond that point, we must use methods that can cope with uncertainty, because there are no other alternatives.

This inevitability of facing uncertainty is why we need decision makers equipped with proper methods. Decades of behavioral decision research show (more about this in later posts) that humans are really not very good intuitive statisticians. Once you have many variables with various levels of uncertainty, there's practically no way to make good decisions based on gut and intuition alone. What we need are methods and frameworks that simplify and aggregate information – but not by too much – which we can then feed to the decision maker for processing.
