Bias Hunter

You Are Irrational, I Am Not

29/10/2015


 
For the past month or so I’ve been reading Taleb’s Black Swan, now for the second time. I’m very much impressed by his ideas, and by the forceful, in-your-face way he writes. It’s certainly no surprise that the book has captivated the minds of traders, businesspeople and other practitioners. The book is extremely good – good enough to recommend as a decision making resource. Taleb identifies a cluster of biases (or more exactly, pulls together other people’s research to paint the picture), producing a sobering image of just how pervasive our neglect of Black Swans is in our society. And he’s a hilariously funny writer to boot.

But.

Unfortunately, Taleb – like everyone else – falls into the same trap we all do. He’s very adept at poking other people about their biases, but he completely misses some blind spots of his own. Now, this is not evident in the Black Swan itself – that book is very well conceptualized, and a rare gem in its clarity about what it is as a book and what it isn’t. The problem only becomes apparent in the follow-up, the monstrous volume Antifragile. When reading that one a few years ago, I remember being appalled – no, even outraged – by Taleb’s lack of critical thought towards his own framework. In the book, one gets the feeling that the barbell strategy is everywhere, and explains everything from financial stability to nutrition to child education. For example, he says:
I am personally completely paranoid about certain risks, then very aggressive with others. The rules are: no smoking, no sugar (particularly fructose), no motorcycles, no bicycles in town [--]. Outside of these I can take all manner of professional and personal risks, particularly those in which there is no risk of terminal injury. (p. 278)
I don’t know about you, but I really find it hard to derive “no biking” from the barbell strategy.

Ok, back to seeking out irrationality. Taleb certainly does recognize that ideas can have positive and negative effects. Regarding mathematics, at one point he says:
[Michael Atiyah] enumerated applications in which mathematics turned out to be useful for society and modern life [--]. Fine. But what about areas where mathematics led us to disaster (as in, say, economics or finance, where it blew up the system)? (p. 454)
My instant thought when reading the above paragraph was: “well, what about the areas where Taleb’s thinking totally blows us up?”

Now, the point is not to pick on Taleb personally. I really love his earlier writing. I’m just following his example and taking a good, personified case of a train of thought going off track. He did the same in the Black Swan, for example by picking on Merton as an example of designing models on wrong assumptions – and, more broadly, of models where mathematics steps outside reality. In my case, I’m using Taleb as an example of the ever-present danger of critiquing other people’s irrationality while forgetting to look out for your own.
Now, the fact that we are better at criticizing others than ourselves is not exactly new. After all, even the Bible (I would never have guessed I’d be referencing that on this blog!) says: “Why do you see the speck that is in your brother’s eye, but do not notice the log that is in your own eye?”
In fact, in an interview in 2011, Kahneman said something related:
I have been studying this for years and my intuitions are no better than they were. But I'm fairly good at recognising situations in which I, or somebody else, is likely to make a mistake - though I'm better when I think about other people than when I think about myself. My suggestion is that organisations are more likely than individuals to find this kind of thinking useful.
If I interpret this loosely, it seems to be saying the same thing as the Bible quote – just in reverse! Kahneman seems to think – and I definitely concur – that seeing your own mistakes is damn difficult, but seeing others’ blunders is easier. Hence, it makes sense for organizations to foster a culture where it’s OK to say that someone has a flaw in their thinking – a culture that prevents you from explaining absolutely everything with your pet theory.

Test Your Rationality

6/10/2015


 
As a decision scholar, I’m a firm believer in the benefits of specialization. If someone is really good at doing something, it’s often better to rely on them on that issue, and to focus your own efforts where you’re the most beneficial to others and to society at large. Of course, this principle has to apply to all agents – including myself. With that in mind, I’m going to make a feature post about something a certain someone else does – and does much better than I do.

Enter Spencer Greenberg. I’ve talked to Spencer over email a couple of times, and he’s really a great and enthusiastic guy. But that’s not the point. The point is that he does a great service to the community by producing awesome tests, which you can use to educate yourself, your partner or anyone you come across about good decision making. What’s even better is that the tests are built with the right kind of mindset: they’re well backed by actual, hard science. The questions make sense – there’s none of that newspaper-clickbait “find your totem animal” kind of stuff, just proper, science-backed measuring. Even better, the tests are written in a way anyone can understand. You don’t need to be a book-loving nerdy scholar to gain some insights!
Now, I’ve always wanted to bring something to the world community. And a while ago, I thought maybe I could produce some online tests about decision making. But after seeing these tests, I’ll just tip my hat and say that it’s been done way better than I ever could have! Congrats!
And now, enough of the babbling: go here to test yourself! For comparison, a reflection of my results can be seen in next week’s post :)

Don’t allow labels to define your actions

25/6/2015


 
Human minds are quite complex, but they also need to use simplifications. One of the most common ways of simplifying things is to label them. You know, like “oh, that’s someone from political studies, I bet she is [insert your favorite trope about politics students]”. It’s a really easy way of making the world simpler to observe: just connect the labels you encounter often enough to the attributes and actions that people in those categories typically exhibit. Leftists bashing rich people, Christian conservatives bashing gays, scientists not doing anything with actual impact, etc.

Of course, this is also a really bad thing to do. In the extreme, by allowing the labels to completely take over, we become racists, fascists, communists, or just generally inconsiderate people who define others through their labels. This is a bad thing in itself, and we should naturally avoid it. However, so much has been said about this that I don’t think I can bring anything new to the picture. That’s why I want to flip the concept over and talk about discriminating against yourself.

Just as we apply labels to other people, we also apply them to ourselves. For example, depending on the social environment and my past few weeks of successes – or lack thereof – I tend to perceive myself as a friend, scientist, runner, philosopher, blogger, reader, boyfriend, family member, or whatever is important in the current context. But they’re all labels.
[Image: Labels are a bit like multiple personalities, just not as strong.]
The labels are useful in helping me have a sense of self in the context, allowing me to focus on a certain part of myself. Who am I in relation to these people? Why am I here? These are questions we hardly ever ask, because we answer them implicitly through the application of labels. But there’s a crucial question that should not be answered with a label: what should I do?

It’s all too easy to fall into the trap of thinking “well, I’m the junior in this group, so I should carry the most burden here, working long into the night” or “well, I’m a professional banker, so I can’t really do painting seriously”. When we apply labels to ourselves too indiscriminately and without thinking, we end up constraining ourselves. Instead of spending evenings learning JavaScript, we may apply the label of a humanist and discard programming as something that humanists just don’t do (which is false, by the way).

It’s true that JavaScript may not be the most important thing for an art scholar, for example. But that’s no reason to discard it offhand on simple label identity. Instead, it would be better to evaluate things on their own merits. Sure, if I can’t see programming as anything useful or interesting, I’m probably better off doing something else instead. But if I have a strong interest in it – though no obvious use for it – it might make sense to give it a try. Who knows what might come out of it? In fact, combinations of different fields are becoming ever more valuable these days.

The point here is very simple: don’t let labels define your actions. Stopping that from happening is the hard part. It’s difficult to notice the implicit labels, because we apply them so unconsciously. But if you ask yourself “why am I doing this?” and find yourself answering “because that’s who I am”, that’s a definite warning sign.

Which Outside View? The Reference Class Problem

14/4/2015


 
One of the most sensible and applicable pieces of advice in the decision making literature is to take the outside view. Essentially, this means getting outside your own frame and looking at the statistical data of what has happened before.

For example, suppose you’re planning to put together a new computer from parts you order online. You’ve ordered the parts, and feel that this time you know most of the common hiccups of building a machine. You estimate it will take you two weeks to complete. However, in the past you’ve built three computers – and they took 3, 5 and 4 weeks, respectively. Once the parts came in later than expected, once you were at work too much to manage the build, and once you had some issues that needed resolving. But this time is different!

Now, the inside view says you feel confident you’ve learnt from your mistakes. Therefore, estimating less build time than in the past seems to make sense. The outside view, on the other hand, says that even if you have learnt something, there have always been hiccups of some kind – so hiccups are likely to happen again. Hence, the outside view would put your build time at around the average of your historical record: (3 + 5 + 4) / 3 = 4 weeks.

In such a simple case it’s quite easy to see why taking the outside view is sensible, especially now that I’ve painted the inside view as a sense of “I’m better than before”. Unfortunately, the real world is not this clean, but much messier. In the real world, the question is not whether you should use the outside view (you should), but which one? The problem is that you’ve often got several options.

For example, suppose you were recently appointed as a project manager in a company, and you’ve led projects there for a year now. Two months ago, your team got a new integration specialist. Now you’re trying to estimate how much time it will take to install a new system for a very large corporate client. You’d like to use the outside view, but don’t know which one. What’s the reference class? All projects you’ve ever led? All projects you’ve led in this company? All projects with the new integration specialist? All projects for a very large client?

As we can see, picking the outside view to use is not easy. In fact, this problem – a deep philosophical problem in frequentist statistics – is known in statistics and philosophy as the reference class problem. All the possible reference classes in this example make some sense. The problem is one of causality: you have incomplete knowledge about which attributes impact your success, and by how much. Does it matter that you have a new integration specialist? Are these projects very similar to the ones you did at the previous company? How much do projects differ by client size? If you could answer all these questions, you’d know which reference class to use. But if you knew the answers to these, you probably wouldn’t need the outside view in the first place! So what can you do?

A practical suggestion: use several reference classes, as in the sketch below. If the estimates they give differ by a lot, then the situation is difficult to estimate – but hopefully finding this out improves your sense of what the drivers of success for the project are. If the estimates don’t diverge, then it doesn’t really matter which outside view you pick, and you can be more confident in the estimate.
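As a minimal sketch of what this could look like in practice (all the project data, attributes and durations below are invented for illustration):

```python
from statistics import mean

# Each past project: (duration in weeks, company, new specialist on team?, large client?)
projects = [
    (10, "current",  False, False),
    (14, "current",  True,  True),
    (12, "current",  True,  False),
    (8,  "previous", False, False),
    (16, "previous", False, True),
]

# Candidate reference classes, each defined by a membership rule.
reference_classes = {
    "all projects":    lambda p: True,
    "current company": lambda p: p[1] == "current",
    "new specialist":  lambda p: p[2],
    "large clients":   lambda p: p[3],
}

estimates = {}
for name, belongs in reference_classes.items():
    durations = [p[0] for p in projects if belongs(p)]
    estimates[name] = mean(durations)
    print(f"{name:15}: {estimates[name]:.1f} weeks (n={len(durations)})")

# A large spread means the class choice matters and the drivers deserve a closer
# look; a small spread means any of the outside views gives roughly the same answer.
spread = max(estimates.values()) - min(estimates.values())
print(f"spread across reference classes: {spread:.1f} weeks")
```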

Yes for more open data in politics

31/3/2015


 
Since we have an election coming up, last weekend I found myself thinking about politics (prompted by some good discussions with friends). How could one improve political decision making?

In modern politics, the parliament is a lot like a black box. Every fourth year, we vote in the parliamentary election and select 200 new (or the same old) faces to make decisions about matters national. The problem is that we voters really don’t know much about what is going on in there. Sure, the media covers some issues, but more often than not the reporting isn’t very good. Reporting just the outcome of a vote doesn’t tell much about anything besides the result. And that is not enough.

If we consider what a good decision is like, it’s pretty clear that it has to be based on something. Sure, you can sometimes get it right by just guessing – but nobody would call that a good decision strategy. A good decision has to be based on evidence. Are politicians using evidence properly, then? At the moment, nobody knows. To really evaluate politicians, we need more than just their voting patterns. We need some inkling of why they chose the way they did. It’s not enough to say “down with cars!” One should at least provide some justification, like “down with cars, because they hurt our health!” Better still, we would see deeply researched judgments, really explaining how and why the policy voted for is the best way to reach an important goal. For example, someone might say “down with cars, because they hurt health so much that their net impact on wellbeing is negative”.

Unfortunately, we hardly ever see this kind of reasoning. That’s why I think we need more open decision making, especially regarding the analyses and data that our political decision makers use. If we have more open data, we also have more minds looking at it and evaluating whether the inferences politicians have drawn from it are really correct. Publicizing even a substantial minority of the data used in the political decision making process would invite all kinds of NGOs, interest groups and analysts to really comb through it. At the moment, none of that happens regularly. When we do get access to data, it’s often related to decisions that have already been made! For obvious reasons, that’s not very helpful. I think it’s fair to say that there’s room for improvement, and for moving towards open, evidence-based policy generation.

The point here is not that politicians are especially stupid or untrustworthy. No, the point is that they’re people just like you and me (albeit with more support staff). And just like you and me, they make mistakes. That’s why you sanity-check your recommendations with your boss before sending them to the client. That’s why I get feedback from my supervisor before I send my paper to a journal. We’re fallible human beings, all of us. But with more people looking at the same things, we can average out many biases and avoid obvious mistakes. To do that in politics, we need more open data.


How Rejection Levels Can Help You

10/3/2015


 
A concept that comes up pretty often in decision research is that of aspiration levels. These reflect the attribute levels the decision maker would like to reach in an ideal situation. The idea behind the concept is that such levels can guide both the decision maker and the analyst towards the relevant portions of the alternative space – better to search close to the optimal levels.

Now, that’s nice and all, but for practical purposes I think an inverse concept is perhaps even more useful. By inverse I mean rejection levels. Or, as I like to call them, what-the-hell-I’m-absolutely-not-willing-to-accept-that levels. The idea is simple enough: rejection levels signify the worst attribute levels you’re willing to accept. Any alternative with a worse value gets discarded immediately, and you look elsewhere.

The benefit is that if you have many alternatives, rejection levels can shrink the search space very fast. Imagine you’re buying a bike, and there are two criteria: cost and quality. You probably have some aspiration levels – the ideal bike. That’s reflected in the upper left corner (low price, terrific quality). But that only tells us which portion of the search space holds the best alternative – one that, unfortunately, very likely doesn’t exist. Looking at the picture below, it’s clear there’s still a lot of search space remaining.
[Image: the bike search space with the aspiration levels marked]
On the other hand, the rejection levels immediately close off a large portion of the graph. You’re not willing to pay more than 1500 euros for any bike, nor are you ready to accept a bike with a quality rating below five. The picture shows how much effort you can save with the rejection levels – many options are closed off just by setting the levels.
[Image: the bike search space with the rejection levels closing off a large region]
The trick with rejection levels is that you need to set them before looking at the options. A bike can be bought without issues, but with any more complex decision, trouble arises. For example, buying a house is of considerable difficulty in itself. And marketers know that if a house makes a good first impression, you’re likely to start coming up with reasons why that house is just so lovely, convenient, and so on. As a result, people tend to exceed their budget after falling hard for a single house.

To avoid this, rejection levels are a great technique. If the price goes above the rejection level, you can confidently say thanks, but no thanks, and just move on. Making the rejection decisions with a rule you’ve committed to beforehand is much, much easier than mulling over each and every option you come across.
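In code, the whole technique is just a pre-committed filter. A minimal sketch (the bikes and their numbers are invented for illustration):

```python
MAX_PRICE = 1500  # euros: reject any bike costing more
MIN_QUALITY = 5   # rating: reject any bike scoring less

bikes = [
    {"name": "Commuter",  "price": 900,  "quality": 6},
    {"name": "Racer",     "price": 2100, "quality": 9},  # rejected on price
    {"name": "Bargain",   "price": 400,  "quality": 3},  # rejected on quality
    {"name": "All-round", "price": 1300, "quality": 7},
]

# One cheap pass with the rule set beforehand prunes the search space;
# only the survivors deserve any closer comparison.
survivors = [b for b in bikes
             if b["price"] <= MAX_PRICE and b["quality"] >= MIN_QUALITY]
print([b["name"] for b in survivors])  # ['Commuter', 'All-round']
```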

Good Sources About Decision Making

3/2/2015


 
Everyone knows Daniel Kahneman’s Thinking, Fast and Slow. But if you’ve already read that, or are otherwise familiar enough with it for it to have low marginal benefit, what could you study to deepen your knowledge about decisions? Well, here are a few sources that I’ve found beneficial. To find more, you can check out my resources page!

TED talks

In the modern world, we’re all busy. So if you don’t want to invest tens of hours into books, but just want a quick glimpse with some food for thought, there are of course a few good TED talks around. For example:

Sheena Iyengar: The Art of Choosing

The only well-known scholar so far discussing choice in a multicultural context. Do we all perceive alternatives similarly? Does more choice mean more happiness? With intriguing experiments, Iyengar shows that the answer is: it depends. It depends on the kind of culture you’re from.

Gerd Gigerenzer: The Simple Heuristics that Make Us Smart

Gigerenzer is known as one of Kahneman’s main antagonists. In this talk, he discusses some heuristics and why, in his opinion, they’re more rational than the classical rationality we often consider to be the optimal case.

Dan Ariely: Are we in control of our own decisions?

Dan Ariely is a ridiculously funny presenter. For that entertainment value alone, the talk is well worth watching. Additionally, he shows nicely how defaults influence our decisions, and how a complex choice case makes it harder to overcome the status quo bias.

Books

Even though TED talks are inspiring, nothing beats a proper book! With all their data and sources to dig deeper, any of these books is a good starting point for an inquiry into decisions.


Reid Hastie & Robyn Dawes: Rational Choice in an Uncertain World

For a long time, I was annoyed that there didn’t seem to be a good, non-technical introduction to the field of decision making. Kahneman’s book was too long and focused on his own research. Then I came across this beauty. In just a little over 300 pages, Hastie & Dawes go through all the major findings in behavioral decision making, and also throw in a lesson or two about causality and values. Definitely worth a read if you haven’t gotten into decision making before. And even if you have, you’ll be able to skim some parts and concentrate on the nuggets most useful to you.

Jonathan Baron: Thinking and Deciding

Speaking of short books – this is not one of them. This is THE book in the field of decision making. A comprehensive volume of over 500 pages, it covers all the major topics: probability, rationality, normative theory, biases, descriptive theory, risk, moral judgment. Of course, there’s much, much more to any of the topics included, but for an overview this book does an excellent job. It’s no secret that this book sits only a meter away from my desk – that’s how often I tend to read it.

Keith Stanovich: The Robot’s Rebellion - Finding Meaning in the Age of Darwin

This book may be 10 years old, but it’s still relevant today. Stanovich beautifully describes the cognitive science of decisions – Systems 1 and 2 and so on. He then connects this to Dawkinsian gene/meme theory, resulting in a guide to meaning in the scientific, Darwinian era.

Might Anonymity Help Devil’s Advocacy?

27/1/2015


 
One of the important biases in business is the sunk cost fallacy – the tendency to throw good money after bad. For example, you’ve spent tens of thousands on developing a new product, but it’s still not working. A common thing to do is to go on with the development simply because “you’ve already spent so much on it”. However, what should matter is the future: is more money likely to make the product work? The past is irrelevant – that money has already been spent.

Surely, watching over employees should reduce this problem?

Unfortunately, not necessarily. While some research tends to show that being accountable for your choices makes you less susceptible to the sunk cost fallacy, sometimes accountability makes the effect even worse! The research is mixed on this, but for now I’ll accept that accountability is not a magic bullet. Well, there have been other ideas for reducing the fallacy.
A common suggestion (see, for example, Kahneman’s HBR paper) for improving the situation is to have somebody on the team play devil’s advocate, in effect trying to poke holes in whatever plan anyone proposes. For example, McCarthy et al. (1993) propose that entrepreneurs get outside advice on whether to try to expand their business, since “[e]ntrepreneurs should recognize that the escalation bias tendency is likely to occur”. What concerns me is that in the political environment of a larger company, such devil’s advocacy might not be very effective. The devil’s advocate has to face the fact that she may be the only one arguing against the decisions, and so may be perceived negatively, no matter how hard we try to dissociate her persona from the role. Furthermore, having to disagree may be so uncomfortable for some people that they’ll just pretend to play devil’s advocate – not really challenging anything deeply, but just presenting superficial questions. If all the other team members are excited enough, nobody might notice.
Thus, I’ve started thinking that the devil’s advocate role might work better with anonymity. Getting outside advice is good, but perhaps getting anonymous outside advice is better. The person completing the “devil’s report” could be from the team or from the outside – although if she’s from the team, I guess she might not be motivated to do it properly. For an outsider, anonymity ensures that your image stays good, and also that you don’t necessarily have to be at the meeting (always a good thing). On the other hand, personified devil’s advocacy ensures that the team has to face the issues and actually resolve them – they can’t just throw the devil’s report into the bin. So ultimately I think the choice between anonymity and personified devilry rests on what you need most: the hardest counterargument anyone can produce, or a person who makes sure that you actually answer all the counterarguments.

Two Simple Concepts to Improve Everyday Decisions

20/1/2015


 
Discussions around decision making often lead to the question “How can I leverage this in my own life?” Unfortunately, behavioral results are not the easiest to apply in everyday life. Sure, knowing about biases is good, especially when you’re making that big decision. But in all fairness, loss aversion or the representativeness heuristic are not usually the biggest worries.

For me personally, the biggest worries revolve around one question: is this really worth it? And no, I don’t mean that my mode of being is an existential crisis. What I mean is that I often find myself asking whether a particular activity is worth my investment of time and energy. This meta-level monitoring function is a direct result of the two following concepts.

Opportunity cost


If you’ve studied economics or business, you’ve surely heard of this. If you haven’t – well, you might be missing one important hammer in the toolbox of good thinking. As a concept, opportunity cost is really simple: the opportunity cost of any product, device or activity is what you give up to get it. For example, if I go to the gym for an hour, I’m giving up the chance to watch an episode of House. Of course, there are all kinds of activities one gives up for that hour, but ultimately what matters is the best opportunity given up – that’s the opportunity cost.
[Image: You're giving up WHAT to read this?!]
The reason I consider this important is that it’s the ultimate foundation for optimization. Thinking about activities in terms of opportunity costs makes concrete the constraint we all experience: time. No matter how rich or powerful you are, there’s always going to be that nagging limit of 24 hours a day. So it pays to think about whether something is really worth your precious time.

Marginal benefit

Marginal benefit (or utility) is also quite simple. The marginal benefit of something is the benefit you get from consuming one extra unit of that good. For example, at the moment of writing this, the marginal benefit of a hamburger would be quite high, since I’m pretty hungry. What’s important is that marginal benefit changes over time – it’s never constant. One burger is good, and two maybe even better, but add more and more burgers to my desk and I’ll hardly be any happier. In fact, anything over three burgers is a cost to me, since I can’t possibly eat all that – I’ll just have to carry them to the garbage!
[Image: Please, no more burgers!]
What makes marginal benefit powerful is the idea that even though I’m enjoying something, it doesn’t mean I should take in all that I can. A night out is great fun, but after a few pints the marginal benefit often plummets quite fast – you can test this by staying in the bar for an extra two hours next time. Just remember to evaluate the situation the next morning! ;)

These two concepts help you ask two things: how much are you getting out of this, and what could you get instead? And if the answer is that there’s something you’d rather have instead – well, that’s a wonderful result! At least now you know what you want! :) Or, well, until the marginal benefit decreases, at least…
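Put together, the two concepts amount to a simple stopping rule: keep consuming only while the marginal benefit of the next unit still beats the best alternative use of that time. A minimal sketch, with invented numbers:

```python
# Enjoyment of each successive pint on a night out (invented numbers),
# and the value of the best alternative use of that time (say, sleep).
marginal_benefit = [10, 7, 3, 1, -2]
OPPORTUNITY_COST = 4

pints = 0
for mb in marginal_benefit:
    if mb <= OPPORTUNITY_COST:
        break  # the next pint no longer beats what it displaces
    pints += 1
print(f"pints worth drinking: {pints}")  # 2 - the third pint (3) loses to sleep (4)
```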

Measure Right and Cheap: Overcoming “we need more information”

1/12/2014


 
Ah, information. The One Thing that will solve all your problems and make all the hard decisions for you. Or so many keep thinking: if only I had more information… Of course, in many ways this is exactly right. More information does equal better decisions, as long as the information is – sorry for the pun – informative. Unfortunately, in many cases we either acquire the wrong information, or pass up the right kind of data, thinking it’s too costly.

Thinking about that, I have three hypotheses about why the feeling “we need more information” persists:

  1. Even with information, hard decisions are still hard
  2. Information is of the wrong kind
  3. Thinking information costs too much

Even with information, hard decisions are still hard

This is really not very surprising, but there’s a common thread linking all hard decisions: they are hard. If they were easy, you wouldn’t be sitting there thinking about the problem. No, you’d be back at home, or enjoying a run, or whatever. Decisions are hard for two main reasons: uncertainty and tradeoffs. Uncertainty makes decisions hard, but it can be mitigated with measurements. But what about those pesky cases where you can’t measure? Well, I’m going to say it flat out: there are no such cases. Sure, you can rarely get perfect certainty, but you can usually reduce uncertainty by a whole lot.

The second problem, that of tradeoffs, is the true culprit behind hard decisions. Often we’re faced with situations in which one option is more certain, but another has more potential profit. For example, when I run a race, I can start at a slower or a harder pace. The slower pace is safer: I’ll definitely finish. The harder starting pace, in contrast, is riskier: my achievable finishing time is better, but I run the risk of cramps and might not finish at all. Tradeoffs are annoying in the sense that there’s often nothing you can do about them – no measurement will save you. If you’re choosing between a cheap but ugly car and an expensive but fancier one, what could you measure? No, you’ll just have to make up your mind about what you value.
[Image: Iron Man, Hulk, or Spider-Man? Why not all three?]
Information is of the wrong kind

According to a management joke, there are two kinds of information: what we need, and what we have. I think there’s some truth in this. 
A fundamental problem with information is that not all things are equally straightforward to measure. It’s quite difficult to measure employee motivation, and a lot easier to measure the number of defective products in a batch. For this reason, a lot of companies end up measuring just the latter. It’s just so much easier – so shouldn’t we focus our efforts on that? Well, not necessarily. It’s not the cheapest measurements you ought to make, but the ones with the most impact. In his book How to Measure Anything, Doug Hubbard tells how he was shocked by companies’ measurement habits: many were measuring the easy things, and had left several variables with a large impact completely unquantified! As Hubbard explains (p. 111):
The highest-value measurements almost always are a bit of surprise to the client. Again and again, I found that clients used to spend a lot of time, effort, and money measuring things that just didn’t have a high information value while ignoring variables that could significantly affect real decisions.
Thinking information costs too much

It’s an honest mistake to think that if you have a lot of uncertainty, you need a lot of information to help you. But in fact, the relationship is exactly the inverse: the more uncertainty you have, the less information you need to improve the situation. If you’re Jon Snow, just spending a moment looking around will improve things!

I think this mistake has to do with looking for perfect information. Sure, when you know very little, the gap to perfect information is much larger. But the point is that if you know next to nothing, you get to pick the low-hanging fruit and improve the situation with very cheap pieces of information, while in a more advanced situation with less uncertainty, you’d need ever more complex and expensive measurements.

For example, many startups face the following question in the beginning: is there demand for our product? In the beginning, they know almost nothing. They probably feel good about the product, but that’s not really much data. An expensive way of getting data would be to hire a market research firm and commission a study or two about the demand, burning tens of thousands in the process. A cheaper way: call a few potential customers, or go to the market and set up a stand. You won’t have perfect information, but you’ll know a lot more than you did just a while ago! It’s good to see that the entrepreneurship literature has taken this to heart, and that people like Eric Ries are teaching even bigger companies that more costly doesn’t always equal better. And even if it did, it may still be unnecessary. Simple measurements go a long way.
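To see why the first, cheap observations help the most, here’s a minimal simulation sketch (the 30% true interest rate and the call counts are invented; the uniform starting prior just encodes “we know almost nothing”): a startup “measures” demand by calling potential customers, and we track how fast the uncertainty about the interest rate shrinks.

```python
import random

random.seed(1)
TRUE_INTEREST = 0.3  # unknown in real life; fixed here only to drive the simulation

def interval_width(yes, no, draws=20000):
    """Width of an approximate 90% credible interval for the interest rate,
    starting from a uniform (know-nothing) prior."""
    samples = sorted(random.betavariate(1 + yes, 1 + no) for _ in range(draws))
    return samples[int(0.95 * draws)] - samples[int(0.05 * draws)]

yes = no = calls = 0
for target in (5, 20, 80):
    while calls < target:
        calls += 1
        if random.random() < TRUE_INTEREST:
            yes += 1
        else:
            no += 1
    print(f"after {target:2d} calls: uncertainty range {interval_width(yes, no):.2f}")

# The range shrinks fastest over the first handful of calls (the low-hanging
# fruit), and each further batch of calls buys less and less reduction.
```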