Bias Hunter

Ethical Algorithms

27/12/2016

2 Comments

 
In a wonderful and very interesting turn of events, ethical algorithms are suddenly all the rage. Cathy O’Neil wrote a book called Weapons of Math Destruction, in which she goes through several interesting case examples of how algorithms can work in an unethical and destructive fashion. Her examples come from the US, but the phenomenon doesn’t limit itself to that side of the pond.

In fact, just a month ago, the Economist reported on the rise of credit cards in China. Consumption habits in China are coming to resemble Western ones, including the use of credit cards. And where you have credit cards, you also have credit checks. But how do you show your creditworthiness if you’ve never had credit?

Enter Sesame Credit, a rating firm. According to the Economist, they rely on “users’ online-shopping habits to calculate their credit scores. Li Yingyun, a director, told Caixin, a magazine, that someone playing video games for ten hours a day might be rated a bad risk; a frequent buyer of nappies would be thought more responsible.” Another firm called China Rapid Finance relies on looking at users’ social connections and payments. My guess would be that their model predicts your behavior based on the behavior of your contacts. So if you happen to be connected to a lot of careless spend-a-holics, too bad for you.
Without even getting into the privacy aspects of such models, one concerning aspect – and this is the main thrust of O’Neil’s book – is that these kinds of models can discriminate heavily based purely on aggregate behavior. For example, if China Rapid Finance’s model sees your friends spending and not paying their bills, it might classify you as a credit risk and not give you a credit card. And if there is little individual data about you, this kind of aggregate data can form the justification for the whole decision. Needless to say, it’s quite unfair that you can be denied credit – even when you’re doing everything right – just because of your friends’ behavior.
[Picture: Four credit ratings, coming down hard.]
Now, O’Neil’s book is full of similar cases. To be honest, the idea is quite straightforward. A typical unethical model (in O’Neil’s terms, a Weapon of Math Destruction) shows two signs: 1) it has little to no feedback to learn from, and 2) it makes decisions based on aggregate data. The second one was already mentioned, but the first one seems even more damning.

A good example of the first kind is generously provided by US education systems. Now, in the US, rankings of schools are all the rage. Such rankings are defined with a complicated equation that takes into account how well outgoing students do. And of course, the rankings drive the better students to the better schools. However, the model never actually learns any of the variables or their weights from data – these are all pulled from the administrators’, programmers’, and politicians’ collective hats. What could go wrong? What happens with systems like these is that the ranking becomes a self-fulfilling prophecy, and changing how the ranking is calculated becomes impossible, because the schools that do well are obviously up in arms about any changes.

This whole topic of discrimination in algorithms is actually gaining some good traction. In fact, people at Google are taking notice. In a paper recently presented at NIPS, the authors argue that what is needed is a concept of equality of opportunity in supervised learning. The idea is simple: if you have two groups (like two races, or rich and poor, etc.), the true positive rate should be the same in both groups. In the context of loans, for example, this means that of all those who could pay back a loan, the same percentage of people are given one. So if groups A and B have 800 and 100 people that could pay the loan back, and your budget allows loans to 100 people, then 88 in group A and 11 in group B would get a loan offer (both groups having an 11% offer rate).
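That allocation rule can be sketched in a few lines of Python. The function name and numbers here are my own restatement of the post’s hypothetical example, not anything from the paper itself:

```python
def equal_opportunity_offers(repayers_per_group, budget):
    """Split a loan budget so that, among people who would repay,
    every group gets the same offer rate (equal true positive rate)."""
    total_repayers = sum(repayers_per_group.values())
    rate = budget / total_repayers  # shared true positive rate
    return {group: int(n * rate) for group, n in repayers_per_group.items()}

offers = equal_opportunity_offers({"A": 800, "B": 100}, budget=100)
# offers == {"A": 88, "B": 11} – an 11% offer rate for both groups
```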
Mind you, this isn’t the only possible or useful concept for reducing discrimination. Other useful ones are group unawareness and demographic parity. A group-unaware algorithm discards the group variable and uses the same threshold for both groups. But for loans, depending on the group distributions, this might lead to one group getting fewer loan offers. A demographic parity algorithm, on the other hand, equalizes how many loans each group gets. In the case of loans this would be quite silly, but the concept is more useful when allocating representatives to groups, because you might want each group to have the same number of representatives, for example.
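To make the contrast concrete, here’s a minimal sketch with made-up score distributions, showing how a single shared threshold (group-unaware) and per-group thresholds (demographic parity) behave differently:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical credit scores: group B's distribution sits lower than group A's.
scores = {"A": rng.normal(60, 10, 1000), "B": rng.normal(50, 10, 1000)}

# Group-unaware: one shared threshold, so offer rates can differ between groups.
threshold = 55
unaware_rates = {g: float((s > threshold).mean()) for g, s in scores.items()}

# Demographic parity: per-group thresholds chosen so both offer rates are 30%.
parity_thresholds = {g: float(np.quantile(s, 0.7)) for g, s in scores.items()}
parity_rates = {g: float((s > parity_thresholds[g]).mean())
                for g, s in scores.items()}
# unaware_rates differ between groups; parity_rates are both ~0.30
```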
Anyway, there’s a really neat interactive graphic about these concepts – I recommend checking it out. You can find it here.
2 Comments

You Are Irrational, I Am Not

29/10/2015

3 Comments

 
For the past month or so I’ve been reading Taleb’s Black Swan again, now for the second time. I’m very much impressed by his ideas, and the forceful in-your-face way he writes. It’s certainly no surprise that the book has captivated the minds of traders, businesspeople and other practitioners. The book is extremely good – good enough to recommend as a decision-making resource. Taleb identifies a cluster of biases (or more exactly, puts together research from other people to paint the picture), producing a sobering image of just how pervasive our neglect of Black Swans is in our society. And he’s a hilariously funny writer to boot.

But.

Unfortunately, Taleb – like everyone else – falls into the same trap we all do. He’s very adept at poking other people about their biases, but he completely misses some blind spots of his own. Now, this is not evident in the Black Swan itself – the book is very well conceptualized and a rare gem in the clarity of what it is as a book and what it isn’t. The problem only becomes apparent in the following, monstrous volume, Antifragile. When reading it a few years ago, I remember being appalled – no, even outraged – by Taleb’s lack of critical thought towards his own framework. In the book, one gets the feeling that the barbell strategy is everywhere, and explains everything from financial stability to nutrition to child education. For example, he says:
I am personally completely paranoid about certain risks, then very aggressive with others. The rules are: no smoking, no sugar (particularly fructose), no motorcycles, no bicycles in town [--]. Outside of these I can take all manner of professional and personal risks, particularly those in which there is no risk of terminal injury. (p. 278)
I don’t know about you, but I really find it hard to derive “no biking” from the barbell strategy.

Ok, back to seeking out irrationality. Taleb certainly does recognize that ideas can have positive and negative effects. Regarding maths, at one point Taleb says:
[Michael Atiyah] enumerated applications in which mathematics turned out to be useful for society and modern life [--]. Fine. But what about areas where mathematics led us to disaster (as in, say, economics or finance, where it blew up the system)? (p.454)
My instant thought when reading the above paragraph was: “well, what about the areas where Taleb’s thinking totally blows us up?”

Now, the point is not to pick on Taleb personally. I really love his earlier writing. I’m just following his example, and taking a good, personified example of a train of thought going off track. He did the same in the Black Swan, for example by picking on Merton as an example of designing models based on wrong assumptions, and in a wider perspective of models where mathematics steps outside reality. In my case, I’m using Taleb as an example of the ever-present danger of critiquing other people’s irrationality while forgetting to look out for your own.
Now, the fact that we are better at criticizing others than ourselves is not exactly new. After all, even the Bible (I would never have guessed I’d be referencing that on this blog!) said: “Why do you see the speck that is in your brother’s eye, but do not notice the log that is in your own eye?”
In fact, in an interview in 2011, Kahneman said something related:
I have been studying this for years and my intuitions are no better than they were. But I'm fairly good at recognising situations in which I, or somebody else, is likely to make a mistake - though I'm better when I think about other people than when I think about myself. My suggestion is that organisations are more likely than individuals to find this kind of thinking useful.
If I interpret this loosely, it seems to be saying the same thing as the Bible quote – just in reverse! Kahneman seems to think – and I definitely concur – that seeing your own mistakes is damn difficult, but seeing others’ blunders is easier. Hence, it makes sense for organizations to try to form a culture where it’s ok to say that someone has a flaw in their thinking. Have a culture that prevents you from explaining absolutely everything with your pet theory.
3 Comments

Decisions as Identity-Based Actions

24/3/2015

0 Comments

 
This semester I had the exciting chance of teaching half of our department’s Behavioral Decision Making course. What has especially gotten me thinking is James March’s book chapter Understanding How Decisions Happen in Organizations, taken from the book Organizational Decision Making, edited by Zur Shapira. Similar points can be found in March’s paper How Decisions Happen in Organizations.

In the chapter, March presents all kinds of questions and comments directed at organizational scholars. Basically, his main point is that the normative expected utility theory is perhaps not a very good model for describing organizational decisions. No surprises there – modelling organizational politicking through utilities is pretty darn difficult. What did catch my eye was that March has a pretty nice description of how some organizations do muddle through.

This idea concerns decisions as identity-based actions. The starting point is that each member of an organization occasionally faces decision situations. These are then implicitly classified into different classes: for example, HR decisions, decisions at strategy meetings, decisions after lunch, and so on. The classification depends on the person, of course. What’s key is the next two steps: these classes are associated with different identities, which then form the basis of decisions through matching. This way, the decision gets made by basing the choice on rules and the role, not just on the options at hand.
So the manager may adopt a different identity when making strategy decisions than when thinking of whom to hire for his team. The decision is not based on a logic of consequence, but rather on a logic of appropriateness – we do what we ought to be doing in our role. “Actions reflect images of proper behavior, and human decision makers routinely ignore their own fully conscious preferences. They act not on the basis of subjective consequences and preferences but on the basis of rules, routines, procedures, practices, identities, and roles” (March, 1997). So rather than starting from the normative question of what they want, and which consequences are then appropriate, decision makers are implicitly asking “Who am I? What should a person in my position be doing?”

I feel that this kind of rule-based or identity-based behavior is a two-edged sword. On the one hand, it offers clear cognitive benefits. A clear identity for a certain class of decisions saves you the trouble of meta-choice: you don’t have to decide how to decide. When the rules coming from that identity are adequate and lead generally to good outcomes, it’s an easy procedure to just follow them and get on with it. On the other hand, the drawbacks are equally clear. Too much reliance leads to the “I don’t know, I just work here” phenomenon, in which people get in too deep in their roles, forgetting that they actually have a mind of their own.

Which way is better, then? Identity-based decisions, or controlled individual actions? Well, I guess the answer looks like the classic academic’s answer: it depends. It depends on the organization and the manner of action: how standardized are the problems that people face, is it necessary to find the best option or is satisficing enough, and so on. Of course, it also depends on the capabilities of the people involved: do they have the necessary skills and mindset to cope if left without guiding rules and identities? Or do they need a little bit more support for their decisions? Questions like these are certainly not easy, but every manager should be asking them one way or another.
0 Comments

Discussing Rationality

3/3/2015

2 Comments

 
I have a confession to make: I’m having a fight. Well, not a physical one, but an intellectual one, with John Kay’s book Obliquity. It seems to me that we have some differences in our views about rationality.

Kay writes that he used to run an economic consultancy business, and they would sell models to corporations. What he realized later on was that nobody was actually using the models for making decisions, but only for rationalizing them after they were made. So far so good – I can totally believe that happening. But now for the disagreeable part:
They have encouraged economists and other social scientists to begin the process of looking at what people actually do rather than imposing on them models of how economists think people should behave. One popular book adopts the title Predictably Irrational. But this title reflects the same mistake that my colleagues and I made when we privately disparaged our clients for their stupidity. If people are predictably irrational, perhaps they are not irrational at all: perhaps the fault lies not with the world, but with our concept of rationality.
- Obliquity, preface 
Ok, so I’ve got a few things to complain about. First of all, it’s obvious we disagree about rationality. Kay thinks that if you’re predictably irrational, then maybe the label of irrationality is misplaced. I think that if you’re predictably irrational, that’s a two-edged sword. The bad thing is that predictability means you’re irrational in many instances – these are not random errors. But predictability also entails that we can look for remedies – if irrationality is not just random error, we can search for cures. The second thing I seem to disagree about – based on this snippet – is the cause of irrationality. For Kay, it’s stupidity. For me, it’s a failure of our cognitive system.

Regarding Kay’s conception of rationality, my first response was whaaat?! Unfortunately, that’s really not a very good counterargument. So what’s the deal? In my view, rationality means maximizing your welfare or utility, looked at from a very long-term and immaterial perspective. This means that things like helping out a friend or giving money to charity are fine. Even gift-giving is fine, because you can assign value to the act of trying to guess your friend’s preferences. After all, to me this seems to be a big part of gift-giving: when we get a gift that shows insight into our persona, we’re extremely satisfied.

Since Kay refers explicitly to Dan Ariely’s Predictably Irrational, it seems sensible to look at a few cases of (purported) irrationality that it portrays. Here are a few examples I found in there:

  1. We overvalue free products, choosing them even though a non-free option has better value for money (chapter 3)
  2. We cannot predict our preferences in a hot emotional state from a cold one (chapter 6)
  3. We value our possessions higher than other people do, so we try to overcharge when selling them (chapter 8)
  4. Nice ambience, brand names, etc. make things taste better, but we can’t recognize this as the cause (chapter 10)
  5. We used to perform surgery on osteoarthritis of the knee – later it turned out a sham surgery had the same effect

If Kay wants to say that these cases are alright – that this is perfectly rational behavior – then I don’t really know what one could say to that. With the exception of point 3, I think all the cases are obvious irrationalities. The third point is a little more complex, since in some cases the endowment effect might be driven by strategic behavior, i.e. trying to get the maximum selling price. However, it also shows up in cases where we give stuff to people at random, with a payout structure that ensures they should ask for their utility-maximizing selling price. But I digress. The point is that if Kay wants to say these examples are okay, then we have a serious disagreement. I firmly believe we’d be better off without these errors and biases. Of course, what we can do about them is a totally different problem – but it seems that Kay is arguing that they are in principle alright.

The second disagreement, as noted above, is about the causes of such behaviors. Kay says they chided their clients’ “stupidity” for not using the models of rational behavior. Well, I think that most errors arise from us using System 1 instead of System 2. Our resources are limited, and we’re more often than not paying inadequate attention to what is going on. This makes irrationality not a problem of stupidity, but a failure of our cognitive system. Ok, so intelligence is correlated with performance on some tasks of rational decision making, but for other tasks there is no correlation (Stanovich & West, 2000). It’s patently clear that intelligence alone will not save you from biases. And that’s why calling irrational people stupid is – for want of a more fitting word – stupid.

Ok, so not a strong start from my perspective for the book, but I’m definitely looking forward to what Kay has to say in later chapters. There’s still a tiny droplet of hope in me that he’s just written the preface unclearly, and he’s really advocating for better decisions. But, there’s still a possibility that he’s just saying weird things. I guess we’ll find out soon enough.
2 Comments

Good Sources About Decision Making

3/2/2015

0 Comments

 
Everyone knows Daniel Kahneman’s Thinking, Fast and Slow. But if you’ve already read that, or are otherwise familiar enough with it for it to have low marginal benefit, then what could you study to deepen your knowledge about decisions? Well, here are a few sources that I’ve found beneficial. To find more, you can check out my resources page!

TED talks

In the modern world, we’re all busy. So if you don’t want to invest tens of hours into books, but just want a quick glimpse with some food for thought, there are of course a few good TED talks around. For example:

Sheena Iyengar: The Art of Choosing

The only well-known scholar so far to discuss choice in a multicultural context. Do we all perceive alternatives similarly? Does more choice mean more happiness? With intriguing experiments, Iyengar shows that the answer is: it depends. It depends on the kind of culture you’re from.

Gerd Gigerenzer: The Simple Heuristics that Make Us Smart

Gigerenzer is known as one of the main antagonists of Kahneman. In this talk, he discusses some heuristics and how in his opinion they’re more rational than the classical rationality which we often consider to be the optimal case.

Dan Ariely: Are we in control of our own decisions?

Dan Ariely is a ridiculously funny presenter. For that entertainment value alone, the talk is well worth watching. Additionally, he shows nicely how defaults influence our decisions, and how a complex choice case makes it harder to overcome the status quo bias.

Books

Even though TED talks are inspiring, nothing beats a proper book! With all their data and sources to dig deeper, any of these books is a good starting point for an inquiry into decisions.


Reid Hastie & Robyn Dawes: Rational Choice in an Uncertain World

For a long time, I was annoyed that there didn’t seem to be a good, non-technical introduction to the field of decision making. Kahneman’s book was too long and focused on his own research. Then I came across this beauty. In just a little over 300 pages, Hastie & Dawes go through all the major findings in behavioral decision making, and also throw in a lesson or two about causality and values. Definitely worth a read if you haven’t gotten into decision making before. And even if you have, because then you’ll be able to skim some parts and concentrate on the nuggets most useful to you.

Jonathan Baron: Thinking and Deciding

Speaking of short books – this is not one of them. This is THE book in the field of decision making. A comprehensive volume of over 500 pages, it covers all the major topics: probability, rationality, normative theory, biases, descriptive theory, risk, moral judgment. Of course, there’s much, much more to any of the topics included, but for an overview this book does an excellent job. It’s no secret that this book sits only a meter away from my desk – that’s how often I tend to read it.

Keith Stanovich: The Robot’s Rebellion - Finding Meaning in the Age of Darwin

This book may be 10 years old, but it’s still relevant today. Stanovich describes beautifully the theory of cognitive science around decisions, Systems 1 and 2 and so on. He proceeds to connect this to Dawkinsian gene/meme theory, resulting in a guide to meaning in the scientific and Darwinian era.
0 Comments
