“Debt reduction trumps financial growth”
“Life is more important than money”
These are statements I could well imagine someone uttering in an intelligent discussion. They also have one other thing in common: they’re pretty much meaningless.
It’s clear that we can – and often do – make such statements. In itself, there’s nothing wrong with saying that A is more important than B. For example, “the math exam is more important than the history exam” is a perfectly legitimate way of conveying your lack of interest in what happened in the Thirty Years’ War. But when it comes to talking about what you want, and how you should distribute your resources, importance statements are meaningless without numbers.
The last statement is perhaps the most common one. Presumably the idea is that we should never sacrifice human life for financial gain. Of course, that’s flat out wrong. Even if you agreed with it in principle, in practice you’re trading off human life for welfare all the time. When you go to work, you risk getting killed in an accident on the way, but you have a chance of getting paid. Buying things from the grocery store means someone has taken risks picking, packing and producing the items – if you really valued their health, you’d grow your own potatoes. In healthcare, we recognize that some treatments are too expensive to offer – the money is better used for other welfare-increasing things, like building roads. Life can be traded for welfare, i.e. money.
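To put rough numbers on the commuting example, here’s a minimal Python sketch of the revealed-preference arithmetic; the risk and pay figures are made-up placeholders, not estimates of anything.

```python
# Hypothetical figures: accepting a tiny fatality risk in exchange for a day's pay
# puts an upper bound on how much money you implicitly trade against a life.

commute_fatality_risk = 1e-7   # assumed chance of dying on today's commute (made-up)
daily_pay = 150.0              # assumed pay for the day, in euros (made-up)

# If you willingly take the trip, the pay is worth at least the expected "life cost",
# so your implied value of a statistical life is at most pay / risk.
implied_value_of_life = daily_pay / commute_fatality_risk

print(f"Implied value of a statistical life: at most {implied_value_of_life:,.0f} euros")
# prints: at most 1,500,000,000 euros, a finite number; life is being traded for money
```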
The problem with importance statements without numbers is that they hint at tradeoffs while grossly misrepresenting what we’re actually willing to accept. The examples above involve tradeoffs, and tradeoffs are impossible if one goal is always more important than another: that amounts to an infinite tradeoff rate, which means you’d give up the GDP of the whole world to avoid a teeny-tiny probability of loss of life. Doesn’t sound too reasonable, does it? In fact, Keeney (1992, p. 147) calls the lack of attention to range “the most common critical mistake”.
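Here is a small sketch of what an infinite tradeoff rate commits you to, with placeholder numbers for the death probability and world GDP, and a made-up finite value of life thrown in for contrast:

```python
# Two options: a tiny probability of a death, versus giving up a huge sum of money.
# With "life always trumps money" (a lexicographic rule), the money never matters.

tiny_death_probability = 1e-12      # placeholder: an essentially negligible risk
world_gdp_eur = 1e14                # placeholder: order of magnitude of world GDP in euros

def lexicographic_choice(death_prob, money_lost):
    """'Life is always more important': any nonzero death risk outweighs any amount of money."""
    if death_prob > 0:
        return "pay the money"      # sacrifice all the money to avoid any risk at all
    return "accept the risk"

def finite_tradeoff_choice(death_prob, money_lost, value_of_life_eur=5e6):
    """A finite (if high) tradeoff rate: compare the expected life cost, in euros, to the money."""
    expected_life_cost = death_prob * value_of_life_eur
    return "pay the money" if expected_life_cost > money_lost else "accept the risk"

print(lexicographic_choice(tiny_death_probability, world_gdp_eur))    # -> pay the money
print(finite_tradeoff_choice(tiny_death_probability, world_gdp_eur))  # -> accept the risk
```

The lexicographic rule really does hand over the entire world GDP to dodge a one-in-a-trillion risk; the finite rate, however high, does not.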
Naturally, we can always say that the examples are ridiculous: surely no one is thinking about such tradeoffs when they say life is more important than money; surely they mean “sensible situations”. In a sense, I agree. Unfortunately, one person’s ridiculous example is another’s plausible one. If you don’t say anything about the range of life and money you’re talking about, I can’t know what you’re trying to say. It’s much easier to say it explicitly: life is more important than money, for amounts smaller than 1000 euros, say.
Even this gets us into problems. If I pose a choice problem involving 3000 euros and a small chance of death, you’d be willing to make some kind of tradeoff, since the amount is above your stated range. But if I subdivide the same issue into three 1000-euro problems, suddenly human life always wins in each. If you think about utility functions, you can see how this quickly becomes a problem. Still, the situation is better than not having any ranges at all. Better yet would be to assign a tradeoff ratio that’s high but not infinite.
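Here’s a rough Python sketch of that framing problem, with made-up numbers and the 1000-euro limit read inclusively: a range-qualified rule answers differently depending on whether the 3000 euros comes as one problem or as three 1000-euro pieces, while a single high-but-finite tradeoff ratio (kept linear here for simplicity) doesn’t care how the deal is packaged.

```python
# A range-qualified rule: "life is more important than money" only for amounts
# up to some threshold; above the threshold, compare against a finite value of life.

THRESHOLD_EUR = 1000.0
VALUE_OF_LIFE_EUR = 5e6          # made-up finite tradeoff ratio between life and money

def range_rule(money_eur, death_prob):
    """Accept the risk only if the money at stake is above the threshold
    and also exceeds the expected life cost."""
    if money_eur <= THRESHOLD_EUR:
        return "refuse"                      # life always wins in this range
    return "accept" if money_eur > death_prob * VALUE_OF_LIFE_EUR else "refuse"

def utility_rule(money_eur, death_prob):
    """One consistent rule with a high but finite tradeoff ratio, no range clause."""
    return "accept" if money_eur > death_prob * VALUE_OF_LIFE_EUR else "refuse"

death_prob = 1e-5                # made-up small chance of death attached to the whole deal

# One 3000-euro problem versus the same deal split into three 1000-euro pieces,
# each piece carrying a third of the risk.
print(range_rule(3000, death_prob))                           # -> accept
print([range_rule(1000, death_prob / 3) for _ in range(3)])   # -> ['refuse', 'refuse', 'refuse']

print(utility_rule(3000, death_prob))                         # -> accept
print([utility_rule(1000, death_prob / 3) for _ in range(3)]) # -> ['accept', 'accept', 'accept']
```

The point is not the particular numbers, which are invented, but that the range-qualified rule is sensitive to how the problem is sliced up, whereas a finite tradeoff ratio gives the same answer either way.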