Follow-up to “What is Utilitarianism?”.
I’ve talked about what utilitarianism is and why you might like it. But what would it look like in practice, in the real world? A few people have suggested that in order for utilitarianism to actually work, we have to be utility-calculating robots with no desire except to maximize utility in every situation.
However, there are two big problems with that: first, you’re an error-prone human, not a perfect utilitarian robot, so you’re prone to make mistakes and accidentally make things worse. Second, you’re a human with normal human psychology, not a perfect utility robot, so you’re prone to care about things other than working a 120-hour week solely dedicated to, say, ending malaria.
So what gives? Is utilitarianism over? Doomed to be something nice but impractical? No. Instead, you should take your human nature into account and do the best you can. In fact, doing the best you can instead of trying to be a perfect utilitarian actually better maximizes utility, because if you try to be a perfect utilitarian robot, you’ll fail, and the consequences of that failure will be worse than if you had simply tried hard.
R. M. Hare, in Moral Thinking: Its Levels, Method, and Point, distinguishes two different levels at which we can think about ethics – the intuitive and the critical.
At the intuitive level (which Hare affectionately personifies as the “prole”), we think of ethics in terms of what is practical and workable for everyday life, where we face a flurry of decisions. Here, we rely on our existing desires running on automatic, and just do what comes naturally. On the intuitive level, we operate on habit and attitude with little deliberation. For example, I made an intuitive-level decision to write this blog post, and I have made an intuitive-level decision to watch some television after finishing and then go to bed. At no point on the intuitive level do I think about whether what I am doing is maximizing utility; I just do it.
At the critical level (which Hare affectionately personifies as the “Archangel”), we think of ethics in full, deliberate reflection. We reflect upon our current habits, desires, intuitions, attitudes, and plans and ask whether they are truly maximizing utility. For example, I might notice that while I plan on watching television and going to bed, I could perhaps make the world an even better place if I instead put an extra hour into making this essay more compelling, or tried to convince a friend to donate money, or made some money on Mechanical Turk and donated it somewhere I could be confident it would do good work.
People who think we should calculate utility make it sound like we should operate on the critical level all the time, but this is impossible. Instead, I try to operate on the critical level and reflect as much as I can, but I’ll still be spending much of my life on the intuitive level. Upon reflection, I’ve decided to stick with my television-and-sleep plan because I just want to relax and have more energy to put into tomorrow, and I’m skeptical that I’ll make enough money from Mechanical Turk anyway.
If I focused all my energy into doing the best I can, I’d run the risk of burning out, and then I wouldn’t accomplish anything worthwhile. If I focused all my energy into planning 100% of the time and never relied on automatic behavior, I’d go insane from making too many unimportant decisions. I do need to take some breaks, and that’s ok. I do need to act on impulse sometimes, and that’s ok. In fact, both of these things are more than ok; they’re the optimal thing to do.
(All this being said, however, I’d offer two caveats: first, I think the average person would benefit from spending much more time on the critical level. Second, I think that people tend to take more breaks than they actually need. So I’d actually suggest that most people should act more overtly utilitarian than they currently do, though it would be bad to try to be a perfect utilitarian.)
But how can we be prepared to operate at the intuitive level? The answer comes from the idea of a heuristic: a convenient rule of thumb that is easy to follow and usually right. Heuristics for utilitarianism come in the form of the next part of our toolkit: rule utilitarianism. Rule utilitarianism says that what you should do is follow a set of rules, where the rules are whatever set best maximizes utility.
“Don’t lie” is a pretty good rule. Generally, people don’t like being lied to, and lies deliberately make it harder for them to operate on accurate information, thus making it harder for them to be successful. And if they find out about your lie, even more bad things will happen. Thus, “don’t lie” becomes a great example of a rule that should be adopted for use on the intuitive-level. Thus, when I’m working day-to-day, I’ll just remember never to lie to people; in fact, I won’t even want to.
However, these rules shouldn’t be taken as absolute; there certainly are some obscure scenarios in which lying would be a good thing. The classic example is one in which you come across someone you know who wants to kill your brother, and then asks you to tell him where your brother is. Even if you know where your brother currently is, you should lie and either give a fake location or say you don’t know – you can’t risk your brother’s life merely for the sake of telling the truth! This kind of operation requires a quick kick up to the critical level, but should be fine. Indeed, if you are concerned, you could modify your intuitive-level rule to be “don’t lie, except in the case of murderers”.
On the other hand, you still want to take your intuitive-level rules seriously and break them only seldom, in the most bizarre of scenarios. You can’t just break a rule whenever doing so appears marginally beneficial, because you’re fallible, it’s too easy to trick yourself, and there’s too much risk from being wrong.
Indeed, rule breaking is most often left to a duly elected government to do very carefully. For example, instead of allowing the poor to break the rule of “don’t steal” when they are in extreme need, letting everyone become a utility-maximizing Robin Hood vigilante, we have the government legally “steal” from everyone in the form of coercive taxation, and then use this money to see to it that the poor are provided for without the need to steal for themselves. We can then make sure there’s enough oversight that the government is careful about doing this, and things turn out pretty ok.
Heuristics and Human Rights
One way these kinds of heuristics play out in the everyday is human rights. Belief in God notwithstanding, human rights are super-strong collective agreements about rules that are not to be broken under nearly any circumstance. As Scott Alexander points out in The Consequentialism FAQ, this can go a bit haywire, and finding a good balance is important. (Not to mention that the name “human rights” itself seems to diminish “nonhuman animal rights”…)
The problem with rights is that everyone disagrees about which rights people have, and it can just create an irresolvable argument where one person says “Unborn fetuses have a right to life!” and another person says “Women have a right to control their bodies!” and then we’re stuck.
Recognizing these as heuristics offers a way out. Alexander gives the example of freedom of travel – this is generally a good heuristic because we want to be able to go places without restriction, and that makes our lives better – however, we wouldn’t grant this right to small children or prisoners, because that would make things worse. Wading into abortion debates, we’d have to jump to the critical level, weigh the costs and benefits of an abortion, and make our decision there.
On the other hand, the decision to restrict freedoms must be made extra carefully. For example, it seems like it would be a good idea to restrict the speech of racists, neo-Nazis, etc. However, we’ve come to realize over time that we’re bad at figuring out who should be silenced, and thus it’s better to adopt the heuristic “never deny freedom of speech on the basis of the content of that speech” and stick with it, no matter what.
So what does a utilitarian do in everyday life? A utilitarian does not directly calculate utility and desire only utility, because a utilitarian is a fallible human agent with other competing desires and without the ability to put 100% of his or her effort into calculating utility. Therefore, the utilitarian agent relies on indirect utilitarianism in the form of two-level utilitarianism, using rules and heuristics to decide what to do rather than calculating the best action out of all possible actions every time.
Many people see rule utilitarianism, two-level utilitarianism, indirect utilitarianism, etc., and think that with all these utilitarianisms in play, no one is seeking to maximize total well-being anymore. I want to make it clear that the “command” to maximize total well-being has not been discarded. Instead, the decision procedure of forgoing explicit utility calculations in these kinds of scenarios is itself adopted out of a specific desire to maximize total well-being.
It’s the other way around – people want to maximize total well-being, but recognize that they can’t do so by just thinking really hard about what will do so every minute of their lives. Thus, what we are settling on is specific decision procedures that explain how we should implement the goal of total well-being maximization in our daily lives.