Fyfe-Hurford Debate: My Round One

Over at Felicifia, Alonzo Fyfe, blogger at Atheist Ethicist and The Secularite, and I had a debate on “Which is a better theory of utilitarianism – Alonzo Fyfe’s desirism or Peter Hurford’s two-level utilitarianism?”. I wanted to reprint that debate here. Here’s the second entry (of eight). This entry was written by me and is a continuation of the previous entry, which was written by Alonzo.

Introduction

I want to first spend some of my precious word count mentioning how much of an honor it is to be engaging with Alonzo Fyfe on this important issue. In many ways I owe Alonzo a lot for where I am today. In 2011, I was a huge fan of desirism. I learned many aspects of moral philosophy from reading Fyfe and became interested in ethics by following Alonzo’s journey to figure out the right thing to do. Perhaps ironically, the first place I learned about “two-level utilitarianism” was from Fyfe himself.

In this debate, I offer “two-level utilitarianism” as a better view of utilitarianism. However, I agree with Alonzo that this is more of a friendly discussion than a debate. I must admit it might be hard for Alonzo to give up desirism after spending so much of his life defending it, just as it would be hard for me to give up utilitarianism after having a blog called “The Everyday Utilitarian”. But I rest assured that we are both beholden to the truth more than to our particular brands of utilitarianism, and I believe we can be trusted to update our beliefs in the face of new arguments.

What is “Two-Level Utilitarianism”?

In 1981, R. M. Hare wrote a book called Moral Thinking: Its Levels, Method, and Point, in which he defends preference utilitarianism in theory, but argues that the human condition requires two levels of utilitarianism in practice – an intuitive level of rule utilitarianism and a critical level of act utilitarianism.

Hare notes that throughout much of our life we operate at an intuitive level where we don’t have the time to think through a situation and we have to act on habit and internalized thinking. At this intuitive level, we implement rule utilitarianism, where we internalize a set of rules and follow them.

However, Hare argued that, as often as possible, we should re-evaluate these rules for ourselves and engage in full and deliberate reflection. At this critical level we implement act utilitarianism, perhaps resolving a particularly difficult moral problem or figuring out ways to update our moral rules so that they better guide us to maximize utility in our intuitive moments.
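To make this division concrete, here is a minimal sketch (in Python) of how I read Hare’s two-level procedure. The rule set, the utility estimates, and all of the names below are my own hypothetical illustration, not anything taken from Hare.

# A toy sketch of two-level utilitarian decision-making (illustrative only).
# The example rules and every name here are hypothetical.

INTUITIVE_RULES = {
    "someone asks for directions": "help them",
    "tempted to lie for convenience": "tell the truth",
}

def intuitive_level(situation):
    """Fast and habitual: follow an internalized rule if one applies."""
    return INTUITIVE_RULES.get(situation)

def critical_level(options, estimate_utility):
    """Slow and deliberate: act-utilitarian evaluation of every option."""
    return max(options, key=estimate_utility)

def decide(situation, options, estimate_utility, have_time_to_reflect):
    if not have_time_to_reflect:
        rule_verdict = intuitive_level(situation)
        if rule_verdict is not None:
            return rule_verdict
    # Either we have time to reflect or no rule covers this case:
    # drop to the critical level, evaluate the options directly, and
    # (per Hare) use the result to revise INTUITIVE_RULES over time.
    return critical_level(options, estimate_utility)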

What Makes One Utilitarianism Better Than Another?

In this debate, we seek to find the “best” version of utilitarianism. But, in doing so, we need some method for judging what makes one theory of utilitarianism better than another. Alonzo did not overtly outline any specific criteria, so I thought I would suggest one: if the best utilitarian act is the act that maximizes utility and the best utilitarian rule is the rule that maximizes utility, I suggest that the best version of utilitarianism is the version that maximizes utility.

This principle of meta-utilitarianism makes sense – utilitarianism should not be self-defeating when followed, so we therefore should adopt whatever theory actually accomplishes utilitarian ends.

What is the Case for Two-Level Utilitarianism?

I call “stereotypical act utilitarianism” the naïve strawman theory of utilitarianism that says we should, in practice, be straightforward utilitarians exactly as described in the textbook – from the moment we wake up to the moment we go to sleep, we should analyze every option available to us and choose only the one that maximizes utility.

Many theorists have pointed out that stereotypical act utilitarianism doesn’t work because of problems like the difficulty of calculating utility for every choice and the fact that we often act on imperfect information. Faced with these methodological and motivational problems for utilitarianism, I suggest we abandon stereotypical act utilitarianism in practice while keeping it in theory.

Kahneman (1973) points out that we only have a finite amount of attention and energy to allocate to the decisions in our life. Research in psychology has put forth “dual-process theory”, in which we have a “System I” style of thought that is quick, reflexive, and intuitive and a “System II” style of thought that is slow, reflective, and analytical. This dual-process picture fits two-level utilitarianism perfectly.

If we’re realistic, we have to be comfortable with the fact that many of our decisions will be intuitive. Indeed, making an intuitive snap decision without fully calculating out all the options will actually be the act utilitarian thing to do when we account for the fact that these calculations drain our energy and take time away from our ability to focus on other things.

However, that doesn’t mean we should stop there. In fact, I think that while we shouldn’t strive to be act utilitarians at all times, the vast majority of people would become better at producing utility if they were more reflective and self-aware, not less. When we have time to reflect, we can deliberately engage in “System II” processes (or operate at the critical level, as Hare calls it) and reflect upon our rules to sharpen them.

This divide is what creates the need for a two-level utilitarianism, so that the theory fits how we actually think. It’s also this divide between situations that prevents rule utilitarianism from collapsing into act utilitarianism, which was a worry of Alonzo’s.

What Makes Rules Better Than Desires?

Desirism asks us to desire that which we have many and strong reasons to desire. From a moral standpoint, this means adopting desires that help satisfy the desires of others. However, it is far harder to alter your desires on a fundamental level than it is to maintain a desire to follow the best rules (out of a desire to be moral, perhaps) and then alter the rules you possess. There’s also a concern that desires do not exist, while rules certainly do exist.

Moreover, desirism does not elaborate on the role of reflection like two-level utilitarianism does. What process are we engaging in when we decide what desires to adopt? How do we change our own desires? In answering these two questions, how is desirism different from rule utilitarianism?

What is Utility? Don’t Know, But Not Intrinsic Value.

One area where our moral systems seem to diverge is in attitudes toward utility. Alonzo is fond of how desirism need only operate on desires and does not need any “intrinsic value” or “value-laden term”. I agree with Alonzo in his critique of brain-state theories of value. Personally, I have not yet found a satisfactory reduction of utility that is sufficiently rigorous and resolves moral dilemmas in a way that satisfies me.

Luckily, for now, the utilitarian choices are pretty clear pragmatically without a robust theory of utility. What’s important for the sake of this debate is that, like desirism, my version of two-level utilitarianism also does not rely on any “intrinsic value” or spooky stuff in order to work. Instead, utility matters because people (and other beings capable of experience) matter, and these lives matter to me simply because I have a desire to make their lives better. I consider myself to be a moral anti-realist and have therefore constructed an anti-realist version of utilitarianism.

What Ought We Do in an Exotic Situation?

Another possible divergence is the “10000 Sadists Problem”. Because this situation is so bizarre and unlike the situations we encounter in everyday life, it’s likely that our normal rules and intuitions will fail to maximize utility. Therefore, we should go to the critical level if time allows.

First, I’d like to mention that the better utilitarian theory is the one that maximizes utility, not the one that gives the response to hypothetical scenarios that best satisfies people’s intuitions. This is what Luke Muehlhauser calls the wrong test for moral theories, precisely because we don’t have a reason to expect the correct moral answer to always be the intuitive one.

That being said, I don’t think two-level utilitarianism would have you give up the kid to the 10000 sadists. Many people think that utilitarianism says the needs of the many outweigh the needs of the few. This isn’t precisely right, because utilitarianism also takes the strength of each need into account. In this case, the kid’s need to avoid being tortured is intensely strong – so strong, I’d suspect, that it still outweighs the satisfaction that the 10000 sadists get. Generally, pain hurts a lot more than pleasure satisfies, in both intensity and duration, and so the kid’s suffering can be expected to dominate the calculation.
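To make the arithmetic explicit – with numbers that are purely my own illustration, not measurements anyone has made – a naïve tally might look like this:

# Purely illustrative numbers: the only point is that intensity and duration
# matter, so one instance of extreme suffering can outweigh many mild pleasures.
num_sadists = 10000
pleasure_per_sadist = 1.0      # mild, short-lived satisfaction (arbitrary units)
suffering_of_kid = -50000.0    # intense, lasting trauma (arbitrary units)

net_utility_if_tortured = num_sadists * pleasure_per_sadist + suffering_of_kid
net_utility_if_spared = 0.0    # baseline: no one is tortured

print(net_utility_if_tortured)  # -40000.0
print(net_utility_if_spared)    # 0.0
# On these made-up weights, sparing the kid is what maximizes utility.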

Just How Utilitarian is Desirism, Anyway?

Does Desirism Actually Maximize? Alonzo suggests that desirism resolves the 10000 Sadists Problem by looking at the desire to torture and asking whether we need that desire. Because we don’t possess reasons for having a desire to torture, we can safely discard it.

However, who possesses the reasons for having or discarding desires? The 10000 sadists themselves certainly have many reasons to keep their desire to torture – they get satisfaction from doing so. The “10000 Sadists Problem” attacks utilitarianism’s maximization process, arguing (naïvely) that the utility gained by the 10000 sadists outweighs the utility lost by the tortured kid. In order to discard the desire to torture, desirism must go against this grain of maximization and do something else. I’m not sure what this process is.

Why Does Desirism Not Count Future People? Secondly, from a theoretical standpoint, I worry that desirism does not take into account desires that do not yet exist. Perhaps Alonzo feels this is a feature and not a bug, and certainly “person-affecting” theories of utilitarianism are held by many theorists. However, most utilitarian theories hold that future people matter just as much as present people, and that we shouldn’t discriminate against them in our analysis just because they don’t yet exist. I don’t know what Alonzo’s reasoning is for treating this distinction as a morally relevant one.

Also, while I don’t believe matching intuitions is important, this desire to help future people does match our intuitions – think of the desire to curb global warming or tackle large social challenges that will only benefit people several generations from now. The very desire to leave a better Earth for future generations presupposes that future generations matter.

Does Desirism Hear The Strongest Needs? Lastly, from a pragmatic standpoint, I worry that desirism does not do a good enough job of identifying which desires are the most numerous and the strongest. Right now, I believe one of the most pressing needs is extreme poverty in the developing world. Two-level utilitarianism explicitly states that we should evaluate needs objectively, and therefore brings extreme poverty to our attention. However, the desires of these people to be free from extreme poverty don’t speak very loudly and don’t weigh much on the minds of us Americans. When we Americans think of desires we have strong reasons to support, we think of the desires that speak to us loudly and end up supporting many first-world causes. I think this is a utilitarian mistake, and I would hope the utilitarian theory we adopt guides people easily to this understanding.

Conclusion

I offer two-level utilitarianism as the utilitarian theory better suited to maximize utility when implemented in practice. I’ve argued in favor of two-level utilitarianism because it (a) fits with findings in psychology that we have two processing systems and only limited attention to allocate across our decisions.

This theory is more practical than desirism because (b) rules are more malleable than desires, (c) the role of reflection in desirism is less clear, (d) desires may not exist, and (e) two-level utilitarianism makes it clearer that impartial attitudes matter when considering which problems ought to be prioritized morally.

Additionally, I’ve argued that desirism falls short of utilitarianism because it (f) has a strange attitude toward maximization and (g) does not take the welfare of future people into account.

Lastly, I’ve answered Alonzo’s problem with utility by agreeing with him and arguing that two-level utilitarianism likewise relies only on entities that exist. I also answered Alonzo’s 10000 Sadists Challenge by arguing that it is the wrong test for a moral theory. And I answered Alonzo’s concern that rule utilitarianism collapses into act utilitarianism by showing the impossibility of being an act utilitarian at all times.