The Problems with Utilitarianism

I originally wrote this essay in 2014 or 2015 in a Chinese buffet in Athens, Georgia. I've changed some of it and am re-adding it here. I talk about the issues with Utilitarianism and a bad book by Sam Harris.


Utilitarianism

At a dumb intuitive level, the "ethical" idea of [Utilitarianism]{.dfn} in principle gets pretty close to what most people reflexively want from social-political affairs: the greatest good for the greatest number of people—who doesn't want that?

The problem is that that intuitive idea is incoherent. It sounds good, but there's not really such a thing as "the greatest good for the greatest number of people," and even if there were, it wouldn't be actionable.

"Maximizing"

So the first problem is one any mathematician will realize right off the bat: it's rarely possible to maximize two different quantities at the same time.

If we had the means, we could maximize (1) the amount of good in society or (2) the number of people who feel that good, but almost certainly not both (if we somehow can, it's a bizarre coincidence).

It's sort of like saying you want to find the house with the highest available altitude and the lowest available price: the highest house might not have the lowest price and vice versa. In the same way, the way of running society that maximizes overall happiness is almost certainly not the way that maximizes each individual's happiness.
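To make the point concrete, here's a toy sketch in Python (the houses and all the numbers are made up for illustration): the house that maximizes altitude and the house that minimizes price are simply different houses, so no single choice optimizes both criteria.

```python
# Toy illustration with made-up numbers: each house has an altitude
# (in meters) and a price (in dollars).
houses = {
    "hilltop cabin":   {"altitude": 900, "price": 250_000},
    "valley bungalow": {"altitude": 120, "price":  90_000},
    "mid-slope ranch": {"altitude": 450, "price": 140_000},
}

# The house that maximizes one criterion...
highest  = max(houses, key=lambda h: houses[h]["altitude"])
# ...is not the house that optimizes the other.
cheapest = min(houses, key=lambda h: houses[h]["price"])

print(highest)   # hilltop cabin
print(cheapest)  # valley bungalow
# "The highest house at the lowest price" names nothing; you can
# only trade one criterion off against the other.
```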

There are some classic moral puzzles that bring this out: let's say there's a city where basically everyone is in absolute ecstasy, but their ecstasy can only take place if one particular person in the city is in intense and indescribable pain. Or, for another example, to maximize my happiness, we might need to make everyone in the world my slave and allow me to rule as I please. Although this might maximize my happiness, it might not maximize anyone else's (if it does, however, we might want to consider it).

The Well-being of Conscious Creatures

So I recently read Sam Harris's The Moral Landscape, which is either a failed attempt to bring Utilitarianism back to life or a misguided book simply ignorant of the problems with it. I don't actually recall Harris using the term "utilitarianism," although that is really just what he's arguing for.

Harris repeats one mantra in basically every paragraph of the book: "the well-being of conscious creatures... the well-being of conscious creatures... the well-being of conscious creatures." In addition to being repetitive, the term is problematic for deeper reasons. Harris wants our Utilitarian engineers to maximize "the well-being of conscious creatures," but the problem is that we can't add up enjoyment in the first place. There's no way of taking my enjoyment of candy, subtracting the pain of a broken nose, and adding or subtracting an existential crisis or two.

Now his hope is that eventually we'll understand the brain's neurology well enough to do just that. I don't take Harris for a fool, and he does have a Ph.D. in neuroscience (obviously I am being sarcastic), but I think he's ignoring all the important problems, either to appeal to a public audience or just to convince himself. We can study the neurology of feelings and get readings of neural activity, but objective neural activity is certainly not subjective experience. Twice as much neural activity doesn't mean "twice" the subjective experience.

We can no more look at brain activation to understand subjective experience than we can look at the hot parts of a computer to see what it's doing.

You can't do math with feelings

Of course one of the problems of qualia/subjective experience is that they are necessarily unquantifiable: imagine how you felt the last time you got a present you really enjoyed—now imagine yourself feeling exactly twice as happy—now 1.5 times as happy—now 100 times as happy.

You can't do it, and even if you could, you couldn't compare that experience with other experiences—you can't really understand what it means to be as happy as you were sad a month ago, and that prevents us from actually adding up your experiences into one number to be maximized.

But again, even if we could, it would be impossible to add that number up with someone else's experience. Humans have different subjective experiences: caffeine affects me demonstrably differently than it affects other people, but I can't quantify that; some people are more affected by pain (to my understanding, women seem to have a neurology more pain-prone than men's), but how can we precisely relate the experiences of every individual person?

And of course, although Harris wants to maximize "the well-being of conscious creatures," we have no clue what kinds of conscious experiences define animal life, or which animals are "conscious" in any recognizable sense. As Thomas Nagel noted in "What Is It Like to Be a Bat?", we can't even begin to imagine what it's like to be a bat, but to quantify bats' experiences and compare them to our own? Forget about it!

Douglas Adams, in his Hitchhiker's Guide to the Galaxy series, presented the idea of a genetically engineered cow that was made not only to be able to speak, but to enjoy the prospect of being eaten and to encourage others to kill and eat him. Experience itself is not some kind of final arbiter of morality. Pain, in fact, might be a negligible or incomplete guide to what is not good. Children have to put up with being dragged around to do many things they don't enjoy; that doesn't mean there's any immorality in it.

The philosophical problems here are so endless as to make any kind of objective application of Utilitarianism based on neuroscience far beyond even fantasy. I will be so bold as to say that it will simply never be possible, regardless of what chips Elon Musk wants to put in your brain.

To repeat:

Utilitarianism isn't just impossible, it's impossible every step of the way.

To be clear, these are not technological problems that a future totalitarian government might be able to "solve." There really is no coherent sense in which we can put a number to a certain feeling of happiness and subtract from that another person's feeling of unhappiness. Qualia are qualia. It's like subtracting the sound of an airplane from the color blue.
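For programmers, the point can be put as a type error rather than a hard computation. Here's a contrived sketch (the Sound and Color types are made up for illustration; nothing here is from Harris):

```python
# Contrived sketch: the category error as a type error. "Sound" and
# "Color" are made-up types; the point is that subtraction between
# them is undefined in principle, not merely unimplemented.
from dataclasses import dataclass

@dataclass
class Sound:
    decibels: float

@dataclass
class Color:
    wavelength_nm: float

airplane = Sound(decibels=120.0)
blue = Color(wavelength_nm=470.0)

try:
    airplane - blue  # no meaningful "-" exists between these types
except TypeError as err:
    print(err)  # unsupported operand type(s) for -: 'Sound' and 'Color'
```

No future version of the language "fixes" this, because there's nothing to compute; the operation itself is meaningless, which is the sense in which happiness can't be subtracted from unhappiness.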

What Utilitarianism really is

Anyway, the tradition of Utilitarianism was always a failure, but it's an interesting sign of the times. The Enlightenment was a time of some (less than usually thought) scientific advancement, and the idea was that as we began to understand the nature of the body and the stars and everything else, we could come to fully understand human society too.

Eventually we could engineer and control them all. But as fast as we learn things about the world, even faster do complications arise, and we end up "[restoring nature's] ultimate secrets to that obscurity, in which they ever did and ever will remain," in Hume's words.

The only really unfortunate thing is that the ruling class of the West either doesn't know or doesn't care. There's a cynical sense in which they are attempting to re-engineer or "Build Back Better®️" the world on Utilitarian principles, where every decision is deemed acceptable by some centralized utilitarian calculus.
