We are the Utility Monsters
A short exploration of an ironic tension in popular moral reasoning
One of the main objections people have to utilitarianism concerns the unpalatable implications of the Utility Monster: if there were a being that derived pleasure so intense from devouring human beings that it outweighed the disutility of our being eaten, utilitarianism would imply that we are morally obligated to feed humans to this monster. But this seems absolutely horrific! How could you possibly justify doing such horrible things to us humans simply because there is a being who experiences life more intensely than we do?
Suppose this intuition is right. I now want you to consider something different: whether it is a massive moral issue that billions of fish have their skin ripped off as they suffocate to death every day, or that hundreds of billions of shrimp have their eyes bludgeoned before they die a slow, agonizing death, all because we want to experience the pleasure, convenience, and (super contestable) health benefits of eating salmon and shrimp.
One of the popular-level objections to caring about these small creatures is that they are nothing like us. In particular, the objection goes something like this: we humans operate at a much higher level of consciousness. We can experience an unbelievably visceral array of rich sensations, from pleasure to pain, reflect on our own existence, speak in complex ways, and do mathematics. Salmon and shrimp can do none of this. So, it is perfectly okay for us to farm salmon and shrimp for our own benefit: for our convenience, pleasure, and health.
I hope it is clear why I am bringing these two issues together: the issue of the Utility Monster, and the issue of the suffering of small creatures. The commonsense response is strikingly similar in both cases, just pointed in opposite directions. We reject feeding humans to the Utility Monster because we find it repugnant to kill humans simply in virtue of the intensity of pleasure that some other being might experience. And yet we also dismiss the suffering of small creatures because we prioritize our “higher” conscious experiences of pleasure. In short, those who eat shrimp and salmon are treating themselves as Utility Monsters. And yet they deny that there could ever be a Utility Monster who should consume them.
I think the takeaway from this tension is this: if you think that you are justified in eating farmed shrimp and salmon because of the differences in our levels of consciousness, you should probably have no problem with Utility Monsters.
Now, there is a clarification I should issue here. I am not saying that all opponents of fish welfare must endorse the prospect of Utility Monsters. There are other objections to animal welfare that have no straightforward implications for Utility Monsters: you might think, e.g., that shrimp are totally non-sentient, such that there just is no suffering present on shrimp farms. Of course, this objection is bad for a whole array of other reasons (namely, even if we had good reason to believe in non-sentience, any plausible theory of empirical and moral uncertainty should counsel against risking mass torture; also, it’s really, really implausible). But that sort of reason not to care about fish welfare won’t generate this worry about Utility Monsters.
Instead, what I have in mind is the view that shrimp and fish are much less conscious than us, and so our practice of eating them for our own pleasure, convenience, and health is justified. But if it is justified for us to farm creatures less conscious than us, you should think it likely justified for creatures much more conscious than us to farm us in turn.
What might defenders of this view, who want to avoid the prospect of Utility Monsters, say in response? The natural objection will be this: there is something special about humans. In particular, we are persons. And once a creature reaches the level of a human person, it can no longer be sacrificed for the pleasure, convenience, and health of some other being, no matter how cognitively superior that being may be. In other words, once you become a person, you acquire a magical, special moral property: the property of infungibility.
But I think this objection fails for a few reasons.
First, it faces a very troublesome Sorites series. Start with a human. Now, beside her is another creature that is almost exactly identical to the human, except that it is ever so slightly less conscious: it can feel things at almost the exact same level, compute things at almost the exact same level, reason at almost the exact same level of abstraction, understand language with almost the exact same proficiency, and so on. Call this creature C2. Beside C2, imagine another creature that stands to C2 just as C2 stands to the human: almost exactly identical, but ever so slightly less conscious. Call this creature C3. We can generate a remarkably long series of such creatures, until we get to a shrimp: <Human, C2, C3, C4, …, Shrimp>. The opponents of fish welfare we are considering are those who say that humans are infungible but shrimp are fungible. But this seems to imply that there is some pair in the series where creature Cn is sufficiently conscious and important to be infungible, and yet creature Cn+1, which is almost exactly like Cn, is suddenly fungible. The idea that something as morally deep as whether you are infinitely valuable could be contingent on such slight changes borders on the absurd!
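For readers who want the skeleton of the argument spelled out, here is a minimal formal sketch (the predicate I(x), read “x is infungible,” and the series length k are my own notation, introduced just for illustration, writing C1 for the human and Ck for the shrimp):

$$
\begin{aligned}
&\text{(P1)}\quad I(C_1) &&\text{(the human is infungible)}\\
&\text{(P2)}\quad \neg I(C_k) &&\text{(the shrimp is fungible)}\\
&\text{(C)}\quad \exists\, n < k:\; I(C_n) \wedge \neg I(C_{n+1}) &&\text{(some single step flips the moral status)}
\end{aligned}
$$

The move from (P1) and (P2) to (C) is just finite induction on the series: if no adjacent pair flipped, infungibility would pass step by step from the human all the way down to the shrimp, contradicting (P2).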
To see the force of this argument, consider that if you think there is some moral circle of “moral creatures with rights,” you have to draw the boundaries of that circle in ways that make rights contingent on almost indistinguishable differences.
Second, I suspect that most people don’t really believe that humans are infungible, i.e., that humans can never be sacrificed for the pleasure, convenience, and health of some other being. It is rather implausible to suggest that we should not kill a single human being if it were absolutely necessary to save 10 trillion other human beings from agony and death. But if it is more important to prevent the suffering of 10 trillion other humans, why is it not more important to prevent a single being from suffering 10 trillion times more pain? Perhaps it will be objected that humans are fungible when it comes to preventing suffering, but not when it comes to producing more goodness. But I suspect we do not really act that way at all. Consider, as a single example, the highway system: we know that it will cause at least one innocent death every year (and, realistically, many, many more). And yet no one thinks this is a reason not to have highways: the foundational role they serve in our economy far outweighs the deaths they cause.1 But if some deaths are worth accepting for the pleasure and goodness of a society of hundreds of millions of people, why wouldn’t the same be true for the pleasure and goodness of a creature that would experience hundreds of millions of times more pleasure than all of human society?
So, yeah, we are Utility Monsters.2
1. Sometimes it is objected that we consent to the highway system by driving on it, or by being part of a society that reaps its benefits. But simply consider whether it would be better for our society not to exist, or for it to exist while every year one random person is killed. If it is up to you which state of affairs is actualized, it seems clearly better to actualize the latter.
2. I should also say that I don’t think our farming of shrimp is anything at all like the actual Utility Monster hypothetical: we obviously do not derive anywhere near enough pleasure to compensate for the mass torture that is the farmed shrimp and salmon industry.



Pardon me for sounding like a rube, but how does one go from "suffering is bad for the sufferer" to "suffering is bad _simpliciter_"? I've never understood this deduction.
I'll also quibble about terms like "implausible" — I find such words very subjective. What is implausible to you might be entirely plausible to me. Our pre-rational seemings are just _doxai_ (sensu Aristotle). They're typically not the sort of facts for which we have Wittgensteinian certainty, and can safely be questioned. A thoroughgoing deontologist who is an absolutist and thinks persons are a natural kind will resist the sorites problem you highlighted and the doxa that it is licit to sacrifice one human being for ten trillion others. Why is that implausible? What first principle makes it implausible?
Unfortunately, you got low-key scooped by my smart philosophy friend…
https://open.substack.com/pub/simplereflections/p/we-are-utility-monsters?r=48gsp&utm_medium=ios