Discussion about this post

Silas Abrahamsen:

Very interesting post, and I certainly agree that this issue is (somewhat) separate from normative ethical issues! Though a few thoughts/objections:

1) I suppose I don't feel the pull of the initial intuition pump too strongly. Perhaps I'm just lost in the sauce, but the more I think about repugnant-style conclusions, the less implausible I find them, and I suspect the lingering repugnance is just scope neglect.

Moreover, denying the premises that lead to these kinds of conclusions remains very counter-intuitive, even on further reflection. This is more of a report than an argument, though.

2) You say that the worry about external circumstances mattering isn't so bad once we realize it concerns objective rather than subjective value. I don't see this. Objective value still matters for what we ought to do, and it still seems very absurd to me that what happens in distant parts of the universe can determine whether I should alleviate a headache or take some chance of stopping someone from getting tortured.

You give an example of egalitarianism. Maybe this is idiosyncratic, but I just think that example shows that axiological egalitarianism is wrong! (Likewise for the other views you mention that have this issue.) Surely whether I should benefit John doesn't depend on welfare levels outside the observable universe or in the distant past!

Now, things external to the situation you're considering can of course matter insofar as what you do might make some difference to what happens elsewhere, or vice versa. The trouble comes when completely separate things that could never make any difference still carry extreme importance for our decision-making. That just seems completely implausible to me!

3) Isn't your theory very sensitive to how we individuate kinds of harms? If we characterize headaches broadly, there may be very many of them, and the marginal headache is not very bad; but if we characterize them in a fine-grained way, the marginal headache will be as bad as any other, since each instance will probably differ somewhat from the rest.

I have a hard time seeing any principled way of individuating kinds of harms other than as finely as possible, but doing so doesn't capture the intuitions you want, as we can run the torture vs. 1000 headaches counterexample with minutely differing headaches.

4) Small point, but it seems that if we live in a multiverse, an infinite universe, or something similar, we get the result that nothing we do matters axiologically (or that only the things we do regarding unbounded kinds of value matter axiologically), since the marginal value of our actions is 0 or infinitesimal. (This objection might be countered by your response to the "much worse than" objection.)

5) Finally, just a question: why are you a fanatic, given this view? If value is bounded, surely it's no longer true that any arbitrarily small probability of some value V can be made up for by a sufficiently large magnitude of V?

That's a lot, and I suspect you have good things to say about all of it! Ultimately it's a matter of reflective equilibrium, and I just don't feel the initial pull away from unbounded value strongly enough to justify what I see as extremely heavy costs.

Tyler Seacrest:

Thanks for the great post! I have a couple comments:

1. I do think a logarithmic scale may be the right way to go. For example, if a headache intuitively feels like a 2 and torture feels like a 1000, then the "true" values are 10^2 and 10^1000. True, we can get enough headaches to outweigh the torture, but it takes 10^998 of them, which is basically infinity. Also, for money, earthquakes, sound loudness, city size, etc., how big something feels is the log of its absolute size, so moral value would just be another entry in the long list of examples of human intuition working on a log scale.
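A quick sanity check of that arithmetic, as a Python sketch (the variable names are just illustrative):

```python
# On the log reading, felt sizes 2 and 1000 correspond to true values
# 10**2 and 10**1000; Python's big integers can check the ratio exactly.
headache, torture = 10 ** 2, 10 ** 1000
n = torture // headache     # headaches needed to match the torture
assert n == 10 ** 998       # 10**998 of them -- basically infinity
```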

2. Still, I love the idea of trying to get bounded aggregation to work mathematically. Here is my idea: write both numbers in base 2, adding leading zeros so they have the same number of digits to the left of the decimal point. Then use the digits of the smaller number to replace the zeros in the digits of the larger number. Some examples: (1 + 1 = 1.1), (1.1 + 1 = 1.11), (100.1 + 010.1 = 101.101). This has the property that no amount of adding numbers less than 8 (or any other power of two) will ever reach 8. It's commutative by fiat (you have to start with the larger number), and I believe it's associative naturally (I don't have a proof, but it worked in one example!).
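Here's a rough Python sketch of that rule as I understand it (assuming "use the digits of the smaller number to replace the zeros" means pouring the smaller number's digits, in order, into the larger number's zero slots), plus a brute-force associativity check; the function names and sample values are mine, just for illustration:

```python
from itertools import product
from fractions import Fraction

def _pad(x: str, int_len: int, frac_len: int) -> str:
    """Pad a binary string like '100.1' to fixed integer/fraction widths."""
    i, _, f = x.partition('.')
    return i.zfill(int_len) + '.' + f.ljust(frac_len, '0')

def merge(a: str, b: str) -> str:
    """Pour the smaller number's binary digits, in order, into the
    zero slots of the larger number's binary expansion."""
    int_len = max(len(a.partition('.')[0]), len(b.partition('.')[0]))
    frac_len = 2 * max(len(a.partition('.')[2]), len(b.partition('.')[2]), 1) + int_len
    a, b = _pad(a, int_len, frac_len), _pad(b, int_len, frac_len)
    big, small = (a, b) if a >= b else (b, a)  # aligned strings compare numerically
    queue = [d for d in small if d != '.']     # smaller's digits, left to right
    out = [queue.pop(0) if ch == '0' and queue else ch for ch in big]
    return ''.join(out).rstrip('0').rstrip('.')

def value(x: str) -> Fraction:
    """Exact numeric value of a binary string like '101.101'."""
    i, _, f = x.partition('.')
    return Fraction(int(i, 2)) + (Fraction(int(f, 2), 2 ** len(f)) if f else 0)

# The worked examples from the comment:
print(merge('1', '1'), merge('1.1', '1'), merge('100.1', '010.1'))
# -> 1.1 1.11 101.101

# Brute-force associativity check over a few small samples.
samples = ['1', '1.1', '10.1', '11.01', '100.1']
fails = [(a, b, c) for a, b, c in product(samples, repeat=3)
         if value(merge(merge(a, b), c)) != value(merge(a, merge(b, c)))]
print('associativity failures:', fails if fails else 'none in these samples')
```

If that prints no failures, it's a bit more evidence for associativity than one example, though still nothing like a proof.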
