Parfit¹ had a weird view of personal identity: he was a reductionist about it, much as you and I are reductionists about “heaps” (we take ‘heap’ to be a concept that fuzzily tries to track an underlying thing, mass or volume, which is what really matters). His theory is elegant enough that it has attracted some of Substack’s best to ardently defend it.
Recently, I had a realization: I believe the reductionist view of personal identity gives you reasons to be good (i.e., to help others be better off) even when you are thinking purely prudentially! If I am right, Parfit’s theory entails that it is sometimes in your self-interest to help others over yourself. This is pretty odd. Indeed, the result is strong enough that it could ground a meta-ethical theory of morality, deriving (some) moral obligations from pure prudence, as Kant, and arguably Rawls, tried to do. All from an at least credible theory of personal identity.
Let’s examine why this might be the case.
1 Reductionism About Personal Identity
Parfit’s view of personal identity, roughly, is that it is not what matters. What matters, instead, is psychophysical connectedness.
To get a nice intuition for what this means, consider the concept ‘heap’. We might ask: under what conditions are there heaps? This, it turns out, is a tricky question to answer. Simply imagine what everyone would call a heap of sand — say, a billion grains — and remove just a grain. This, too, is a heap. Keep on doing this, though, and we’ll get to a single grain of sand, which is clearly not a heap. But it also doesn’t seem as though there is some precise point in the middle where our heap became a non-heap.
Of course, this isn’t too troubling, because most of us are reductionists about ‘heaps’. We recognize that it is a linguistic concept we’ve constructed, one that is merely supposed to track an underlying feature of reality: mass or volume. If I wanted to make a law banning people from throwing heaps of snow at one another, what I should do, if my goal is to be precise, is specify that people cannot throw piles of snow above mass M or volume V.
Parfit thought that we should think the same thing about ‘personal identity’. What matters is simply how psychologically and physically connected (how strongly “R-related”) I am to something else. Indeed, he thought there was a similar kind of Sorites one could give about personal identity. Richard Chappell, quoting Parfit, writes:
> At the near end of this series is the normal case in which a future person would be fully continuous with me as I am now, both physically and psychologically. … At the far end of this series the resulting person would have no continuity with me as I am now, either physically or psychologically. In this case, the scientists would destroy my brain and body, and then create, out of new organic matter, a perfect Replica of… Greta Garbo.
>
> At intermediate points along the series, varying proportions of the cells in Parfit’s brain and body are replaced in ways that make the resulting person more and more like Greta Garbo. But the series lacks drastic discontinuities. We can imagine a full series of possible people, starting with pure Parfit, then Parfit with a hint of Garbo, through various mixes of the two, until we reach Garbo with a hint of Parfit, and finally pure Garbo.
Parfit thinks we should respond to this Sorites series in much the same way. In the first few cases, the resulting person would be me. In the last few, they obviously aren’t. In the middle, the matter is simply vague, because personal identity is a fuzzy linguistic concept, much like ‘heap’. What we do know about this series of people is how psychologically and physically similar each one is to me: how strong the R-relation is between us. And that is what there are facts about: as Parfit put it, “if I knew these facts, I would know everything.”
2 The R-relation and Prudence
Ok, so all there is are the facts about R: how strongly R-related any two agents are. It seems to me that a natural step on this view is that the R-relation is the basis of prudence.² That is to say, if I am trying to think about how to act in my own interest, what I should think about is how R-related a given person is to me. My future selves are the most strongly R-related things to me; so, I should put significant weight on their interests. Comparatively, strangers like you are very weakly R-related to me, so, prudentially, I should put little weight on your interests. If offered the chance to give you a dollar, or to save it for my future self, prudence recommends saving it.
The natural view here is to think of the R-relation as telling you how much you should care about someone: the more strongly R-related they are to you, the more you should care. This would nicely explain why, e.g., it is prudential to invest in your long-term future (by, say, getting an education). For it makes many people who are pretty strongly R-related to you better off (your future selves), even if it comes at the expense of a single person who is even more strongly R-related to you (your current self).
Indeed, I suspect that if personal identity really reduces to the R-relation, the most plausible theory of prudence will tell you to maximize expected R-weighted welfare, where R(S, P) designates the strength of the R-relation between an agent S and a person P:
- The R-weighted welfare produced by an agent S giving welfare W to person P = W * R(S, P)
This is just a way of saying that if you are debating between giving 1 util to Ben (where R(You, Ben) = 0.5) or 1 util to your future self (where R(You, Your Future Self) = 0.99), you should give the util to your future self, since the expected R-weighted welfare from that action is 0.99 rather than 0.5.
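To make the bookkeeping concrete, here is a minimal sketch of that comparison in Python. The helper name and the specific R values (0.5 for Ben, 0.99 for a near-future self) are my own illustrative choices, not anything from Parfit:

```python
# A minimal sketch of choosing actions by expected R-weighted welfare.
# The R values below are illustrative assumptions, not measurements.

def r_weighted_welfare(welfare: float, r: float) -> float:
    """Welfare W given to person P, discounted by R(S, P)."""
    return welfare * r

options = {
    "give 1 util to Ben":         r_weighted_welfare(1.0, r=0.5),
    "give 1 util to future self": r_weighted_welfare(1.0, r=0.99),
}

best = max(options, key=options.get)
print(best)  # -> "give 1 util to future self" (0.99 > 0.5)
```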
3 Morality? (Eh, Sort Of)
Here is where things get cool. If indeed it is in our self-interest to maximize R-weighted welfare, then it is sometimes in our self-interest to help other people rather than ourselves.
How can this be? Well, notice that we can construct a Spectrum Argument (not a Sorites series) to this effect. Start with the following plausible principle:
If persons P1 and P2 are such that R(You, P1) = X and R(You, P2) = 0.99999*X, and you are deciding between giving P1 N utils, or instead giving P2 2N utils, it is in your self-interest to give P2 2N utils.
Indeed, this just follows from maximizing R-weighted welfare: 2N * 0.99999*X > N * X, since 2 * 0.99999 > 1. Now, where ‘>’ denotes which state of affairs is preferable for your self-interest, another plausible principle is:
If A > B, and B > C, then A > C.
From just these two premises, we get what we might call:
The Generous Repugnant Conclusion: It is in your self-interest to give a complete stranger hundreds of billions of utils, rather than to give yourself millions of utils.
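To see how fast the chain runs away, here is a toy calculation. The per-step figures come from the principle above; the “complete stranger” cutoff of R = 0.01 is purely my assumption:

```python
import math

STEP_R = 0.99999  # each step along the spectrum shrinks R slightly...
STEP_U = 2.0      # ...while doubling the utils on offer

per_step = STEP_U * STEP_R  # 1.99998 > 1: every adjacent swap is an improvement

# Steps until the recipient counts as a "complete stranger" (assumed R = 0.01):
k = math.ceil(math.log(0.01) / math.log(STEP_R))

# Work in log10 to avoid float overflow after hundreds of thousands of doublings.
log10_gain = k * math.log10(per_step)
print(f"steps: {k:,}; final R ~ {STEP_R ** k:.3f}; "
      f"R-weighted welfare ~ 10^{log10_gain:,.0f} times the first option's")
```

Each swap multiplies R-weighted welfare by about 1.99998, so by transitivity the last member of the chain beats the first, even though the final recipient is barely R-related to you at all.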
This is a pretty amazing result. Indeed, while it might seem unrealistic to be in such a choice scenario, I suspect it is actually quite common. We can plausibly save someone’s life for a couple thousand dollars. I find it hard to believe that the marginal utility that, say, a billionaire gains from an extra few thousand dollars will outweigh the R-weighted gains someone else gets from living an entire life. If someone can convince Elon Musk to read Reasons and Persons, the irresistible force of prudence will compel him to donate millions of dollars to GiveWell!
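Here is the back-of-the-envelope version, with every number an assumption on my part: a saved life worth a million utils to its subject, a stranger-to-stranger R of 0.001, and log utility of money for the billionaire:

```python
import math

# All figures below are assumed for illustration.
r_stranger = 0.001                # assumed R between complete strangers
life_saved_utils = 1_000_000      # assumed welfare of a whole life to its subject
stranger_gain = r_stranger * life_saved_utils   # = 1,000 R-weighted utils

wealth, donation = 1_000_000_000, 3_000
# Billionaire's marginal utility of money, on a log-utility assumption:
self_gain = math.log(wealth) - math.log(wealth - donation)  # ~ 3e-06 utils

print(stranger_gain / self_gain)  # ~ 3e+08: the donation wins by ~8 orders of magnitude
```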
Ok, but on a more serious note, this result can go quite far. Here are at least two more dimensions along which we can extend it. First, cases where many people are involved, even if the welfare gains for each are minimal. If there are enough such strangers, their interests will outweigh gains to yourself, from the perspective of your self-interest! Second, even cases where the R-relation is very weak (animals!) can still dominate over choices to help yourself, if the welfare gains are large enough, or if there are enough subjects at stake. But…*ahem*…this might mean that it would be in your self-interest to donate to the shrimp, or to prevent insect suffering.
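The same arithmetic covers the aggregation cases. With an assumed R of one-in-a-million per shrimp (a figure I am making up entirely), sheer scale does the work:

```python
# Assumed figures throughout: a tiny R per shrimp, a modest welfare gain each,
# and a billion beneficiaries, versus a decent-sized benefit to yourself.
r_shrimp, gain_each, n_shrimp = 1e-6, 10, 1_000_000_000
shrimp_total = r_shrimp * gain_each * n_shrimp   # 10,000 R-weighted utils

self_gain = 1.0 * 100                            # 100 utils to yourself, at R = 1

print(shrimp_total > self_gain)   # True: 10,000 > 100
```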
Of course, these wacky conclusions can all be avoided by modifying the theory of R-weighted welfare in familiar ways. You might give lexical prudential priority to subjects who are very strongly R-related to you over those who are only weakly R-related. But this leads to all the usual unintuitive results of lexical threshold views: e.g., you’ll have to deny that it is in your self-interest to give someone who is 0.99*X R-related to you trillions of times as many utils, rather than giving someone who is X R-related to you a single util. Additionally, it is worth noting that those who accept this sort of reductionist view of identity are usually keen on accepting the conclusions of spectrum arguments, rather than embracing lexical priority and trying to find some rejection point in the middle.
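For concreteness, here is a sketch of what the lexical rule looks like, and of the unintuitive verdict it forces. The 0.995 threshold and the particular R values are stand-ins I have chosen:

```python
R_THRESHOLD = 0.995  # assumed cutoff for "very strongly R-related"

def lexical_value(welfare: float, r: float) -> tuple[float, float]:
    """Rank options by above-threshold welfare first; below-threshold
    welfare only breaks ties (Python compares tuples lexicographically)."""
    if r >= R_THRESHOLD:
        return (welfare * r, 0.0)
    return (0.0, welfare * r)

# A single util at R = X = 0.999 beats trillions of utils at R = 0.99 * X:
print(lexical_value(1, 0.999) > lexical_value(1e12, 0.99 * 0.999))  # True
```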
So, if you’re a reductionist about personal identity, your theory of prudence is quite interesting: it implies that you should in many cases act beneficently.
¹ Richard Chappell has a wonderful series, Parfit in Seven Parts, wherein he succinctly explains in 7 mini articles what Parfit was up to across his philosophical work: from his work on rationality, to population ethics, to personal identity, to meta-ethics, etc. I’d highly recommend this series to anyone! Reading the part on personal identity is what prompted me to write this piece.
² This is not strictly required, and Parfit thought it was optional. But I’ll proceed on the assumption that the R-relation really does matter — and thus that it should power one’s theory of prudence.


