Sentience and preference utilitarianism

There was a brief discussion on Twitter yesterday about whether we should grant “human rights” to non-sentient robots. My reaction: “Why give a damn about non-sentient agents? They can’t feel anything, so who cares if harm befalls them?”

This idea that “morally, the only thing that matters is sentience” was famously expressed by Jeremy Bentham:

a full-grown horse or dog is beyond comparison a more rational, as well as a more conversable animal, than an infant of a day, or a week, or even a month, old. But suppose the case were otherwise, what would it avail? the question is not, Can they reason? nor, Can they talk? but, Can they suffer?

Despite my confidence that non-sentient agents do not matter morally, I admit that sentience might seem to pose a special problem for me as a preference utilitarian. The dissolution of this problem adds detail to my moral theory, and explains why we call it ‘preference’ rather than ‘desire’ utilitarianism.

A preference utilitarian differs from the traditional hedonistic type of utilitarian (such as Bentham) in that his basic good is not a particular sort of experience such as pleasure or relief from pain, or happiness understood as a feeling, but the satisfaction of desires. His “greatest good” is not the “greatest happiness of the greatest number” but the maximisation of the satisfaction of desires.

Now it’s important to see that the satisfaction of desires here is not the having of a “satisfying experience”, but the satisfying of objective conditions — and the agent might be wholly unaware that those conditions have in fact been satisfied. A desire is satisfied when the desired state of affairs is actually realised, whether or not the agent has any idea that the state of affairs is realised. Like a man becoming an uncle by virtue of a birth he knows nothing about, or a belief being true, a desire’s being satisfied is a matter of the world’s being arranged in the right way — something typically external to the mind of the agent.

For example, most people want their spouses to be faithful. They don’t want the mere experience of their spouse being faithful, but the actual objective fact of their spouse being faithful. This desire is not for the spouse to “keep up appearances” by telling convincing lies about their infidelities — there mustn’t be any infidelities to tell lies about.

Here’s why sentience might seem like a problem for preference utilitarianism: unless a desire is a desire to have a particular sort of experience, which it typically isn’t, the experience of a desire being satisfied is like a by-product of its actually being satisfied. So a “robotic” agent who doesn’t have any conscious experiences at all — but still has desires which can be satisfied or thwarted — would seem to make moral demands on preference utilitarians like myself. That conflicts with the intuition expressed above that only sentient agents matter morally.

The problem is dissolved, I think, when we remind ourselves that genuine desires (and beliefs, for that matter) only exist where pluralities of them together form a “system”. In moral deliberation, the utilitarian weighs desires thwarted against desires satisfied in an imaginary balance. Obviously, strong desires count for more than weak desires. When desires come into conflict with one another in the mind of a single agent, the strongest desire is the agent’s preference. Only desires in a system of several desires competing for the agent’s “attention through action” can count as preferences.

So a system is required for one desire to take precedence over another, as it must if it’s a preference. And a preference to pursue one goal rather than another involves weighing up the relative merits of competing goals, the time-management needed to defer the less urgent goal, and so on… In short, it requires reflection and choice. This is “second-level representation” — i.e. meta-level representation of primary representational states — of the very sort that makes for consciousness. We need reflection to decide between competing desires (and for that matter, we need epistemic beliefs to guide our choices of first-level beliefs about the world — in other words, a sense of which among rival hypotheses is the more plausible). Second-level representations like these amount to awareness of our own states, including awareness of such states as physical injury. In other words, the experience of pain. It’s a matter of degree, but the richer the awareness, the greater the sentience. So genuine desire and sentience are linked in a crucial way, even though any particular desire and the conscious experience of its satisfaction might not be.

To better understand why “genuine” desires are part of a system, we might contrast them with more rudimentary goal-directed states of ultra-simple agents such as thermostats, or slightly more sophisticated but still “robotic” agents such as cruise missiles.

Thermostats and cruise missiles each have a rudimentary desire-like state, because their behaviour is consistently directed towards a single recognisable goal. And they have rudimentary belief-like states because they co-vary in a reliable way with their surroundings, co-variation which helps them achieve their goal. In both cases, they might be said to “bear information” (non-semantic information, reliable co-variation) about the world. A clever physicist (a “bi-metallurgist”?) would be able to work out what temperature a thermostat “wants” the room to stay at, and what temperature it “thinks” the room is currently at. A clever computer scientist would be able to reverse-engineer a cruise missile to reveal what its target is, the character of the terrain it is designed to fly over, its assumed current location, and so on. We could go further and adopt the intentional stance, assigning mental content to these agents. In effect, that would be to drop the cautionary quotation-marks around the words ‘wants’ and ‘thinks’. We might regard ourselves as referring literally to its desires and beliefs. But we would not be able to take the next step and talk about preferences. For preferences, we need various goals of varying strengths, and we need something like consciousness to make decisions between them. In other words, we need sentience, at least to some degree.
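The contrast can be put in a toy sketch (my own illustration, not anything from the post — the class names and numbers are made up). A thermostat has exactly one goal-directed state, so there is nothing for a “preference” to be a preference over; an agent with several desires of differing strengths can, in a minimal sense, prefer one goal by acting on the strongest:

```python
class Thermostat:
    """A single goal-directed state: keep the room at one set point."""
    def __init__(self, set_point):
        self.set_point = set_point  # the temperature it "wants"

    def act(self, current_temp):
        # Its "belief" is just the current reading; no competing goals exist.
        return "heat on" if current_temp < self.set_point else "heat off"


class MultiDesireAgent:
    """Several competing desires, each with a strength."""
    def __init__(self, desires):
        self.desires = desires  # mapping of goal name -> strength

    def preference(self):
        # The "preference" is simply the strongest of the competing desires.
        return max(self.desires, key=self.desires.get)


thermostat = Thermostat(set_point=20)
agent = MultiDesireAgent({"eat": 0.4, "sleep": 0.7, "work": 0.2})

print(thermostat.act(18))   # heat on
print(agent.preference())   # sleep
```

Of course, picking the largest number is not reflection or choice — that is the point of the sketch: the selection here is just more machinery, with nothing that amounts to the agent weighing one goal against another.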
