Review and Summary of Moral Uncertainty

Published 2 October 2020


I just read the recently published ‘Moral Uncertainty’ by Will MacAskill, Krister Bykvist, and Toby Ord. It’s a thoughtful survey of a nascent field in moral philosophy, and a key contribution to the field in its own right. Like my previous review of ‘Effective Altruism – Philosophical Issues’ I’ve written this half-summary-half-review for a couple of reasons. First, because I imagine there are some people who would be interested in reading about the main takeaways of the book before (or instead of) reading all of it. Second, because trying to explain something in your own words is a good test that you’ve actually understood it.

What it is, why it matters

Often, we may know all the relevant empirical facts, but we are left unsure about what (morally) we should do. This is no philosophical fancy: it should be familiar to basically anyone who cares about acting ethically, but takes seriously the fact that ethics is hard. The basic questions for the study of moral uncertainty are:

  1. Are there norms about how to act under moral uncertainty distinct from first-order moral norms?
  2. If so, what are they? How should we act under moral uncertainty?

Note that we’re limiting discussion to distinctively moral (other-regarding) problems, rather than the more general question of how to act under ‘normative uncertainty’ — uncertainty about how to decide how to act in general (e.g. what to do if you’re split between causal and evidential decision theory).

Before trying to answer (1) and (2), let it be said that you (whoever you are) really ought to be morally uncertain. Look: the greatest moral philosophers of the last century were morally uncertain. Singer changed his mind from preference- to hedonistic utilitarianism, and from non-cognitivism to cognitivism. Parfit never thought he found his ‘Theory X’ for population ethics after decades of intense reflection. What’s your excuse?

Even granting that moral uncertainty is unavoidable, some people answer in the negative to (1) — they think there is no sense of ‘ought’ that is special to moral uncertainty. Instead, they claim the answer to “what should I do in light of my moral uncertainty” is always the same as the answer to “what should I do?”. Suppose theory $T_1$ is true and you’re deciding between options $A$, $B$, and $C$. If $T_1$ says $A$ is best, then do $A$! The fact that you’re personally unsure whether $T_1$ is true does not affect this, and is therefore irrelevant.

There is a sense in which you ought just always do whatever the correct moral theory prescribes. But that doesn’t rule out there being some other legitimate answer to the question of how to decide among options given moral uncertainty which takes that uncertainty into account. Here’s an example: a casino game involves randomly drawing a card and placing it in an envelope. You receive $200 if the suit of the card is hearts, otherwise you get nothing. The game costs $100 to play, your utility for money is linear, and you don’t get any special excitement from gambling. Unknown to you, the hidden card is in fact the queen of hearts. Should you pay to play? There’s one sense in which you should take whatever option has the best actual outcome, irrespective of your knowledge. In this sense, you should pay to play. There’s another sense in which you should not pay to play: given your knowledge, you stand to lose money in expectation. Martin Peterson calls this the ‘right/rational’ distinction — it would be right to play, but not rational. This exactly parallels moral uncertainty: sure, the best option is the option the correct moral theory recommends. But I don’t know what that theory is, and I want to construct some rules for deciding in light of that uncertainty in a similar way to how we already have well-defined rules for deciding among risky options (where the risks are known).

Sometimes the best thing to do under moral uncertainty is blisteringly clear. This strongly suggests that there are norms about acting under moral uncertainty which provide a sense of ‘ought’ distinct from the sense in which we just ought to do whatever the correct moral theory says. Taking an example from the book, suppose Jane is deciding between ordering a foie gras and a (vegan) salad. She has 55% credence in $T_1$, which ascribes no moral weight to animals. She has 45% credence in $T_2$, a kind of anti-speciesist utilitarianism. Jane mildly prefers the foie gras to the salad. On $T_1$, choosing the foie gras is exactly as choiceworthy as choosing the salad. On $T_2$, choosing the foie gras is much worse than choosing the salad.

It seems very clear to us that, in some sense, Jane would act inappropriately if she were to choose the foie gras, whether or not it is morally wrong to choose the foie gras. But, if this is true, then there must be norms that take into account Jane’s moral uncertainty.

Jane thought that ordering foie gras was probably morally permissible and definitely prudentially best (tastiest), but seems to have made the wrong decision in choosing the foie gras. Intuitively, it does seem possible to violate distinctive norms of acting under moral uncertainty.

Moral Fetishism

How have critics resisted this conclusion? One argument charges the view that such norms exist with moral fetishism. Consider the agent who cares about weighing up rightness and wrongness across the moral theories she is unsure about, in order to choose the option which does best all things considered: the option which is somehow ‘least wrong’ or ‘maximally right’. This agent might be criticised for only valuing rightness and wrongness per se, as intrinsic or terminal objects of concern. But we call acts right and wrong because of reasons: because they lead to harm, or break promises, or treat people as ends, etc. The morally conscientious agent should care about those things, and would have no reason to care about rightness or wrongness over and above them. It’s not legitimate to care about rightness or wrongness only in the abstract; and as such (concludes the critic) it’s not legitimate to act according to special norms for choosing under moral uncertainty.

This objection doesn’t seem to go through. Suppose I showed you a big red button, and offered to donate $1 to a charity of your choice for pressing it. Suppose I also told you, in full sincerity, that pressing the button would constitute a grievous moral wrongdoing, but didn’t specify why. I normally stick to my word, and our moral perspectives are aligned such that what counts as a moral wrongdoing for me always counts as a moral wrongdoing for you. Should you press the button? Of course not. But it seems like the most straightforward way to explain this is that you don’t want, in general, to do something morally wrong. In this case, it seems reasonable to care about rightness and wrongness in general. Secondly, there’s no either-or decision between caring about rightness in the abstract, and caring about the particular things that make acts right. In fact, it’s obviously best to care about both aspects: you’re falling short, morally speaking, when you care about only one. By analogy, it would be just as silly never to consider my own happiness in the abstract as it would to obsess about my happiness to the exclusion of the things that happen to make me happy.

[E]qually, an agent who cares intrinsically only about these features, which she believes to make actions right or wrong, and not at all about whether her actions are right or wrong, would also be deficient as a moral agent. After all, coming to see an action as wrong should motivate a moral agent to change her intrinsic concerns so that she starts to care intrinsically about what makes actions right or wrong, according to her newly acquired moral beliefs. (p.24)

I’m not sure if the authors mention it, but on the topic of moral fetishism there’s also a potentially illuminating link to norms for rational choice. It’s not controversial to say that individual rationality involves maximising expected utility. Under plain old first-order empirical uncertainty, you should probably pick the option which yields the highest utility in expectation. There’s a (not very good) objection to this claim which says: I can well imagine an agent choosing perfectly rationally without any notion of ‘utility’ in their head, so it can’t be right to say that the rational choice is the choice which maximises EU! This doesn’t work because it needn’t follow that an agent must have some notion of utility in the abstract. Instead, most popular arguments for expected utility theory take the form of representation theorems: given the ability to form preferences over acts or lotteries or something, and some sensible axioms, we can show that a rational decision maker will behave as if all she cares about is maximising expected utility in the abstract. So it may go under moral uncertainty also: you might represent me as if all I care about is rightness in the abstract, but that doesn’t tell you much about what’s actually inside my head.

Conscientiousness

Another objection to taking moral uncertainty seriously (roughly) says:

  (P1) A morally conscientious agent never chooses an option she believes to be morally wrong.
  (P2) Any distinctive norm for acting under moral uncertainty will sometimes recommend choosing an option the agent believes to be morally wrong.

So we have to answer in the negative to (1).

There’s a nice example early on in the book which shows why any sensible norm for acting under moral uncertainty is sometimes going to recommend taking an action you are certain is not the morally best action. Suppose Susan has two patients, Anne and Charlotte.

They both suffer from the same condition and are about to die. Susan has a vial of a drug that can help. If she administers all of the drug to Anne, Anne will survive but with disability, at half the level of welfare she’d have if healthy. If Susan administers all of the drug to Charlotte, Charlotte would be completely cured. If Susan splits the drug between the two, then they will both survive, but with slightly less than half the level of welfare they’d have if they were perfectly healthy. Susan is certain that the way to aggregate welfare is simply to sum it up[.] (p.16)

The kicker? Charlotte is a chimpanzee. Susan is morally uncertain about whether chimp welfare matters. Her credences are split 50-50 between $T_1$: that chimp welfare matters just as much as human welfare, and $T_2$: that chimp welfare doesn’t matter at all. Susan faces three options:

  A: Give all of the drug to Anne.
  B: Split the drug between Anne and Charlotte.
  C: Give all of the drug to Charlotte.

Here are the moral evaluations of each option (p.17):

| Option | Chimpanzee welfare is of no moral value (50%) | Chimpanzee welfare is of full moral value (50%) |
| --- | --- | --- |
| A | Permissible | Extremely wrong |
| B | Slightly wrong | Slightly wrong |
| C | Extremely wrong | Permissible |

What should Susan do? Intuitively, she should pick B, because picking either A or C would be reckless: she runs the risk of severe wrongdoing. Yet, B is the best option on neither $T_1$ nor $T_2$. So the conscientiousness objection is right in (P2). There’s got to be a problem with (P1) for the whole moral uncertainty thing to pull through — and indeed there is. As the authors note, it’s easy to conflate (P1) with the claim that a morally conscientious agent never chooses an option they believe to be morally wrong rather than one they’re confident is right. In the above example, Susan chooses an option she knows to be morally wrong — but she isn’t confident that any other act is right. So (P1) is false.

Regress

As a final objection to the mere idea of moral uncertainty, there’s a worry that allowing for distinctive norms about moral uncertainty opens the gates to a regress of $n^\text{th}$-order uncertainty. Consider:

Uncertainty at level 4: I am uncertain about how to deal with uncertainty about how to deal with uncertainty about how to deal with uncertainty about first-order morality. (p.31)

The authors admit that there’s no principled stopping point here. Third-order uncertainty, for instance, seems coherent. But the fact that you can construct sentences like the quote above up to the $n^\text{th}$ order by no means implies that an infinite regress is possible. As the authors point out, it’s a simple logical blunder to move from ‘for all x, it’s possible that Fx’ to ‘it’s possible that, for all x, Fx’. Furthermore, our limited ape brains render anything much beyond the level quoted above all but impossible to actually comprehend.

There are a couple of other objections besides, but you can read the book for them.

Maximising Expected Choiceworthiness

Let’s agree that moral uncertainty is a thing: that it matters, and that there are distinctive norms for dealing with it. What are those norms? The bulk of the book consists in figuring out an answer to this question.

Why expect norms plural — why not one norm to rule them all? Because the information we have access to varies: theories differ in how measurable choiceworthiness is within them, and in how comparable it is between them. Some theories are only able to say that one act is strictly more choiceworthy than another, while others are able to place options on a cardinal scale where you’re allowed to say things like “A is worse than B by twice as much as B is worse than C”. There’s a nice table in the introduction laying out the possible ‘informational situations’, where a tick indicates that the book considers it. It looks like this:

Full comparability Unit comparability Level comparability Incomparability
Ratio-scale ✔️ ✔️
Interval-scale ✔️ ✔️ ✔️
Ordinal-scale - - ✔️
Preorder - - ✔️

The authors first consider two simple answers to (2) which try to go in for one true norm. The first they dub My Favourite Theory (MFT). Edward Gracely argued for this view, writing, “the proper approach to uncertainty about the rightness of ethical theories is to determine the one most likely to be right, and to act in accord with its dictates”. In other words: take the theory you have highest credence in, and choose the option it recommends. This has conciseness going for it, and not much else. Firstly, it needs to be finessed (and thereby complicated) so it can deal with equal credences and cases where it violates dominance. Now consider this decision:

| Option | T1 (28%) | T2 (24%) | T3 (24%) | T4 (24%) |
| --- | --- | --- | --- | --- |
| A | Permissible | Grievous wrong | Grievous wrong | Grievous wrong |
| B | Very minor wrong | Permissible | Permissible | Permissible |

Clearly, you should pick option B. But MFT recommends picking A. That’s bad, but it gets worse. The authors point out that MFT is vulnerable to the problem of theory individuation, where what counts as your favourite theory, and in turn the best option, can depend on how finely you distinguish between theories.

| Option | Non-consequentialism (40%) | Utilitarianism (60%) |
| --- | --- | --- |
| A | Impermissible | Permissible |
| B | Permissible | Impermissible |

| Option | Non-consequentialism (40%) | Utilitarianism1 (30%) | Utilitarianism2 (30%) |
| --- | --- | --- | --- |
| A | Impermissible | Permissible | Permissible |
| B | Permissible | Impermissible | Impermissible |

This really can’t be allowed to happen: it can’t be the case that high-stakes moral decisions can be so sensitive to how you decide to divide up moral theories in your head. This leads to a second pass at constructing ‘one true norm’; one which isn’t vulnerable to this problem. This is My Favourite Option (MFO), on which you should pick the option(s) that are most likely to be permissible (best) according to your credences over first-order moral theories. This seems like an improvement on MFT, but shares a further flaw with it — that it doesn’t make any room for trade-offs between theories. We can imagine situations where the option most likely to be best is also very likely to be horrendously wrong; while some other option is less likely to be best but is much ‘safer’ across the board. Consider this simple choice:

| Option | T1 (51%) | T2 (49%) |
| --- | --- | --- |
| A | Permissible | Gravely wrong |
| B | Slightly wrong | Permissible |

The option most likely to be permissible is option A, but the option it seems you ought to choose is option B. This could mirror the foie gras example if the foie gras were (somehow) a few dollars cheaper than the salad, and Jane planned to donate the money saved to charity if she ordered the foie gras. Similarly, notice that MFO would never recommend choosing the ‘safe’ option in cases like the chimpanzee example above. In this way, MFO seems to neglect (i) information about how the most likely permissible option fares on theories where it is impermissible; and (ii) information about how every other option fares across all theories. And this leads to perverse recommendations in some cases. So we should reject MFO along with MFT. With that, the book turns to finding better norms in specific informational situations.

Maximising expected choiceworthiness

We start with the ideal cases: where all the moral theories you’re considering are intertheoretically comparable and interval or ratio-scale measurable. This roughly means that each theory is able to assign some ‘rightness vs wrongness’ score to every option, and that it’s possible to translate those scores between theories. For instance, I might be split 60-40 between a form of utilitarianism on which (non-human) animal welfare counts for zero, and a form on which animal welfare counts for just as much as human welfare. It’s reasonable to think that some unit of human welfare is equally valuable across the theories, and that the only disagreement is over the value of animal welfare.

Unlike MFT and MFO, we want our rule here to make use of all the available information at its disposal. To identify this rule, the authors suggest that we should treat moral uncertainty analogously with empirical uncertainty. Why think that, if there’s a rule that works for empirical uncertainty under equivalent informational conditions, it should work for moral uncertainty too? Well, there seem to be many kinds of empirical uncertainty. We can be uncertain about the outcome of some physical process (like a dice roll), or even about some necessary mathematical truth (like the Rayo’s number-th digit of $\pi$). But there is a rule which applies just the same across all kinds of empirical uncertainty. So it would seem arbitrary if it didn’t apply to moral uncertainty also. That rule is, of course, maximise expected utility. The equivalent to this rule in the case of moral uncertainty is maximise expected choiceworthiness (MEC). Using a consistent scale across theories, for each option, sum up its choiceworthiness on each theory multiplied by your credence in that theory, and pick the option with the highest score. Simple!
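To make the mechanics concrete, here is a minimal sketch of MEC in Python. It is my own illustration rather than anything from the book: the theory labels, credences, and choiceworthiness numbers are made up, and they assume the common cardinal scale described above.

```python
# Maximise Expected Choiceworthiness (MEC) -- a minimal, hypothetical illustration.
# Assumes choiceworthiness scores are already on a common scale across theories.

credences = {"T1": 0.6, "T2": 0.4}  # credence in each moral theory (sums to 1)

# choiceworthiness[option][theory]: made-up numbers for a foie-gras-style choice
choiceworthiness = {
    "order_foie_gras": {"T1": 10, "T2": -100},
    "order_salad":     {"T1": 9,  "T2": 5},
}

def expected_choiceworthiness(option):
    """Credence-weighted sum of an option's choiceworthiness across theories."""
    return sum(credences[t] * cw for t, cw in choiceworthiness[option].items())

for option in choiceworthiness:
    print(option, round(expected_choiceworthiness(option), 2))  # foie gras: -34.0, salad: 7.4

print("MEC recommends:", max(choiceworthiness, key=expected_choiceworthiness))
```

Even though the foie gras is better on the higher-credence theory, its badness on the other theory dominates the expectation, which is exactly the trade-off MFT and MFO could not capture.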

One interesting objection here is that MEC would be way too demanding in practice. Maybe you’ve read Peter Singer’s arguments that the relatively well-off have an obligation to donate a significant amount of money to effective charities. Even on a fairly low (≈ 10%) credence in his view, MEC would still recommend donating. On Singer’s view, choosing not to donate £3000 or thereabouts is morally equivalent to standing by as a stranger drowns in front of you. So you really better donate that money even if you think, on balance, that Singer is wrong. The authors are happy to bite the bullet here: where this reasoning is watertight (i.e. the only alternative is to spend the money on hot tubs and fancy watches), then conclusions like these really do drop out of MEC.

Ordinal Theories

Things get interesting when the theories we’re uncertain about are less than fully structured. Some might only be able to rank options on an ordinal, rather than cardinal, scale. Others might retain some interval-scale measurability, but lose any direct comparability with other theories. The authors turn next to situations where the moral theories we’re uncertain about are only able to rank options ordinally. That is, a theory can tell you that option A is more choiceworthy than B, which is exactly as choiceworthy as C — but it can’t say by how much. When the theories we’re considering can only compare the options at hand in this way, we lose any information about the stakes involved: there’s no way of conveying that some option is grievously wrong rather than somewhat wrong. As such, the problem of merely ordinal theories involves the problem of intertheoretic incomparability. How do we begin to think about this?

The authors make use of a neat extended analogy to problems in social choice theory. The present case of comparing merely ordinal theories is similar to the case of aggregating (ordinally) ranked votes to find a single winner. For illustration, 10 members of the philosophy club might want to elect a new president. Three candidates put themselves forward, and everyone ranks each candidate from most to least preferred. What rule can we come up with to decide who wins? As it happens, this is a deceptively hard question with no unequivocally good answers.

Here’s one suggestion: let’s say that if, for every other option, the majority of voters prefer some option A to that option, then A wins. This is called the Condorcet method. As applied to moral uncertainty, we can say that A beats B if and only if “it is true that, in a pairwise comparison between A and B, the decision-maker thinks it more likely that A is more choiceworthy than B than that B is more choiceworthy than A.” In both cases, we call A a Condorcet winner if and only if A ‘beats’ every other option in the relevant ways. The analogy is strongest when our credences over the theories we’re uncertain about are split evenly, because we can think of each theory as a single voter putting forward their preference ordering. Where the analogy between social choice and moral uncertainty comes apart a little is where our credences are not evenly distributed, because it’s rarely appropriate to give different voters different weights when constructing a voting system.

Unfortunately, it turns out (in both cases) that sometimes there is no Condorcet winner. Of course, there will often be no unique winner in cases where two or more options are tied for first place — this would hardly spell disaster for the approach. The problem is that sometimes the aggregated preferences of voters or moral theories can be cyclic: it is possible that every option is beaten by some other option. Suppose three voters are asked to order three options A, B, and C, and give:

  Voter 1: A > B > C
  Voter 2: B > C > A
  Voter 3: C > A > B

A can’t be a winner, because the majority of voters prefer C to A. B can’t be a winner, because the majority of voters prefer A to B. But C can’t be a winner either, because the majority of voters prefer B to C. So, like rock-paper-scissors, there is no winner. This is called the Condorcet paradox. It’s a cool result, quite aside from its relevance to moral uncertainty. Call a preference ordering rational just in case it’s transitive and complete. The innocent-looking Condorcet rule in fact allows a collection of individually rational voters to yield an irrational (because intransitive) overall preference ordering! I don’t think I could have anticipated that.
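Here’s a small Python sketch (mine, not the book’s) that applies the Condorcet method to the three rankings above and confirms that no Condorcet winner exists:

```python
from itertools import combinations

# Each voter (or, with even credences, each moral theory) submits a ranking, best first.
# These are the cyclic preferences from the example above.
rankings = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def prefers(ranking, x, y):
    """True if this ranking places x strictly above y."""
    return ranking.index(x) < ranking.index(y)

def beats(x, y):
    """x beats y iff a strict majority of rankings prefer x to y."""
    return sum(prefers(r, x, y) for r in rankings) > len(rankings) / 2

options = ["A", "B", "C"]

for x, y in combinations(options, 2):
    print(f"{x} beats {y}: {beats(x, y)};  {y} beats {x}: {beats(y, x)}")

condorcet_winners = [x for x in options if all(beats(x, y) for y in options if y != x)]
print("Condorcet winner:", condorcet_winners or None)  # prints None: every option loses to another
```

With uneven credences, `beats` would weight each ranking by the credence in the corresponding theory rather than simply counting voters; the cycle can persist either way.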

Of course, we can also get cyclic orderings when we apply the Condorcet method in the case of moral uncertainty. Here’s an example from the book, where Jason is deciding between options A, B, and C and has credence in three competing moral theories.

No Condorcet winner exists here, as in the voting case. In the voting case, the conclusion to draw would be that no winner exists at all. Clearly, no other reasonable voting system can divine a winner from this straightforward tie. However, things are different in the moral example. Because Jason’s credences are not evenly distributed, it seems intuitively obvious that Jason should choose C. In general, it therefore seems like there is sometimes a clear best option even when there is no Condorcet winner. That’s a clue: there’s a better approach out there.

It is true that different ‘Condorcet extensions’ exist, which handle the above case better. For instance, the authors suggest a minimax variation on the above called the Simpson-Kramer method. But this method fails the following condition in the case of voting:

Twin Condition: If an additional voter who has exactly the same preferences as a voter who is already part of the electorate joins the electorate and votes, that does not make the outcome of the vote worse by the lights of the additional voter. (p.69)

This has an analogue in the case of moral uncertainty.

Updating Consistency: Increasing one’s credence in some theory does not make the appropriateness ordering worse by the lights of that theory. (p.69)

Examples of either case are going to be fairly convoluted — if you’re curious, take a look at pp.69-71 of the book (pp.80-82 of the PDF). It turns out, for complicated reasons, that any Condorcet extension will violate the Twin Condition in the case of voting. That means that any equivalent method in the case of moral uncertainty will violate Updating Consistency. For any such approach, there will exist some choice where A is ranked over B before increasing your credence in some theory on which A is more choiceworthy than B (at the equal expense of all other theories), and B is ranked over A afterwards. Long story short: we need a new approach.

Next, the authors turn to a voting system called the Borda Rule. Unlike the basic Condorcet rule and like the Simpson-Kramer method, the Borda Rule considers the magnitudes of pairwise victories and defeats for each option. Unlike the Simpson-Kramer method, the Borda rule considers the size of every pairwise comparison for each option, rather than only the size of the biggest defeat. The best way to explain how it works is to quote the authors. We start with a couple of definitions:

An option A’s Borda Score, for any theory $T_i$, is equal to the number of options within the option-set that are less choiceworthy than A according to theory $T_i$’s choice-worthiness function, minus the number of options within the option-set that are more choiceworthy than A according to $T_i$’s choice-worthiness function.

An option A’s Credence-Weighted Borda Score is the sum, for all theories $T_i$, of the Borda Score of A according to theory $T_i$ multiplied by the credence that the decision-maker has in theory $T_i$.

Now we can state the rule itself:

Borda Rule: An option A is more appropriate than an option B iff A has a higher Credence-Weighted Borda Score than B; A is equally as appropriate as B iff A and B have an equal Credence-Weighted Borda Score.
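Here’s a quick sketch of how the rule computes in practice (a toy Python example of my own; the theories, rankings, and credences are invented):

```python
# Credence-weighted Borda Rule -- a sketch with hypothetical rankings and credences.
# Each theory supplies only an ordinal ranking of the options; ties are allowed.

# ranks[theory][option]: smaller number = more choiceworthy; equal numbers = tie.
ranks = {
    "T1": {"A": 1, "B": 2, "C": 3},  # T1: A > B > C
    "T2": {"C": 1, "A": 2, "B": 2},  # T2: C > A ~ B
    "T3": {"B": 1, "A": 2, "C": 3},  # T3: B > A > C
}
credences = {"T1": 0.5, "T2": 0.3, "T3": 0.2}

def borda_score(theory, option):
    """Options ranked below `option` minus options ranked above it, on `theory`."""
    r = ranks[theory]
    below = sum(r[other] > r[option] for other in r if other != option)
    above = sum(r[other] < r[option] for other in r if other != option)
    return below - above

def credence_weighted_borda(option):
    return sum(credences[t] * borda_score(t, option) for t in credences)

for option in ["A", "B", "C"]:
    print(option, round(credence_weighted_borda(option), 2))  # A: 0.7, B: 0.1, C: -0.8
```

Here A wins: it is top-ranked on the highest-credence theory and never worse than middling elsewhere, which is the kind of trade-off MFT and MFO ignore.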

The first consideration in favour of the Borda Rule is that it does well in the cases where other rules (Condorcet extensions, MFT, and MFO) fare poorly or fail altogether. Secondly, the Borda Rule fulfils some general desirable properties of any voting system. For instance, it satisfies Updating Consistency. In fact, it can be shown that only so-called ‘scoring systems’, of which the Borda Rule is an example, satisfy this property. MFO was a scoring system too, but we can add a further desirable property: that “the score of each option in $i^{\text{th}}$ position has to be strictly greater than the score given to an option in $(1+i)^{\text{th}}$ position.” This comes close to singling out the Borda Rule. In fact, adding one more axiom leaves us with only the Borda Rule:

Cancellation: If, for all pairs of options (A,B), S thinks it equally likely that A>B as that B>A, then all options are equally appropriate.

So we have strong, convergent, independent grounds for going for the Borda Rule.

Interval Scale Theories

In the above section, the theories being considered were not intertheoretically comparable, and were only able to rank options ordinally. Now the authors consider theories which are still mutually incomparable, but which are able to rank options on an interval scale. A choiceworthiness function uses an interval scale just in case it is unique up to a positive affine transformation: just the same information results when you add x units to each result, or multiply every result by some positive y. As such, these three sets of interval scale scores express one and the same choice-worthiness information:

| Option | T1 | T2 | T3 |
| --- | --- | --- | --- |
| A | 0 | -100 | 21 |
| B | 20 | -50 | 22 |
| C | 60 | 50 | 24 |
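As a quick check (my own arithmetic, not the book’s), the three columns really are positive affine rescalings of one another: $T_2 = 2.5\,T_1 - 100$ and $T_3 = 0.05\,T_1 + 21$, so each preserves exactly the same ratios of differences between options.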

In this way, interval scales can express the relative size of differences between options: they can say that the difference in choiceworthiness between A and B is twice the difference between B and C, for instance. But they cannot express ratios of ‘absolute’ choiceworthiness — there is no ‘absolute 0’. As such, just by reading off the choiceworthiness scores of different interval scale theories, “we just don’t have enough information to enable us to compare differences of choice-worthiness across moral theories.” Given intertheoretic incomparability between interval-scale theories, how do we decide how to act?

Note that interval scale theories preserve at least as much information as ordinal scale theories, so we could use the Borda rule again. But it would be strange not to make use of this new information. Here’s an idea proposed by philosopher Ted Lockhart:

The maximum degrees of moral rightness of all possible actions in a situation according to competing moral theories should be considered equal. The minimum degrees of moral rightness of possible actions in a situation according to competing theories should be considered equal unless all possible actions are equally right according to one of the theories (in which case all of the actions should be considered to be maximally right according to that theory).

This suggests a simple and intuitively attractive rule for deciding how to act under moral uncertainty. First, we normalise all the scales by affine transformation so that the minimum and maximum choiceworthiness scores of each theory are all the same. What these points are is arbitrary — say from 0 to 10. Then, for each option, sum over theories the normalised choiceworthiness of the option on that theory multiplied by your credence in that theory. In voting theory, this is like ‘range voting’, where voters place candidates on a numerical scale, and the winner is the candidate with the highest total score.
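Here’s a sketch of that normalise-then-weight procedure in Python (my own illustration of the rule just described; the numbers are invented):

```python
# Lockhart-style min-max normalization ("range voting"): rescale each theory's
# choiceworthiness so its worst available option scores 0 and its best scores 10,
# then take a credence-weighted sum. All numbers below are hypothetical.

credences = {"T1": 0.7, "T2": 0.3}
raw = {  # raw[theory][option]: interval-scale scores, not comparable across theories
    "T1": {"A": 0, "B": 5, "C": 20},
    "T2": {"A": 3, "B": 2, "C": 1},
}

def normalise(scores, lo=0.0, hi=10.0):
    """Positive affine transformation sending the worst option to lo and the best to hi."""
    worst, best = min(scores.values()), max(scores.values())
    return {o: lo + (hi - lo) * (s - worst) / (best - worst) for o, s in scores.items()}

normed = {t: normalise(raw[t]) for t in raw}

def score(option):
    return sum(credences[t] * normed[t][option] for t in credences)

for option in ["A", "B", "C"]:
    print(option, round(score(option), 2))  # A: 3.0, B: 3.25, C: 7.0 -> C wins
```

Note that the normalisation silently treats each theory’s best-to-worst gap as equally large, which is exactly the feature the authors go on to question.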

The problem with this straightforward view is, as the authors write, “it treats maximum and minimum degrees of choice-worthiness as the same within a decision-situation rather than across all possible decision-situations”. Suppose you’re unsure between two theories, and you’re deciding between three options A-C.

| Option | T1 (60%) | T2 (40%) |
| --- | --- | --- |
| A | 2 | 1 |
| B | 1 | 1 |
| C | -100 | 2 |

Suppose T1 thinks that option C is morally egregious, while T2 thinks all options are perfectly acceptable and prefers C only on the slightest consideration. The present range-voting proposal stretches T2’s near-indifference across the full 0-10 scale, so its trivial preference for C counts for just as much as T1’s verdict that C is egregious; with only a little more credence in T2, C would come out as the best option overall. But this seems wrong. Relating to a previous example, A and B could be innocuous vegetarian restaurant options, and C foie gras. In this way, theories may disagree on the stakes involved in some question.

In a sense, this is an issue of intertheoretic comparability. Sometimes, the stakes are obviously much higher in some decision situation for one theory compared to another — but this can’t simply be read off the raw choiceworthiness scores on their respective interval scales. One way to solve this is to allow the theories being considered to rank the options presently being weighed up against a much larger set of options — or even all possible options in the limiting case. This way, we could quickly notice that one theory thinks the stakes are low in this particular decision situation relative to all others, whereas another theory thinks they are unusually high. Allowing interval scale theories to score options relative to all possible decision situations does still plausibly leave out some information, because some theories might just have different maximum or minimum absolute levels of choiceworthiness — they might just treat moral decision-making as higher-stakes in general.

But we’re assuming for the time being that we aren’t able to make intertheoretic comparisons. Moreover, the method of comparing theories across all possible decision-situations would help, but seems massively impractical. So, let’s limit ourselves to making comparisons between interval theories within a decision-situation. Can we do any better than range voting? Is there some way of better representing cases where one option is unusually choiceworthy on a particular theory?

It looks as if there is — variance voting. Variance voting treats the choiceworthiness variance as the same across all theories, where variance is the “average of the squared differences in choice-worthiness from the mean choice-worthiness”. If all the choiceworthiness means are normalised to 0 (unnecessary, but tidy), then the derived scores are basically z-scores, which describe the distance of a score from the mean in standard deviations. Variance voting gives theories a way of making themselves heard when a particular option is extremely good or bad relative to the others.
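For concreteness, here’s a sketch of variance voting in Python, reusing the numbers from the table above (my own computation, not the book’s):

```python
from statistics import fmean, pstdev

# Variance voting: rescale each theory's choiceworthiness to mean 0 and variance 1
# (i.e. z-scores), then take a credence-weighted sum. Scores are from the table above.

credences = {"T1": 0.6, "T2": 0.4}
raw = {
    "T1": {"A": 2, "B": 1, "C": -100},  # T1: C is drastically worse than A or B
    "T2": {"A": 1, "B": 1, "C": 2},     # T2: mild preference for C
}

def standardise(scores):
    """Rescale to mean 0 and (population) standard deviation 1."""
    mean, sd = fmean(scores.values()), pstdev(scores.values())
    return {o: (s - mean) / sd for o, s in scores.items()}

normed = {t: standardise(raw[t]) for t in raw}

def score(option):
    return sum(credences[t] * normed[t][option] for t in credences)

for option in ["A", "B", "C"]:
    print(option, round(score(option), 2))  # A: 0.15, B: 0.14, C: -0.28
```

The difference from the min-max rule is that each theory’s scores are scaled to a common overall spread (variance) rather than to a common best-to-worst gap, which is what the authors go on to argue best gives each theory ‘equal say’.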

The authors also argue for variance voting on more general grounds: that it does the best job of giving ‘equal say’ to every theory. One reason for this is a little technical, but the idea is that linearly rescaling a theory’s choiceworthiness values so their variance is equal to every other theory’s (but their mean stays the same) turns out to be the only way to make sure every theory is equally distant from a ‘uniform choiceworthiness function’ — a function which assigns the same fixed value to every option (p.91 for details). Another way of cashing out the ‘equal say’ idea is in terms of ‘voting power’. In the context of voting theory, “[a]n individual’s voting power is the a priori likelihood of her vote being decisive in an election, given the assumption that all the possible ways for other people to vote are equally likely.” In the context of moral uncertainty, “[w]e want theories with the same credence to have the same voting power and for voting power to go up on average as the credence increases.” But a theory shouldn’t just care about its likelihood of being decisive; it should also care about how much influence it has over the expected choiceworthiness of the chosen option. So we tighten up our understanding of ‘voting power’ to accommodate this. At this point, we achieve ‘equal say’ just in case: “from a position of complete uncertainty about how our credence will be divided over different choice-worthiness functions, an increase in our credence in a theory by a tiny amount will increase the expected choice-worthiness of the decision by the same degree regardless of which theory it was whose credence was increased.” Again, it turns out (with a couple of extra qualifications) that variance voting is the one and only method for giving theories equal voting power in this sense (details on p.95).

Option Individuation

The Borda rule and variance voting both look attractive, but they’re both vulnerable to the problem of option individuation, which I discussed in relation to ‘My Favourite Theory’. ‘Option individuation’ just means how we choose to list the options open to us. Suppose I’m deciding whether to ‘do exercise’ or ‘go to the pub’. I could present that as two options, E and P. But I have choice over what kind of exercise I could do, and which pub I go to. So my options might multiply into “lift weights”, “go cycling”, “go to the Mill”, and “go to the Granta” — and so on. I’m not forced to individuate my options in one way rather than another; so how I slice up the options open to me shouldn’t make a difference to the option that comes out as best according to any decision rules I’m using. Unfortunately, in this case, it does.

First, notice how this is possible in the case of a straightforward vote: everyone gets to vote for exactly one option and the option with the most votes wins. Suppose referendum 1 is between A) not building a bridge; and B) building a bridge. Let’s say (B) gets 60% of the votes and (A) 40%. Now rewind time and imagine the referendum is conducted with three options: A) not building a bridge, B’) building a red bridge, and B’’) building a blue bridge. The voters in support of building a bridge don’t give a hoot what colour it is, but they’re 50-50 in terms of the mild preferences they do have. The result is that (B’) and (B’’) both get 30% of the votes, so (A) wins and the bridge doesn’t get built. This, in general, is the idea behind the problem of option individuation.

Now notice how the problem applies to the Borda rule too. Consider three theories T1-3 choosing between three options A-C:

T1. B>C>A (credence 40%)

T2. A>B∼C (credence 30%)

T3. A>B>C (credence 30%)

The way things are set up here, B turns out to be the most appropriate option. Now suppose you decide to split B into two distinct but similar options B’ and B’’ (like splitting ‘go to the pub’ into ‘go to the Mill’ and ‘go to the Granta’). All of T1-3 are indifferent between B’ and B’’, such that:

T1. B’∼B’’>C>A

T2. A>B’∼B’’∼C

T3. A>B’∼B’’>C

On this revised choice, A turns out to be the most appropriate option. Intuitively, this is because adding ‘more Bs’ means that there are more options which A is better than on T2 and T3, increasing A’s score disproportionately.
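To check the effect concretely, here is the credence-weighted Borda computation for the example above, before and after the split (my own calculation, using the Borda Score definition quoted earlier):

```python
# Credence-weighted Borda scores for the option-individuation example above.
credences = {"T1": 0.4, "T2": 0.3, "T3": 0.3}

# Smaller rank = more choiceworthy; equal ranks = ties.
before = {
    "T1": {"A": 3, "B": 1, "C": 2},             # B > C > A
    "T2": {"A": 1, "B": 2, "C": 2},             # A > B ~ C
    "T3": {"A": 1, "B": 2, "C": 3},             # A > B > C
}
after = {
    "T1": {"A": 4, "B1": 1, "B2": 1, "C": 3},   # B' ~ B'' > C > A
    "T2": {"A": 1, "B1": 2, "B2": 2, "C": 2},   # A > B' ~ B'' ~ C
    "T3": {"A": 1, "B1": 2, "B2": 2, "C": 3},   # A > B' ~ B'' > C
}

def borda(ranks, option):
    below = sum(r > ranks[option] for o, r in ranks.items() if o != option)
    above = sum(r < ranks[option] for o, r in ranks.items() if o != option)
    return below - above

def weighted(profile, option):
    return sum(credences[t] * borda(profile[t], option) for t in credences)

for label, profile in [("before", before), ("after", after)]:
    print(label, {o: round(weighted(profile, o), 2) for o in profile["T1"]})
# before: {'A': 0.4, 'B': 0.5, 'C': -0.9}               -> B is most appropriate
# after:  {'A': 0.6, 'B1': 0.5, 'B2': 0.5, 'C': -1.6}   -> A is most appropriate
```

Splitting B leaves B’s own score untouched but hands A extra pairwise victories on T2 and T3, which is enough to flip the recommendation.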

Option individuation is also a problem for variance voting, though the details are a bit fiddlier (p.98 for an explanation). Intuitively, just notice how ‘splitting’ options into duplicates may reduce or increase the variance of the option set.

There is no analogous problem of individuation in the case of empirical uncertainty. Suppose you’re betting on Federer winning Wimbledon. The bookie has offered odds of 4/1, so a £100 bet stands to win £400 if Federer wins:

| | Federer wins (40%) | Federer loses (60%) | Expected Payoff |
| --- | --- | --- | --- |
| Bet £100 on Federer | £400 | –£100 | £100 |
| Don’t bet on Federer | £0 | £0 | £0 |

The options can be individuated as much as you like, but nothing is going to ‘break’. For instance:

| | Federer wins (40%) | Federer loses (60%) | Expected Payoff |
| --- | --- | --- | --- |
| Bet £100 on Federer wearing a hat | £400 | –£100 | £100 |
| Bet £100 on Federer not wearing a hat | £400 | –£100 | £100 |
| Don’t bet on Federer | £0 | £0 | £0 |

Further — and this feature moral uncertainty shares — the events themselves can be individuated any way you like without ever affecting the expected payoff of each act. This would be like distinguishing more finely between variants of the theories you’re unsure about. For instance:

| | Federer wins (40%) | Federer loses in straight sets (10%) | Federer loses but wins at least one set (50%) | Expected Payoff |
| --- | --- | --- | --- | --- |
| Bet £100 on Federer | £400 | –£100 | –£100 | £100 |
| Don’t bet on Federer | £0 | £0 | £0 | £0 |

I mention this because the reason event individuation never affects expected utility is a clue to addressing the problem of option individuation under moral uncertainty. In the case of event individuation, splitting events also splits the probability we associate with them, and these probabilities always add to 1 however finely we slice things. That is what it means for probability to be a measure, and it is what guarantees that expected utilities come out the same no matter how we carve events into finer variants. Despite the name, a measure like this can be applied to things other than probability qua likelihood — roughly speaking, it’s a way of representing the space some set of things takes up, giving a value of 1 to the whole set, and making sure the size of the whole stays fixed no matter how its members are sliced up. Picture a bar of chocolate: you can break off pieces any which way, but the total quantity of chocolate stays fixed (briefly). In the moral uncertainty case, we can apply a measure over the space of possible options. Now we can redefine the rules we came up with above:

An option’s Borda Score is equal to the sum of the measure of the options below it minus the sum of the measure of the options above it.

In the case of variance voting:

[W]e normalize the variance of the distribution of choice-worthiness by taking the choice-worthiness of each option weighted by that option’s measure.

This avoids the problem of option individuation. However, introducing a measure over options seems to leave a similar problem behind: the problem of how exactly the measure should be spread across options. It seems like any proposed method for spreading the measure is going to be totally arbitrary. This is less disastrous than the arbitrariness of how we individuate the options in the first place, but it can still make a difference to which option gets chosen as most appropriate.
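As a sanity check on the idea (my own sketch, with made-up scores and one natural but arbitrary way of spreading the measure), here is why a measure makes the variance-normalisation step insensitive to splitting an option into duplicates:

```python
# Measure-weighted normalization: splitting an option into duplicates also splits
# its measure, so the weighted mean and variance of a theory's choiceworthiness
# (and hence its normalized scores) are unchanged. Numbers are hypothetical.

def weighted_mean_and_variance(scores, measure):
    mean = sum(measure[o] * scores[o] for o in scores)
    variance = sum(measure[o] * (scores[o] - mean) ** 2 for o in scores)
    return round(mean, 6), round(variance, 6)

# One theory's scores, before and after splitting B into B' and B''.
before_scores  = {"A": 0.0, "B": 4.0, "C": 10.0}
before_measure = {"A": 1/3, "B": 1/3, "C": 1/3}

after_scores   = {"A": 0.0, "B1": 4.0, "B2": 4.0, "C": 10.0}
after_measure  = {"A": 1/3, "B1": 1/6, "B2": 1/6, "C": 1/3}  # B's measure is split in two

print(weighted_mean_and_variance(before_scores, before_measure))
print(weighted_mean_and_variance(after_scores, after_measure))
# Both print the same (mean, variance), so the normalization no longer cares how
# finely the options are individuated.
```

The same trick fixes the Borda version: summing the measure of the options above and below an option, rather than counting them, keeps the scores stable under splitting.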

Broad and Narrow

The foregoing discussion has assumed that, when deciding what to do in some decision situation, we should look to what the moral theories we’re considering say about the options currently at hand. If we’re choosing between A and B, we consider how the theories rank A and B and nothing else. But we could, of course, throw in options not currently open to us. We could consider how the theories rank A and B relative to C, D, and so on. In fact, we could rank the options being presently considered among all possible options. This leads to a question:

When we normalize different theories at their variance, should we look at the variance of choice-worthiness over all possible options, or the variance of choice-worthiness merely over all the options available to the decision-maker in a given decision situation?

Similarly, should we take an option’s Borda score relative to the other options currently available, or all imaginable options? Some terminology:

Following Sen, we will say that Broad accounts are defined across all conceivable options and that Narrow accounts are defined over only the options in a particular decision-situation.

The obvious question here is which account is best. In brief: a major argument for Broad accounts is that Narrow accounts can’t make sense of the fact that some decisions are higher stakes than others. Suppose you’re choosing between two different sets of two options at two different times. On one occasion, a theory might prefer one option to another but (loosely speaking) not really care either way. On another occasion, a theory might think the difference in choiceworthiness between the best and worst option is enormous — the theory would much rather get its way in this decision-situation than the other one. The problem is that Narrow accounts can’t tell whether a decision really matters for some theory or not. This would be a problem if the Borda rule and variance voting were accounts of how theories actually compare. But that’s not why we came up with them — they’re instead accounts of “coming to a principled decision in the face of incomparable theories. So there isn’t a fact of the matter about some decision-situations being higher stakes for some of these theories rather than others.” I’m personally not fully convinced by this response. It is true that these are accounts of decision making under conditions of intertheoretical incomparability, but each theory can still make comparisons between decision-situations. So theories can still meaningfully ‘say’ that this decision matters far more than e.g. the past ten decisions. That seems like decision-relevant information, but Narrow accounts can’t make use of it (and Broad accounts can, by virtue of considering every conceivable option every time).

In any case, a more pressing objection to Narrow accounts is that they violate a condition called ‘Contraction Consistency’:

Contraction Consistency: Let $\mathcal{M}$ be the set of maximally appropriate options given an option-set $\mathcal{A}$, and let $\mathcal{A}'$ be a subset of $\mathcal{A}$ that contains all the members of $\mathcal{M}$. The set $\mathcal{M}'$ of the maximally appropriate options given the reduced option-set $\mathcal{A}'$ has all and only the same members as $\mathcal{M}$.

For an example of where the Narrow application of the Borda rule violates Contraction Consistency, see p.103. There’s a vague worry here that this opens Narrow accounts up to money pump arguments, but some recent work indicates that there are ways to avoid such a damning conclusion.

In favour of Narrow accounts is of course the fact that it’s practically impossible to rank every conceivable option. Again, the authors propose these voting rules not to give an account of how theories in fact compare, but to describe how to choose among options when they don’t. And it seems like any such account of how to actually decide should at least be halfway useable.

I should mention that there seems to be a fairly obvious compromise between the Narrow and Broad views, and I’m not entirely sure why the authors don’t consider it. This is just to always consider the same fixed set of options in addition to the particular options currently open to you. This could be some selection of (say) 100 options ranging from petty theft and white lies, to creating new people, to murder, or whatever else. This would allow the decision rule to understand when a particular theory thinks some option is not only best or worst out of the available options, but really very bad or really very obligatory — because those options rank in the extremes among the wider ‘fixed’ option set. My guess is that this wasn’t considered because the choice of this fixed set is necessarily arbitrary, and so gives up the pretense that we’re in the business of ‘discovering’ the best rule. But, again, we’re not looking for an account of how theories in fact compare. We’re looking for the best tool for the job — and a tool can happily sacrifice elegance for usefulness.

Intertheoretic Comparisons

The authors have just considered voting rules for how to decide even when theories are incomparable with one another. But this raises the question: how often are differences of choiceworthiness in fact comparable across theories? Note that we’re just interested in differences in choiceworthiness and not levels of choiceworthiness.

There’s another analogy to voting here. Voting systems do not assume any way of knowing that voter 1 prefers candidate A to candidate B far more, in absolute terms, than voter 2. And this is reasonable: if it’s meaningful to talk about such absolute differences at all, voting ballots aren’t going to reliably elicit that information. We would need some kind of brain probe instead. But this doesn’t rule out the possibility that differences in voters’ preference ‘strength’ are in fact comparable between voters — this is a philosophical question, distinct from the question of how and whether to in fact represent such absolute differences.

Some more terminology: sceptics argue that choiceworthiness differences are always or nearly always incomparable across theories. Structuralists argue that such comparisons are (often) possible, and that “intertheoretic comparisons should be made only with reference to structural features of the theories’ qualitative choice-worthiness relation (such as the choice-worthiness of the best option and worst option) or mathematical features of its numerical representation (such as the mean, sum, or spread of choice-worthiness).” While the Borda rule and variance voting were presented as the best way to decide when theories are incomparable, structuralists might go further and say that such rules are good descriptions of how theories actually compare. The authors argue against both structuralism and scepticism.

Proponents of scepticism argue along two broad lines. First, they appeal to particular cases and point out how there’s no obvious way to compare between theories at all — i.e. any comparison would seem arbitrary. Second, in cases where there does seem to be some intuitive and non-arbitrary way of comparing theories, they point out how such comparisons lead to decision-situations in which one theory ‘swamps’ another by a ridiculous amount. In the first case, consider how choiceworthiness differences on utilitarianism compare with those on prioritarianism (on which gains in welfare are weighted towards the worse-off). The prioritarian thinks that the difference in choiceworthiness between assisting and not assisting a poor person is greater than the difference between assisting and not assisting a hedge fund manager. As long as the well-being gains are the same and all else remains equal, plain utilitarianism is going to say the two differences in choiceworthiness are equal. But is that because the proponent of prioritarianism thinks gains in well-being for the worse-off matter more, or gains in well-being for the better-off matter less? This question really represents a sliding scale: at what existing level of well-being or advantage is an equivalent gain in well-being worth the same on utilitarianism and prioritarianism? Suppose utilitarianism values absolute well-being $W$, and prioritarianism values the square root of absolute well-being times some constant: $\sqrt{kW}$. Because we’re asking when a small improvement in well-being is worth the same on both theories, we’re asking when the derivative of each function is the same. But this depends on the value of $k$. And how are we supposed to know that?
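Spelling out that last step (my own arithmetic, under the stated functional forms):

$$\frac{d}{dW}\,W = 1, \qquad \frac{d}{dW}\sqrt{kW} = \frac{1}{2}\sqrt{\frac{k}{W}},$$

and these are equal exactly when $W = k/4$. So the level of well-being at which the two theories agree about the value of a marginal gain depends entirely on the unknown constant $k$.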

In the second case, suppose you’ve (somehow) found yourself needing to decide whether to increase the global population from 6 to 24 billion people “at the cost of halving the average happiness level”. Suppose you’re uncertain between total and average utilitarianism, and you equate “a unit of total wellbeing with a unit of average wellbeing”. Now notice that the gain in well-being on total utilitarianism from increasing the population utterly dwarfs the loss on average utilitarianism. Thus, writes Brian Hedden:

[M]aximizing intertheoretic expectation will recommend that the agent implement the population-increasing policy (i.e. doing what Totalism recommends) unless she is over 99.9999999916% confident that Averagism is right. But this seems crazy.
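To see where that figure comes from (my own back-of-the-envelope reconstruction, under the quoted assumptions): write $h$ for the current average happiness level and $q$ for your credence in Averagism. The policy raises total well-being from $6\text{bn} \times h$ to $24\text{bn} \times h/2 = 12\text{bn} \times h$, a gain of $6\text{bn} \times h$ on Totalism, at a cost of $h/2$ units of average well-being on Averagism. Equating the units, the policy has negative expected choiceworthiness only when

$$q \cdot \frac{h}{2} > (1-q) \cdot 6{,}000{,}000{,}000\,h, \quad\text{i.e.}\quad q > \frac{1.2 \times 10^{10}}{1.2 \times 10^{10} + 1} \approx 99.9999999917\%,$$

which matches (up to rounding) the threshold Hedden quotes.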

Against Scepticism

The above arguments pack some punch, but note that they only ever succeed in showing that intertheoretic comparisons are non-obvious for particular pairs of theories. The sceptical claim is that theories are always or nearly always incomparable, and pointing to hard cases falls well short of establishing that.

Further, the opponent of scepticism can instead point to cases where intertheoretic comparisons of choiceworthiness differences seem entirely obvious and non-arbitrary. This is clearest when the theories we’re comparing only disagree about some minor issue: like whether the death penalty is good. Comparisons seem equally obvious when theories differ over their extension: the set of things that can bear value. For instance, you might be unsure between two variants of utilitarianism which only disagree about whether fish have moral patienthood. In both cases:

These are cases where we’re not really comparing two different complete theories, considered in the abstract. We’re comparing two different moral views that differ with respect to just one moral issue. In these cases, the intertheoretic comparison seems obvious: namely, that choice-worthiness differences are the same between the two views with respect to all moral issues other than the one on which they differ.

Against Structural Accounts

The authors define structuralism as follows —

A structural account is a way of giving different moral theories a common unit that only invokes structural features of the theories’ qualitative choice-worthiness relation (such as the choice-worthiness of the best option and worst option) or mathematical features of its numerical representation (such as the mean, sum, or spread of choice-worthiness). The identities of particular options have no relevance; only positional properties matter.

In other words, these structural or mathematical features do more than help us decide when theories are incomparable: they also tell us how theories in fact compare. Structuralism has a lot going for it. For instance, it can easily dodge the ‘swamping’ worry by imposing bounds on choiceworthiness. It also avoids the need to ‘get its hands dirty’ by considering moral intuitions in particular cases. Unfortunately, the authors argue convincingly that structuralism is too weak — often extra-structural features obviously tell us something important about the magnitudes of choiceworthiness differences.

According to structural accounts, there is only one possible way to make intertheoretic comparisons between any two theories… All we need to argue is that an agent is not making a mistake if she has a belief that two theories compare in a way that structural accounts cannot countenance.

Firstly, structuralist accounts seem unable to distinguish between cases where theories A and B are entirely different, and cases in which B is just A with a different extension (as in the fish example). Secondly, structuralist accounts seem unable to make sense of how the moral stakes can generally be higher on some theories than others. On a few theories, very many decisions matter a great deal. All kinds of things bear value, and all kinds of consequences are counted as morally significant. At the other end of the scale is nihilism, or ‘universal indifference’, on which nothing matters — all options are always equally choiceworthy. And between these two points lies a gradient of theories whose stakes vary from high to low. Perhaps one theory doesn’t think that animals are morally significant; another that aesthetic achievements don’t morally matter; another imposes an acts/omissions distinction, and so on. This should make intuitive sense — but structuralist accounts instead impose a strict discontinuity between nihilism / universal indifference and every other theory. That’s because as we move down the ‘stakes scale’, structuralist accounts scale up the absolute magnitudes of choiceworthiness differences — until, at nihilism, there are no differences left to scale at all. Thirdly, structuralist accounts say that all theories to which their rule can be applied are comparable. But some such theories don’t seem to be comparable (at least in some respects). So as well as not going far enough in using information beyond the theories’ structure, they sometimes go too far in assuming that structural comparability implies actual comparability. Finally, it seems possible that one theory may be an ‘amplification’ of another, in the sense that the only difference between the theories is how high the moral stakes are. Again, because this difference is not captured by the (identical) structures of each theory, structuralism cannot represent it. In sum, we should be able to do better than both structuralism and scepticism.

Non-structural accounts

This leaves the authors with two questions — one metaphysical and one epistemic. The metaphysical question is the question of what underlies intertheoretic comparisons: “in virtue of what are intertheoretic comparisons true, when they are true?”. The epistemic question is: “how can we tell which intertheoretic comparisons are true, and which are false?”. For brevity, I will skip most of this discussion. Suffice to say the authors end up with a view on which theories are comparable by virtue of a ‘universal scale’; much like how differences in mass may be grounded by an absolute property of mass. Because each theory is really a bundle of structurally identical theories with different levels of ‘amplification’, the epistemic problem in some sense amounts to the problem of how to assign credences over these different levels of amplification. And the authors take a common-sense, deflationary approach to this problem: “relying on intuitions about particular cases and appealing to more theoretical arguments”.

Fanaticism

Next, the authors discuss two further philosophical issues for any account of decision-making under moral uncertainty — and particularly any account which prescribes maximising expected choiceworthiness.

One of these issues is ‘infectious incomparability’. Some moral theories are radically incomparable: “there are very few positive choice-worthiness relations between options because almost all important moral decisions involve trade-offs between different types of value”. Sometimes this incomparability can spread throughout the rest of the decision-problem, rendering the rules for decision-making useless. I won’t discuss the authors’ comments in depth — suffice to say they are sceptical of any such radically incomparable account, but nonetheless think the issue can be resolved.

The other issue, which had been bugging me during the foregoing chapters, is the problem of fanaticism. This is the worry that sometimes, “expected choice-worthiness will be dominated by theories according to which most moral situations are incredibly high stakes.” Let’s distinguish between two kinds of ‘incredibly high’: very high and infinitely high. As an example of a decision-situation in which one theory thinks that the stakes are very high, consider a decision about what to do with your sizeable fortune as you reach the end of life. For simplicity, suppose you’re choosing between giving it all to your children, and giving most to the Against Malaria Foundation with enough left over for your children to live comfortable but not opulent lives. Most moral theories would say something like: it would be very generous of you to give your money to charity, but giving it to your children is not much worse. Suppose, however, you have something like 10% credence in Peter Singer’s arguments to the effect that choosing not to give your money to an effective charity amounts to choosing to let dozens of people die. Since Singer rejects the ‘acts/omissions’ distinction, choosing to let dozens of people die is no less bad than (or just equivalent to) causing dozens of people to die. On Singer’s view, not donating to the charity thus constitutes a grave wrong: the stakes are seriously high. It’s easy to imagine running the expected choiceworthiness calculation and finding that donating the money is the overwhelmingly better option, because of this theory in which you only have 10% credence. This might sound fishy, and the authors discuss such practical implications later. But it’s not so fishy that we should give up on the maximise expected choiceworthiness rule that led us to this conclusion. In other words, where the stakes are very high, it’s ok to bite the bullet.

Unsurprisingly, things are different when theories instead assign infinite choiceworthiness to some options — in other words, when they prescribe some options as absolutely impermissible. I probably have a 0.1%(ish) credence in some kind of fire-and-brimstone deontological religious ethic on which lying is always absolutely wrong. But consider a decision-situation in which I’m unsure whether to lie to a murderer about the whereabouts of my friend in order to save him. Every theory in which I have some credence says that I should obviously lie — except this one deontological theory which says it’s absolutely impermissible to do so. As long as it assigns a choiceworthiness of $-\infty$ to lying, it doesn’t matter how small my credence in that theory is — maximising expected choiceworthiness is always going to tell me not to lie. This seems insane.
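The arithmetic behind this is trivial, which is exactly the problem. Writing $p$ for my credence in the absolutist theory and $c$ for the (finite) choiceworthiness of lying on every other theory I entertain (my own shorthand, not the book’s notation):

$$\mathrm{EC}(\text{lie}) = (1-p)\,c + p \times (-\infty) = -\infty \quad \text{for any } p > 0,$$

so no amount of shrinking $p$ or inflating $c$ rescues the lie.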

The authors have a few responses here. First, they note that the infinities are rarely if ever going to point in a single direction. But as long as one theory thinks option A is best by an infinite margin, and another thinks option B is best by an infinite margin, the ‘maximise expected choiceworthiness’ rule simply breaks down. Secondly, this problem of infinities is not unique to moral uncertainty — it’s endemic to decision theory in general, and it’s there in plain empirical uncertainty too. The most obvious example is Pascal’s wager, which argues from the infinite loss of choosing not to believe in God when God exists to the conclusion that you ought to believe in God. Pascal’s wager sounds wrong, but not for a single agreed-upon reason. In any case, the problem of fanaticism shouldn’t be expected to uniquely undermine the possibility of maximising expected choiceworthiness under moral uncertainty.

Do also note that the problem of fanaticism in the context of moral uncertainty doesn’t involve cases where you have some credence in views which say you’ll face severe consequences — even retribution — if you do some act. Having a non-zero credence in heaven and hell, for instance, might affect which courses of action you take — but only in the straightforward, prudential, first-order ‘maximise expected utility’ sense.

Metaethical Implications

In the book, the authors next consider the implications of moral uncertainty for metaethics: that is, questions about what moral beliefs and claims are about, whether they take truth values, and what makes them true if so. In brief, they argue that taking moral uncertainty seriously favours a view called ‘cognitivism’ over non-cognitivism. Moral cognitivism is the view that moral statements express beliefs, and those beliefs can be true or false. Note that this needn’t imply moral realism — the view that there are moral properties or facts — because it could just turn out that every moral belief is false. Moral non-cognitivism is the view that moral statements are not truth-apt; rather, they express some kind of attitude like approval, disapproval, or desire. Moral non-cognitivism pretty much does imply moral irrealism — the view that there are no moral facts or properties — because it would be deeply strange if there were such facts or properties, but we were somehow not able to form beliefs or make claims about them.

Intuitively, moral uncertainty fits far better with moral cognitivism. One way of seeing this is to notice how moral cognitivism ensures a parallel between moral uncertainty and various kinds of empirical uncertainty. When I’m uncertain about how things will in fact play out, I’m uncertain over statements which can be true or false. On cognitivism, when I’m uncertain about which moral theory is true, I’m also uncertain over statements which can be true or false. By contrast, non-cognitivism has a hard time making sense of what we have been calling credences over moral theories. For the non-cognitivist, saying “murder is wrong” is something like saying “I disapprove of murder” or “I wish that people didn’t murder” or just “Murder? Boo!”. But these aren’t things I can really have a credence in — it seems odd to say that I’m 60% confident I wish people didn’t murder. The problem runs even deeper. Suppose the non-cognitivist identifies ‘credence’ in this context with something like ‘strength of my desire’. Now non-cognitivism has a hard time distinguishing between ‘credence’ in a theory and the choiceworthiness of an option on that theory. How can you tell apart low confidence in a theory which says option A is terrible from high confidence in a theory which says option A is quite bad?

The authors discuss various ways out for the non-cognitivist. These solutions involve adding some machinery onto the simple versions of non-cognitivism in order to make this distinction. The most plausible of these views go some way towards cognitivism in allowing moral judgements to have a belief dimension. The bottom line, however, is that moral uncertainty presents a tricky problem for the non-cognitivist while it presents no problem at all for the cognitivist.

This can be framed as an argument for cognitivism. At the start of the book, the authors presented (and I summarised) some a priori reasons for taking moral uncertainty seriously. These reasons did not assume cognitivism. But now we have seen that taking moral uncertainty seriously suggests cognitivism. So it looks like we’ve found an independent reason for cognitivism. One man’s modus ponens, of course, is almost always another man’s modus tollens. And in this case, the non-cognitivist could say, “Non-cognitivism is obviously true. Taking moral uncertainty seriously looks incompatible with non-cognitivism. So we shouldn’t take moral uncertainty seriously.” You would have to be a die-hard non-cognitivist to say that, however, because moral uncertainty is very real indeed.

Practical Ethics under Moral Uncertainty

These final two chapters were my favourites of the whole book — ‘8. Practical Ethics Given Moral Uncertainty’ and ‘9. Moral Information’. The theoretical rubber has finally hit the practical road, and we get to see what moral uncertainty means for making decisions in the real world. The brief answer is: a surprising amount.

The major result of the foregoing discussion is that taking moral uncertainty seriously sometimes implies choosing an option which isn’t recommended by the theory you have most confidence in. Sometimes choosing the best option under moral uncertainty is demanding, because decisions are often dominated by theories for which the stakes are extremely high. Most of the literature, it turns out, focuses on (i) abortion and (ii) vegetarianism. In both cases, some have argued that maximising expected choiceworthiness leads to recommendations so obvious that first-order theorising becomes all but irrelevant.

Suppose you’re driving fast along a quiet road and approach a corner with a crossing on the other side. In all likelihood, there’s nobody around the corner and you can afford to speed on through. But there is a chance somebody will be crossing. That chance is large enough, the consequences of hitting someone terrible enough, and the cost of slowing down small enough, that it’s obviously best to slow down — even when you think it’s very likely slowing down will turn out to have been pointless. So it goes with moral uncertainty: as long as you have some credence in an ethical theory which tells you that taking this option would amount to an awful mistake, even if it’s not your preferred theory, then you have a reason not to take this option. But now suppose, as we’ve already discussed, you’re deciding whether to buy or order meat, or a vegetarian alternative. Suppose you’re mostly confident that animals don’t matter morally, but you have some confidence in the view that animals do matter morally, and that buying meat contributes to their ongoing suffering in a morally blameworthy way. If you take moral uncertainty seriously, the decision you face is analogous to the decision over whether to speed round the corner: buying or ordering the meat would be morally reckless.

Similar reasoning seems applicable in the case of abortion. Consider:

Isobel is twenty weeks pregnant and is considering whether to have an abortion. She thinks it’s pretty unlikely that twenty-week-old fetuses have a right to life, but she’s not sure. If she has an abortion and twenty-week-old fetuses do have a right to life, then she commits a grave wrong. If she has the child and gives it up for adoption, she will certainly not commit a grave wrong, though she will bear considerable costs as a result of pregnancy, childbirth, and separation from her child. (p.181)

By analogous reasoning, it looks as if the appropriate choice for Isobel is to give the child up for adoption. However, the authors caution against drawing such quick conclusions, as many commentators have done.

Taking moral uncertainty seriously undoubtedly has important implications for practical ethics; but coming to conclusions about what those implications are requires much more nuanced argument than has been made so far.

This is in part due to ‘interaction effects’:

We cannot simply look at how moral uncertainty impacts on one debate in practical ethics in isolation; moral uncertainty arguments have very many implications for practical ethics, and many of those interact with one another in subtle ways. (p.188)

For instance, eating meat sends a price signal which causes more animals to be born and raised for slaughter. If those animals’ lives are above some threshold of well-being, some ethical views might count those lives as net positive, and so provide a consideration in favour of eating meat. Equally, performing some action might involve an important opportunity cost. In the case of abortion:

Carrying a child to term and giving it up for adoption costs time and money (in addition, potentially, to psychological distress) that could be used to improve the lives of others. According to a pro-choice view that endorses Singerian duties of beneficence, one may be required to have an abortion in order to spend more time or money on improving the lives of others. [Thus], what seems appropriate under moral uncertainty is critically dependent on what exactly the decision-maker’s credences across different moral theories are.

So the conclusions from applying moral uncertainty to particular practical decisions are rarely so obvious as to be knowable without first establishing the details of a choice and the agent’s precise credences across theories. This shouldn’t come as any surprise: making practical moral decisions is hard, and it would be suspiciously convenient if moral uncertainty-based reasoning was often able to resolve these nuanced and controversial questions into open-and-shut cases.

However, the authors point out that moral uncertainty does suggest general prima facie principles, where uncertainty about which theory is correct seems to point in a single direction from a given baseline. I’ve already mentioned the first example they give, which is that if you don’t already believe you are morally required to give to charity, “considerations of moral uncertainty provide an argument for donating a substantial proportion of your resources to save the lives of strangers”. This is due to arguments from Singer and others aimed at undermining the distinction between acts and omissions. Moral uncertainty points in this direction because virtually no plausible theories positively recommend against assisting the worse-off through effective charity.

A second example is partiality. Even if your preferred theory is impartial, moral uncertainty should cause you to give some extra weight to your friends and family. The asymmetry again comes from the fact that virtually no moral theories recommend weighting the interests of strangers over the interests of your close kin. For similar reasons, a utilitarian should be nudged in the direction of prioritarianism by moral uncertainty (no plausible theories recommend weighting benefits to the better-off over benefits to the worse-off).

Next, moral uncertainty suggests placing some value on objective (i.e. not subjective) prudential goods. My preferred theory may come with a hedonistic account of well-being, but other plausible accounts of well-being value things like appreciation of beauty, friendships, discovery, and so on. So it may sometimes be appropriate to promote those goods independently of their effects on subjective well-being. Similarly, welfarist views only value people’s well-being, but other views value non-welfarist goods like biodiversity or great art. So even if my preferred theory is welfarist, it may sometimes be appropriate to promote non-welfarist goods.

The authors also consider how moral uncertainty should apply to population ethics, which you can read about on p.186 and here.

Moral Information

The final chapter asks about the value, on the foregoing framework, of ‘moral information’. Note that we do and should value information which helps resolve more familiar kinds of uncertainty. Suppose you’re buying a second-hand car, but you’re unsure whether it’s a lemon. Presumably there is some amount of money you would be willing to pay to find out.

Well, suppose you’re thinking about giving your time (career) or resources (donations) to some cause, but you’re uncertain about which cause is morally best. How much should you be willing to pay, in time or money, to resolve that uncertainty? A lot, according to these authors! In one plausible example, we see that a philanthropic organisation should be willing to pay $2.7 million of their $10 million pot for reliable moral information which determines how to spend the remaining $7.3 million.

In another (highly idealised but amusingly counterintuitive) example, somebody considering how to spend a 40 year career might reasonably be willing to spend anything up to 16 years of her life studying ethics in order to figure out how to spend the remaining 24! Bottom line: moral information, even when it’s less than fully reliable, can be hugely valuable.
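To see where numbers like these come from, here is a minimal sketch of a value-of-information calculation in the same spirit as the philanthropy example. The credences, causes, and choiceworthiness figures are all made up for illustration (they are not the model behind the book’s $2.7 million figure), and I assume the rival theories have already been placed on a common scale, that returns are linear in money, and that the information on offer is perfectly reliable.

```python
# Toy value-of-moral-information calculation (hypothetical numbers throughout).

credences = {"theory_A": 0.5, "theory_B": 0.5}

# Choiceworthiness produced per $1m given to each cause, according to each theory
# (assumed already normalised to a common scale).
cw_per_million = {
    "theory_A": {"cause_1": 10.0, "cause_2": 1.0},
    "theory_B": {"cause_1": 1.0, "cause_2": 10.0},
}

budget = 10.0  # $ millions

def expected_cw_per_million(cause):
    """Expected choiceworthiness of $1m to a cause, before learning anything."""
    return sum(credences[t] * cw_per_million[t][cause] for t in credences)

# Without information: give the whole budget to the cause that is best in expectation.
best_rate_without = max(expected_cw_per_million(c) for c in ("cause_1", "cause_2"))
value_without_info = budget * best_rate_without

# With perfect information: learn which theory is true, then fund the cause it favours.
# The expected rate is the credence-weighted best rate under each theory.
best_rate_with = sum(credences[t] * max(cw_per_million[t].values()) for t in credences)

# Maximum fee: the amount f at which spending (budget - f) with information is
# exactly as good, in expectation, as spending the full budget without it.
max_fee = budget - value_without_info / best_rate_with
print(f"Willing to pay up to ${max_fee:.1f}m for the information")  # $4.5m here
```

The structure is just the standard one from decision theory: compare the best you can do in expectation now with the best you could expect to do after learning the truth, and the gap fixes the most you should be willing to pay to learn it.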

Reflections

A few closing thoughts.

It’s worth noting that a lot of this book is fairly technical. Not off-puttingly so, but it would probably help to have some familiarity with moral philosophy and maybe a bit of decision theory. I recently read Peterson’s An Introduction to Decision Theory, which came in handy while reading this — maybe worth checking out.

This is a carefully written and clearly argued book. You get the impression that almost every page has been wrought from some intense discussions, revisions, and re-revisions. And clearly this is a book many years in the making — in the sense that its ideas were floating around long before they were collected in one place. For instance, MacAskill’s BPhil thesis about moral uncertainty was published in 2010! I also appreciated how many questions were left open or only partially resolved: you get a sense that this is a young and exciting field for research. The case has been opened, but it won’t be shut for a while.

You can access the free PDF version of this book here.


