
Monday, October 30, 2017

Counseling the lesser evil

A controversial principle in Catholic moral theology is the principle of “counseling the lesser evil”, sometimes confusingly (or confusedly) presented as the “principle of the lesser evil”. The principle is one that the Church has not pronounced on. (For a survey of major historical points, see this piece by Fr. Flannery.)

First, a clarification. Nobody in the debate thinks it is ever permissible to do the lesser evil. The lesser evil is still an evil, and it is never permissible to do evil, no matter what might result from it. The debate is very specifically the following. Suppose someone is determined to do an evil, and cannot be dissuaded from doing some evil or other. Is it permissible to counsel a lesser evil in order to redirect the person from a greater evil? For instance, if someone is about to murder you, and cannot be dissuaded from an evil course of action, are you permitted to counsel theft instead, as on some interpretations the ten men in Jeremiah 41:8 do? (But see quotations in Flannery for other interpretations.)

There is no question that if the potential murderer is redirected to theft, the theft will still be wrong, indeed quite possibly a mortal sin (depending on the amount stolen). The moral question about “the lesser evil” is not about the primary evildoer but about the counselor. On the one hand, it appears that if the counselor’s counsel is sincere, the counselor is wrongfully endorsing an evil—albeit less evil—course of action. Indeed, it seems that the counselor is even intending the evil, albeit as an alternative to a greater evil.

On the other hand, a number of people will have very strong intuitions that it is not wrong to say to a potential murderer “Don’t kill me: here, take my laptop!” (Note: I assume the coerced circumstances do not render this a valid gift, so the potential murderer will indeed be a thief by taking the laptop.)

Let me add that the argument I will give leaves open the question of the advisability of counseling the lesser evil. Often it may be better to inspire the evildoer to do the good thing rather than the lesser of the evils. Moreover, one needs to be extremely wary of any public counseling of the lesser evil, because it is apt to encourage people who are not determined on evil to do the lesser evil. I think it is unlikely that such counseling is often advisable.

So, here’s the argument. Start with this thought. Agents deliberate about options. As they do so, they come to favor some options over others. Eventually, as they narrow in on the decision, they favor one option over all the others. Moreover:

  1. If a deliberating agent in the end favors B over C, typically the agent will not choose C as a result of this deliberation.

There are at least two reasons for the “typically”. First, the agent may be irrational. Second, there may be cases of circular favoring structures: the agent favors B over C, favors A over B, and favors C over A, and so she ends up choosing C anyway.

Next observe this:

  2. If option B is better than option C, then it is good for a deliberating agent to favor B over C.

This is true regardless of whether B and C are both good options, or B is good and C is bad, or both B and C are bad. It is simply a good thing to favor the better over the worse.

With (1) and (2) in mind, consider a case where the agent has three options: a good A (e.g., going away), a lesser evil B (e.g., theft) and a greater evil C (e.g., murder). By (2), it is good for the agent to favor B over C. Suppose the counselor strives to lead the agent who is determined on evil to favor B over C (e.g., by emphasizing the resale value of the laptop, or the likelihood that the police will investigate a murder more thoroughly than a theft, or the greater sinfulness of murder, depending on what is more likely to impress the particular agent). Then the conditions for the Principle of Double Effect can be satisfied on the side of the counselor.

  3. The counselor is pursuing a good end, the agent’s not choosing C.

  4. The counselor’s chosen means to the good end is the agent’s favoring B over C. By (1), such favoring is likely to be effective in fulfilling the counselor’s good end (namely, the agent’s not choosing C), and by (2), such favoring is good.

  5. There is a foreseen but not intended evil of the agent opting for B. It is not intended, because the counselor’s plan of action will be successful whether the agent opts for B (as foreseen) or for A (an unexpected bonus).

  6. The good of the agent’s not choosing C is proportionate to the foreseen evil of the agent’s choosing B, and there is, we may suppose, no better way of achieving the good.

In particular, there is no intention that the agent choose B, or even choose B over C. The intention is that the agent favor B over C, which is all that is typically needed, given (1), for the agent not to choose C.

Note 1: This provides a defense of pretty strong cases of counseling the lesser evil. The argument works even in cases where the agent being counseled wouldn’t have thought of evil B prior to the counseling (that is the case in Jeremiah 41:8). It might even work where B is impossible prior to the counseling. For instance, you might unlock your safe in order to make it easier for the agent to steal your money in place of killing you. In so doing, your end is still that C not be done, and the means is that B is favored over C.

Note 2: This solves the problem of bribes.

Note 3: I am not very confident of any of the above.

Friday, October 27, 2017

Bribes and conditional intentions

You are trying to get a permit that you are both morally and legally entitled to, but an official requires a bribe to give you the permit. Are you permitted to pay the bribe?

I always thought: Of course!

But now I think this is more difficult than it has seemed to me. Initially, it seems that your action plan is very simple:

  1. Give the bribe in order that the official give you the permit.

But suppose that you pay the bribe and the official never notices the money slipped onto her desk, though when you lean over her desk, from that angle you look just like her nephew, so she gives you the permit out of nepotism. In that case, while you got what you wanted, you didn’t fulfill your plan–your bribery was not a success. That shows that (1) is only a part of your action plan. More fully, your plan is:

  2. Give the bribe in order that the official be motivated by it (in the usual way bribes motivate) to give you the permit.

But now it seems to be a moral evil that an official be motivated by a bribe to do something, even if the thing she is motivated to do is the right thing. So in setting oneself on plan (2), it seems one intends something immoral.

I wonder if this isn’t a case similar to asking a murderer: “If you are going to kill me, kill me painlessly” (which one might even put in the simple phrase “Kill me painlessly”, with everybody understanding that the request is conditional). In that case, your intention is not that the murderer kill you painlessly, but that:

  3. If the murderer kills you, she kills you painlessly.

And that conditional isn’t a bad thing.

One makes the request of the murderer on the expectation–but certainly neither intention nor hope!–that the antecedent of the conditional will turn out to be true. Nonetheless, one does not intend the consequent.

Perhaps in the bribery case one has a similar intention:

  4. If the official isn’t going to be motivated by duty, she will be motivated by the bribe.

One then gives the bribe on the expectation–but neither intention nor hope–that the official will be unmotivated by duty.

But things aren’t quite that simple. Suppose that I prefer Coca Cola to cocaine, and in a really shady restaurant I place this order:

  5. I’ll have a Coca Cola, but if you can’t do that, then I’ll have some cocaine.

Here I’ve done something wrong: I’ve conditionally procured illegal drugs. But how to distinguish (5) from (3) and (4)?

One psychological difference is that in (5), presumably I desire the cocaine, just not as much as I desire the Coca Cola. But in (3) and (4), I don’t desire the painless killing or the taking of the bribe. (Compare this case: Malefactors will forcibly give you Coca Cola, cocaine or cyanide. You say “I’ll have a Coca Cola, but if you can’t do that, I’ll have some cocaine.” Here, I presume, you don’t desire the cocaine, but it’s better than the cyanide. That’s more like (3) and (4) than like the restaurant version of (5).)

But I don’t really want to rest the relevant moral distinctions on desires.

Here’s what I’d like to say, but I have a hard time making it work out. In (5)–the restaurant coke/cocaine order–when the antecedent of the conditional is met, your will stands behind the consequent. In (3)–the killing case–your will doesn’t stand behind the consequent even when the antecedent of the conditional is met. Even when it is inevitable that you will be killed, you don’t intend to die, but only not to die painfully. But I worry about this. Suppose you then die painlessly. Isn’t your intention not to die painfully satisfied by the painless death, and hence wasn’t the painless death the means to avoiding the painful death? And in the bribery case you intend not to have your request denied, but wasn’t the taking of the bribe the means to the request’s being granted?

Perhaps there is something much simpler, though, that doesn’t involve intentions so much. Perhaps it’s not morally wrong for the official to give the permit because of the bribe. What is wrong is for the official to give the permit solely because of the bribe. But you needn’t intend that. On the contrary, you might have emphasized to the official that you are morally and legally entitled to the permit. There are many ways the bribe can work. It might be the sole motive. But it might also be a partial motive. Or it might be a defeater for a defeater: "It's a lot of trouble to give permits, so I won't bother. But if I get a bribe, then the trouble is worth it." Of course, that still leaves the probably purely hypothetical case where you know that the only way the bribe will work is by being the sole motive. But now it's not so clear that it's permissible to give it.

And in the case of the murder, you are trying to dissuade the murderer from killing you painfully by drawing her attention to the argument that option C is bad because there is a better–albeit still bad–option B. She might then go for option B or she might go for the good option A. Either way, she refrains from doing C. There is, in fact, a way in which the murder case is easier than the bribe case, because your being killed painlessly is not a means to your avoiding the painful death–it is what occurs in its place. If I am offered coffee or water and I go for the water, my drinking water isn’t a means to avoiding coffee, though it happens in its place.

Thursday, October 26, 2017

A two-stage view of proportionality in the Principle of Double Effect

A question about Double Effect that hasn’t been sufficiently handled is in what way, if any, good effects that are causally downstream of bad effects are screened off when judging proportionality.

It seems that some sort of screening off is needed. Consider this case. An evildoer says that she’ll free five innocents if you kill one innocent; otherwise, she’ll kill them. So you shoot at the innocent’s shirt covering his chest, intending to learn how the fabric is rent by the bullet (knowledge is a good thing!), while foreseeing without intending that the innocent should die, and also foreseeing without intending that the evildoer will free the five.

This is clearly a travesty of double effect reasoning. But the only condition that isn’t obviously satisfied is the proportionality condition. So let’s think about proportionality. Here are two ways to think here:

  1. All good and bad effects count for proportionality. Thus, both the death of the one and the saving of the five count, as does the trivial good of knowing how the shirt rips. Thus proportionality is satisfied: the goods are proportionate.

  2. The good effects that are causally downstream of the bad effects of one’s action don’t count. On this view, it is the intended effect that must be proportionate to the unintended bad effects. Thus, the death of the one counts, and the trivial good of knowing about how the fabric rips counts, but the saving of the five does not count, as it is not intended (if it were intended, the act would be impermissible, of course). But of course the good of knowing how the fabric rips is not proportionate to the death of the one innocent.

Option 2 fits better with the intuition that the initial case was a travesty of double effect reasoning.

But option 2 doesn’t seem to be the right one in all other cases. Suppose I am guarding five innocents sentenced to death by an evil dictator. If I free them, I will be killed. I also know that unless the innocents leave the country, they will be recaptured soon. The innocents are planning to bribe the border officials, which is quite likely to work. But it will be wrong for the border officials to let them escape, because the border officials will falsely believe that these people are justly sentenced, and will let them through only out of venality.

It seems permissible to free the innocents. Here, the unintended but foreseen bad effect is my own death. The good effect is the innocents’ being allowed out of prison. But it seems that if we don’t get to consider effects downstream of bad stuff, we don’t get to consider the fact that the innocents will escape the country, as that’s downstream of the border officials’ venal acceptance of bribes.

Here’s one theory I developed today in conversation with a graduate student. Proportionality is very complex. Perhaps there are two stages.

Stage I: Are the intended good effect and the foreseen bad effects in the same ballpark? This is a very loose proportionality consideration. One life and ten lives are in the same ballpark, but knowing how the fabric rips is far outside that ballpark. If the intended good effect is so much less than the foreseen bad effects that they are not in the same ballpark, proportionality is not met. Here, the good effects that are downstream of the bad effects don’t count.

If the Stage I proportionality condition is violated, the act is wrong. If it’s met, I proceed to Stage II.

Stage II: Now I get to do a proportionality calculation taking into account all the foreseen goods and bads, regardless of how they are connected to intentions.

The proportionality condition now requires a positive evaluation by means of both stages.
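For concreteness, the two-stage test can be sketched as a toy calculation. Everything numerical below is a hypothetical illustration (including the factor-of-100 “ballpark” threshold); the argument itself assigns no numbers.

```python
def stage_one(intended_good, foreseen_bads, ballpark_ratio=100):
    """Loose check: is the intended good in the same ballpark as the
    foreseen bads? Goods downstream of bad effects are excluded simply
    by not being passed in."""
    total_bad = sum(foreseen_bads)
    # "Same ballpark" is modeled, very roughly, as being within a
    # factor of ballpark_ratio of the total bad.
    return intended_good * ballpark_ratio >= total_bad

def stage_two(all_goods, all_bads):
    """Full calculation over all foreseen goods and bads, regardless of
    how they are connected to intentions."""
    return sum(all_goods) >= sum(all_bads)

def proportionate(intended_good, foreseen_bads, downstream_goods=()):
    # Proportionality requires a positive verdict at both stages.
    return (stage_one(intended_good, foreseen_bads)
            and stage_two([intended_good, *downstream_goods], foreseen_bads))

# Shirt case: trivial intended good (knowing how the fabric rips),
# one death, five lives saved downstream of the death.
shirt = proportionate(intended_good=0.001, foreseen_bads=[1.0],
                      downstream_goods=[5.0])

# Prisoner case: the intended good (the five innocents' release) is in
# the same ballpark as my death; their escape is downstream of the bribe.
prison = proportionate(intended_good=5.0, foreseen_bads=[1.0],
                       downstream_goods=[5.0])
```

The shirt case fails at Stage I (the downstream saving of the five never gets counted), while the prisoner case passes both stages.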

On this two-stage theory, shooting the innocent’s shirt in the initial case is wrong, as proportionality is violated at Stage I. On the other hand, the release of the prisoners may be permissible. For the freedom of the innocents is in the same ballpark as my life—it’s a big ballpark—even if they are going to be recaptured. It’s not a trivial good, like the taste of a mint.

I am not happy with this. It’s too complicated!

Monday, October 9, 2017

Preventing someone from murdering Hitler

You are a secret opponent of the Nazi regime, and you happen to see Schmidt sneaking up on Hitler with an axe and murderous intent. You know what’s happening: Schmidt believes that Hitler has been committing adultery with Mrs. Schmidt, and is going to murder Hitler. Should you warn Hitler’s guards?

  1. Intuition: No! If Hitler stays alive, millions will die.

  2. Objection: You would be intending Schmidt to kill Hitler, a killing that you know would be a murder, and you are morally speaking an accomplice. And it is wrong to intend an evil to prevent more evil.

There is a subtlety here. Perhaps you think: “It is permissible to kill an evil tyrant like Hitler, and so Schmidt is doing the right thing, but for the wrong reasons. So by not warning the guards, I am not intending Schmidt to commit a murder, but only a killing that is objectively morally right, albeit I foresee that Schmidt will commit it for the wrong reasons.” I think this reasoning is flawed—I don’t think one can say that Schmidt is doing anything morally permissible, even if the same physical actions would be morally permissible if they had another motive. But if you’re impressed by the reasoning, tweak the case a little. All this is happening before Hitler has done any of the evil tyrannical deeds that would justify killing him. However, you foresee with certainty that if Hitler is not stopped, he will do them. So Schmidt’s killing would be wrong, even if Schmidt were doing it to prevent millions of deaths.

What’s behind (2) is the thought that Double Effect forbids you to intend an evil, even if it’s for the purpose of preventing a greater evil.

But here is the fascinating thing. Double Effect forbids you from warning the guards. The action of warning the guards is an action that has two effects: (i) prevention of a murder, and (ii) the foreseen deaths of millions. Double Effect has a proportionality condition: it is only permissible to do an action with a good and a bad effect when the bad effect is proportionate to the good effect. But millions of deaths are not proportionate to the prevention of one murder. So Double Effect forbids you from warning the guards.

Now it seems that we have a conflict between Double Effect and Double Effect. On the one hand, Double Effect seems to say that you may not warn the guards, because doing so will cause millions of deaths. On the other hand, it seems to say that you may not refrain from warning the guards in order to save millions because in so doing you are intending Schmidt to kill Hitler.

I know of three ways out of this conflict.

Resolution 1: Double Effect applies only to commissions and not omissions. It is permissible to omit warning the guards in order that Schmidt may have a free hand to kill Hitler, even though it would not be permissible to help Schmidt by any positive act. One may intend the killing of Hitler in the context of one’s omission but not in the context of one’s commission.

Resolution 2: This is a case of Triple Effect or, equivalently, of a defeater-defeater. You have some reason not to warn the guards. Maybe it’s just the general moral reason that you have not to invoke the stern apparatus of Nazi law, or the very minor reason not to bother straining one’s voice. There is a defeater for that reason, namely that warning the guards will prevent a murder. And there is a defeater-defeater: preventing that murder will lead to the deaths of millions. Thus, the defeater to your initial relatively minor moral reason not to warn the guards—viz., that if you don’t, a murder will be committed—is itself defeated, and so you can just go with the initial moral reason. On this story, the initial Objection to the Intuition is wrong-headed, because it is not your intention to save millions—that is just a defeater to a defeater.

Resolution 3: Your intention is simply to refrain from acting in ways that have a disproportionately bad effect. We should simply not perform such actions. You aren’t refraining as a means to the prevention of the disproportionately bad effect, as the initial Objection claimed. Rather, you are refraining as a means to preventing yourself from contributing to a disproportionately bad effect, namely to preventing yourself from defending the life of the man who will kill millions.

Evaluation:

While Resolution 1 is in some ways attractive, it requires an explanation why intentions for evils are permissible in the context of omissions but not of commissions.

I used to really like something like Resolution 2. But now it seems forced to me, because it claims that your primary intention in the omission can be something very minor—perhaps as minor as not straining one’s voice in some versions of the story. That just doesn’t seem psychologically realistic, and it seems to trivialize the goods and evils involved if one is focused on something minor. I still think the Triple Effect reasoning has much to be said for it, but only in those cases where there is a significant good at stake in the initial intention.

I find myself now pulled to Resolution 3. The worry is that Resolution 3 pulls one towards the consequentialist justification of the initial intuition. But I think Resolution 3 is distinguishable from consequentialism, both logically and psychologically. Logically: the intention is not to contribute to an overwhelmingly bad outcome. Psychologically: one can refrain from warning the guards even if one wouldn’t raise a finger to help Schmidt. Resolution 3 suggests that there is an asymmetry between commission and omission, but it locates that asymmetry more plausibly than Resolution 1 did. Resolution 1 claimed that it was permissible to intend evils in the context of omissions. That is implausible for the same reason why it is impermissible to intend evils in the context of commissions: the will of someone who intends evil is a corrupt will. But Resolution 3 is an intuitively plausible non-consequentialist principle about avoiding being a contributor to evil.

In fact, if one so wishes, one can use Resolution 3 to fix the problem with Resolution 2. The initial intention becomes: Don’t be a contributor to evil. Defeater: If you don’t warn, a murder will happen. Defeater-defeater: But millions will die. Now the initial intention is very much non-trivial.

Wednesday, June 28, 2017

Intention and credence

In a paper on Double Effect, I offer this kind of an example. Jim has sneaked into a zoo on a mission to kill the first mammal he sees at the zoo, because a very rich eccentric has informed him that he’d give a very large sum of money to famine relief if Jim did that. Jim sees the zookeeper and kills him, reasoning that zookeepers are mammals, and hence the kill will satisfy the eccentric’s condition. In the paper, I argued that Jim need not be intending to kill a human being even if he knows the zookeeper is a human being. His intention need simply be to kill that mammal. Of course, this is still a murder, and hence I argue that the Principle of Double Effect should not be formulated in the classical way in terms of intentions.

I think a lot of people are incredulous of my claim that Jim can know that the mammal he is shooting is a human being and yet not intend to be killing a human being. It’s just occurred to me that there may be a way to help overcome that incredulity by making the story more gradual. Jim first sees a shadowy figure in the dark in the primate enclosure very far away. He assumes it’s an ape, and aims his rifle. However, he doesn’t want to miss, so he comes a couple of steps closer. As he gradually approaches, he has a very vague impression that there is something a little human-like about the movements of that primate. He thinks to himself, however, that apes are close relatives to humans, so it’s almost certainly still an ape. But as he approaches, his evidence that what is before him is a human rather than an ape increases. Finally, by the time he’s close enough to shoot, the evidence is conclusive: he knows it’s a human. But he doesn’t care a whit—the only thing that matters to this callous individual is that it’s a mammal. So he shoots and murders.

Let’s suppose that Jim’s credence that the mammal is human goes from 0.0001 to 0.9999 as he walks forward. At the 0.0001 point, it’s clearly not Jim’s intention to kill a human being. Nor at the 0.5000 point. Nor even at the 0.5001 point. Could it be that Jim’s intention becomes one to kill a human being once his credence gets high enough for him to count as believing, or maybe even knowing, that this is a human being? But it is implausible that a merely numerical increase in the credence suddenly forces a change in Jim’s intention. Intention just does not seem to be degreed in a way that lines up with the degreed nature of Jim’s credence.

So, what should we say? I think it is this: Whether Jim’s credence was 0.0001 or 0.9999 at the time of the shot, as long as he was acting callously and not caring about whether the victim is ape or human, he accomplished the death of a human being. This accomplishment (or something close to it) makes him a murderer. Of course, at the 0.0001 credence point, it would be hard to prove in a court of law that he accomplished the death, that he shot without caring whether the victim is ape or human, caring only that the victim was a mammal.

Friday, June 23, 2017

Abortifacient effects of contraception and the Principle of Double Effect

Suppose that a contraceptive has the following properties:

  • Fewer than 1% of users have a pregnancy annually.

  • At least 5% of users annually experience a cycle where the contraceptive fails to prevent fertilization but does prevent implantation.

I think there is good empirical reason to think there are such contraceptives on the market. But that’s a matter for another post. Here I want to look at just the ethics question. So let’s suppose that the above stipulated properties obtain, and in fact that they are known to obtain.

The cases where the contraceptive prevents implantation are cases where the contraceptive kills an early embryo: in short, they are cases where the contraceptive is being abortifacient. The question I want to address in this post is this: Could someone who thinks early embryos have whatever property (personhood, membership in the human race, the imago dei, the possession of the soul, etc.) that makes it paradigmatically wrong to kill adult human beings nonetheless defend the contraceptive on the grounds that the deaths due to implantation-prevention are just an unintended and unfortunate side-effect?

Basically, the defense being envisioned would invoke some version of the Principle of Double Effect, which allows for some actions that have a bad side-effect that isn’t intended as a means or as an end. Of course, Double Effect requires that there not be other reasons why the action is wrong. But let’s bracket the question—which I address at length in my One Body book—whether there are other reasons the contraceptive could be wrong to use, and just focus on the abortifacient effect.

We can ask the question from two points of view:

  1. Can the manufacturer justify the production of the contraceptive on the grounds that failures of implantation are just an unfortunate side-effect?

  2. Can the user justify the use on those grounds?

Regarding 1, here’s a thought. For the contraceptive to be competitive, it has to be highly effective. If one does not count the 5% of annual cases where fertilization occurs but implantation is prevented as part of the contraceptive’s effectiveness, then one can at most claim 95% effectiveness for the contraceptive. And that effectiveness would put the contraceptive significantly behind the most effective formulations of the pill. In fact, it will put it somewhat behind the results that can be achieved by Natural Family Planning by a well-prepared and well-motivated couple. So for commercial purposes, the manufacturer will have to be advertising 99% effectiveness. But one cannot with moral consistency claim 99% effectiveness while holding that 5% of that is an unfortunate side-effect. By claiming 99% effectiveness, one is putting oneself behind the mechanisms that one knows are being used to achieve that effectiveness.

Suppose that a manufacturer advertises an analgesic that is guaranteed to be 99% effective at pain relief. But suppose that 5% of the time, the analgesic kills the patient and 94% of the time it relieves pain non-fatally. Then indeed the analgesic relieves pain 99% of the time, since killing the patient stops the pain. But by holding out 99% effectiveness, the manufacturer is showing that it is really intending this to be a pain-relief-cum-euthanasia drug rather than a mere pain-relief drug.

What about 2? As we saw from the case of the manufacturer, the user cannot intend 99% effectiveness while saying that the deaths of early embryos are unfortunate side-effects. But the user, unlike the manufacturer, can say: “From my point of view, this is about 94% effective, with a 5% likelihood of a fatal side-effect, which side-effect I don’t intend.”
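The bookkeeping behind the manufacturer’s and the user’s figures can be checked in a couple of lines; the numbers below are just the post’s stipulations (94%/5%/under 1%), not real data:

```python
# The post's stipulated annual breakdown (hypothetical figures):
prevented_fertilization = 0.94  # cycles where fertilization is prevented
prevented_implantation = 0.05   # fertilization occurs, implantation blocked
pregnancies = 0.01              # the contraceptive fails outright

# The advertised "99% effectiveness" counts the implantation-blocking
# (abortifacient) cases as successes:
advertised_effectiveness = prevented_fertilization + prevented_implantation

# Counting only contraception proper, effectiveness drops to 94%:
honest_effectiveness = prevented_fertilization
```

The user’s honest comparison point is thus 94%, not the advertised 99%.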

There are two points I want to make here. First, Double Effect requires there to be no reasonable alternatives to the course of action. But there are methods of fertility control that do not cause implantation-failure, for instance Natural Family Planning, and some of these methods are not less effective when compared against the 94% figure. And one cannot with moral consistency compare these methods against the 99% effectiveness figure while holding out that 5% of that is an unfortunate side-effect one would like to avoid.

Second, imagine a hypothetical male contraceptive pill that works by releasing genetically engineered sperm-eating viruses that have the following annual properties:

  • Fewer than 1% of female partners get pregnant.

  • But 5% of female partners get a fatal viral infection from it.

  • No men die.

Clearly, nobody would tolerate such a product. Both the manufacturer and the men using it would be accused of murder. Technically, it might not be murder if the deaths of the women were not intended, but the act would be closely akin to vehicular homicide through criminal negligence. Any Double Effect justification would have no hope of succeeding, because Double Effect requires that the unintended bads not be disproportionate to the intended goods. But a 5% annual chance of death is just not worth the contraceptive effect, especially when there are alternatives present. Indeed, even if the only alternative to using this nasty contraceptive were abstinence, which isn’t the case, surely total abstinence would typically be preferable to inducing a 5% annual chance of death (unless perhaps the woman were already suffering from a terminal disease).

Of course, my arguments are predicated on the assumption that killing an early embryo is morally on par with killing an adult. That's another argument.

Wednesday, March 29, 2017

Yet another odd double effect case

Alice has just fed a poison to Bob. Bob hasn’t died yet. He is standing, by coincidence, on the edge of a cliff, and soon will die of the poison, unless he gets an antidote. Carl is there and has a syringe full of the antidote. Carl injects Bob with the antidote, but this startles Bob and Bob falls off the cliff to his death.

Question 1: Did Alice murder Bob?

Answer: I think not. Here’s an argument. Bob dies as a side-effect of injection with the antidote. But it could just as well have been Carl who slipped and fell while injecting Bob instead of Bob falling. And surely then we shouldn’t say that Alice murdered Carl—though she did wrongfully cause his death.

Question 2: Suppose that Carl was Alice’s friend and foresaw that Bob would fall off the cliff to his death if injected with the antidote, but reasoned: “I am saving Alice from being a murderer.” Could one legitimately make this double-effect analysis? “Carl is intending that Alice not be a murderer. His means to that is giving Bob an antidote to the poison. A foreseen side-effect of Carl’s action is Bob’s death, but this side-effect is not intended either as an end or as a means. And given that Bob would have died anyway, the side-effect is not disproportionate to the good of saving Alice from being a murderer.”

Answer: I think the proportionality condition is not met. Sure, Carl makes Alice not be a murderer. But Alice is still an attempted murderer—which is just as culpable as being an actual murderer—and her malfeasance still causes Bob’s death, so she still has that death on her conscience. Granted, she isn’t a murderer any more (if I am right about Question 1), but the bad of Carl’s accidentally killing Bob seems disproportionate to the relatively minor good achieved here.

It’s interesting when it is the proportionality condition in double effect that ends up being crucial.

Thursday, February 2, 2017

Rollerblading three and eight miles

Suppose I intend to rollerblade eight miles. And I succeed. Then I also rollerbladed three miles. But I need not have intended to rollerblade three miles, though of course my arithmetic is good enough that if asked “Will you also go three miles?” I would have answered affirmatively. Suppose that rollerblading three miles were intrinsically wrong. Then I couldn’t excuse myself by invoking the Principle of Double Effect, saying “I intended eight miles, not three.” Rollerblading three miles wasn’t intended. But it also wasn’t a side-effect like Double Effect talks about.

But isn’t rollerblading three miles a means to rollerblading eight miles? If it is, it isn’t a causal means (unless I have to rollerblade three miles before I’m allowed to do eight). Maybe it’s a constitutive means: rollerblading three miles is partly constitutive of rollerblading eight miles. But even that’s not quite right. There are many instances of rollerblading three miles within rollerblading eight miles: the first three miles, the last three miles, the middle three miles, and so on, perhaps even ad infinitum. One could make a case that rollerblading the first (or last or middle) three miles is a constitutive means to rollerblading eight. But rollerblading three miles is a disjunctive event: it is rollerblading the first three or the middle three or the last three or …. And while this disjunctive event has to happen for me to rollerblade eight miles, it isn’t a constitutive means to rollerblading eight miles.

So it looks like rollerblading three miles is neither a causal nor a constitutive means to rollerblading eight miles. This is another consideration in favor of my thesis that the Principle of Double Effect must go beyond the concept of means to that of accomplishment. For I definitely accomplish rollerblading three miles (in fact, multiple times) in rollerblading eight miles. Here's a quick test for this: Suppose that after three miles I have to stop. Then I would say: "I aimed for eight miles but all I managed to accomplish was three." But if I didn't stop after three, I would surely still have accomplished three.

Friday, December 23, 2016

Double Effect in daily life

The Principle of Double Effect is often introduced in terms of weighty cases of killing, like bombing military installations or redirecting trolleys. But the importance of the distinction between intended and unintended but foreseen harms can be seen even more clearly in everyday cases.

Yesterday, my wife went grocery shopping, while I was home with some of the kids. My son asked to be taken for a bike ride. The thought flashed into my head: “If I go, I probably won’t be home when my wife comes back with the groceries, and hence I won’t be able to help with unloading them.” There are three possible attitudes I could have with respect to this observation:

  1. I shouldn’t take my son for a bike ride now.

  2. Not being able to help my wife is an unfortunate side-effect of taking my son for a bike ride.

  3. Being able to get out of helping my wife is a reason to take my son for a bike ride.

In cases (2) and (3), the foreseen effects are the same. There are no deontic issues (I didn’t promise my wife to be home). But clearly if I take attitude (3), and hence intend not to be there when my wife comes back, I am being a bad husband, while if I go for (1) or (2), my behavior is defensible. (In fact, I never got around to taking my son for the bike ride.)

Wednesday, August 24, 2016

Inculpably acting through culpable ignorance

It is widely held that:

  1. Doing the wrong thing while inculpably ignorant that it's wrong is itself inculpable.
  2. Doing the wrong thing while culpably ignorant that it's wrong is itself culpable, assuming the other conditions for culpability are met (freedom, etc.).
I think (1) is true but (2) is false. I think that not only does inculpable ignorance excuse, but so does culpable ignorance. (Assuming, of course, that it's real ignorance: one can lie to oneself that one is ignorant when in fact one knows.)

Start with this case. Sally was inculpably ignorant of the wrongness of targeting civilians in just wars. Like many Americans, she was raised to think that the bombings of Hiroshima and Nagasaki were morally permissible, since the bombings saved many lives by ending the war early. One morning, while an undergraduate, she culpably spent an extra five minutes on Facebook before going to her ethics class. As a result, she culpably showed up five minutes late (being late to class isn't always morally wrong, but being late without sufficiently good reason disturbs others' learning and is morally wrong, and I assume this is a case like that). Consequently, she missed the discussion of double effect and the distinction between strategic and terror bombing. Had she heard the discussion, she would have known that it's wrong to target civilians. Since she is culpable for her lateness to her ethics class, her ignorance of the wrongness of the kind of terror bombing that Hiroshima and Nagasaki were subjected to is itself culpable. Years later, incurring no further culpability, she is still ignorant. But then one day there is a just war, and she is a drone pilot asked to target civilians in a situation relevantly similar to Hiroshima and Nagasaki. She does so, believing that it's her duty to do so.

Had Sally refused to follow orders, she would have been culpable for violating her conscience--and indeed, very seriously culpable since her bombing saved many lives by ending the war early (I am assuming that this was the case in Hiroshima and Nagasaki). But in fact, Sally acted wrongly: she committed mass murder. She did so in ignorance, but her ignorance was culpable, since she was culpable for being late to the class that would have cured her of her ignorance.

Given (1), had double effect not been discussed in class that morning when she spent too much time on Facebook, she would have been entirely inculpable for mass murder. It seems implausible that whether Sally is culpable for mass murder depends on what in fact went on in a class that she missed. Furthermore, culpability shouldn't depend on arcane counterfactuals. But it could be quite an arcane counterfactual whether Sally would have learned that it's wrong to target civilians in a just war. It might have depended on fine details of just how persuasive the professor was, what effect Sally's presence in the class would have had on the mode of presentation, etc.

Moreover, it seems implausible that Sally is culpable for mass murder because of her culpability for the peccadillo of being five minutes late to class. The intuition behind (1) is that you don't get culpability out of inculpability. You likewise shouldn't get mass-murder-level culpability out of a peccadillo. But this last argument is a little fast. The claim "Sally is culpable for mass murder" misleadingly suggests that Sally has great culpability. If we accept (1), we should accept a parallel principle that the degree to which one is culpable for a wrong act done in ignorance is no greater than one's degree of culpability for the ignorance. As a result, we might say that Sally is culpable for mass murder, but the degree of guilt is at a level corresponding to being five minutes late to class (without, I assume, any reasonable expectation that those five minutes would result in ignorance about mass murder).

Very well. Let's suppose that five milliturps are the level of guilt corresponding to the lateness to class. Maybe the level of guilt for the mass murder would have been a gigaturp per victim, if Sally had known that such bombing is wrong. So the suggestion we are now exploring for saving (2) is that Sally's level of guilt for an ignorant bombing run is capped at five milliturps, no matter how many victims there are. (There is something odd about having slight guilt for something so big, but I don't think we should worry about the oddity.) Very well. Consider now two scenarios. In the first one, Sally goes on a single bombing run that she knows will claim 10,000 civilian victims. In the second, she goes on two bombing runs, which will claim 5,000 civilian victims each. On the capping suggestion, in the first scenario, Sally acquires five milliturps of guilt for her bombing run. In the second scenario, she acquires five milliturps of guilt for the first bombing run, too. That's already a little strange: we would expect less culpability with fewer victims. But it gets worse. In the second bombing run, the capping view will also assign five milliturps. As a result, in the second scenario, Sally incurs a total of ten milliturps of guilt. And that seems just wrong: it shouldn't matter that much how the victims are divided up. Furthermore, the intuition behind the principle that culpability for an ignorant act can't exceed the culpability for the ignorance is, I think, violated when the total culpability for a multiplicity of ignorant acts exceeds the culpability for the ignorance.

We might try a modified capping principle: The culpability for all acts coming from culpable ignorance is capped in total. This has the odd result, however, that in the second scenario, Sally is five-milliturps-guilty for the first run, but not at all guilty for the second, having already reached her culpability cap. At this point it seems much more reasonable simply to suppose that all of Sally's guilt is the initial five milliturps for being late to class. She doesn't acquire a second five milliturps for her bombing runs.
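As a sanity check on the arithmetic, the two capping proposals can be tallied in a toy calculation (the milliturp and gigaturp figures are the post's illustrative numbers; the function names are mine):

```python
CAP = 5  # milliturps: Sally's guilt for being five minutes late to class
GUILT_PER_VICTIM = 10**12  # 1 gigaturp per victim, in milliturps, had she known

def per_act_cap(runs):
    """First proposal: each ignorant act's guilt is capped separately at CAP."""
    return sum(min(CAP, victims * GUILT_PER_VICTIM) for victims in runs)

def total_cap(runs):
    """Modified proposal: guilt for all acts flowing from the ignorance is capped in total."""
    return min(CAP, sum(victims * GUILT_PER_VICTIM for victims in runs))

# Scenario 1: one run with 10,000 victims; Scenario 2: two runs of 5,000 each.
print(per_act_cap([10_000]), per_act_cap([5_000, 5_000]))  # 5 vs 10: how victims divide matters
print(total_cap([10_000]), total_cap([5_000, 5_000]))      # 5 vs 5: but the second run incurs no guilt
```

The per-act cap doubles Sally's guilt merely because the victims are split across two runs; the total cap avoids that, at the cost of making the second run entirely guilt-free.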

It may seem to be an insult to the memory of the victims that Sally manages to murder them without incurring any guilt. But, for what it's worth, it seems to me to be less of an insult to suppose that she is innocent of the murder than to suppose that she is peccadillo-level guilty for it, as on the capping views.

Wednesday, February 24, 2016

A puzzle about medicine and war

The following seem to be true:

  1. It is never permissible for the state to force on a non-consenting innocent patient medical procedures very likely to cause death.
  2. It is sometimes permissible for the state to force a non-consenting drafted soldier to go to near certain death in a just war.
In regard to (1), the state can legitimately force patients to undergo medical operations involving minimal risk and invasiveness, at least as long as the patients have no conscientious objection to them (a restriction that has an obvious military analogue): vaccinations are the standard example. This is very puzzling: Why the distinction?

Here is a suggestive hint. We can imagine circumstances where a war against a vicious enemy could only be won by an attack by non-consenting draftees even though it was morally certain that most of the draftees would be captured and horrendous medical experiments would be done on them by the enemy. Such an attack could well be permissible, even though much less extreme medical experiments could not be intentionally imposed on non-consenting patients even for an equal good (say, to defeat some awful disease). This suggests a difference between directly imposing harms and acting in a way that is morally certain to lead to the self-same harms. This is exactly the sort of difference that the Principle of Double Effect is sensitive to. Someone who thinks that foreseeing/intending differences do not matter is probably not going to be able to make the distinction between enforced medical procedures and the draft.

At the same time, the Principle of Double Effect does not seem sufficient to remove the puzzle concerning (1) and (2), since it doesn't really get at what it is that is so special about medical procedures likely to cause death as opposed to military operations likely to cause death. Probably another part of the puzzle has to do with the integrity of the body. But it's tricky: the importance of bodily integrity is not enough to make all enforced medicine wrong. It seems that the state can legitimately require procedures that are minimally invasive and minimally risky, but cannot legitimately require procedures that are minimally invasive but highly risky (think of injecting someone with a vaccine versus injecting someone with a fully functional virus).

Maybe it's like this: the fact that an intentional procedure directly transgresses bodily integrity typically calls for consent. But in at least some cases where someone's lack of consent is strongly irrational, that lack of consent can be overridden for a sufficiently good cause. But where the lack of consent is at least somewhat rational, the lack of consent cannot be overridden. When the risks are minimal, the lack of consent is strongly irrational, barring conscientious objection. But when the risks are high, lack of consent is at least somewhat rational. Medical procedures always transgress bodily integrity, so we get (1). On the other hand, commanding an attack likely to lead to death (or torture or being the victim of vicious medical experiments) does not transgress bodily integrity, and so a completely different set of standards for consent and authorization is in place. This is a mere sketch. I am not sure the details can be worked out.

Notice, also, that the account in the preceding paragraph does not apply to sexual cases. Even if someone's lack of consent to sex is strongly irrational (imagine a contrived case where a married person for completely irrational or even malicious reasons refuses to have sex with a spouse, despite the fact that great benefits would come to society from their having sex--perhaps a killer robot has been programmed by a mad scientist to stop its rampage only if they have sex), it is wrong for the state to force the person to have sex. Once again, sex is morally exceptional.

Thursday, January 7, 2016

Deontology and double effect

This post continues the line of investigation from an earlier post. There I supposed that you're a police officer and you saw that Glossop was very likely to be about to murder Fink-Nottle, and the only way to stop him was to shoot Glossop in the head. Glossop was pointing a shotgun at Fink-Nottle, and your credence that he was about to shoot was 0.9999. I took it for granted that under circumstances like that it was permissible to kill Glossop. But on deontological grounds, I also assumed it was wrong to kill one person to save 9999 innocents. This did not seem to fit with standard decision theory, but I managed to make it fit.

I now consider a variant. In Gotham there are right now 10000 setups like the Glossop and Fink-Nottle story, each observed by a different police officer. You are the police chief. You know that in 9999 of the cases, the person in the Glossop role is about to murder the person in the Fink-Nottle role. But in the remaining one case, they are just amateur actors rehearsing a scene.

Should you order by radio all of your officers to shoot the Glossops in the head? There are arguments on both sides.

Pro: Each of the 10000 cases is just like the original case. So if in the original case, the officer should shoot Glossop, each of your 10000 officers should shoot. But if that's what each should do, that's what you should tell each to do.

Con: You know that by ordering all 10000 Glossops to be shot, you will be killing one innocent man (and 9999 guilty ones, but surely that doesn't make it better) in order to save 9999 Fink-Nottles. By deontology, this is wrong.
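For concreteness, the outcomes that the Pro and Con arguments are weighing can be tallied (a toy bookkeeping of the post's numbers; the variable names are mine):

```python
# Outcomes in the 10,000 Gotham setups (the post's numbers).
setups = 10_000
guilty_glossops = 9_999                        # really about to murder
innocent_glossops = setups - guilty_glossops   # the one pair of rehearsing actors

# If the chief orders every officer to shoot:
innocents_killed_if_shoot = innocent_glossops  # the one innocent Glossop dies
fink_nottles_saved_if_shoot = guilty_glossops  # the 9,999 would-be victims are saved

# If the chief orders no one to shoot:
innocents_killed_if_hold = guilty_glossops     # the 9,999 Fink-Nottles are murdered

print(innocents_killed_if_shoot, fink_nottles_saved_if_shoot, innocents_killed_if_hold)
```

The deontological question is whether the one innocent death in the first column is a means or a side-effect, which the bare tally cannot settle.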

I think the Con argument is flawed. The death of the innocent Glossop differs from the deaths of the guilty Glossops. The deaths of the guilty Glossops are a means to saving the Fink-Nottles. The death of the one innocent Glossop isn't a means to saving any of the Fink-Nottles. We should take your issuing of the order "Shoot" to be intended to kill the 9999 guilty Glossops, with the death of the innocent Glossop a side-effect. (Admittedly, this is a bit awkward because you are ordering the innocent Glossop to be shot.) The side-effect is not intended, proportionality is met if the ratio 9999:1 is high enough (if not, change the numbers), and it seems likely that the Principle of Double Effect will justify the action.

I think the Pro argument, perhaps with some refinement, is convincing. This means that deontologists must find a way to block the Con argument. But it seems that the only relevant difference between this case and cases where ordering shooting is wrong on deontological grounds is the structure of intentions. So the above case provides support for the kind of focus on intentions that is essential to the Principle of Double Effect.

Tuesday, December 29, 2015

Trusting leaders in contexts of war

Two nights ago I had a dream. I was in the military, and we were being deployed, and I suddenly got worried about something like this line of thought (I am filling in some details--it was more inchoate in the dream). I wasn't in a position to figure out on my own whether the particular actions I was going to be commanded to do are morally permissible. And these actions would include killing, and to kill permissibly one needs to be pretty confident that the killing is permissible. Moreover, only the leaders had in their possession sufficient information to make the judgment, so I would have to rely on their judgment. But I didn't actually trust the moral judgment of the leaders, particularly the president. My main reason in the dream for not trusting them was that the president is pro-choice, and someone whose moral judgment is so badly mistaken as to think that killing the unborn is permissible is not to be trusted in moral judgments relating to life and death. As a result, I refused to participate, accepting whatever penalties the military would impose. (I didn't get to find out what these were, as I woke up.)

Upon waking up and thinking this through, I wasn't so impressed by the particular reason for not trusting the leadership. A mistake about the morality of abortion may not be due to a mistake about the ethics of killing, but due to a mistake about the metaphysics of early human development, a mistake that shouldn't affect one's judgments about typical cases of wartime killing.

But the issue generalizes beyond abortion. In a pluralistic society, a random pair of people is likely to differ on many moral issues. The probability of disagreement will be lower when one of the persons is a member of a population that elected the other, but the probability of disagreement is still non-negligible. One worries that a significant percentage of soldiers have moral views that differ from those of the leadership to such a degree that if the soldiers had the same information as the leaders do, the soldiers would come to a different moral evaluation of whether the war and particular lethal acts in it are permissible. So any particular soldier who is legitimately confident of her moral views has reason to worry that she is being commanded things that are impermissible, unless she has good reason to think that her moral views align well with the leaders'. This seems to me to be a quite serious structural problem for military service in a pluralistic society, as well as a serious existential problem.

The particular problem here is not the more familiar one where the individual soldier actually evaluates the situation differently from her leaders. Rather, it arises from a particular way of solving the more familiar problem. Either the soldier has sufficient information by her lights to evaluate the situation or she does not. If she does, and she judges that the war or a lethal action is morally wrong, then of course conscience requires her to refuse, accepting any consequences for herself. Absent sufficient information, she needs to rely on her leaders. But here we have the problem above.

How to solve the problem? I don't know. One possibility is that even though there are wide disparities between moral systems, the particular judgments of these moral systems tend to agree on typical acts. Even though utilitarianism is wrong and Catholic ethics is right, the utilitarian and the Catholic moralist tend to agree about most particular cases that come up. Thus, for a typical action, a Catholic who hears the testimony of a well-informed utilitarian that an action is permissible can infer that the action is probably permissible. But war brings out differences between moral systems in a particularly vivid way. If bombing civilians in Hiroshima and Nagasaki is likely to get the emperor to surrender and save many lives, then the utilitarian is likely to say that the action is permissible while the Catholic will say it's mass murder.

It could, however, be that there are some heuristics that could be used by the soldier. If a war is against a clear aggressor, then perhaps the soldier should just trust the leadership to ensure that the other ius ad bellum conditions (besides the justness of the cause) are met. If a lethal action does not result in disproportionate civilian deaths, then there is a good chance that the judgments of various moral systems will agree.

But what about cases where the heuristics don't apply? For instance, suppose that a Christian is ordered to drop a bomb on an area that appears to be primarily civilian, and no information is given. It could be that the leaders have discovered an important military installation in the area that needs to be destroyed, and that this is intelligence that cannot be disclosed to those who will carry out the bombing. But it could also be that the leaders want to terrorize the population into surrender or engage in retribution for enemy acts aimed at civilians. Given that there is a significant probability, even if it does not exceed 1/2, that the action is a case of mass murder rather than an act of just war, is it permissible to engage in the action? I don't know.

Perhaps knowledge of prevailing military ethical and legal doctrine can help in such cases. The Christian may know, for instance, that aiming at civilians is forbidden by that doctrine. In that case, as long as she has enough reason to think that the leadership actually obeys the doctrine, she might be justified in trusting in their judgment. This is, I suppose, an argument for militaries to make clear their ethical doctrines and the integrity of their officers. For if they don't, then there may be cases where too much disobedience of orders is called for.

I also don't know what probability of permissibility is needed for someone to permissibly engage in a killing.

I don't work in military ethics. So I really know very little about the above. It's just an ethical reflection occasioned by a dream...

Monday, November 9, 2015

Four plausibilistic arguments for redirecting the trolley

Start with the standard scenario: trolley speeding towards five innocent strangers, and you can flip a lever to redirect it to a side-track with only one innocent stranger. Here are four arguments each making it plausible that redirecting the trolley is right. [Unfortunately, as you can see from the comments, the first three arguments, at least, are very weak. - ARP]

1. Back and forth: Suppose there is just enough time to flip the lever to redirect and then flip it back--but no more time than that. Assuming one shouldn't redirect, there is nothing wrong with flipping the lever if one has a firm plan to flip it back immediately. After all, nobody is harmed by such a there-and-back movement. The action may seem flippant (pun not intended--I just can't think of a better term), but we could suppose that there is good reason for it (maybe it cures a terrible pain in your arm). But now suppose that you're half-way through this action. You've flipped the lever. The trolley is now speeding towards the one innocent. At this point it is clearly wrong for you to flip it back: everyone agrees that a trolley speeding towards one innocent stranger can't be redirected towards five. This seems paradoxical: the compound action would be permissible, but you'd be obligated to stop half way through. If redirecting the trolley is the right thing to do, we can block the paradox by saying that it's wrong to flip it there and back, because it is your duty to flip it there.

2. Undoing. If you can undo a wrong action, getting everything back to the status quo ante, you probably should. So if it's wrong to flip the lever, then if you've flipped the lever, you probably should flip it back, to undo the action. But flipping it back is uncontroversially wrong. So, probably, flipping the lever isn't wrong.

3. Advice and prevention. Typically, it's permissible to dissuade people who are resolved on a wrong action. But if someone is set on flipping the lever, it's wrong to dissuade her. For once she is resolved on flipping the lever, it is the one person on the side-track who is set to die, and so dissuading the person from flipping the lever redirects death onto the five again. But it's clearly wrong to redirect death onto the five. So, probably, flipping the lever isn't wrong. Similarly, typically one should prevent wrongdoing. But to prevent the flipping of the lever is to redirect the danger onto the five, and that's wrong.

4. Advice and prevention (reversed). The trolley is speeding towards the side-track with one person, and you see someone about to redirect the trolley onto the main track with five persons. Clearly you should try to talk the person out of it. But talking her out of it redirects death from the five innocents to the one. Hence it's right to engage in such redirection. Similarly, it's clear that if you can physically prevent the person from redirecting the trolley onto the main track, you should. But that's redirection of danger from five to one.

Trolleys, breathing, killing and letting die

Start with the standard trolley scenario: trolley is heading towards five innocent people but you can redirect it towards one. Suppose you think that it is wrong to redirect. Now add to the case the following: You're restrained in the control booth, and the button that redirects the trolley is very sensitive, so if you breathe a single breath over the next 20 seconds, the trolley will be redirected towards the one person.

To breathe or not to breathe, that is the question. If you breathe, you redirect. Suppose you hold your breath, thinking that redirecting is wrong. Why are you holding your breath, then? To keep the trolley away from the one person. But by holding your breath, you're also keeping the trolley on course towards the five. If in the original case it was wrong to redirect the trolley towards the one, why isn't it wrong to hold your breath so as to keep the trolley on course towards the five? So perhaps you need to breathe. But if you breathe, your breathing redirects the trolley, and you thought that was wrong.

I suppose the intuition behind not redirecting in the original case is a killing vs. letting die intuition: By redirecting, you kill the one. By not redirecting, you let the five die, but you don't kill them. However, when the redirection is controlled by the wonky button, things perhaps change. For perhaps holding one's breath is a positive action, and not just a refraining. So in the wonky button version, holding one's breath is killing, while breathing is letting die. So perhaps the person who thinks it's wrong to redirect in the original case can consistently say that in the breath case, it's obligatory to breathe and redirect.

But things aren't so simple. It's true that normally breathing is automatic, and that it is the holding of one's breath rather than the breathing that is a positive action. But if lives hung on it, you'd no doubt become extremely conscious of your breathing. So conscious, I suspect, that every breath would be a positive decision. So to breathe would then be a positive action. And so if redirecting in the original case is wrong, it's wrong to breathe in this case. Yet holding one's breath is generally a decision, too, a positive action. So now it's looking like in the breath-activated case, whatever happens, you do a positive action, and so you kill in both cases. It's better to kill one rather than killing five, so you should breathe.

But this approach makes what is right and wrong depend too much on your habits. Suppose that you have been trained for rescue operations by a utilitarian organization, so that it became second nature to you to redirect trolleys towards the smaller number of people. But now you've come to realize that utilitarianism is false, and you haven't been convinced by the Double Effect arguments for redirecting trolleys. Still, your instincts remain. You see the trolley, and you have an instinct to redirect. You would have to stop yourself from it. But stopping yourself is a positive action, just as holding your breath is. So by stopping yourself, you'd be killing the five. And by letting yourself go, you'd be killing the one. So by the above reasoning, you should let yourself go. Yet, surely, whether you should redirect or not doesn't depend on which action is more ingrained in you.

Where is this heading? Well, I think it's a roundabout reductio ad absurdum of the idea that you shouldn't redirect. The view that you should redirect is much more stable under such tweaks. If, on the other hand, you say in the original case that you should redirect, then you can say the same thing about all the other cases.

I think the above line of thought should make one suspicious of other cases where people want to employ the distinction between killing and letting-die. (Perhaps instead one should employ Double Effect or the distinction between ordinary and extraordinary means of sustenance.)

Friday, November 6, 2015

Pacifism and trolleys

In the standard trolley case, a runaway trolley is heading towards five innocent people, but can be redirected onto a side-track where there is only one innocent person. I will suppose that the redirection is permissible. This is hard to deny. If redirection here is impermissible, it's impermissible to mass-manufacture vaccines, since mass vaccinations redirect death from a larger number of potentially sick people to a handful of people who die of vaccine-related complications. But vaccinations are good, so redirection is permissible.

I will now suggest that it is difficult to be a pacifist if one agrees with what I just said.

Imagine a variant where the one person on the side-track isn't innocent at all. Indeed, she is the person who set the trolley in motion against the five innocents, and now she's sitting on the side-track, hoping that you'll be unwilling to get your hands dirty by redirecting the trolley at her. Surely the fact that she's a malefactor doesn't make it wrong to direct the trolley at the side-track she's on. So it is permissible to protect innocents by activity that is lethal to malefactors.

This conclusion should make a pacifist already a bit uncomfortable, but perhaps a pacifist can say that it is wrong to protect innocents by violence that is lethal to malefactors. I don't think this can be sustained. For protecting innocents by non-lethal violence is surely permissible. It would be absurd to say a woman can't pepper-spray a rapist. But now modify the trolley case a little more. The malefactor is holding a remote control for the track switch, and will not give it to you unless you violently extract it from her grasp. You also realize that when you violently extract the remote control from the malefactor, in the process of extracting it the button that switches the tracks will be pressed. Thus your violent extraction of the remote will redirect the trolley at the malefactor. Yet surely if it is permissible to do violence to the malefactor and it is permissible to redirect the trolley, it is permissible to redirect the trolley by violence done to the malefactor. But if you do that, you will do a violent action that is lethal to the malefactor.

So it is permissible to protect innocents by violence that is lethal to malefactors. Now, perhaps, it is contended that in the last trolley case, the death of the malefactor is incidental to the violence. But the same is true when one justifies lethal violence in self-defense by means of the Principle of Double Effect. For instance, one can hit an attacker with a club intending to stop the malefactor, with the malefactor's death being an unintended side-effect.

This means that if it is permissible to redirect the trolley, some lethal violence is permissible. What is left untouched, however, by this argument is a pacifism that says that it is always impermissible to intend a malefactor's death. I disagree with that pacifism, too, but this argument doesn't affect it.

Sunday, September 13, 2015

Educational institutions and football

In the light of the brain damage resulting from football, it is a serious question whether it's morally permissible to participate in or support the sport at all. Still, one can make a case that there are human excellences that this sport provides a particularly good opportunity for (I am grateful to Dan Johnson for this point), and the brain damage is an unintended side-effect, so there might be a defense of the sport in general on the basis of the Principle of Double Effect.

But I think it is particularly difficult to defend educational institutions supporting this sport among students. For the defining task of an educational institution is to develop the minds of the students. But brain damage harms the individual precisely in respect of mental functioning. And it is much harder for an organization to justify an activity that has among its side-effects serious harm to the very goods whose pursuit defines the organization.

Friday, November 28, 2014

Compensation

It seems to be widely accepted in the philosophy of religion community that supposing God's merely compensating a person for evils suffered will not make for a good theodicy. Rather, the evils must be defeated in a way that goes above and beyond compensation, that draws a defeating good out of the evil. I think that in many cases this conviction might well be mistaken. Compensation may well be good enough.

Start with this story:

You are a financially well-off Olympic archer. You are about to take your last shot at the Olympics, indeed at the last Olympics you will ever compete at (you have promised your spouse to hang up your bow after these Olympics), and whether you get a gold medal depends on this shot. Getting a gold medal means a lot to you. At the same time, you see in the stands a vicious dog attacking me. You could turn your bow on the dog, and painlessly kill it at this range (nor would it be wrong to do so; the dog will be put down anyway after the vicious attack). There is no other way for the attack to be stopped. But then you would lose your last chance for a gold medal: the dog is not the official target. You also know what kind of dog it is and how terrified of dogs I am, so that you know that if that were the whole story, the sufferings that I would endure through being bitten are so great that your gold medal wouldn't be worth it: you would have a duty to shoot the dog. But you also know me well enough to know for sure that I would survive the attack without permanent damage, and that you have a sum of money that you could give me such that both by my own lights and objectively I'd be much better off bitten and compensated than neither bitten nor compensated. You resolve to compensate me financially, take your shot at the target, win your gold medal, write me a large check, and we are both much better off for this.
This is a clear case of compensation for evil rather than defeat of evil. At the same time, your failure to stop the attack is justified. I chose this case so that my own biases would go against the justification. I am in fact terrified of dogs. Being bitten would be a truly horrific experience. Nonetheless, if you gave me a sum of money sufficient to pay off the rest of our mortgage and there was no permanent damage, I think it would be well worth being bitten. (I don't think I would go for it for half of the mortgage!)

Notice what compensation does here. The good you achieve by allowing me to be bitten—the gold medal—is insufficient to justify your permitting me to be bitten. (If this isn't true, we can tweak the case so it is.) But when you add your resolve to compensate me, and your knowledge that the compensation would be sufficient both objectively and by my own lights, you come to be justified.

An important feature of this story is that the good you achieve—the gold medal—is one to which my sufferings are not a means, and indeed you do not intend my sufferings either as an end or as a means. (Here one remembers Double Effect, of course.) But there is an end that you are pursuing, and your pursuit of this end precludes your preventing the evil.

There are many real-world cases that might well have this structure. Consider Rowe's fawn dying painfully in a forest fire. God could miraculously prevent this, but doesn't, because he wants the laws of nature to have as few exceptions as possible. Now, it's good, I suppose, that the laws of nature have as few exceptions as possible. But this good does not seem to be sufficiently great to justify several days of the fawn suffering. However, if God resolves to compensate the fawn in an afterlife, to a degree such that both objectively and by the fawn's lights (to the extent that the fawn is capable of making the relevant judgments) the fawn will be much better off for having both the suffering and the compensation, then we will have the structure of the archery-dogbite case.

I do not know that the compensation story will work for every case. One worry is that if you foreknew that the dog would bite and were responsible for the dog's presence in the stands in the light of this foresight, then the justification in the compensation story is less clear, even if you don't intend the dog's biting. So there will be relevant questions about determinism, Molinism and the like in the theological cases.

Here's another interesting thing, by the way, about the fawn case. The compensation would not have to come in an afterlife. Suppose:

You are a super-rich archer. You know that sometimes dogs show up and bite people in one area of the stadium. So ahead of time you write sufficiently large checks to all these people such that both objectively and by their lights they would be much better off for having the check and the dogbite than for having neither. And then when it's time to compete, you don't even need to think about the dogs.
This seems quite justifiable as well. So if God sufficiently compensates all deer that are in danger of forest fires ahead of time, all is well, too.

Note, though, that in general pre-compensation works less well for human sufferers than for non-human sufferers. For humans see their lives as a narrative, and the narrative structure and order of events matters a lot as a result. So in the case of a human it is particularly tragic if existence ends in a particularly bad way, no matter how good the earlier parts were. So compensation for evils that happen around the time of death still likely requires an afterlife in the case of humans. (And even in the case of non-human animals, it may be better for God to compensate in an afterlife, since it would require fewer miracles in this world.)

Friday, June 27, 2014

Intentions, tryings and Double Effect

Bob buys a lottery ticket, hoping to win but knowing that it's exceedingly unlikely.

Suppose Bob wins. We can't say that his winning is an unintended side-effect in the sense involved in the Principle of Double Effect. But it is also odd to say that he intended to win, given that he knows how exceedingly unlikely it is. The phrase "hoping to win" is much more apt than "intending to win." Likewise, it doesn't seem right to say that winning was a part of Bob's plan. He'd have to be crazy to plan on winning. Nonetheless, winning is something he aimed at, and his action would have been a failure—an expected failure—if he didn't win.

I intend to post this post, and posting this post is a part of my action plan. Bob's relationship to winning only differs quantitatively from my relationship to posting this post. In both cases, there is probability of success somewhere between 0 and 1. In my case, it's close to 1. In Bob's case, it's close to 0. Neither of us can disclaim responsibility upon success. Both of us have our hearts set upon the goal, and our action is defective if it doesn't reach that goal. The difference is that Bob expects it to be defective while I expect mine to be successful (at least in respect of posting—whether it will be successful in respect of philosophical progress is a different question).

There is yet a third kind of case, that of "stretch goals". Suppose Sally buys a lottery ticket in order to support the government activities that the lottery funds, while at the same time still hoping to win (perhaps she plans to donate any winnings to the state, and thereby support the same government activities even more). If Sally wins, again that's not an unintended side-effect of the Double Effect sort. Winning is indeed something she aims at, something she has her heart set on. But it's a stretch goal: if she doesn't accomplish it, her action need not be a failure in any way. It is even more awkward to say that Sally intends to win, or that winning is part of her action plan, than it is to say these things about Bob.

Both Bob and Sally are trying to win, but neither is intending to win. The difference between them is that if Bob doesn't win, his action fails, but if Sally doesn't win, her action doesn't need to fail in any way.

All this means that the traditional formulation of the Principle of Double Effect in terms of effects that are intended and effects that are not is incomplete.

I think we do a bit better, then, to formulate Double Effect not in terms of what one is intending, but in terms of what one is trying to do. The classical formulation tells us something like this:

  1. An action expected to have an evil effect can be permissible when and only when one is intending a proportionate good and one does not intend the evil effect (either as a means or as an end).
Of course, when the action is expected to have an effect, the distinction between what one intends and what one tries for disappears. But we should extend the principle:
  2. An action that has a chance of an evil effect can be permissible when and only when one is trying for a proportionate good and one is not trying for the evil effect (either as a means or as an end).

A bonus of (2) is that while some have claimed that merely instrumental goals are not intended, thereby destroying the distinction that Double Effect is about, it is obvious that an agent is trying to make these goals happen. Whatever we say about what the terror bomber is intending to do, it's clear that he's trying to kill innocent people.

I also think that talking in terms of trying instead of intending has the benefit of further de-psychologizing the notion and avoiding the inner-speech objection to Double Effect (which says that one ends up justifying actions simply by thinking about them differently). It is even more obvious that the moral worth of an action depends on what one was trying to do than that it depends on what one was intending.

Now my own preferred reformulation of Double Effect is even more radical than (2): it replaces intention with accomplishment. I think (2) is a step along the path to that reformulation, since trying is more intimately linked to accomplishments than intending is (pace what I say about intention in that paper). If something is an accomplishment of mine, I tried to bring it about under some description. But I needn't have intended it under any description, as the cases of Bob and Sally show.

Monday, October 21, 2013

Utilitarianism and trivializing the value of life

Consider these scenarios:

  • Jim killed ten people to save ten people and a squirrel.
  • Sam killed ten people to save ten people and receive a yummy and healthy cookie that would have otherwise gone to waste.
  • Frederica killed ten people to save ten people and to have some sadistic fun.
If utilitarianism is true, then in an appropriate setting where all other things are equal and no option produces greater utility, the actions of Jim, Sam and Frederica are not only permissible but are duties in their circumstances. But clearly these actions are all wrong.

I find these counterexamples against utilitarianism particularly compelling. But I also think they tell us something about deontological theories. I think a deontological theory, in order not to paralyze us, will have to include some version of the Principle of Double Effect. But consider these cases (I am not sure I can come up with a good parallel to the Frederica case):

  • John saved ten people and a squirrel by a method that had the death of ten other people as a side-effect.
  • Sally saved ten people and received a yummy and healthy cookie that would have otherwise gone to waste by a method that had the death of ten other people as a side-effect.
These seem wrong. Not quite as wrong as Jim's, Sam's and Frederica's actions, but still wrong. These actions trivialize the non-fungible loss of human life. The Principle of Double Effect typically has a proportionality constraint: the bad effects must not be out of proportion to the good. It is widely accepted among Double Effect theorists that this constraint should not be read in a utilitarian way, and the above cases show this. Ten people dying is out of proportion to saving ten people and a squirrel. (What about a hundred to save a hundred and one? Tough question!)