On the television science fiction drama "Extant", an android is asked to respond to the following scenario, which I have fleshed out a little from the script of the show. A known terrorist, responsible for well-documented acts of mass murder numbering in the thousands, has been located hiding in a schoolhouse in a remote location. A drone strike will kill the terrorist, preventing possible future acts of mass murder. The strike will also kill a dozen innocent schoolchildren. When asked if she would execute the drone strike, the android, without hesitation, says "yes".
I have heard others, who are real humans, claim they would make similar choices: if the possibility of stopping future evil is sufficiently large, they would harm or kill a few innocents for that objective.
The problem is that this is not only a morally questionable action; it is illogical as well. Hence a perfectly logical android would not have said "yes". Of course, the fictional scenario does not claim that the android is perfectly logical -- in fact (in fiction?) the android is said to have been provided with a human sense of morals.
It is illogical because it balances the certainty of harming the innocent against a range of alternative futures, each of which may or may not involve further acts of evil, and may or may not result in greater evil.
Consider two possible futures:
1. The dozen children die, the terrorist dies, and the world suffers an evil of level 12.
2. The children live, the terrorist lives, and the terrorist commits N murders in the future -- an evil of level N.
If N is greater than 12, was the murder of the schoolchildren justified?
The value of N depends on many factors. Perhaps the terrorist is killed or captured by other means. Perhaps the terrorist decides he has gone too far, or the act of mercy causes a change in his personality. Perhaps newly implemented security measures will frustrate the terrorist in the future. Or perhaps the terrorist kills hundreds.
The point is that you cannot balance a certainty of 12 deaths now against an uncertain number of deaths in the future.
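To make the arithmetic concrete, here is a minimal sketch (in Python; every probability below is invented purely for illustration) of the naive expected-value comparison the android would have to perform. It shows that the "answer" flips with the assumed distribution over N -- a distribution no one can actually know.

# A purely illustrative sketch: the probabilities below are invented.
# Under naive expected-value reasoning, the strike is "justified"
# whenever expected future deaths exceed the certain deaths now.

CERTAIN_DEATHS_NOW = 12  # the dozen schoolchildren, a certainty

def expected_deaths(futures: dict[int, float]) -> float:
    """Expected future deaths, given a map {N: probability of N}."""
    assert abs(sum(futures.values()) - 1.0) < 1e-9
    return sum(n * p for n, p in futures.items())

# Two equally unverifiable guesses about the future:
#   N = 0   -> terrorist is captured, reforms, or is frustrated
#   N = 12  -> terrorist matches the strike's toll
#   N = 500 -> terrorist kills hundreds
pessimistic = {0: 0.5, 12: 0.3, 500: 0.2}    # expected ~103.6 deaths
optimistic  = {0: 0.9, 12: 0.08, 500: 0.02}  # expected ~11.0 deaths

for name, futures in [("pessimistic", pessimistic), ("optimistic", optimistic)]:
    e = expected_deaths(futures)
    verdict = "strike" if e > CERTAIN_DEATHS_NOW else "hold fire"
    print(f"{name}: expected future deaths = {e:.1f} -> {verdict}")

Two invented distributions, two opposite verdicts. The calculation does not resolve the uncertainty; it merely launders whatever assumption was fed into it.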
The only rational course is to take the option of least harm in the given moment -- the future is literally "a matter for another day."
There are other scenarios, usually rather strained ones, in which the consequences are more certain and immediate. For example, the classic "push one adult in front of a train so that you can save a baby". That is a completely different class of choice from the terrorist scenario above. Most people will not push the adult, choosing instead to remain passive and accept that fate has killed the baby. On the other hand, if the choice is "I will jump in front of a train to save a baby", most say they hope they would have the courage to do so.
These immediate-consequence scenarios do not balance hypothetical futures against a certain present.
Also consider that in the terrorist scenario above, the choice is greatly influenced by the value placed on the schoolchildren. An Air Force general in a drone control center half a world away may consider one schoolchild to be worth one future life. The parents of that child, however, may well feel that their child is worth a thousand future lives.
So we do not kill the innocent to kill the guilty. Neither do we imprison the innocent so that the guilty do not escape -- that is at the core of our beliefs.
I know I will get pushback on this stance, especially from a few militaristic personalities. They will inflate their chests and announce ponderously that someone has to decide "who lives and who dies", and that it is their duty to do so. Of course, it is usually someone else who has to die.
By the way, in the fictional show, a military officer did choose to launch a drone strike, killing everyone (citizens of his own country) who was at a particular location -- everyone, that is, except the intended target, who had left before the drone struck. In the officer's mind, it was still a good decision. Thank God that has never happened in reality. Has it?