Friday, March 26, 2010

The Carmack Vector Addition Theory of Ethics: Advancing the Ball

Through TAing for Bioethics I've noticed a lot of students and article authors struggle with using ethical frameworks such as utilitarianism, deontology, and virtue ethics as decision-making tools.  Example 1: they might look at an instance of a father considering staying at work a couple more hours and say that deontology fails because although the father has a duty to provide for his family, what about spending time at home developing relationships with them?  Example 2: utilitarianism fails in an analysis of the morality of animal experimentation because it's impossible to compare the suffering of a mouse to a person because they have different consciousnesses.  Example 3: utilitarianism fails because Steve may value exercise much more than John, so a decision resulting in greater opportunities for exercise does not bestow equal benefit on the two and thus a teleological calculation is valueless.  Below I offer a resolution to each of these complaints/contentions.

First, example three.  Use preference utilitarianism!   "'good' is described as the satisfaction of each person's individual preferences or desires, and a right action is that which leads to this satisfaction. Since what is good depends solely on individual preferences, there can be nothing that is in itself good or bad except for the resulting state of mind. Preference utilitarianism therefore can be distinguished by its acknowledgment that every person's experience of satisfaction will be unique." Utils are a sufficient common denominator to use as a measuring unit.

Second, example two.  Bracket the uncertainty!  Or, to use the Latin phrase, capture the uncertainty sub modo, "within limits."  Let's say you're trying to make the difficult calculation of the net benefit of inducing a stroke in a chimpanzee to test a stroke medication.  The primary consequences that need a benefit/cost valuation are the chimpanzee's suffering and the medical advances likely to result from the research.  Sure, it's difficult to quantify the chimpanzee's suffering.  However, it is reasonable to define a modest range within which the magnitude of the suffering likely falls.  Let's say that suffering = (the number of pain neurons firing) * (the duration of the firing).  Estimate the likely quantity of neurons, or benchmark the suffering against a comparable mammal's (say a human or a dog) on some assumed scale, say 1-100 where 100 is maximum torture plus killing.  Then decide the range for comparing that level of suffering to human suffering.  E.g. 100 neurons firing for 10 minutes = 1,000 suffermeters.  Will the human:chimp ratio be 2:1, 1:1, 1:10, or 1:1,000,000 (i.e. 1 human suffermeter = 1,000,000 chimpanzee suffermeters)?  Don't know?  Fine, but the likely value can still be bracketed: 1 chimpanzee suffermeter is likely worth less than 2 human suffermeters (that'd be a 2:1::human:chimp ratio for you mathematicians out there) and probably more than one duodecillionth of a human suffermeter (1:1x10^39).  Now at least you've bracketed the uncertainty and can proceed with some fuzzy utilitarian calculations rather than getting stuck [again for the mathematicians: the next step is multiplying the human:chimp ratio by the quantity of suffering the chimp experiences, then subtracting that quantity from (pleasure/good of medical advancements) * (likelihood of those advancements)].  Later, scholars may find substance on which to ground a narrowing of the range.
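The bracketing move can be sketched in a few lines of Python. The suffermeter quantities, the ratio bounds, and the benefit/likelihood figures below are all illustrative assumptions, not settled values:

```python
def bracketed_net_benefit(chimp_suffering, ratio_low, ratio_high,
                          medical_benefit, likelihood):
    """Return (worst_case, best_case) net benefit in human pleasure units.

    chimp_suffering: chimp suffermeters (pain neurons firing * minutes)
    ratio_low/ratio_high: bracketed worth of one chimp suffermeter,
        measured in human suffermeters
    medical_benefit: pleasure units of the hoped-for medical advances
    likelihood: probability those advances actually materialize
    """
    expected_benefit = medical_benefit * likelihood
    worst = expected_benefit - chimp_suffering * ratio_high  # chimp weighted most
    best = expected_benefit - chimp_suffering * ratio_low    # chimp weighted least
    return worst, best

# 100 pain neurons firing for 10 minutes = 1,000 chimp suffermeters;
# the worth of a chimp suffermeter is bracketed between 1e-39 and 2
# human suffermeters (assumed numbers throughout).
worst, best = bracketed_net_benefit(
    chimp_suffering=1000, ratio_low=1e-39, ratio_high=2.0,
    medical_benefit=5000, likelihood=0.5)
print(worst, best)  # 500.0 2500.0
```

Even with a 39-order-of-magnitude bracket on the ratio, the net stays positive under these assumed inputs, which is exactly the kind of conclusion the bracketing is meant to license instead of getting stuck.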

Third, example 1.  Use vector addition!  This method could apply to utilitarianism (add up all the benefit and cost vectors), deontology (add up all the duty vectors) and virtue ethics (add up all the virtue/eudaimonia-promoting vectors).

Each vector has a magnitude and a direction.  For utilitarianism, the direction is easy: qualify a specific consequence as either a benefit or a cost.  If this seems difficult, further subdivide the consequence into components that are either only benefits or only costs.  Next, valuate the magnitude of the consequence vector (use the bracketing method above to put parameters on a difficult-to-quantify magnitude).  Example: you want to tell your boss about a consistent error the boss makes.  One cost is the risk of getting fired.  Multiply the likelihood of that outcome by the severity of getting fired (say, 50 units of psychological pain plus 1,000 units of financial pain resulting from lost salary).  If you're not sure about the pleasure/pain units, say between 5 and 500 units of psychological pain and between 10 and 1,000,000 units of financial pain and move on with the calculations (such as the benefit in pleasure units of correcting an egregious, repeated error).  After plotting and adding all the vectors of the consequence bundle, you get a net vector for the alternative.  Repeat this process for each alternative, then select the alternative with the greatest net benefit (or least net cost: when life hands you several options that all suck, choose the one that sucks least).
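Here is one way to sketch the consequence-bundle addition in Python, carrying each consequence as a bracketed expected value per the method above. The probabilities and pain/pleasure units are invented for illustration:

```python
def expected_range(probability, low, high):
    """Bracketed expected value of one consequence (costs are negative)."""
    return (probability * low, probability * high)

def add_brackets(brackets):
    """Vector-add the consequence bundle: sum the pessimistic ends and
    the optimistic ends separately to get a net bracket."""
    return (sum(lo for lo, _ in brackets),
            sum(hi for _, hi in brackets))

# Alternative: tell the boss about the error (all units assumed).
fired_psych = expected_range(0.25, -500, -5)         # psychological pain if fired
fired_money = expected_range(0.25, -1_000_000, -10)  # lost salary if fired
error_fixed = expected_range(0.75, 200, 800)         # benefit of the fix landing

net_low, net_high = add_brackets([fired_psych, fired_money, error_fixed])
print(net_low, net_high)  # -249975.0 596.25
```

Repeat for each alternative and compare the net brackets; when the brackets overlap, that is the signal to narrow the input ranges before deciding.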

For deontology, the direction of the duty vector will be limited to two or three alternatives unless you use linear algebra, which as I understand it allows for n alternatives (one dimension per alternative).  Let's use a three-alternative ethical dilemma: say the Gestapo shows up and asks if you're harboring Jews.  You are, and your three primary alternatives are 1) lie, 2) kill the Gestapo agent, or 3) tell the truth.  Say those are your three axes/dimensions.  Then plot all the duty vectors (duties to sustain life, tell the truth, obey the law, resist evil, inspire others to resist evil, refrain from killing, etc.) in the 3D space.  The magnitude of each vector comes from the weight of the duty (on a priority sequence, e.g. to obey is better than to sacrifice).  For additional help in gauging the relative weight of a duty, draw upon trade-off techniques such as those articulated by Hammond, Keeney, and Raiffa in Smart Choices or the Analytic Hierarchy Process.  The direction comes from the degree to which that duty advocates each axis's alternative (e.g. the refrain-from-killing vector would be perpendicular to the kill-the-agent axis and probably at a 45-degree angle between the tell-the-truth and lying axes).  Repeat for all duties, then add up all the vectors: the resultant vector is the ethical solution (and, conveniently, has the appearance of a guiding arrow).
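A minimal sketch of the three-axis construction, assuming made-up duty weights and directions (the scenario above supplies no numbers). The refrain-from-killing vector gets no component on the kill axis and equal components on the other two, matching the perpendicular/45-degree description:

```python
# Axes: (lie, kill the agent, tell the truth).  Weights and direction
# components are illustrative assumptions.
duties = {
    # duty: (weight, direction components along the three axes)
    "sustain life":         (10, (0.7, 0.3, 0.0)),
    "tell the truth":       (4,  (0.0, 0.0, 1.0)),
    "obey the law":         (2,  (0.0, 0.0, 1.0)),
    "resist evil":          (8,  (0.6, 0.8, 0.0)),
    "refrain from killing": (6,  (0.7, 0.0, 0.7)),  # no kill-axis component
}

def resultant(duties):
    """Scale each duty's direction by its weight and vector-add them."""
    total = [0.0, 0.0, 0.0]
    for weight, direction in duties.values():
        for i, component in enumerate(direction):
            total[i] += weight * component
    return total

arrow = resultant(duties)
axes = ["lie", "kill the agent", "tell the truth"]
print(axes[arrow.index(max(arrow))])  # the guiding arrow's dominant axis
```

With these assumed weights the resultant arrow points toward lying, which matches most readers' intuition about the case; changing the weights is exactly where the ethical argument would happen.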


Another deontology illustration: animal experimentation generally.  Let's say I have a duty to maximize human health of 60 import units, and a duty to respect animals of 40 import units.  Dissecting and inducing illness is not very respectful, but doing so will very likely help me fulfill my obligation to maximize human health.  I acknowledge both duties, but am not stuck.  Using my method above, I conclude that I should breach the duty of respecting animals up to 20 units, since that's the net duty vector (60 - 40).  If I could mitigate my disrespect with minimal reduction of advancing human health, I should do so according to the ratio I think exists between the two duties (e.g. 3 units of disrespecting animals = 1 unit of breaching the duty to maximize human health).
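The arithmetic of that paragraph, spelled out; the 60/40 weights and the 3:1 mitigation ratio come from the example, and the units sacrificed are an assumed figure:

```python
duty_health = 60    # import units: maximize human health
duty_respect = 40   # import units: respect animals

# Opposed duties on one axis: the net duty vector licenses breaching
# the weaker duty by up to the difference.
net_breach_allowance = duty_health - duty_respect

# Mitigation: 3 units of animal disrespect trade against 1 unit of
# breached human-health duty (the assumed exchange ratio).
DISRESPECT_PER_HEALTH_UNIT = 3
health_units_sacrificed = 2
disrespect_avoided = DISRESPECT_PER_HEALTH_UNIT * health_units_sacrificed

print(net_breach_allowance, disrespect_avoided)  # 20 6
```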

In some situations, mitigating would not be feasible.  For instance, let's say you're a Hmong mother crossing the river into Thailand to escape genocidists on the Laotian shore, and your baby starts crying.  The slightest noise may result in detection, and detection will certainly result in death for you and your companions.  Your only options are to drown the baby or not drown the baby.  You have a weighty duty to preserve your child's life; you also have a weighty duty to preserve your own and your companions' lives.  Let's say that the vector addition results in a vector with a very small magnitude in the direction of preserving your own and your companions' lives.  Ideally you would kill the baby just a little bit, but obviously that's not possible.  The ethical decision with these givens is to drown the baby.  In other circumstances, say the tension between the duty to be at home developing relationships with family members and the duty to provide, it may be feasible to "kill the baby just a little bit."  Let's say the resultant vector in the father's decision also has a small magnitude and is in the direction of spending time at home.  The father should then work a few hours less and come home during that time, rather than quitting his job.


For virtue ethics, you have one axis on which to plot your virtues.  Each vector's direction is along the axis between infinite positive eudaimonia and infinite negative eudaimonia.  Each vector's magnitude is the extent to which the given alternative promotes achieving eudaimonia (think of arete (excellence or virtue), phronesis (practical or moral wisdom), and eudaimonia (flourishing) to help arrive at a number, and bracket the uncertainty if such discernment is fuzzy).  If you like, break the alternative down into a set of virtues (say, 6-12 of them, such as wisdom, prudence, justice, fortitude, courage, liberality, magnificence, magnanimity, and temperance), determine how much each alternative promotes each virtue, then aggregate the magnitudes of each virtue vector into a resultant vector for that alternative.  After repeating for all alternatives, select the alternative of greatest positive (or least negative) magnitude.
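The per-virtue breakdown can be sketched the same way: score how much each alternative promotes each virtue along the single eudaimonia axis, sum, and pick the maximum. The virtues chosen and the scores below are illustrative assumptions:

```python
VIRTUES = ["wisdom", "justice", "courage", "temperance"]

def resultant_eudaimonia(scores):
    """Vector addition on one axis is just a signed sum."""
    return sum(scores[v] for v in VIRTUES)

# Eudaimonia score per virtue (positive promotes flourishing,
# negative hinders it); numbers are assumed for illustration.
alternatives = {
    "stay late at work": {"wisdom": 1, "justice": -2, "courage": 0, "temperance": -3},
    "go home on time":   {"wisdom": 2, "justice": 3, "courage": 1, "temperance": 2},
}

best = max(alternatives, key=lambda a: resultant_eudaimonia(alternatives[a]))
print(best)  # go home on time
```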


I have now completed my proposed resolutions to the three sample complaints detailed above.

Though these three processes (using utils/preference utilitarianism, bracketing uncertainty, and using vector addition) may seem abstract and prone to error, their quantitative requirements force ethicists to expose their levels of certainty and take a stand rather than quitting early with the lame excuse that the valuations are too abstract to be meaningful.  This means authors will make more claims that can be criticized and hopefully improved: the more falsifiable the claim, the more correctable the theory.  The resulting debate is therefore more likely than the status quo to arrive at increasingly precise ethical answers, at only a minimal risk of sacrificing meaningfulness.  The current reluctance to quantify produces trivial advances in ethics articles, at the opportunity cost of substantive progress toward meaningful resolutions of difficult ethical questions.


This approach is of course conditioned on a presumption that most ethical questions are resolvable [i.e. ethical problems look like mathematical formulas: what to do in an ethical situation = f(variables A, B, C, D, ...)].  Articles in ethics journals should quantify those variables, weigh them relative to each other, and expound the relationships between them in a way that provides increasingly precise, meaningful answers.  Example: as a physician faced with performing a sterilization requested by a patient, I should do y where y = f(personhood of patient, level of consciousness of patient, availability of alternatives, degree of informed consent, maleficence/beneficence of act, fairness of procedure, etc.).  These factors could be weighed against each other much as trade-offs are in decision analysis.  Example: should I move my office closer to where I live?  Assume the move would be x units less convenient for my clients as a whole, cost y more dollars per month, and be z units more convenient for me by way of less travel time.  How much y am I willing to pay for z?  How much x for y?  z for x?  These relative judgments are as useful in decision making as I claim they can be in ethical analyses.
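The office-move trade-offs can be made explicit by converting x, y, and z into a common unit (here dollars per month); the two conversion rates are assumed for illustration:

```python
# Assumed exchange rates between the three quantities:
DOLLARS_PER_Z = 50.0  # dollars/month one unit of my own convenience is worth
DOLLARS_PER_X = 80.0  # dollars/month that offset one unit of client inconvenience

def net_value_of_move(x_client_inconvenience, y_extra_dollars, z_my_convenience):
    """Convert everything to dollars per month and net it out."""
    return (z_my_convenience * DOLLARS_PER_Z
            - x_client_inconvenience * DOLLARS_PER_X
            - y_extra_dollars)

# Assumed move: 3 units less convenient for clients, $200/month more,
# 10 units more convenient for me.
print(net_value_of_move(3, 200, 10))  # 60.0, so the move is (barely) worth it
```

The interesting ethical work is in setting the conversion rates, just as it is in setting the suffermeter ratios above.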

Another rebuttal to my proposed approaches is that vector addition is somewhat difficult.  I would respond that calculus and statistics are also difficult and require some training.  However, they are immensely valuable tools for research and scholarship.  I make the same value-adding claim for a vector addition approach to ethical reasoning.


The three proposed approaches above help advance the calculations prerequisite to rigorous ethical decisions by overcoming common obstacles to the progress of those calculations. 


