The True Moral Theory: Comparison and Analysis

Alright, this is going to be very long. Like, extremely long. It’s about 9500 words worth of long. I’m going to post it all at once simply as a useful reference to all of those philosophers out there that use my blog as a resource for their own research and writing. If I could have made it shorter, I would have. It’s called “The True Moral Theory: Comparison and Analysis”. If there are any questions, or if anyone ever needs any clarification, shoot me a comment and I’ll reply as soon as I can.

I’ll also be posting an actual analysis of Franz Kafka’s “The Sudden Walk” because so many of my readers seem to really need it. I’ll get that up ASAP. Who knows, maybe I’ll have some decent prose written by then too.

For now, enjoy the following killer music:

“Future Primitive” by Papercuts; You Can Have What You Want.
“Living Behind the Sun” by Devics; My Beautiful Sinking Ship.
“Wooden Arms” by Patrick Watson; Wooden Arms.
“Michigan” by The Milk Carton Kids; Prologue.
“The Base” by Paul Banks; Banks.
“Burn” by Ray LaMontagne; Trouble.
“Photobooth” by Death Cab for Cutie; Forbidden Love EP.

Oh, and read this blog post. It’s one of my favorites. The Sound of Bees In a Tree.

Yours in Contemplation,

Kierkegaard

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The True Moral Theory: Comparison and Analysis

INTRODUCTION

One of the most important questions in philosophy is one that men and women have been trying to answer for millennia. The greatest minds have spent innumerable hours attempting to provide a definitive answer to, "How should we humans live?" It is a question of morality: a question of right and wrong, and a question of what constitutes good or bad character. Philosophers have presented myriad theories, and have argued quite convincingly on all sides of the issue. The problem with there being so many strong and intuitively appealing theories is that it further obscures a clear answer, prolonging the doubts that many pensive minds have about what it means to be a moral person. In this paper we shall examine the major moral theories, apply a set of rigorous filters to test their strength, and come to a conclusion about the most plausible moral theory. I will argue that Act Utilitarianism is the most plausible moral theory, based on its possession of desirable features and its success in surviving the filter tests presented to it. In order to properly discuss and evaluate each theory, the paper will proceed by first introducing the desirable features of a moral theory, and the filters that will test each theory's strength. Next, we will evaluate the theories themselves and make preliminary judgments about their plausibility based on their success against the various filters. Finally, we will evaluate the strongest theories against a final filter, and against each other, to determine which emerges as the most plausible moral theory.

SECTION I: DESIRABLE FEATURES, AND THE FILTERS

Moral theories, like all theories, must possess certain features that make their application practical, and that make them useful toward achieving the end for which they are developed. Such features of a moral theory we shall call "desirable features" (DF). One may question the necessity of systematizing moral theories in this way. Would it not be simple enough to merely follow some set of common sense rules that are obviously the "right" thing to do? Those who advocate a common sense theory of morality suggest that our intuition is sufficient to guide our moral choices; that the choice that appears to be right is right. This is called Common Sense Morality (CSM). But surely things can't be that simple. To double-check, let us examine just such a common sense-based theory.

With powerful intuitive appeal, CSM tells us that following these rules is the right thing to do:

1) Tell the truth

2) Don't kill

3) Help others in need

4) Keep your promises

5) Don't steal

This system, naïve common sense morality (NCSM), seems simple enough for anyone to follow, and to do so with predictable reliability. But what happens when these rules come into conflict with one another? What is the correct way to resolve the moral dilemma? Examples of such dilemmas are quite simple to come up with, and they present a significant problem for accepting such a naïve moral theory:

– Telling the truth to the deranged axe-wielding man would fail to help his potential victim, who needs to hide from his assailant.

– Not stealing medicine that is needed to save someone's life would fail to help that person in need.

– Keeping a promise to meet someone for dinner, despite your aid being necessary to save the life of an accident victim, would fail to help that person in need.

– Killing Hitler in order to save millions of human beings would require you to kill someone.

– Keeping an immoral promise (e.g., a promise to kill an innocent person, or to steal) would require violating another rule.

These situations very clearly present conflicts between the rules within the theory. What’s worse is that the theory is not sophisticated enough to suggest a solution to these problematic scenarios. We see here that a desirable feature of a moral theory must be as follows:

First Desirable Feature: A moral theory should offer guidance for how to deal with conflicts of moral rules.

This rule appears to be a welcome addition to NCSM, and certainly bolsters the plausibility of the position. The sort of "guidance" required in dealing with the aforementioned situations would be a set of priority rules that determine which of the common sense rules takes priority when there is a conflict. By adding these priority rules to NCSM, we get sophisticated common sense morality (SCSM). Now let us examine a series of conflicts between our CSM rules:

1) 3 trumps 1 (but with exceptions, like killing Hitler)

2) 3 trumps 5

3) 3 trumps 4

4) 3 trumps 2

5) 2 trumps 4

6) 5 trumps 4

7) 4 trumps 3

We can see immediately that further conflict appears as a result of the addition of priority. The difficulty, it seems, is that no rule consistently trumps another all of the time; everything depends on the situation. Additionally, we see that exceptions are permitted, but there is no clarity as to when, or to what extent. Conflict between the CSM rules thus becomes exacerbated by multiple priority rules operating at once, leading to nearly endless applications of priority rules and, therefore, endless conflicts within the theory. Priority rules, then, while somewhat helpful, are also a significant source of additional complexity. Because there are limits to the complexity and number of rules that will be useful and accessible to people, an additional trait is needed. This leads us to our second desirable feature:

Second Desirable Feature: It cannot be too complicated. (Humans can only learn so many rules, and those rules must not be too complex.)

But the ad hoc constraining of rules to a certain number or level of simplicity does not leave the theory free of further problems. What justifies the priority rules that we are to use? Is it merely an appeal to intuition? Is it entirely arbitrary? In order for the rules to possess legitimacy (as opposed to being cobbled together with ad hoc devices for expediency's sake), the theory must have sufficient justification for the constraints on, and applications of, its rules. This constitutes our third desirable feature:

Third Desirable Feature: It should provide a plausible/defensible rationale for its solutions to conflicts of moral rules, and for the conclusions it issues.

With three desirable features available to begin evaluating moral theories, we can initiate an evaluation of a new moral theory to test whether or not three features will be sufficient to our aims. We will call this process the “desirable features test.” Its purpose will be to determine if additional desirable features are needed.

Let us select the considerable work of Louis Pojman as the basis for our next desirable features test. In A Critique of Ethical Relativism, Pojman evaluates ethical relativism as a candidate for the best moral theory. As we will soon see, Pojman concludes that ethical relativism possesses undesirable implications, and should be rejected. (More importantly, this conclusion provides us with a fourth desirable feature.)

We begin with Pojman’s criticism of ethical relativism (ER). On his view, ER consists in the following argument:

  1. Different cultures have different moral codes.
  2. Therefore there is no objective truth in morality. Right and wrong are merely matters of opinion.

Pojman rejects this argument on the grounds that it is invalid. To clarify, the definition of logical validity is that "an argument is logically valid iff there is no consistent story in which all the premises are true and the conclusion is false." While it is true that different cultures have different moral codes, it does not follow that there is no objective moral truth: we can tell a consistent story in which the premise is true and the conclusion false. Pojman rightly points out that the argument is, therefore, invalid by definition. In an effort to resurrect the theory, Pojman offers the following reformulation:

  1. Different cultures have different moral codes. (The Diversity Thesis.)
  2. What is right or wrong for the individual depends on the culture or society to which he belongs. (The Dependency Thesis.)
  3. Therefore, there is no objective truth in morality. Right and wrong are merely matters of opinion. (Ethical relativism.)

With the addition of the second premise, Pojman restores validity to the argument. However, still at hand is the issue of the conclusion, and its truth-value. To further evaluate ER, Pojman employs a reductio ad absurdum to test its soundness by exposing any potential false implications. To be clear, logical soundness demands the following; “an argument is logically sound iff it is logically valid and all of its premises are true in the real world.” As a consequence of this definition, the conclusion must also be true. To construct the reductio, Pojman takes the conclusion of ER, and makes it the first premise of an argument designed to derive false conclusions.
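The reductio strategy just described can be summarized as a simple piece of propositional reasoning: if an argument is valid, its conclusion is false, and all but one of its premises are independently known to be true, then the remaining premise must be false. As a rough schematic (the labels P1, P2, and C are generic placeholders for the premises and conclusion of whichever argument is under test, not Pojman's own notation):

```latex
% If P1 and P2 jointly entail C, but C is false and P2 is true,
% then P1 must be false.
\begin{align*}
1.\;& (P_1 \wedge P_2) \rightarrow C & \text{(the argument is valid)}\\
2.\;& \neg C                         & \text{(the conclusion is false)}\\
3.\;& \neg (P_1 \wedge P_2)          & \text{(modus tollens, 1, 2)}\\
4.\;& P_2                            & \text{(independently known to be true)}\\
5.\;& \therefore \neg P_1            & \text{(from 3 and 4)}
\end{align*}
```

In the reductios that follow, P1 is the relativist thesis under test, P2 is an uncontroversial factual premise, and C is a clearly false moral verdict; the schema then forces the rejection of P1.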

Two reductios are possible using this strategy, the first being subjective ethical relativism (SER), and the other, conventional ethical relativism (CER). The argument for SER is formulated as follows:

  1. An act is right iff it conforms to the values of the person performing it.
  2. Genocide conforms to Hitler’s values.
  3. Therefore, genocide is right/permissible.

Clearly this argument leads to a false conclusion, despite the argument remaining logically valid. It must therefore be the case that one or more of the premises is false. Knowing that premise 2 is historically true, we must conclude that premise 1 (SER) is false.

CER is formulated in a similar fashion, by making the conclusion of ethical relativism its first premise and deriving false implications from it.

  1. An act is right iff it conforms to the mores of the society in which it is performed.
  2. Intolerance conforms to the mores of some societies.
  3. Therefore, intolerance is right/permissible.

Since both reductios produce (apparently) false implications, Pojman concludes that a fourth desirable trait should be present in a moral theory. The fourth trait is:

Fourth Desirable Feature: The moral theory must not contain false implications.

In Thomas Kuhn's revolutionary The Structure of Scientific Revolutions, Kuhn argues that the presence of a single false implication within a theory is not enough to reject the theory out of hand; there would have to be many anomalies/false implications, and, beyond that, a superior new theory would be necessary to replace the old one. Modifying and revising the theory is appropriate until a clearly superior theory/paradigm is discovered. (Those modifications often include ad hoc devices to preserve old paradigms. It is important to note that ad hoc devices are not the same as legitimate modifications to a theory: ad hoc devices merely resolve particular anomalies, while legitimate modifications apply to the whole theory.) "False implications," on this view, are better understood as "implausible implications." These are insufficient grounds to reject a theory as a whole, and instead point to a need for revision. If our criterion were so severe as to demand rejection after a single implausible implication, we could falsify each and every one of our moral theories, just as Popperian falsification would have condemned early scientific paradigms. We are left, then, to subscribe to the "least incorrect" moral theory, that is, the moral theory with the fewest implausible implications. With this practical wisdom at our disposal, we may amend the fourth desirable feature as follows:

Fourth Desirable Feature: The moral theory must not contain too many implausible implications.

In order to apply another desirable feature test, we must skip slightly ahead and give a very brief introduction to a moral theory that will receive much greater treatment later; utilitarianism (UT). Specifically, we will examine criticisms of UT to determine if additional desirable features are needed.

UT (a.k.a. the "Greatest Happiness Principle") argues that the moral act is the one that produces the greatest good for the greatest number of people. Bernard Williams sees UT as an implausible moral theory because of the motivational demands that it places on the agent. Two crucial examples are addressed in his article Against Utilitarianism.

The first criticism of UT involves negative responsibility. According to Williams, UT insists that there is moral responsibility for refraining from certain actions just as there is for performing certain actions. For instance, we would be as morally responsible for failing to prevent an immoral act as we would be for committing one ourselves. Williams thinks this is a counterintuitive position to promote.[1]

The second criticism of UT that Williams alleges is that UT violates our personal integrity. On Williams’ view, we are required to set aside our morally permissible projects to do what UT requires (maximize the good). Suppose that I have a special ability to do a job that will help other people, but I have taken up the project of going to law school, and becoming an attorney. Williams argues that on UT I would be required to abandon my project (and my integrity) in favor of doing what is optimific.[2]

These criticisms allege that UT demands far too much of any agent, placing unrealistic motivational demands upon them. This criticism, so understood, becomes our fifth desirable feature:

Fifth Desirable Feature: The moral theory must not place unrealistic motivational demands on agents.

Continuing with our desirable features test, we skip ahead once more to another moral theory that will receive a more thorough treatment later in the paper: ethical egoism (EE). Brian Medlin discusses EE in various forms, but concludes that EE in general is self-defeating if it is promoted as a doctrine to be adhered to by more than one person. For example, individual ethical egoism claims that an act is right if and only if (iff) it maximizes the good for one particular individual; that is, you would be acting morally if your action leads to the most good for me. Medlin dismisses this form of EE as outrageous, and turns to universal ethical egoism (UEE), which claims that an act is right iff it maximizes the good for the acting agent. (Each person looks to maximize his or her own good.)

The critical objection to this doctrine seems as self-evident as the doctrine is self-defeating: if one were to promulgate such a moral theory, he would stand to diminish his own optimific ends, because everyone else would be exclusively looking after their own good! The promoter of this theory would be hard pressed to gain any advantage that would maximize his good, because each and every other adherent of EE would simultaneously be trying to promote his or her own good and preserve any potential advantage for himself or herself. Medlin says:

“What is he doing when he urges upon his audience that they should each observe his own interests and those interests alone? Is he not acting contrary to the egoist principle? It cannot be to his advantage to convince them, for seizing always their own advantage they will impair his.”[3]

It seems, then, that secrecy would be required for EE to be of any use; but how could it be considered a true moral theory if it cannot guide the moral choices of more than one person? In this case, publicizing EE defeats it, which Medlin says is grounds for rejecting it as a plausible theory. We now come to our sixth and final desirable feature:

Sixth Desirable Feature: The moral theory should be something that can be publicly proclaimed or endorsed.

Having established the six desirable features of a moral theory, we can now use them to develop "filters" to sift through the forthcoming moral theories, considering only the strongest theories and ultimately deciding which one presents the most plausible case for being the true moral theory.

The filters will be applied as follows:

Filter 1 (F1): Does the moral theory possess at least some of the desirable features?

Filter 2 (F2): Does the moral theory possess too many false implications? (This includes unrealistic motivational demands, for these are basically false implications.)

Filter 3 (F3): Is the moral theory too complicated to be useful? Are the modifications or “escape clauses” too numerous?

Because we are dealing with a moral theory that is intended to be the true, or ultimate, moral theory, we must require that the theory be applicable to the widest number of people. As such, we must insist that even imbeciles can comprehend the rationale behind the theory, so Filter 3 (F3) will be the most rigorous, and therefore the final, filter that we apply in our considerations.

SECTION II: THE MORAL THEORIES

In the previous section it was necessary for us to have a starting point from which to begin developing our desirable features. To do this we started with common sense morality and applied our desirable features test, moving toward progressively more sophisticated moral theories. Over the course of that section we thoroughly discussed naïve and sophisticated CSM, and subjective and conventional ER. A more exhaustive treatment of these theories is, therefore, unnecessary. We will, however, apply the filters to them in this section to determine their plausibility as the true moral theory.

Also in the previous section, we briefly introduced limited parts of UT and EE in order to motivate further desirable features. Without motivating the features, their application would have seemed arbitrary and unjustified, which necessitated skipping ahead slightly to lay the groundwork for the rest of the paper. We will revisit both theories later in this section in much greater detail.

COMMON SENSE MORALITY

Since we have already done the hard work of laying out CSM in both of its forms, we have only to evaluate the theories against the filters.

Naïve CSM:

F1: NCSM, as we have seen, is an excellent starting point for discovering intuitions about morality. The problem with NCSM in particular is its overly simplistic nature: it clearly achieves DF2, but, as we have demonstrated, our first desirable feature was needed precisely because of NCSM, so it clearly does not satisfy DF1. Additionally, the theory can be proclaimed or endorsed publicly without defeating itself, so it also achieves DF6. This warrants further application of our filters.

F2: Because of the inability to resolve its moral conflicts, the theory possesses far too many false implications to make it plausible as a true moral theory. No further filtering is needed; the theory is rejected.

Sophisticated CSM:

F1: The addition of priority rules to NCSM created SCSM, but, as we demonstrated, the additional rules did not help resolve instances of moral conflict. In fact, many more conflicts became possible with the introduction of the priority rules, because of the seemingly endless applications of those rules that can compound when trying to resolve a conflict. Additionally, its complexity, and its lack of justification for rules governing application and constraints, made the priority rules seem ad hoc. SCSM appears, then, to be overly complicated and under-justified. Because SCSM satisfies only DF6, no further filtering is needed; the theory is rejected.

ETHICAL RELATIVISM

As with CSM, we have already laid out ER in its two forms, so we have only to apply our filters against them.

Subjective ER:

F1: SER successfully enables us to resolve conflicts of moral rules by using a person's values as the yardstick for priority rules. This is a straightforward way of dispatching the problems inherent in priority rules, and because of this, SER achieves DF1 and DF2 quite handily. Also, using one's values as a yardstick for the application and constraint of priority rules provides a plausible rationale that satisfies DF3. It warrants further examination.

F2: We have seen in Pojman's reductio for SER that genocide becomes permissible, and it is assumed that every reader is confident that such a conclusion is ridiculous. Through the reductio, we can see many more instances where unsound conclusions may be reached; SER thus lends itself to myriad false implications. Despite its possession of multiple DFs, the theory possesses such a quantity of false implications that even Kuhn would be compelled to reject it. SER fails F2; the theory is rejected.

Conventional ER:

F1: As with SER, CER provides straightforward rules for resolving moral conflicts, but bases its rationale upon societal mores rather than individual values. Its effectiveness and simplicity earn it DF1, DF2, and DF3. Further, DF5 and DF6 may be added to its collection of DFs, because a society would not adopt mores that it is unwilling to adhere to, and the theory can be publicly endorsed without defeating itself. Further consideration is needed.

F2: The chief false implication inherent in CER is made evident by Pojman's reductio against it, namely that intolerance may be permissible so long as it conforms to the mores of society (e.g., Jim Crow laws, apartheid, Jewish segregation via ghettos, etc.). This may appear to be a "slam-dunk" case similar to the permissibility of genocide under SER, but there exists support for the position that a society has a deontological right to determine how to govern itself. Advocates of such a position include Avishai Margalit, Joseph Raz, and David Miller.[4] Especially in the case of Miller, rational arguments can be made in defense of a society that chooses to discriminate against what the majority sees as non-members. Additionally, other cultures and foreign countries would have no legitimate grounds for criticizing relatively immoral acts: the USA, for instance, would have no legitimate basis for criticizing the stoning of women for seeking an education in Afghanistan, nor could it justifiably intervene against oppressive regimes that mistreat their people. Each of the possible false implications remains open to some potential justification. Further examination is needed.

F3: The theory itself is very simple, and relatively few modifications are needed for it to be considered highly plausible. The inherent problem with this theory revolves around its society's theory of value, and whether or not other cultures have a right to criticize or intervene against what they perceive as wrong. CER achieves all of the desirable features of a moral theory, with the exception of DF4, which remains up in the air for now. It has passed all three filters successfully and, therefore, emerges as a very plausible candidate for the true moral theory. Further examination will be necessary later.

DIVINE COMMAND THEORY:

Divine command theory (DCT) is the moral theory which argues that all morality depends on the will of God. The things that we are morally obligated to do, the things that we are morally permitted to do, and the things that we are forbidden from doing all derive their moral status from their relation to God's commands. Whether the theory adopts a monotheistic or a polytheistic approach is irrelevant; all moral value is determined by divine command.

In Plato's Euthyphro, Socrates and a self-proclaimed pious man named Euthyphro debate what it is that makes a thing pious. The debate results in two critical options: is (a) the pious pious because it is loved by the gods, or is (b) the pious loved by the gods because it is pious? Suppose (a): the pious is pious because it is loved by the gods. Why do the gods love the pious? Either for some reason, or arbitrarily. Saying that it is arbitrary would denigrate the gods, so there must be a reason. Euthyphro says the reason is that it is pious. But then it seems he wants to embrace (b). Then what makes it pious? What is piety? Euthyphro and Socrates never reach a definition in the dialogue, but they demonstrate the circularity of accepting either of these options. The Euthyphro problem can be expanded to relate to DCT as a whole: is the moral moral because God commands it, or does God command it because it is moral?

If there is a reason God commands morality, then that reason shows that morality would be morality even if God did not command it. This creates a problem for DCT, because it would seem that God does not dictate morality: there is something more supreme than God that God must abide by (objective morality). This is inconsistent with our characterization of God as omniscient, omnipotent, and supremely good. If this argument against DCT succeeds, it succeeds against strong DCT (SDCT).

SDCT says that an act is right iff (and because) God commands it. This version of DCT does not seem plausible because of its arbitrariness. On this view, God could command the wanton murder of every firstborn male in Egypt, and it would not be wrong to do so. In fact, because God commanded it, it would be immoral not to find a firstborn son in Egypt to kill.

The Euthyphro argument does not succeed against the second type of DCT, weak DCT (WDCT). WDCT says that an act is right iff God commands it. As we have seen, this version permits the existence of an objective definition of right and wrong, and merely avers that God commands only that which is right, and forbids that which is wrong. The theory encourages us, then, to look to what God commands for clarity on what is right and wrong; God is basically a street sign by which we can navigate the streets of morality. Such a position can be restated by the following argument:

P1: An act is right if x (where x is a statement of the true moral theory)

P2: God commands x

C: If God commands x, then it is right

Strong DCT:

F1: SDCT achieves DF1, DF2, and DF3 by virtue of its nature: one must simply obey God; no further reasoning or effort is needed. It may also be publicly declared, so it achieves DF6. Further consideration is needed.

F2: SDCT fails DF4 and DF5 rather immediately. The notion that God may arbitrarily command acts that seem to be immoral results in nearly endless false implications, because any immoral act could become moral by God's command alone. This also produces unrealistic motivational demands on agents, who may be required to perform acts that they are morally opposed to. SDCT fails F2; the theory is rejected.

Weak DCT:

F1: As with SDCT, WDCT achieves DF1, DF2, and DF3 by virtue of its nature: one must simply obey God; no further reasoning or effort is needed. It may also be publicly declared, so it achieves DF6. Further consideration is needed.

F2: On WDCT there are obvious false implications that render it implausible; specifically, not all people have the same God, or the same conception of God. One need merely look to the Middle East (or the tragedy of 9/11) for glaring examples of how DCT can encourage unqualified evil against innocents. Additionally, it fails DF5 because atheists would be required to adhere to a moral theory whose foundation they disavow. WDCT fails F2; the theory is rejected.

UTILITARIANISM:

As we have seen in the previous section, UT says that what is right is what maximizes the good for the most agents in any given situation. Here it may be illuminating to quote directly from UT's most notable progenitor:

“The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure.”[5]

Mill understands that many detractors of UT will aim to classify UT as merely hedonism in disguise, so he goes on to note that there are higher- and lower-order pleasures, meaning that some are more worthy of satisfaction than others (seeking knowledge by attending Dr. Naticchia's class is more worthy of satisfaction than going rock climbing). This distinction is to be made by "competent judges": people who have the ability to appreciate both higher-order pleasures and lower (sensual) pleasures. Of what has been said thus far of UT, it is perhaps most important to reiterate that UT requires the most happiness for the most people. This is to be presumed whenever we speak of the "best consequences."

We must now turn to the first variation of UT, act UT (AUT). AUT says that an act is right iff it produces the best consequences of all available acts. This means that in any situation, you must act in the way that produces the most happiness for the most people involved. There are no other overarching rules to consider, and one's actions are deemed moral on a case-by-case basis depending on the maximization of the good for the most people involved. Morality, then, would seem always to depend on the circumstances.

In contrast to AUT, rule UT (RUT) says that an act is right iff it conforms to a rule, general adherence to which produces the best consequences. In this version of UT, there is a set of hard-and-fast rules that must be obeyed in order to promote the most happiness and, consequently, obtain positive moral status. On this view, when one observes a sign that reads "Keep Off Grass," keeping off the grass is both the moral thing to do and the thing that produces the most happiness for everyone involved. This may be read as conventional RUT (CRUT).

But things get tricky very quickly for CRUT, as pointed out by J.J.C. Smart in his article Extreme and Restricted Utilitarianism.[6] In Smart's example, there is a standing rule to "keep off the grass." But what if there is a man on the grass who will die unless someone treads on the grass and helps him? On CRUT, the rule of keeping off the grass must be obeyed, and obeying it is the moral thing to do. To Smart (and most other rational beings) this is a serious implausible implication, and it can be replicated in many similar scenarios. He accuses CRUT of "irrational rule worship" that abandons the consideration of consequences in favor of adherence to non-negotiable moral rules. Smart argues that what is right is what is most rational, and what is wrong is what is least rational. In this situation, Smart argues that the right thing to do is to tread on the grass and help the person in need. Such an example is an excellent illustration of AUT, which Smart supports. On AUT, one has a set of rules that act as "rules of thumb," or general guidance for what to do, but when one can produce more happiness by breaking a rule than by obeying it, AUT says that the correct thing to do is to break the rule.

In order to save RUT, its defenders may try to add exceptions to its rules. RUT with these added exceptions constitutes ideal-RUT (IRUT). In this case, IRUT would say that the rule to “keep off the grass” could include the modification, “except to render aid to someone in need.” But even this modification does not seem to help much. Suppose now that Hitler is choking on the grass; IRUT would demand that you tread the grass to go save Hitler, who is in need. An additional modification can be made to exclude helping Hitler, but this reveals a larger problem: to what extent are the modifications permitted? How many are allowed? At some point, IRUT collapses into AUT, because allowing a large number of exceptions is equivalent to taking things on a case-by-case basis, as in AUT.

As we have seen in the previous section, Williams offers the “Negative Responsibility” argument and the “Integrity” argument. The former assigns moral responsibility for not preventing evil, and the latter commands that one give up one’s integrity to promote the most happiness. Williams’ arguments, while seeming strong at first, overlook a critical feature of UT: UT simply informs us what we ought to do; it does not demand that we do anything. UT, then, survives Williams’ criticism.

Conventional and Ideal RUT:

F1: CRUT (without numerous modifications) offers resolution to moral conflicts, is not too complicated, and provides a plausible rationale for its solution to moral conflicts, so it achieves DF1-3. DF6 is achieved because it can be publicly declared without defeating itself. Further evaluation is warranted.

F2: CRUT (without numerous modifications) faces one seriously implausible implication: its irrational adherence to rules in situations where breaking the rules is the most moral and optimific choice. This implausible implication is met by a series of exceptions meant to disarm critics of the theory. CRUT then becomes IRUT because of the allowance of exceptions. Therefore, we shall apply our final filter to IRUT.

F3: IRUT adds seemingly ad hoc devices to CRUT, via exceptions to rules, in order to rescue the theory from implausible implications. The trouble here is determining which exceptions are justified, how many are justified, and why. Aside from IRUT collapsing into AUT by ultimately judging morality based on consequences, the modifications are far too numerous, making the theory unwieldy and complicated. IRUT fails F3; the theory is rejected.

Act UT:

F1: AUT offers clear resolution to moral conflicts, is very simple, and provides a plausible rationale for its solution to moral conflicts, so it achieves DF1-3. It does not place unreasonable motivational demands in any obvious way, and one may publicly endorse it without it defeating itself, so it achieves DF5 and DF6 as well. Further evaluation is warranted.

F2: Williams’ objections to AUT have limited success in showing implausible implications. Aside from the distinction noted above, Kai Nielsen offers further vindication of the theory. In Against Moral Conservatism, Nielsen presents a scenario that frames AUT as the superior moral theory because of its interest in obtaining the best consequences in any given situation. He offers a hypothetical scenario of a fat man blocking the exit of a cave, behind whom 20 other hikers are trapped. The hikers all face drowning unless they blow the man to smithereens with dynamite. On Williams’ view one must refrain from blowing up the fat man, allowing the rest to die. Nielsen rightly points out that the only rational choice, and therefore the moral one, is to save 20 lives at the expense of 1 (not a welcome conclusion for Star Trek fans). This does not, as Williams avers, constitute placing unrealistic motivational demands on the person who must light the stick of dynamite. The implausible implications alleged against AUT, then, are greatly mitigated.

F3: AUT is a profoundly simple theory, and despite the objections to it, numerous revisions, modifications, or exceptions are not needed for the theory to maintain its strength. AUT clearly passes F3, and emerges as our strongest candidate for a true moral theory. AUT will, then, undergo a final evaluation at the end of this paper.

CATEGORICAL IMPERATIVE:

Immanuel Kant is a name that evokes tremendous respect in the philosophical community (and great trepidation in the hearts of undergraduate philosophy students) for very good reason. Kant’s Foundations of the Metaphysics of Morals shines as one of the best examples of a skillfully crafted and intricate theory in all of philosophy. In the Metaphysics, Kant presents his famous Categorical Imperative (CI), which states, “I am never to act otherwise than so that I could also will that my maxim should become a universal law.”[7] A critical feature of categorical imperatives[8] is that they are based solely upon a priori reasoning, and have no empirical justification, nor do they consider consequences when determining what one should do. Kant argues that the categorical imperative is an objective principle, while a maxim is a subjective policy that may or may not fall in line with that principle. The categorical imperative, then, works like so: if I have a maxim that punching people in the stomach is OK, then I am promulgating the imperative that it is OK for everyone else to punch me in the stomach. (One may be reminded of Sunday school lessons where the Golden Rule was taught: “Do unto others as you would have done unto you.”) Without the a posteriori experience of first having tested this maxim out, we can quickly say that we would not will that maxim to be a universal law; thus, we reject it on a priori grounds. But we are also to adhere to certain maxims on the same a priori basis.

Kant develops three formulations of the CI, Formulation of Universal Law (FUL), Formulation of Humanity as an End in Itself (FEI), and the Formulation of the Kingdom of Ends (FKE). We will discuss them in turn.

The initial formulation of CI is the Formulation of Universal Law (FUL), which requires that we do only those things that we would be amenable to making a universal law. The initial plausibility of this formulation deteriorates rapidly once we consider its implications. On FUL, many intuitively permissible acts would become impermissible. The following examples are known as “false negatives”:

– Buying pork

– Refusing bribes

– Freeing the slaves

– Succoring the poor

– Flushing the toilet

– Lying to an axe murderer

If one were to will these universally, then there would be no occasion to do them at all, so they would come out impermissible. But they are falsely impermissible. The problem here is that universally willing to do something that there is no occasion to do is a contradiction in conception. To wit, if one wills that refusing to buy pork become a universal law, then all of the pork suppliers will stop making pork, and thus, there would be no occasion to refuse to buy it. It causes one both to will, and not to will, x.

False negatives are not the only problem for FUL; there also exist false positives, that is, intuitively impermissible things that would then become permissible. For example:

– Buying term papers (the secret remains secret)

– Small-time tax cheating (the government won’t fold if you don’t pay up)

– Descendants of slaves shall be slaves (if the descendants don’t oppose it)

– Revenge killing (if one doesn’t mind being a victim)

These maxims are “tailored” to mitigate their negative effects, and so they falsely pass the FUL.

A potential solution to the false negatives problem, proposed by Christine Korsgaard, utilizes two tests to determine whether an act really is counterintuitively impermissible: the logical contradiction test, and the practical contradiction test. On the former, a maxim fails if its universalization is logically impossible (as in the pork example). On the latter, a maxim fails if it would be self-defeating when universalized. Korsgaard favors the practical contradiction test, because maxims like those above still achieve their practical purpose when universalized, and therefore come out permissible.[9] While this approach technically vindicates Kant from the trouble of false negatives, it does nothing to help with the false positives! It is important to note the necessity of adding an entire “testing methodology” to FUL in order to rescue it from its myriad false implications.

In order to save Kant from the false positives problem, we must move on to FEI. The key quotation that drives this formulation is this: “Treat humanity, whether in your own person or that of another, always at the same time as an end in itself, and never merely as a means.”[10] Here, we see that we have perfect duties to ourselves, and to others. To treat a person as an end in themselves is a way of acquiring their consent to your action. A person cannot consent to your maxim of using them as a means if you hide that maxim from them, but if you are open about your intentions and do not practice deception, then they can give their consent to your maxim, making the exchange permissible. On this view, acts that treat humans as mere means, involving coercion or deception, are always wrong. This, then, vindicates Kant from the false positives problem, because no one is being treated as a mere means, and their consent is obtained before the exchange takes place. So in the case of the purchased term paper, if you were honest about the fact that you had purchased the paper, and the professor accepts it anyway, the act becomes justified, and permissible. This may strike many rational individuals as a very implausible ad hoc device aimed at saving Kant. To wit, this formulation says that so long as neither coercion nor deception is involved, otherwise morally impermissible acts become permissible. Intuition informs us that an immoral act does not become de facto moral simply because there is bilateral consent to it.

The final formulation is a sort of marriage between FUL and FEI. It is a hypothetical contract wherein all agents are rational actors who agree to treat all other agents as ends in themselves, and to act only on those maxims that are universal laws. This seems to be a successful attempt at presenting a plausible formulation of CI, that is, until one recalls that FKE is entirely hypothetical, and requires that all agents in such a kingdom be fully rational. In short, it is not a plausible formulation for the real world.

It seems, then, that Kant has numerous and very serious strikes against his CI: the practical contradiction test adequately sidesteps the false negatives problem, but does nothing for false positives; FEI disarms some of the false positives problem, but does nothing for false negatives; and our intuitions tell us that consent alone does not make an immoral act a moral one.

CI, then, is heavily laden with “escape clauses” and amendments that are necessary to save the theory from being rejected out of hand. Despite the apparent weakness of CI, we will proceed with our application of the filters.

Categorical Imperative:

F1: The theory is able to resolve moral conflicts by acting only on the maxims that you would have applied to you, so CI satisfies DF1. Additionally, Kant’s rationale for why this is a plausible theory is, at least initially, reasonable, so it satisfies DF3. It may also be publicly endorsed (and would be required to be in FKE), so it satisfies DF6 as well. Further evaluation is warranted.

F2: There are simply too many false implications for this theory to be even remotely plausible as a true moral theory. The problems caused by false negatives and false positives are severe enough, but in trying to remedy them, the theory becomes an unintelligible mess of escape clauses, logical and practical testing methodologies, and hypothetical utopias where, arguably, a moral theory would hardly be needed. Even if CI passed F2, which it does not, it would fail F3 on its face. CI fails F2; the theory is rejected.

ETHICAL EGOISM:

In the previous section we briefly discussed EE as a means of motivating DF6, but a few more things must be said to round out our understanding of EE.

By close evaluation of UEE, we find a critical contradiction in addition to those already outlined by Medlin:

1) An act is right iff it produces the most happiness for the agent.

2) Tom’s borrowing the book produces the most happiness for Tom.

3) Joe’s borrowing the book produces the most happiness for Joe.

4) Tom’s borrowing the book is right.

5) Joe’s borrowing the book is right.

6) Tom and Joe cannot both borrow the book.

7) Ought implies can.

8) It is not the case that both Tom and Joe ought to borrow the book.

9) If Tom’s borrowing the book is right, Joe’s borrowing it is wrong.

10) If Joe’s borrowing the book is right, Tom’s borrowing it is wrong.

11) Tom’s borrowing the book is right and wrong.

12) Joe’s borrowing the book is right and wrong.
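The shape of this contradiction can be shown with a small illustrative sketch. The names are from the argument above, but the payoff numbers are invented; they only encode that each man is happiest borrowing the book himself. Evaluating rightness from each agent’s standpoint, as UEE requires, endorses two mutually exclusive acts.

```python
# Hypothetical sketch of the Tom/Joe book case; payoffs are invented.

def right_act_for(agent, acts):
    """UEE: an act is right iff it produces the most happiness for the agent."""
    return max(acts, key=lambda a: a["payoff"][agent])

acts = [
    {"name": "Tom borrows the book", "payoff": {"Tom": 10, "Joe": 0}},
    {"name": "Joe borrows the book", "payoff": {"Tom": 0, "Joe": 10}},
]

toms_right = right_act_for("Tom", acts)["name"]
joes_right = right_act_for("Joe", acts)["name"]

# Only one of them can borrow the book, yet UEE calls both acts right.
print(toms_right)                # Tom borrows the book
print(joes_right)                # Joe borrows the book
print(toms_right != joes_right)  # True
```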

A response could be that the UEE defender says that although everyone would want to be on top, only one person can be, and that person may accept that condition and continue to endorse the moral theory. This is implausible.

Despite Medlin’s complete and well-reasoned rejection of EE as a coherent theory, it serves us to evaluate Gregory Kavka’s position on EE. Kavka presents a relatively straightforward version of EE called rule egoism (RE), discussed in an article of the same name. Kavka is a Hobbesian, and so he takes Hobbes’ laws of nature to supply the content of RE. To wit, the Hobbesian laws of nature are: (1) seek peace, (2) lay down your rights to all things, and (3) keep your covenants. Kavka’s view is that by following these rules, one serves one’s own interest most fully, and most successfully. Immediately, one may recognize that RE is susceptible to the same objection as RUT: the irrational rule worship objection. In the case of RE, we may say that one is bound to obey the laws of nature because obeying them serves one’s interest the most. This position ignores the possibility that an agent may produce more optimific outcomes by breaking the rules than by adhering to them. And if what ultimately matters for the egoist is the most happiness for the agent, then the RE position is an irrational one.

Kavka’s position seems to be susceptible to a contradiction. Consider the following scenario. Your friend’s car is out of gas. Your group has 5 people. If 4 or 5 people push, the car will make it to the station; if 0 to 3 people push, it will not. Kavka’s position seems to be to comply iff your compliance or non-compliance produces the most happiness for you, but that is an act egoist (AE) position, even though he is trying to defend an RE position. There appears to be a contradiction here; by trying to avoid the rule worship objection, he develops this contradiction. But he would retort with the “Response to the Fool”[11], and say that by not complying in cases where it is not crucial, no one would end up pushing, or people would exclude you from schemes that you may benefit from but that require your participation. The only way to blunt the force of this objection is to utilize a minimizing case wherein no one follows the rules at all, and everyone behaves as an act egoist. But in blunting this objection, RE collapses into AE by stating that one should simply act with only the interest of the acting agent in mind.
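The car-pushing scenario can be sketched as a small threshold game. The numbers below are invented for illustration: pushing costs a pusher 1, and everyone gains 10 iff at least 4 of the 5 push. An act egoist pushes only when his own push tips the outcome, which is precisely the free riding that Kavka’s “Response to the Fool” must answer.

```python
# Toy threshold model of the out-of-gas car; payoff numbers are invented.

THRESHOLD = 4  # the car reaches the station iff at least 4 of 5 push
BENEFIT = 10   # everyone's gain if the car makes it
COST = 1       # the personal cost of pushing

def payoff(i_push, others_pushing):
    """Your payoff, given your choice and how many others push."""
    pushers = others_pushing + (1 if i_push else 0)
    benefit = BENEFIT if pushers >= THRESHOLD else 0
    cost = COST if i_push else 0
    return benefit - cost

def act_egoist_pushes(others_pushing):
    """An act egoist pushes iff pushing pays better for him than not pushing."""
    return payoff(True, others_pushing) > payoff(False, others_pushing)

for others in range(5):
    print(others, act_egoist_pushes(others))
# Only when exactly 3 others push does his own push tip the outcome.
```

If every member of the group reasons this way, each waits to be the fourth pusher, and the car may never move; this is the unraveling the rule egoist’s rules are meant to prevent.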

Because UEE contains an inherent contradiction, additional filtering is unnecessary, and we reject the theory. IEE is implausible as any kind of moral theory, and is thus rejected with no further filter necessary. Hypothetical egoism is basically a version of hedonistic UT, so we shall not consider it under EE. We are therefore left with only RE and AE to evaluate. However, as we have seen, RE collapses into AE. Despite this, we shall apply our filters to determine if there is more plausibility to EE than is apparent.

Rule EE:

F1: RE possesses the ability to resolve moral conflicts by claiming that the agent should follow rules that, when adhered to, are in his best interest, so RE achieves DF1. As long as one simply follows the rules, the theory is very simple, and so achieves DF2. Considering that EE’s entire system of motivational demands requires that a person pursue his own interests, it certainly achieves DF5. The theory warrants further evaluation.

F2: RE’s failure against the irrational rule worship objection leads to innumerable implausible implications, in which the agent who follows the rules does not obtain the most optimific end for himself. To avoid this objection, the RE may simply add an escape clause that permits exceptions in situations where it is best to break the rules, but this causes the theory to collapse into AE. The theory’s failure to survive without collapsing into another theory is an insurmountable implausible implication. RE fails F2; the theory is rejected.

SECTION III: CONCLUSION

Since we have now distilled the remaining theories to CER and AUT, we must evaluate them both by a final criterion to determine which one prevails as the most plausible moral theory. We will begin with CER.

We have seen that Pojman correctly reasons that if we were to adopt the position advocated by ER, cross-cultural criticism of immoral acts would not only be impermissible, but that all moral reformers are necessarily wrong. We also see that his reformulation of ER assails the dependency thesis as being false. In contrast to the dependency thesis, Pojman avers that there are, in fact, objective morals that apply to, and are binding upon everyone. That argument is represented as follows:

  1. Objectively valid moral principles are those which meet the most important needs and interests of persons
  2. Some principles fit this description
  3. Therefore, there are some objectively valid principles

One final comment on ER suggests that, “Perhaps the truth is not edifying.” This means that the counterintuitive conclusions of ER (CER) may merely require that we change our intuitions to suit the “truths” revealed by ER. This comment is not to suggest that Pojman subscribes to this view; rather, he is demonstrating through hyperbole how offensive such a theory would be to our intuitions. In the end, this author believes that making moral reformers necessarily wrong is too grave an implausible implication, running far too contrary to a rational mind’s sensibilities about morality. The theory, therefore, is rejected.

We are left at last with our final evaluation of AUT. To evaluate AUT against the most rigorous criterion yet, we will apply the “sadistic pleasures objection”. This objection claims that, given AUT’s aim of maximizing the most good for the most people, sadistic pleasures would be justified so long as they are held by a majority of the agents in the scenario. One may think that AUT is done for because of the immediate appeal of such an objection. However, one can overcome such an assault in two simple steps.

The first parry against the sadistic pleasures objection is to recall Mill’s very definition of UT: “The creed which accepts as the foundation of morals, Utility, or the Greatest Happiness Principle, holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure, and the absence of pain; by unhappiness, pain, and the privation of pleasure.”[12] It is imperative to all UT that pain be mitigated and guarded against just as much as optimific ends are promoted! So much for claiming that sadistic pleasures are permitted on AUT!

The second defense against such an objection, and all others like it, is to understand that Mill’s UT is based on an objectivist theory of value, which says that objectively valid moral principles are those that meet the most important needs and interests of people. This is what Mill is referring to when he discusses higher order and lower order pleasures. The higher order pleasures, the ones that reliably lead to the most optimific ends, are the ones we should pursue most. This does not, however, preclude our permission to pursue lower order pleasures when we wish. AUT says that what is right is the maximization of the good for the most people, and the objective theory of value on which it depends says that what is good is what meets the most important needs of people.
This insulates AUT from objections that suggest that AUT is susceptible to self-bias; one may have self-bias, but in the case that he acts based on self-bias he is no longer acting in accord with AUT, so the objection is really against some form of EE (perhaps AE).
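The first parry can be made concrete with a toy calculation (all magnitudes invented for illustration): once the victim’s pain is counted against the act, as Mill’s definition requires, a majority’s sadistic pleasure need not win out.

```python
# Toy numbers for the sadistic pleasures objection; wholly illustrative.

def total_utility(pleasures, pains):
    """Mill's definition in miniature: happiness promoted minus pain produced."""
    return sum(pleasures) - sum(pains)

# Five sadists each gain a mild pleasure; the one victim suffers greatly.
sadistic_act = total_utility(pleasures=[2, 2, 2, 2, 2], pains=[50])
refraining   = total_utility(pleasures=[], pains=[])

print(sadistic_act)               # -40
print(refraining)                 # 0
print(refraining > sadistic_act)  # True: AUT forbids the sadistic act here
```

The majority’s headcount does no work on its own; only the balance of pleasure over pain does, and a severe enough pain outweighs many mild pleasures.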

All of this makes the theory profoundly simple, for it has only one principle to follow, and one theory of value from which its values are derived. Because of its resilience in the face of multiple objections, its simplicity, and its minute number of possible implausible implications, act utilitarianism (with an objectivist theory of value) emerges as the true moral theory.


[1] Two examples are given in Against Utilitarianism to demonstrate this problem with negative responsibility. “In one, an unemployed scientist, George, is offered a job doing research in biological warfare, to which he is opposed. Yet it turns out that on utilitarian grounds he would be obligated to take this job, for it would be even worse if an unscrupulous scientist were involved in the research.” – Ethical Theory, p. 245.

[2] Doing what maximizes the good.

[3] Ultimate Principles and Ethical Egoism, p. 3.

[4] Miller sees “nation” and “state” as two separate things. He avers that whereas states are delineated by objective criteria, such as borders, language, or race, “…nationality is essentially a subjective phenomenon, constituted by the shared beliefs of a set of people…” (Miller, Nationality, p. 648). The shared beliefs of that nation (as Miller sees them) are: 1) that each “member” of that nation belongs with the rest, 2) the affiliation is permanent, 3) the group/nation is marked off from other groups by distinct characteristics, 4) there is a mutual loyalty between the member and the rest of the group, and 5) the group enjoys some degree of political autonomy.

[5] Mill, Utilitarianism. “Ethical Theory,” p. 200.

[6] We may read “extreme utilitarianism” to be the same as AUT, and “restricted utilitarianism” to be the same as RUT.

[7] Kant, Metaphysics. “Ethical Theory,” p. 289.

[8] Contrast with hypothetical imperatives that dictate the following: whoever wills the end, wills the means indispensably necessary, insofar as she is an adequate agent. In other words, the imperative is contingent upon the desired end.

[9] Concepts regarding the tests are taken from a handout titled, The Categorical Imperative Procedure, by Christine Korsgaard.

[10] Kant, Metaphysics. “Ethical Theory,” p. 301.

[11] From Hobbes’ Leviathan.

[12] Mill, Utilitarianism. “Ethical Theory,” p. 200.
