There are at least three philosophy papers whose titles ask this question. They all argue that ethics does rest on a mistake. However, they disagree about the mistake and, therefore, about the solution. Below I’ll give a very brief overview of each paper.
Prichard, H. A. (1912). Does Moral Philosophy Rest on a Mistake? Mind, 21(81), 21–37. [HTML, open access]
- Answer: yes.
- The mistake: thinking that philosophical reasoning confers the motivating force of moral obligation.
- Solution: intuitionism – in the same way that we “know” or “have access” to the deductive force of logical entailment or mathematical proof, we have the ability to “know” or “have access” to the motivating force of moral obligation.
Gettner, A. (1976). Does Moral Philosophy Rest on a Mistake? The Journal of Value Inquiry, 10(4), 241–252. [Online, behind paywall]
- Answer: yes.
- The mistake: the method of trying to find moral laws (or treating ethics as a science).
- The solution: challenge and supplant this method.
Jones, W. T. (1988, March). Does Moral Philosophy Rest on a Mistake? Humanities Working Paper 132. California Institute of Technology, Pasadena, CA. [Online, open access]
- Answer: yes.
- The mistake: thinking that ethics is not fundamentally different from psychology, economics, and anthropology. (Error theory: our philosophical vocabulary led us to make this mistake.)
- Solution: treat ethics as co-extensive with psychology, economics, and anthropology.
What do you think?
- Does ethics rest on a mistake? If not, then where did these papers go wrong?
- If ethics rests on a mistake, what is the mistake?
- Is there a solution? If so, what is it?
Hello
I tend to agree with some of the arguments in the first paper, but I’m not sure the paper’s description of moral philosophy still applies to moral philosophy as practiced today (then again, I’m not a philosopher, so that might not tell you much).
To the extent that moral philosophy actually does what the paper describes, I tend to think it’s partly correct. It’s not entirely so, since it goes too far in ruling out other means of finding moral (or mathematical) truth (but then again, it was written in 1912).
For example, if we are in doubt as to whether 7×4=28, the paper says the only remedy is to do the sum again… but we could instead use a calculator… or ask a mathematician…
Generally, if the computation is very long and complex, and we get one result after doing it manually but a computer says otherwise, it’s reasonable to trust the computer instead (translated to 1912: trust an expert, perhaps).
With respect to ethics, Prichard proposes that the remedy, when we doubt whether there is an obligation to A in situation B, is to put oneself (perhaps in imagination) in situation B and see what our moral faculties say.
I am sympathetic to that procedure and I think it’s usually the best remedy by far, but it’s not unique in principle, nor does it preclude developing an alternative method for finding moral truth when our intuitions on specific cases are not clear – something moral philosophy could do, even though it’s very difficult.
For example, suppose that in the future philosophers and/or scientists could figure out the algorithm (with some room for vagueness, so there might be more than one) that, from some inputs (describing a situation B), yields the output humans produce (about whether a person in B ought to A) under ideal (or good enough) conditions and given enough processing time. A computer might then run that algorithm much faster and more reliably. In fact, in cases of disagreement, the computer could tell us who’s right and who’s wrong, so there would be a procedure better than one’s own intuitions. Even without the computer, the algorithm would help if we could apply it directly in cases in which we do not have clear intuitions, in which other people disagree, etc.
This may prove too difficult in the end, but it’s not in principle a misguided quest.
Moreover, it seems to me that some philosophical theories of ethics (e.g., different variants of consequentialism, Natural Law theory, etc.) propose something like that, even if considerably less precise than the future scenario. The problem I see with those theories is that they are (in my view) in conflict with moral intuitions on specific cases, and for that reason we can tell they’re not true. But if a theory were in line with moral intuitions in all of the cases in which our intuitions are clear, it would be reasonable to use that theory to make moral assessments in the cases in which we do not have clear intuitions.
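To make the shape of that procedure a bit more concrete, here is a minimal sketch in Python, under heavy assumptions: the cases, the verdicts, and the stand-in candidate_theory function are all hypothetical placeholders I’m introducing purely for illustration – nothing like this appears in the papers, and it is not a workable moral theory.

```python
# A toy sketch, purely illustrative: every case, verdict, and the stand-in
# "theory" below are hypothetical placeholders, not an actual moral theory.
from typing import Callable, Optional

# Situation -> intuitive verdict on "is there an obligation to act?";
# None marks cases where our intuitions are not clear.
cases: dict[str, Optional[bool]] = {
    "keep a promise made freely": True,
    "break a trivial promise for fun": False,
    "donate a large share of income to strangers": None,  # unclear case
}

def candidate_theory(situation: str) -> bool:
    """Placeholder verdict function standing in for some proposed theory."""
    # A real theory would derive the verdict from features of the situation.
    return "promise made freely" in situation

def agrees_on_clear_cases(theory: Callable[[str], bool]) -> bool:
    """Accept the theory only if it matches every clear intuition."""
    return all(
        theory(situation) == verdict
        for situation, verdict in cases.items()
        if verdict is not None  # skip the unclear cases
    )

if agrees_on_clear_cases(candidate_theory):
    # Only then would it be reasonable to consult the theory where
    # our intuitions are unclear.
    for situation, verdict in cases.items():
        if verdict is None:
            print(situation, "->", candidate_theory(situation))
```

Of course, the hard philosophical work – specifying the features of situations and the verdict function itself – is exactly what the sketch leaves as a placeholder; it only illustrates the test-against-clear-cases-then-extrapolate structure.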
Perhaps part of what those theories propose is illegitimate in the way Prichard describes, but the parts that yield methods for making moral assessments are not misguided per se, though they might in the end be mistaken if the method yields false results.
So, in brief, I think Prichard may have found an error in a perhaps considerable portion of moral philosophy, but there is also a considerable portion left aside, which escapes his challenge.
As for Jones’s paper, I see a lot of issues with it, but – for example – I’ll raise the following one:
Jones proposes a reduced language that “equates an obligation to do x with the social expectations for some role”.
I would say that’s false. Sometimes, there is a moral obligation to do X and a social expectation that one will not do X (or vice versa), and even a general moral belief in the population that one should not do X.
Now, Jones provides an answer to that sort of objection when he addresses slavery (on page 12).
But the answer seems inadequate: in fact, one can still consider the following statement (I say “most” and “some” to avoid problems with children who own slaves, people under threat who would expect to be tortured and/or murdered alongside their families if they set the slaves free, etc.):
S1: Most American slave owners had a moral obligation to set some of their slaves free.
A question that makes sense is: Is S1 true?
It seems clear to me that it is, and that they had that obligation even before any abolitionist told them so. Jones’s theory says that those who answer in that manner are simply “resonating with the abolitionists”, but that does not answer the question of whether S1 is true; in fact, it seems his theory – if it makes sense at all – entails a moral error theory or some sort of noncognitivism. Yet he doesn’t give any good reasons to think that that is true, as far as I can tell, and doesn’t even make it clear that he advocates an error theory.
Other parts of his paper suggest something like culture-relativism instead. But that is also not clear.
In any case, Jones claims moral realists see the world through a “platonizing lens”, but my reply does not rely on anything like that. I still think his reduced language fails, and that he fails to identify a widespread mistake… though I think he gets an important point partly right: he says that what people experience (the “moral necessity”) “is not in the norms, which are just whatever they happen to be, but in people’s attitudes toward them.”
I think that’s in a sense surely true, and – for example – a vastly intelligent alien that evolved from something like a squid might not care at all about many of our moral norms, even if the alien squid isn’t mentally ill or anything like that. But I wouldn’t conclude on that basis that moral philosophy rests on a mistake (even if some philosophers make mistakes vaguely related to that matter, in my view – like common interpretations of Twin Earth scenarios).
Hi Angra! I appreciate your thoughtful comments on specific parts of the two openly accessible papers.
Thank you for taking the time to share them.
My immediate thoughts are as follows.
I have to admit that I find myself wondering about intuitions and a moral sense. If I am honest, I am – at worst – suspicious of these notions and – at best – unclear on what they are. Further, I wonder what the diversity of intuition (or of the deliverances of people’s moral sense) says about major projects in ethics – or about intuitionism in general. You touch on this kind of worry at the end of your comment when you mention the intelligent aliens, evolved from squids, who have a different intuition or moral sense than we do. You say this does not entail that ethics rests on a mistake. So I find myself wondering how ethics accommodates these differences if it assumes that intuitions or a moral sense have authority. Is it that (as you suggest with the math analogies) these authorities are subject to higher authorities (e.g., the equivalent of experts and calculators)? Something else (e.g., some form of relativism)?
Thanks again for your comments. I wish you well!
Hi Nick
Thanks for your reply, and for your thoughtful questions/comments!
Regarding the alien squid, I suppose that my answer could be classified as a form of species-relativism, though I think this is “objective” in the usual sense of the word. I would make a parallel with color: let’s say the alien squid all have trichromatic color-like vision, but their experiences do not match the same wavelengths as ours.
For example, the part of the EM spectrum they see is different from ours; if we used a red light, they would see no light whatsoever, but on the other hand, some ultraviolet light is visible to them. Moreover, there are things they perceive as we perceive red (ruling out inverted spectrum in humans, etc., just to simplify and because I think it’s true), but those things we would perceive in some cases as blue, in some cases as green, and so on. So, in short, their color-like experiences are associated with different wavelengths from our color experiences.
If they came to Earth and failed to see red traffic lights (i.e., they would see them as if they were off), and saw yellow and green lights differently, whatever statements they make in their language should be taken as very probably true alien-squid-color statements and alien-squid-light statements, not as false color and light statements. In other words, their having visual systems different from ours won’t lead them astray into systematic error; rather, the truth conditions of their statements will be different.
Even so, it seems to me there is an objective fact of the matter (in the usual sense of the expression) about whether a driver ran a red light, and some drivers do run red lights.
So, I would say that even if color is species relative (even with some small variations among humans, but let’s simplify), color language is (in that usual sense) objective.
I would say something like that happens with morality.
For example, suppose they have some terms that work more or less like “good”, “bad”, etc., and we take those terms to mean “good”, “bad”, etc. Then there will be a problem: despite considerable overlap with our assessments (due to similar problems having to be resolved during evolution), in the end their assessments of good and evil, and particularly their ordering of them, will be pretty different from ours, and from those of yet other smart aliens evolved from something like elephants (e.g., what’s worse: that some Humboldt squid in our oceans starve to death, or that a young deer falls from a vegetation raft into the sea and gets brutally killed and eaten by Humboldt squid? What’s worse: that an elephant is horribly torn to pieces, or that a whole pride of lions starves to death? Etc.).
Those errors would be systematic and could not in practice be corrected, since their moral system would be making them when working normally. But then again, I think that’s the wrong way to look at the matter. As I see it, just as the alien squid would be making true alien-squid-color statements rather than false color statements, they would be making generally true “alien-squid-good” and “alien-squid-bad” statements, rather than false statements about good and evil, better and worse, etc.
A similar situation holds for moral obligation, though here there’s a difference: one may hold either that their obligation-like statements are really about moral obligation (not alien-squid moral obligation), but that their moral obligations are linked not to the moral good and the moral evil but to the alien-squid good and alien-squid evil, etc.; or that their obligation-like statements are also about alien-squid moral obligation. For a couple of reasons, I’m inclined towards the second alternative.
As for Moral Twin Earth, the fact that the Twin Earthers look like humans and are so similar to us might suggest their language is moral language rather than twin-moral language, but I think this is a mistake given how the scenarios are usually described; at any rate, the meaning of their words depends on their usage of them, not on ours (and if the Twin Earthers are mistaken speakers of moral language, so much the worse for them; the alien squid have no such problem).
Granted, this would mean some ethical theories are mistaken, but it wouldn’t imply all of ethics or even all of the parts of those theories rest on a mistake.
Back on Earth, regarding moral intuitions, I think we need to distinguish between prima facie intuitions and intuitions after considering the matter more carefully, discussing it, etc. The reason is that prima facie, immediate assessments may be mistaken due to a failure to include some relevant variables. That failure, in turn, may result from the fact that different humans in today’s world may have radically different theories about how things work, what should be expected from certain behavior, etc. – a problem that almost certainly didn’t arise for hunter-gatherers within the same band in our ancestral environment. Or it may result from the fact that we’re considering very complex hypothetical scenarios (more or less realistic) that we are now capable of contemplating but weren’t when the part of the brain some of our intuitions come from evolved; the computing power to assess obligations in those complex scenarios quickly and reliably would have required a much bigger brain than what was evolutionarily available (so we ended up with a less reliable moral system as a result of having a very complex capacity for constructing and contemplating very complex scenarios).
That said, I acknowledge that cases of persistent disagreement (or apparent disagreement) can perhaps be used to make the strongest case in support of either culture relativism (akin to species relativism, but within humans) or a moral error theory (depending on the details); but I don’t think the evidence in that regard is strong. Several years ago I used to think otherwise, but now I think most of the disagreement, at least, results either from disagreement on non-moral matters, or from failure to consider some predictable outcomes, or – actually – from the use of a false general moral theory instead of our own moral sense/intuitions (this often results from different religions/ideologies).
At any rate, Jones’s paper does not seem to focus on defending an argument from persistent disagreement or apparent disagreement to an error theory or culture relativism, so I would say that would be a different way of arguing for the thesis that moral philosophy (or at least all of it save for error theories/relativism, depending on the case) rests on a mistake.
All that aside, I haven’t addressed your doubts about what moral intuitions or a moral sense are. I don’t know how to define those notions, but briefly and preliminarily (I find these notions intuitive, but I haven’t given them enough thought, probably) if we have a mechanism that makes moral assessments of situations, people, etc. (e.g., “that’s a bad/good person”, “she doesn’t deserve to be punished”, “he shouldn’t treat his kids like that!”, etc.), that would be a moral sense. And our intuitions would be how we make assessments about the morality of people, situations, etc., perhaps after considering the matter from different perspectives, but not by means of consciously reasoning from some moral premises to a moral conclusion.
I wish you well too.
These follow-up comments are very helpful. Thank you for humoring my inquiries.
I think we might agree about some of the finer issues with Jones’s paper, but I wonder if we would agree that, broadly speaking, ethics might still be co-extensive with psychology (and related sciences). So – if I may borrow your delightful analogies – understanding alien squid ethics is largely a function of understanding alien squid psychology. Similarly, understanding human ethics is largely a function of understanding human psychology.
On this view – and I might be deviating from Jones here – ethics might rest on a mistake whenever it rests on mistaken assumptions about human psychology.
Thanks for your thoughtful comments as well, and yes, I agree with that.
I don’t know how pervasive (in moral psychology) that kind of mistake is, though.
I don’t know either. I infrequently read traditional ethics. Admittedly, when I do I frequently find myself wanting evidence to support the assumptions about human psychology. For instance, the role of reflection seems to be standard furniture in ethical theories, but I rarely (if ever?) see ethicists support their (empirical) assumptions about the role of reflection with compelling evidence from experimental psychology – maybe I’m just missing it or reading the wrong people. I’m thinking here of concepts like reflective equilibrium, “reflective endorsement” (Korsgaard), and “reflective persons” (Sidgwick).
I tend to agree; I think as long as it’s clear that those are hypotheses in need of discussion – including empirical testing of the parts that are testable – there should be no problem.
However, endorsement of some of those hypotheses looks like jumping to conclusions to me, and in part that is so due to some not properly supported assumptions about human psychology.
I don’t know whether this is a particular problem with ethics and assumptions about human psychology, though, or whether this difficulty spreads both to other areas of philosophy and to other assumptions (i.e., other than assumptions about human psychology).
Still, to some extent we can rely on our intuitive understanding of human psychology as long as we don’t have specific reasons to think it’s faulty; after all, we can and do rely on it successfully in our daily lives, when we routinely predict the behavior of most of the people we encounter – and with enough precision to successfully interact with them, avoid conflict, etc. But I’m inclined to think that some (many) philosophical views go too far in their psychological assumptions.
I probably agree with all of that, depending on what we mean by ‘to some extent’ — it seems that there are plenty of charitable construals.
Thanks for discussing!
I didn’t try to specify an extent, because that’s a really difficult matter and I don’t know the answer. I was just talking about the kind of relying we do in daily life. I think we need specific reasons to stop relying in a given case, rather than generally refraining from relying on that intuitive understanding. Still, it might be argued that relying for philosophical purposes requires some more evidence than relying for daily purposes, which is another tricky issue.
Thank you for discussing too!