Sunday, May 15, 2011

Demarcation

What distinguishes a moral 'ought' from a non-moral 'ought'? Or, if you agree that all 'oughts' are relative to ends, what distinguishes a moral end from a non-moral end?

This is the question that has been bugging me as I fall asleep nights lately. I see two major contenders:

1. A distinctively moral end is one which everyone is expected (by someone) to weigh against any conflicting ends, perhaps even in an overriding way.

2. A distinctively moral end is one which has to do with the welfare of others.

The first option is attractive because it captures 'oughts' which don't seem to hinge on the welfare of others, but get clumped together in social discourse with 'oughts' that do. The second option is attractive because it tracks much more closely with what I personally consider the moral domain, or at least the morality I care about.

I suppose a third option would be to get even narrower than (2) and only classify a specific normative ethic as morality properly so-called. This is a historically popular route, but the next person who touts another normative ethic will reject it.

I'm leaning toward (1) because I'm not trying to demarcate which ends people should consider moral, but which ends they already do consider moral. And when people do consider an end to be moral, they tend to expect others to at least take it into account in all circumstances.

4 comments:

  1. Luke Muehlhauser has these tirades against the "superdictionary". Respectfully, I think what you are doing here is trying to write an entry in it. If you answer the question "what distinguishes a moral end from a non-moral one", what will your answer be?

    You are trying to describe moral discourses. That's a job for sociologists. Is that really the answer you seek? Is that the interesting question?

    Personally, I am interested in far different questions. Establishing definitions that just carve out what people talk about when they talk about "good" is merely outlining people's confusion and ignorance.

    Yair

  2. I was anticipating a 'superdictionary' comment, which is why I mentioned at the end that I'm trying to describe a boundary rather than prescribe one.

    But then, yeah, that does end up favoring a sociologist's answer. I do happen to think the right answer to metaethics is fairly boring, and it's largely a matter of explaining why people think there's more to it.

    What questions are you interested in, Yair?

  3. Garren, my initial response to your post was also along the lines of Yair's "superdictionary" objection. In my own ruminations on morality, I prefer to keep connotation-rich words like 'morality' out of the discussion until such time as they become useful as handles for concepts I've already introduced and defined with other language. (And if someone doesn't like me using the word 'morality' in that context, then they're free to pick a different word.)

    But, to the point of your post: I agree that there may be type (1) moral ends that don't directly influence the well-being of others. The most obvious examples that spring to my mind are in deeply religious societies in which everyone is expected to weigh religious duties against other moral ends. Many of these religious duties, like praying to Allah five times a day, have no direct influence on people's well-being (and in many ways are a complete waste of time because they have no basis in reality!).

    However, I think many sociologists would claim that religious traditions promote social cohesion and therefore well-being. These things are time-consuming, and therefore energetically expensive: presumably they would not persist if they had no real benefit to the society that practiced them.

  4. "What questions are you interested in, Yair? "

    Well, I guess the main philosophical one is a clarification of (for lack of a better term) what we "want to want". It seems to me that we don't precisely want our current final ends so much as the final ends of a "more enlightened" version of us, but there is a conceptual quagmire here that I am unable to navigate, and that's what philosophy is for (at least as I understand it).

    A more empirical question is whether there is, in this more-rational sense, a certain Normal distribution of final values, especially ones relating to society. For example, consider the famous mayor's dilemma between Justice and the Common Good; I suspect that people want both Justice and the Common Good, but that the balance is largely determined by culture (Eastern cultures are collectivist and so tend towards the common good, while Western ones are individualist and tend towards justice). Does this genesis of the balance impose any normative conclusions? Should the balance change once we've recognized its origin? Is there a rational balance?

    Ultimately, I'd be interested in developing a theory of such Normal rationalized personal values, and its ramifications for normative ethics and political theory. And the psychological theories and techniques that I suspect should accompany it, such as methods to achieve self-knowledge and greater rationality. But that is so far beyond my current level that these issues are only theoretical desiderata at this point.

    Yair
