Friday, September 6, 2013

Moral Training Wheels

"Finally, on the theistic hypothesis God holds all persons morally accountable for their actions. Evil and wrong will be punished; righteousness will be vindicated. Good ultimately triumphs over evil, and we shall finally see that we do live in a moral universe after all. Despite the inequities of this life, in the end the scales of God’s justice will be balanced. Thus, the moral choices we make in this life are infused with an eternal significance. We can with consistency make moral choices which run contrary to our self-interest and even undertake acts of extreme self-sacrifice, knowing that such decisions are not empty and ultimately meaningless gestures."

— William Lane Craig, "Can We Be Good Without God?"

A friend of mine recently scolded her cat for starting to play with an electrical cord. It wouldn't do any good to lecture the cat about how dangerous electricity can be, so an imposed association between electrical cords and punishment is needed to keep her cat safe when no one is watching. The same applies to toddlers. Adult humans avoid chewing on electrical cords because they don't want to be shocked. No stand-in motivation needed!

When it comes to moral situations, some philosophers try to show that acting morally is in our own best interest, either all the time or often enough that we tend to come out ahead in life if we cultivate moral habits. Other philosophers (and many preachers) claim that acting morally is in our own best interest because we will be punished or rewarded in an afterlife. The quote at the top of this post is an example of the latter: William Lane Craig believes that self-sacrifice is "empty" if it doesn't eventually turn into huge rewards for the person doing the sacrificing.

In other words, there's a tendency to reduce morality to self-interest. I believe this is a mistake. While it's true that moral action often works in our own favor, the essence of morality is other-interest.

But there's a problem: some people don't have much in the way of other-interest. How do we convince them to act in the interests of others anyway? Impose an association between harming others and punishment, or an association between helping others and reward. It's another kind of stand-in motivation.

Punishment and reward are training wheels for human beings who can grow in understanding (to better achieve what they want and avoid what they don't) and who can grow in empathy (to better care about what others want). Training wheels might keep your bike from falling over, but you aren't truly riding until you no longer need them. When I read things like the quote at the top of this post, I see a desire for perfect training wheels: the appearance of moral justice without any need to act out of concern for others.

3 comments:

  1. Hi Garren.

    If I may - and I may not be able to make a persuasive case that this is so - I don't think what William Lane Craig is getting at really boils down to self-interest. At least, not in that quote.

    The intuition is much more to do with concepts like "meaning" and "worth". It's an intuition I share with him, and though I can't really articulate it yet, it really doesn't feel like it's in the territory of self-interest. It's not that, for instance, were evil-doers not punished, I would straightforwardly feel that I had wasted time being "good". It's not that I need some "incentive" to make the right moral choices. It's that - in some deeply intuitive, inarticulate sort of way - unless reality upholds the bindingness of morality, the whole moral project feels trivialised and undermined.

    Eh, that's obviously not enough to explain this perspective and sensibility. And I don't think Craig necessarily does enough articulating work either (I tend to find that with him on morality generally, he doesn't do much to help folks who don't already share his theist-friendly intuitions).

  2. Good evening, Martin. I certainly don't think everything he talks about in this area boils down to self-interest, but sometimes it sure looks like he doesn't think self-sacrifice is reasonable unless it either comes out as a good for the person doing the sacrificing, or (and maybe this is what you were talking about) has some kind of transcendent significance.

    Either way misses the possibility that a primary reason for self-sacrifice is nothing more or less than the benefit it brings to other people. No personal credits or cosmic moral concern is required for sacrifice to be a sensible choice for a caring person. Heck, I can understand some of the appeal in a worldview where cosmic concern exists beyond our human concern for each other, especially since our concern is too often outweighed by our lack of concern. What I find offensive from some people (whether or not WLC truly fits) is the notion that human suffering and human love are nothing without that cosmic layer.

    Thank you for commenting. You're always welcome!

  3. Thanks for your reply (and the assurance of welcome!).

    Earlier in the week I came up with a thought-experiment that shows one example of how moral motivation can legitimately be undermined by something like "significance", without that undermining simply consisting in a lack of reward-incentive for the agent. It also shows how, when the proper significance is lacking, the motivation for an action can be undermined even though the action still, in a sense, "benefits" the recipient.

    Imagine you reliably know that the world and all its inhabitants are going to be destroyed in 24 hours (and there is no afterlife). That is, in a day's time this whole human project will come to an end. How do you think this weighs upon your moral considerations? Obviously, moral choices that depend on long-term outcomes will be undermined. But I have intuitions to the effect that even short-term moral considerations now seem to have something counting against them.

    Perhaps you're normally the sort of person who helps old ladies carry their shopping. You see an old woman struggling. Perhaps you would still help her even while knowing that in 24 hours, this woman and her groceries will be obliterated. You can, after all, quite clearly still help her in that way. You can still benefit her. But would you not at least feel like the world's ending counted as a reason not to help her, even if not an ultimately decisive reason? Would you not feel, at least, that this fact saps some of your moral vitality?

    It's certainly clear to me, anyway, that the world's ending would have such an effect on me. And that its doing so isn't related to the fact that I won't be "rewarded". Rather it feels like some of the "point" is taken out of the moral activity.

    Now, granted, even if you do share these intuitions, that wouldn't say anything about the need for "cosmic" significance to properly motivate moral reasoning. I'm just trying to show that, for a person who finds themselves with intuitions that cosmic significance is morally important, it's intelligible how that need features in their moral motivations without it boiling back down to the need for reward, and that its absence could be undermining even when the benefit to the recipient is, prima facie, retained.
