Wednesday, November 17, 2010

Intuitions and Algorithms

Sentence A — My dog chased the ice cream truck.

Sentence B — My chased dog the ice cream truck.

Native English speakers immediately recognize Sentence A as valid and Sentence B as invalid. But if you ask them what exactly is wrong with Sentence B, their answers won't be so quick or unified; we can't necessarily explain how our intuitions produce the results they do. Noam Chomsky called this intuitive ability linguistic competence, then set out to discover explicit rules that generate the same answers. A more concrete way to think about this is:

What would it take to write a computer program that makes the same "valid" / "not valid" judgments as native English speakers?

Since computers don't operate on intuition, they have to be told how to process candidate sentences in a very explicit, step-by-step way until a solution is reached. Programmers call such a procedure an algorithm. There are algorithms to sort numbers, compare dates (usually of the calendar sort), and simulate physics in video games. If anyone ever writes an algorithm that kicks out the same linguistic answers as intuition does, we still might not know how the intuition works in our minds (there may be multiple ways to generate the same answers), but we would at least know what all the relevant factors are and how they interact with each other. Plus, non-native English speakers could ask a computer for any number of validity judgments without the computer getting annoyed. A computer could even be set up to generate millions of random but valid sentences.
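To make the task concrete, here is a minimal sketch in Python. The grammar and word list are my own toy assumptions (real English syntax is vastly harder); the point is only to show what an explicit, step-by-step validity judge looks like.

```python
# A toy "validity judge": check a sentence against a tiny hand-written
# grammar (Det Adj* N V Det Adj* N). The grammar and lexicon below are
# illustrative assumptions, not a real model of English.

LEXICON = {
    "my": "Det", "the": "Det",
    "dog": "N", "truck": "N",
    "ice": "Adj", "cream": "Adj",  # crude: treat "ice cream" as modifiers
    "chased": "V",
}

def parts_of_speech(sentence):
    """Map each word to its part of speech; unknown words map to None."""
    return [LEXICON.get(w) for w in sentence.lower().rstrip(".").split()]

def is_valid(sentence):
    """Return True only for sentences of the form: NP V NP."""
    tags = parts_of_speech(sentence)
    if None in tags:
        return False

    def noun_phrase(i):
        # NP = Det, then zero or more Adj, then N; return index past the NP.
        if i < len(tags) and tags[i] == "Det":
            i += 1
            while i < len(tags) and tags[i] == "Adj":
                i += 1
            if i < len(tags) and tags[i] == "N":
                return i + 1
        return None

    i = noun_phrase(0)
    if i is None or i >= len(tags) or tags[i] != "V":
        return False
    return noun_phrase(i + 1) == len(tags)

print(is_valid("My dog chased the ice cream truck."))  # True
print(is_valid("My chased dog the ice cream truck."))  # False
```

It accepts Sentence A and rejects Sentence B for a fully explicit reason: "chased" shows up where the toy grammar expects a noun. Scaling that up to all of English is the hard part.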


There is another kind of intuitive judgment that can be hard to explain: moral judgment.

Situation A — Bill sees Charlene poison David's drink, but declines to tell David. David dies.

Situation B — Bill poisons David's drink. David dies.

Most people will immediately judge Bill's inaction in Situation A as crummy, but Bill's action in Situation B as significantly worse. Why? As with the language judgments at the beginning of this post, the answers won't come as quickly, and they won't be as unified.

Imagine if we had a morality algorithm: a step-by-step procedure for making moral judgments that match up with our intuitions. We would know what all the morally relevant factors are and how they interact with each other. We could even program a computer to calculate moral judgments for any given situation.
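As a hedged sketch of what such an algorithm's inputs and outputs might even look like, here is a toy version in Python. The factors and weights are invented for illustration; identifying the real morally relevant factors is exactly the open problem.

```python
# A toy "morality algorithm": compute a badness score from explicit
# factors. The factors and weights are invented for illustration; the
# post's point is that we don't actually know the real ones yet.

def moral_badness(agent_caused_harm, agent_could_have_prevented, severity):
    """Higher score = harsher judgment of the agent."""
    if agent_caused_harm:
        return 1.0 * severity  # direct action weighs heavily
    if agent_could_have_prevented:
        return 0.4 * severity  # culpable inaction weighs less
    return 0.0

FATAL = 10  # severity of David's death, on an arbitrary scale

# Situation A: Bill merely declines to warn David.
print(moral_badness(False, True, FATAL))  # 4.0 -- crummy
# Situation B: Bill poisons the drink himself.
print(moral_badness(True, True, FATAL))   # 10.0 -- significantly worse
```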

Wait a minute! What about conflicts between moral intuitions? There are a few ways to handle that.
  • It could be the case that everyone's moral intuition works the same way, but we disagree about morally relevant facts. We would just need to make sure we put the true non-moral facts into the computer. For example, we might need to determine the truth of some religious claims.
  • Once we see the morality algorithm works great for most of our deeply held intuitions and tuning it further doesn't seem to help, we might start to trust the algorithm over our own intuitions for the remaining cases. (Philosophers will recognize this state as reflective equilibrium.)
  • If our moral intuitions do sometimes operate in fundamentally opposed ways even when processing the same beliefs about non-moral facts, it might make sense to talk about different moralities. The algorithm could still accurately predict an individual's judgments if it were given the additional inputs of which moralities are under consideration and how they interact with each other.
As I will explain in a forthcoming post,[1] I think the last option is the only realistic one. It would require something more like a set of algorithms, chosen according to which kind[s] of morality an individual is using to draw conclusions; the sketch below illustrates the general shape.
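A rough Python sketch of that "set of algorithms" shape. The named moralities, their verdicts, and the weighted combination are all placeholders I've invented, not an endorsement of any particular taxonomy.

```python
# Sketch of "a set of algorithms": each morality is its own judgment
# function, and a top-level judge combines whichever ones a given
# individual actually uses. All names and weights here are invented.

def consequence_based(s):
    # Judge only outcomes: a death is maximally bad, whoever caused it.
    return 10.0 if s["someone_died"] else 0.0

def action_based(s):
    # Judge what the agent did: acting badly beats failing to act.
    if s["agent_acted"]:
        return 10.0
    return 4.0 if s["agent_could_have_prevented"] else 0.0

def judge(situation, moralities):
    """Weighted sum over the moralities an individual subscribes to."""
    return sum(weight * m(situation) for m, weight in moralities)

situation_a = {"someone_died": True, "agent_acted": False,
               "agent_could_have_prevented": True}
situation_b = dict(situation_a, agent_acted=True)

# One individual's mix: half outcome-driven, half action-driven.
my_moralities = [(consequence_based, 0.5), (action_based, 0.5)]
print(judge(situation_a, my_moralities))  # 7.0
print(judge(situation_b, my_moralities))  # 10.0
```

Given a different individual's mix of moralities and weights, the same judge function would predict different verdicts, which is the sense in which one algorithm per morality can still cover everyone.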

[1] http://wordsideasandthings.blogspot.com/2010/11/what-is-morality-anyway-pt-1.html
