I got the strange feeling someone was looking at me... weird. Well anyways...
Without even being familiar with any of these other theories, the first qualifier for moral status that came to my mind was sentience. Apparently that has been a relatively common first thought among philosophers. I'd also wager that is what earlier philosophers meant by the capacity to reason; they simply lacked knowledge of the brain functions of lower species, which couldn't come right out and tell them they were sentient. The religious beliefs of the era most associated with the reason theory held that man was the only intelligent being created, and that animals were pretty much food with legs, placed here for us to hunt, eat, and make jackets out of. Had Socrates and Plato been born in Europe in the 1900s, they'd probably be on board with the sentience theory as well; they were both highly rational people, and the evidence indicates they were overall ethical people too. They simply lacked much of the information we now take for granted.
Honestly, I'm not really seeing the problem here, but that's because I don't factor emotion into my views. I don't see any flaw in the fact that the sentience rule leaves a few animals out. The animals that get left out don't care; they are, by nature, incapable of caring that they are left out. Saying a baby with no brain has rights is objectively no different than saying my car has rights. The two have the same intellectual capacity, the same sentience, the same desire to be protected by morality. The only difference is that the baby has a cute face that causes people to develop an emotional attachment to it, making some people uncomfortable with treating it as a non-sentient being.

The primary reason we try to extend human rights to nonhumans is that we anthropomorphize them. That is, we see something, we identify that it has life, and we then try to find common ground between its kind of life and our own. We have an inherent belief that all life is somehow like us. This is simply not the case. In the case of animals, we look into their big puppy-dog eyes and see mental functions that may or may not be present. Now, I'm not pro cruelty to animals or anything, but the fact that they have life does not necessarily mean they have the same form of life, requiring or desiring the same ethical protection.

In the case of the jellyfish, we look at an animal and think of our puppies, because they're animals too. We look at the obvious thing in common and ignore the world of difference separating them. Our puppy whimpers and is hurt if you kick it. It has some level of emotional reaction to the experience, which will affect its mental growth and change the course of its mental development for the worse. The jellyfish, on the other hand, doesn't give a flying **** if you kick it. It isn't even aware of the fact. Kicking a puppy is cruel; kicking a jellyfish is no more cruel than kicking a rock. Neither the jellyfish nor the rock notices or cares that you have kicked it. If no torment has been inflicted, one can hardly consider the action cruel. There is no more need for moral rules to prevent cruelty against something which by its nature cannot be treated cruelly than there is to impose a penalty for violating the unbreakable laws of physics. The rule CAN'T be broken, because the jellyfish CAN'T suffer. One can't attribute all human qualities to anything that has some commonality with a human. The puppy is the same as we are in the ways that matter; the jellyfish is not. The exclusion of the jellyfish is logical.
Now, the case of the baby with no brain is a bit more difficult to accept. While it has MUCH more in common with a human than the jellyfish does, it too is NOT the same in the way that matters. It possesses some properties of life, a human appearance, and many structural similarities to a human, but the one thing it is inescapably missing is humanity. While a biology textbook might still classify it as a living human being, it lacks the essential qualities of a person, primarily sentience. It is therefore not in need of the same moral protection as a person. But the emotional human species doesn't like to accept that, because we see so many things we have in common with it, and not enough of the things we do not. The qualities that MATTER here are the ones we are NOT seeing.
Computers have many commonalities with the human brain. Why, then, do people not think that computers are entitled to moral protection? Because they don't look alive. A computer doesn't have the responses to stimuli we expect of a life form we try to anthropomorphize. It has some things in common with us, but not the right things to evoke that human response.
I recently made a post in a thread titled Murder, Arson, and Jay-Walking; my analysis of rules on that topic applies quite nicely here. When determining whether or not a rule need be followed, I ask myself two questions.
First, for what reason does this rule exist?
And second, does this reason currently apply to me?
For what reason do we have moral rules? To protect the interests of those covered by them. Does the thing the rules now attempt to protect HAVE interests to protect? If not, the rule does not currently logically apply. This line of reasoning is likely where the sentience theory came from.
Insofar as I agree with morality in general (which is to say not at all; I'm a moral nihilist), I would tend to agree that only sentient beings need be protected by it.