Here's a thread no one saw coming. Let's debate the rights of (sentient) robots.
For the purposes of the discussion beyond this point, a "robot" is a machine that has been granted artificial intelligence rivalling, or even surpassing, that of humans. And not only intelligence, but also a state of consciousness: the robot is aware of itself, of why it was created, that it is similar to humans, and that it was created by humans to assist, replace or augment humans. For illustration, transpose your own consciousness into the guise of a robot. You are the same person you already are, except you are a robot. Hell, for kicks, just imagine that everyone who posts in this thread is actually a robot. Thus, "robots" in this discussion have a sense of pleasure and discomfort (broadly defined, and not necessarily physical), and they are able to make their own decisions (for the purposes of this thread).
Why is this discussion important? Well, should we wait until the science has reached this point and only then debate the rights of sentient robots, or should we debate it before they are created? Let's avoid future robot injustices (and the ensuing robot reprisals) by holding this important discussion now.
So, on to the interesting questions.
-----------------------------
Under present law, robots are just inanimate property without rights or duties. Computers aren’t legal persons and have no standing in the judicial system. As such, computers and robots may not be the perpetrators of a felony; a man who dies at the hands of a robot has not been murdered. (An entertaining episode of the old Outer Limits TV series, entitled “I, Robot,” involved a court trial of a humanoid robot accused of murdering its creator.) But blacks, children, women, foreigners, corporations, prisoners, and Jews have all been regarded as legal nonpersons at some time in history. Certainly any self-aware robot that speaks English and is able to recognize moral alternatives, and thus make moral choices, should be considered a worthy “robot person” in our society. If that is so, shouldn’t they also possess the rights and duties of all citizens?
If you, as a robot, can choose between different moral options, are you entitled to rights? Why can't you vote, when you can understand the difference between the policies of different political parties? Why can't you be tried in a criminal court, when you understand that you chose to murder someone, or to destroy private property?
But the grey areas only multiply from here:
Software versions: Consider a robot who commits a felony while running the aggressive “Personality A” program, but is running mild-mannered “Personality M” when collared by the police. Is this a false arrest? Following conviction, are all existing copies of the criminal software package guilty too, and must they suffer the same punishment? (Guilt by association?) If not, is it double jeopardy to take another copy to trial? The robot itself could be released with its aggressive program excised from memory, but this may offend our sense of justice.
Killing a robot: The bottom line is that it’s hard to apply human laws to robot persons. Let’s say a human shoots a robot, causing it to malfunction, lose power, and “die.” But the robot, once “murdered,” is rebuilt as good as new. If copies of its personality data are in safe storage, then the repaired machine’s mind can be reloaded and up and running in no time – no harm done, and possibly even without memory of the incident. Does this convert murder into attempted murder? Temporary roboslaughter? Battery? Larceny of time? We’ll probably need a new class of felonies or “cruelty to robots” statutes to deal with this.
Levying criminal charges against robots: How should deviant robots be punished? Western penal systems assume that punishing the guilty body punishes the guilty mind – invalid for computers whose electromechanical body and software mind are separable. What is cruel and unusual punishment for a sentient robot? Does reprogramming a felonious computer person violate constitutional privacy or other rights?
The life and liberty of robots: Robots and software persons are entitled to protection of life and liberty. But does “life” imply the right of a program to execute, or merely to be stored? Denying execution would be like keeping a human in a permanent coma – which seems unconstitutional. Do software persons have a right to data they need in order to keep executing?
Robo-entitlements: Can robot citizens claim social benefits? Are unemployed robo-persons entitled to welfare? Medical care, including free tuneups at the government machine shop? Electricity stamps? Free education? Family and reproductive rights? Don’t laugh. A recent NASA technical study found that self-reproducing robots could be developed today in a 20-year Manhattan-Project-style effort costing less than $10 billion.
Sexbots: According to sociologist and futurist Arthur Harkins at the University of Minnesota, “the advent of robots with sexual-service capabilities and simulated skin will create the potential for marriage between living and nonliving beings within the next twenty years.” For very lonely people, humanlike robots that don’t age and can work nonstop could become highly desirable as marriage partners for humans. In many instances, says Harkins, “such marriages may be celebrated with traditional wedding vows and country club receptions.”
Is it possible to love a robot? Should we recognise a robot-mance between two kindred robot-spirits?
Is it possible to rape, or be raped by, a robot?
--------------------------
I'm going to leave it here for the time being. I've been a bit lazy in this OP, quoting (in all instances of italics) from here: Legal Rights of Robots
For further information, try Wikipedia, or the Institute for the Future.
For the time being, consider the important message provided by two of the 21st century's most eminent philosophers: