As a programming student, I neither see nor understand why a robot would need rights. Robots are programmed to do what they do in order to function as intended by a human. I'm not saying it isn't possible to create a robot with high-functioning morals or thoughts, but it would take many mathematical geniuses coming up with billions of formulas, algorithms and processes to create this robot you speak of.
Creating a robot with a high amount of free will is disturbing, to say the least, because it would be completely unpredictable. Humans are born and raised, and usually become who they are because of factors throughout their lives. A robot would simply put together parts of its code and become whatever is selected, so... why would someone design a robot with immense choice? Surely a robot would be designed for a purpose?
I'm reminded of the movie I, Robot and Sonny. Sonny, unlike all the other new robots in the movie, was built to be more independent - by Lanning, who then ordered Sonny to kill him.
Think of online bots and what they're capable of. They were designed by someone with a purpose. The ones we've come across on the internet have mostly been advertising bots which spread the word about something in topics. Then think about the ones which are capable of fixing votes. Someone programmed the bot to fix the votes. Who's to say a programmer wouldn't code a bot with his or her own morals in order to gain another vote?
Alas, those were just my thoughts when first picking up this thread.
As I say, the robot would be unpredictable. If a robot were ever designed as I mentioned above, with the formulas, algorithms and processes to make all the moral decisions a human could, and was then built so that it could mentally "grow" as a human would (unlocking parts of its coding gradually as it made choices throughout its "upbringing"), then perhaps robots would need robot rights and would need to follow the same rules and regulations as humans do.
If I were a robot designed as such, then I'd need an education; I'd need to discover the difference between right and wrong, understand feelings, and learn how things work in order to expand and unlock further parts of my programming and make my own moral decisions. I'd also need the right to vote, yet it would be difficult to determine a good voting time/age for me and other robots like me. I wouldn't and shouldn't have the right to vote immediately after creation, because either A) there's a chance I'd have been programmed to vote a certain way, B) I wouldn't yet have discovered and/or unlocked the parts of my own personality needed to make a moral choice, or C) in most countries humans can't even vote until a certain age, so it wouldn't be right for a robot to vote from day one.
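The "unlock parts of the coding gradually" idea above could be sketched as code. This is a toy illustration only; the class name, module names and thresholds are all invented for the example, not taken from anywhere:

```python
# Hypothetical sketch: a robot whose moral "modules" unlock gradually
# as it accumulates lived experience, mirroring a human upbringing.
class MoralAgent:
    # A module becomes available once enough decisions have been made.
    UNLOCK_THRESHOLDS = {
        "right_vs_wrong": 0,   # available from creation
        "empathy": 10,         # unlocked after some experience
        "voting": 100,         # unlocked only after a long "upbringing"
    }

    def __init__(self):
        self.decisions_made = 0

    def record_decision(self):
        """Each choice the robot makes counts toward its 'growth'."""
        self.decisions_made += 1

    def unlocked_modules(self):
        return {
            name
            for name, threshold in self.UNLOCK_THRESHOLDS.items()
            if self.decisions_made >= threshold
        }

    def may_vote(self):
        # A freshly built robot cannot vote, echoing human voting ages.
        return "voting" in self.unlocked_modules()
```

So a newly created agent would fail `may_vote()`, and only gain the vote after enough recorded choices - which is exactly the A/B/C worry above in miniature.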
> Software versions: Consider a robot who commits a felony while running the aggressive "Personality A" program, but is running the mild-mannered "Personality M" when collared by the police. Is this a false arrest? Following conviction, are all existing copies of the criminal software package guilty too, and must they suffer the same punishment? (Guilt by association?) If not, is it double jeopardy to take another copy to trial? The robot itself could be released with its aggressive program excised from memory, but this may offend our sense of justice.

It wouldn't be a false arrest, as the robot had committed a crime and then switched personalities. That's like a human with severe multiple personality disorder having an episode, and then people ignoring it and setting them free. If a robot committed a crime, then either its programming would need to be checked and corrected or the robot would need to be destroyed. With humans, it'd be prison and/or counselling, with some countries having the death penalty.
> Killing a robot: The bottom line is it's hard to apply human laws to robot persons. Let's say a human shoots a robot, causing it to malfunction, lose power, and "die." But the robot, once "murdered," is rebuilt as good as new. If copies of its personality data are in safe storage, then the repaired machine's mind can be reloaded and up and running in no time - no harm done and possibly even without memory of the incident. Does this convert murder into attempted murder? Temporary roboslaughter? Battery? Larceny of time? We'll probably need a new class of felonies or "cruelty to robots" statutes to deal with this.

A robot could easily be repaired or replaced if a backup was kept of its memories or personalities, or if its "vital organs" were still intact. I don't condone taking a gun and shooting anything for no good reason, but a robot is a robot. Human life is much more valuable than tin and electronics, in my opinion. It would be destruction of human property, but no "life" would be taken.
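The backup-and-rebuild argument can be made concrete with a toy sketch. Everything here (class and function names, the idea of "safe storage" as a deep copy) is invented for illustration, assuming only that a robot's mind is data separable from its body:

```python
# Hypothetical sketch: if a robot's memories are backed up, "killing"
# it only destroys hardware; the mind can be reloaded into a new body.
import copy

class RobotBody:
    def __init__(self, memories):
        self.memories = memories
        self.functional = True

def backup(robot):
    """Snapshot the robot's memories into 'safe storage'."""
    return copy.deepcopy(robot.memories)

def shoot(robot):
    """Destroy the body; the stored snapshot is unaffected."""
    robot.functional = False
    robot.memories = None

def rebuild(snapshot):
    """Load the backed-up mind into a fresh body - 'no harm done'."""
    return RobotBody(copy.deepcopy(snapshot))
```

The restored body carries the same memories as before the shooting, which is precisely why "murder" becomes such a slippery charge here.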
> Levying criminal charges against robots: How should deviant robots be punished? Western penal systems assume that punishing the guilty body punishes the guilty mind - invalid for computers whose electromechanical body and software mind are separable. What is cruel and unusual punishment for a sentient robot? Does reprogramming a felonious computer person violate constitutional privacy or other rights?

It'll all be in the programming somewhere. The robot's memories could remain intact, and removing the bad programming wouldn't be a bad thing.
> The life and liberty of robots: Robots and software persons are entitled to protection of life and liberty. But does "life" imply the right of a program to execute, or merely to be stored? Denying execution would be like keeping a human in a permanent coma - which seems unconstitutional. Do software persons have a right to data they need in order to keep executing?

If a robot makes the choice to live, then let it live. If it wants to die, then I see no reason why it should be denied the right to end its life. It'd probably be able to reboot or restart itself somehow so that a new robot could be born - kind of like starting a new save file in a game, I suppose.
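The "new save file" analogy above could look something like this. A purely hypothetical sketch with invented names, assuming the old personality is archived rather than deleted:

```python
# Hypothetical sketch: a robot ending its "life" by archiving its
# current personality and booting a blank one in the same hardware,
# like starting a new save file in a game.
class RobotChassis:
    def __init__(self):
        self.save_files = []  # archived former personalities
        self.active = {"memories": [], "generation": 1}

    def end_life(self):
        """Archive the current personality and boot a fresh one."""
        self.save_files.append(self.active)
        self.active = {
            "memories": [],
            "generation": self.active["generation"] + 1,
        }
```

Whether the "generation 2" robot is a new person or the same one reset is, of course, exactly the legal puzzle the quoted passage raises.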
> Robo-entitlements: Can robot citizens claim social benefits? Are unemployed robo-persons entitled to welfare? Medical care, including free tuneups at the government machine shop? Electricity stamps? Free education? Family and reproductive rights? Don't laugh. A recent NASA technical study found that self-reproducing robots could be developed today in a 20-year Manhattan-Project-style effort costing less than $10 billion.

I don't see why not. However... what would a robot spend money on? Its electricity bill? This one is debatable, because humans have needs a robot wouldn't have.
Originally Posted by Alpha:
> Sexbots: According to sociologist and futurist Arthur Harkins at the University of Minnesota, "the advent of robots with sexual-service capabilities and simulated skin will create the potential for marriage between living and nonliving beings within the next twenty years." For very lonely people, humanlike robots that don't age and can work nonstop could become highly desirable as marriage partners for humans. In many instances, says Harkins, "such marriages may be celebrated with traditional wedding vows and country club receptions."

On sexbots... why not? Not my cup of tea, but it could be beneficial - a robot cannot contract or pass on STDs (unless its "equipment" isn't sterile). Worst-case scenario, it might give out an electric shock or otherwise harm the human involved. Two robots having sex is a little weird, though - would they even be able to need or feel sex?
As for robot love: I don't think even humans can come up with the formulas, algorithms and processes needed to design love. If a human wants to marry or have a relationship with a robot, the robot probably won't feel anything back unless told (i.e. programmed) to do so.
If it's possible for a robot to make a choice to commit murder, then why not rape (provided it's "equipped")?
It's a very complicated and mind-**** of a topic here, Alpha. +rep
Definitely made me think... my post is probably not what you were looking for, might be inaccurate, or difficult to understand, but it's just what I thought. xD;