Are We Accidentally Training Robots To Be Our Not-So-Benevolent Overlords?

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
— from Isaac Asimov’s Laws of Robotics

Let’s cut to the chase here: are we training computers and robots to essentially enslave us in, I dunno, maybe 20, 50, 5 years or so?

Hahahaha, I’m just messing with you—I don’t really believe that. 🙂

That said, I found this article from Mysterious Universe—“Robots Hitting Humans Is Not A Good Sign”—very interesting. It reports on international efforts by researchers to encourage robots to beat the living tar out of humans.

For example, there is the “arm-punching robot” of Germany. Spearheaded by the Robotic Systems Business Unit at the Fraunhofer IFF Institute, this pendulum-arm robot was designed to find out how hard said machine could punch a human before they die.

Obviously, volunteers are used in the experiments, nobody has been punched to death by a Maximum Overdrive-level homicidal robot, and there is actually a practical application to the work—the results can be used to help program robots in factories and whatnot so they don’t accidentally pummel their fellow skinbag workers to death.

Then there is the Swedish “face-slapping robot.” Ostensibly to be eventually used in the world’s most realistic Three Stooges android, this is an alarm clock with an actual robot hand built into it to slap you the hell awake.


Are we (you know: you, me, Gummo down the street, and the most brilliant minds in robotics) purposely encouraging these machines to violate science-fiction novelist Isaac Asimov’s First Law Of Robotics: “A robot may not injure a human being …”?

Or do humans have no choice in the matter but to try to be proactive about the whole robot-hurting-humans thing by studying how much deadly force they can exert? I mean, there was that robot that killed a factory worker last year in Germany (a story broken on Twitter by the prophetically named Sarah O’Connor).


And…well…if and when robots and computers ever do achieve a sort of sentience and independence, isn’t it kind of, for lack of a better term, “human-ist” of us to demand they follow Asimov’s laws? Even if asking them not to kill or maim humans is justified, what about the part regarding not letting a human come to harm? I mean, what if the robot in question just doesn’t want to get involved?

Furthermore, look at Law #2, about robots having to obey humans. This is basically compelling them to slave away for the skinbags. Surely a self-aware robot might have a problem with this?

Computer scientist and futurist Ray Kurzweil stated in TIME that it would be a moral imperative to give a “conscious” robot rights:

If an AI can convince us that it is at human levels in its responses, and if we are convinced that it is experiencing the subjective states that it claims, then we will accept that it is capable of experiencing suffering and joy. At that point AIs will demand rights, and because of our ability to empathize, we will be inclined to grant them.

In which case…the Three Laws of Robotics (which, you know, are a fictional thing, but people always cite them in these types of discussions) might need to be revised.

But we’ll be sure to keep the “harm no humans” thing in there.