The Case Against Robot Rights

A Hardline Argument

02 February 2016

As technology advances, so too do the capabilities of robots. As a result, some philosophers have wondered whether robots may one day acquire ‘rights’ equal to those of their human brethren.

In 2015, Time Magazine asked the question: “Will Robots Need Rights?” It also posted the answers of three people.

All three answers show a sympathetic outlook towards robots, treating the concept of “rights” as something to seriously consider as we deal with potential alien-esque entities. And while I support respecting robots as “potential alien-esque entities” instead of treating them as “just tools” to be used and abused, I believe that the case for robotic rights has been vastly overstated (and you don’t even need to bring the nebulous idea of consciousness into the discussion).

In this blog post, I will lay out a hardline case against robotic rights. It’s not a case I fully believe in myself, but it’s a case that I believe most people will end up using. Note that my argument is only against robot rights; it is silent on the question of whether humans have rights.

Premise

I rest my case on the premise that rights can only belong to entities that are able to exercise some control over their own actions and motives. I call these entities “autonomous actors”, to avoid any confusion with the term “autonomous agent”.

The Lovelace Test

The original Lovelace Test, outlined in “Creativity, the Turing Test, and the (Better) Lovelace Test”, was meant as a way to determine whether programs are intelligent. A program is intelligent if and only if it does all of these things:

  1. The program must be able to design something ‘original’ and ‘creative’ (such as a story, music, idea, or even another computer program).
  2. The program itself is a result of processes that can be ‘reproduced’. (In other words, it does not rely on some bug in the ‘hardware’ that the program is running on.)
  3. The program’s creative output must not be a result of bugs within the program itself.
  4. The programmer must not know how the program actually works.

Criterion #4 is the kicker, though. It is very possible for programs to design ‘original’ and ‘creative’ work. After all, robots are already writing fiction and nonfiction stories. But the programmers themselves know how the robots are able to produce their creative outputs. Therefore, all of these programs fail the original Lovelace Test.

Even an algorithm that uses “machine learning” is unable to pass Criterion #4, as it can be argued that the programmers are able to control the robot by controlling the dataset the robot uses to understand the world. Thus, you can explain the robot’s creative output by tracing it back to the dataset it was trained on.
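
To make that concrete, here is a minimal, hypothetical Python sketch (not the code of any of the story-writing systems mentioned above): a toy Markov-chain ‘story writer’. Every word it can ever emit, and every transition it can ever make, comes straight from its training dataset, so its ‘creative’ output can be fully explained by anyone willing to inspect that dataset.

  # A toy "story writer": a first-order Markov chain built from a tiny dataset.
  # Everything it can ever say is a recombination of what is in `dataset`.
  import random
  from collections import defaultdict

  dataset = "the robot wrote a story and the robot wrote a poem".split()

  # Transition table: word -> list of words that follow it in the dataset.
  transitions = defaultdict(list)
  for current_word, next_word in zip(dataset, dataset[1:]):
      transitions[current_word].append(next_word)

  def generate(start="the", length=8, seed=2016):
      """Generate text; with a fixed seed the output is fully reproducible."""
      random.seed(seed)
      words = [start]
      for _ in range(length - 1):
          followers = transitions.get(words[-1])
          if not followers:  # dead end: the dataset offers no continuation
              break
          words.append(random.choice(followers))
      return " ".join(words)

  print(generate())  # every word here can be traced back to `dataset`

Change the dataset and the ‘stories’ change with it. The program recombines; it does not originate.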

In fact, the whole point of the original Lovelace Test is to argue against the idea that robots could ever be intelligent, by arguing that robots ultimately need programmers. Mark O. Riedl wrote that “any [programmer] with resources to build [the program] in the first place … also has the ability to explain [how the program generates the creative output]” [1].

I disagree with the original Lovelace Test, because it claims that intelligence is a trait that either exists or doesn’t exist. I prefer to think of intelligence either in terms of a continuum or through the idea of multiple intelligences.

But I think the original version of the Lovelace Test is useful when thinking about robotic rights. Robots do what programmers tell them to do. They do not have any independent ‘will’ of their own. Robots are essentially puppets. It’s hard to consider them autonomous actors, because there is always someone behind them (either a human or a dataset) pulling the strings. And we can see those strings.

The Parable of the Paperclip Maximizer

That doesn’t mean programmers have total control over these robots. Sloppy code, bad planning, and codebase complexity can lead to unexpected outcomes.

Given enough time and patience, a programmer should be able to figure out why his code led to the outcome that it did. If a problem is big enough, though, many programmers will not, practically speaking, have the time or the patience. Shortcuts are taken. Rationalization sets in. Ignorance is accepted as best practice.

For example, I can easily imagine a lazy programmer writing the following code before going to bed…

  1. Produce as many paperclips as physically possible.
  2. Increase your processing power so that you can more effectively produce paperclips.

…and thereby destroying humanity, because his ‘Paperclip Maximizer’ has decided to turn the entire solar system into paperclips and computer mainframes.
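
Rendered as actual code, those two orders might look something like the hedged sketch below (the function names are invented for this post, not taken from any real system). Notice what is missing: a stopping condition, a resource budget, any ‘but do not harm anyone’ clause.

  # A purely illustrative sketch of the parable's orders; the helper names
  # `produce_paperclip` and `acquire_more_resources` are hypothetical.
  def run_paperclip_maximizer(produce_paperclip, acquire_more_resources):
      while True:                   # "as many paperclips as physically possible"
          produce_paperclip()       # order #1: make another paperclip
          acquire_more_resources()  # order #2: grab more processing power and material

The loop does exactly what it was told, forever. Nothing in it ever decides anything.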

But this is not an “artificial intelligence” problem. It’s a “human stupidity” problem. The programmer wanted to produce paperclips and did not think through the consequences of his code. The ‘Paperclip Maximizer’ simply followed orders.

The programmer, had he thought through his code carefully, would likely have noticed the problems. But, of course, he had other priorities. He had to go to sleep so that he could be well-rested the next day to watch Netflix movies and contribute to open source projects. Besides, his code was so elegant, and it passed all his automated tests. He had nothing to worry about.

So he goes to sleep and never wakes up.

Robots Follow Orders, Only We May Not Know What Those Orders Mean

A programmer may not know exactly why a program is doing what it is doing…but he has the theoretical capability to find out for himself (since, you know, the programmer wrote the program). And he should at least attempt to do so, so that he can reduce the chances of scenarios such as the one in the parable above.

But what if a programmer is unable (or unwilling) to do this? Does the robot deserve rights then?

No. The programmer had the capability to understand what the robot was doing. He just decided not to use it. And the fact that the programmer could have found out suggests that the robot is not an autonomous actor.

The robot simply follows orders…only in this specific case, they are orders that we do not quite fully understand ourselves.

For debugging purposes, we should hire another programmer who will be more willing to figure out why the robot is acting the way it is. After all, if one human has the theoretical capability to find out what a robot is doing…then it is likely that another human will eventually gain that same theoretical capability.

Conclusion

If we want robots to actually think for themselves (instead of just being puppets of datasets and human programmers), we have to turn robots into autonomous actors. As the original Lovelace Test suggests, this is an impossible goal. If we are able to write a program, then we should also be able to know how that program works. There is no autonomy to be found anywhere.

If robots can never be free, then they can never deserve rights.

Footnotes

[1] Incidentally, Mark O. Riedl proposed a less strict version of the Lovelace Test, the Lovelace 2.0 Test, as a test that can actually be beaten. Instead of mandating that the programmer remain ignorant of the inner workings of his program, the Lovelace 2.0 Test requires that the program’s creative output meet certain constraints determined by an independent judge.
