The Lovelace Test Responds

And A Correction To My Previous Blog Post

12 August 2015

In my previous blog post, I wrote about two tests used to determine whether AI is intelligent. However, it turns out that I did not fully understand one of those tests: the Lovelace Test.

The Lovelace Test was named after Ada Lovelace (the first computer programmer), who famously argued that computers will only follow the instructions that programmers give them. According to my summary of this test, a program must meet the following criteria:

“1. The program must be able to design something ‘original’ and ‘creative’ (such as a story, music, idea, or even another computer program).

2. The program itself is a result of processes that can be ‘reproduced’. (In other words, it does not rely on some bug in the ‘hardware’ that the program is running on.)

3. The programmer must not know how the program actually works.”

I thought that this test was flawed because its criteria could be satisfied by bugs, or by programs so complex that a programmer would not be able to understand them.

But it turns out that my summary of this test was based on a Vice article, which neglected one additional criterion: the program must not be the result of a bug. In the original paper “Creativity, the Turing Test, and the (Better) Lovelace Test”, the authors specifically address bugs and explain why their presence does not imply intelligence.

“Sure, we all know that computers do things we don’t intend for them to do. But that’s because we’re not smart and careful enough, or — if we’re talking about rare hardware errors — because sometimes microscopic events unfold in unforeseen ways. The unpredictability in question does not result from the fact that the computer system has taken it upon itself to originate something. To see the point, consider the assembling of your Toyota Camry. Suppose that while assembling a bumper, a robot accidentally attaches a spare tire to the bumper instead of leaving it to be placed in its designated spot in the trunk. The cause of the error, assume, is either a fluke low-level hardware error or a bug inadvertently introduced by some programmers. And suppose for the sake of argument that as serendipity would have it, the new position for the tire strikes some designers as the first glorious step toward an automobile that is half conventional sedan and half sport utility vehicle. Would we want to credit the malfunctioning robot with having originated a new auto? Of course not.”

Understood this way, the Lovelace Test does indeed have meaning. It may be impossible to actually pass (as originally designed), since I do support Ada Lovelace’s contention that computer programs only do what we tell them to do. But I do not think that the ability to tell programs what to do automatically renders them non-intelligent. Even intelligent beings like humans have to receive instructions and learn from other intelligent beings.

But at least the test is rationally grounded, and it sets a higher bar than the Turing Test. So this blog post is an apology of sorts for misrepresenting the Lovelace Test in my previous post. If you agree with the test’s premises, then it serves as a valid way of determining intelligence. That being said, the passage on bugs raises three questions about the Lovelace Test:

1) Whom do we credit, then, for making the new car, if not the robot? We can’t credit the designers: they did not come up with the idea or build the prototype. We can’t credit the programmers or the low-level hardware error: they made mistakes and were not doing their jobs. The only entity that actually built the new auto was the malfunctioning robot, and is not creation a type of origination?

2) If machines do something new and unexpected, it is not a sign of machine intelligence, but of human stupidity. This seems the more disturbing to me, especially as we may soon build machines so complex that we cannot even begin to comprehend how they work. How would the “stupid” humans be able to handle these mechanical brutes (especially when it comes to debugging)?

3) These “complex” machines may, of course, accomplish their assigned tasks more efficiently than a human can. Does that make the machines’ non-intelligence “better” than human intelligence? Is “intelligence”, then, an overrated concept?
