EPICAC XIV Shows the Potential Dangers of Human-Machine Collaboration
A Parable From Player Piano
15 October 2016
Recently, the term “Human-Machine Collaboration” has come into vogue, describing a sort of alliance between mankind and its tools to accomplish a specific purpose. There has been much talk about the potential of such collaboration. There has been less talk about its dangers.
Creativity has always been a collaborative act. We can now increasingly count machines as members of our team as well.—The Coming Human-Machine Partnership in Creativity
The debate between advocates of artificial intelligence (AI) and defenders of human-centric approaches presents a false dichotomy. Machines can certainly help solve the problems facing humans, but they can rarely do so alone. To be most effective, machines must learn from people and about people. Creating and implementing accurate AI systems requires the input of human knowledge. —How human-machine collaboration has automated the data catalog
We will train machines that continue learning through interacting with other humans in different environments. In turn, the machines will train us on how to train them to get the best value out of human-machine collaboration.—Human-Machine Collaboration
At Tamr, “Machine Driven, Human Guided” goes well beyond a pithy tagline [not that we have anything against pith]. It’s our founding philosophy, the idea that humans and machines should not just collaborate, but iterate on problems that can’t be solved manually or through automation alone. In other words, algorithms v. humans is not an either-or decision. “Together” is possible, with the proper technology and process.—Tamr’s Take … On Human-Machine Collaboration
In a sense, a math model is the equivalent of a metaphor, a descriptive simplification. It usefully distills, but it also somewhat distorts. So at times, a human helper can provide that dose of nuanced data that escapes the algorithmic automaton. “Often, the two can be way better than the algorithm alone,” Mr. King said.—If Algorithms Know All, How Much Should Humans Help?
Player Piano is Kurt Vonnegut Jr.’s first novel, set in a dystopian United States. It is not an enjoyable novel to read, especially when compared to his later novels. The book’s text has not aged well, its racism and sexism rather distracting. But the book’s ideas are still relevant today. I once had a plan to write a hard science-fiction novel about the dangers of technological progress uncontrolled by societal regulation, but such a novel would have just been a poorly-written, politically-correct version of Player Piano.
But Player Piano’s dystopian society also embodies a darker, more pessimistic view of “human-machine collaboration”. This view is presented in the form of the supercomputer EPICAC XIV.
“This is EPICAC XIV,” said Halyard. “It’s an electronic computing machine - a brain, if you like. This chamber alone, the smallest of the thirty-one used, contains enough wire to reach from here to the moon four times. There are more vacuum tubes in the entire instrument than there were vacuum tubes in the State of New York before World War II.”
EPICAC XIV is a Narrow AI. It is narrowly designed to handle a specific task - resource allocation in the United States. You would not ask EPICAC XIV to write a poem, to bake a cake, or to drive a car. And yet…
EPICAC XIV, though undedicated, was already at work, deciding how many refrigerators, how many lamps, how many turbine-generators, how many hub caps, how many dinner plates, how many door knobs, how many rubber heels, how many television sets, how many pinochle decks - how many everything America and her customers could have and how much they would cost. And it was EPICAC XIV who would decide for the coming years how many engineers and managers and research men and civil servants, and of what skills, would be needed in order to deliver the goods; and what I.Q. and aptitude levels would separate the useful men from the useless ones, and how many Reconstruction and Reclamation Corps men and how many soldiers could be supported at what pay level and where, and . . .
EPICAC XIV is not Skynet. It still requires human lackeys. Humans have to gather the Big Data that EPICAC XIV uses for its calculations, and humans have to carry out EPICAC XIV’s recommendations. So…why exactly do you need EPICAC XIV as a middleman? Why not just use humans to perform the calculations? Because humans are just plain inferior to EPICAC’s genius.
Given the facts by human beings, the war-born EPICAC series had offered the highly informed guidance that the reasonable, truth-loving, brilliant, and highly trained core of American genius could have delivered had they had inspired leadership, boundless resources, and two thousand years.
… EPICAC XIV could consider simultaneously hundreds or even thousands of sides of a question utterly fairly, that EPICAC XIV was wholly free of reason - muddying emotions, that EPICAC XIV never forgot anything - that, in short, EPICAC XIV was dead right about everything.
There is an argument to be made that EPICAC XIV is engaging in “human-machine collaboration”. After all, it is humans who built EPICAC XIV, who curate the data given to it, and who carry out its recommendations. But this is not an equal collaboration, as this dialogue between Paul and Katharine demonstrates.
As Paul passed Katharine Finch’s desk on his way into his office, she held out his typewritten speech. “That’s very good, what you said about the Second Industrial Revolution,” she said.
“Old, old stuff.”
“It seemed very fresh to me - I mean that part where you say how the First Industrial Revolution devalued muscle work, then the second one devalued routine mental work. I was fascinated.”
…
“Do you suppose there’ll be a Third Industrial Revolution?”
Paul paused in his office doorway. “A third one? What would that be like?”
“I don’t know exactly. The first and second ones must have been sort of inconceivable at one time.”
“To the people who were going to be replaced by machines, maybe. A third one, eh? In a way, I guess the third one’s been going on for some time, if you mean thinking machines. That would be the third revolution, I guess - machines that devaluate human thinking. Some of the big computers like EPICAC do that all right, in specialized fields.”
“Uh-huh,” said Katharine thoughtfully. She rattled a pencil between her teeth. “First the muscle work, then the routine work, then, maybe, the real brainwork.”
“I hope I’m not around long enough to see that final step. …”
EPICAC XIV handles all of the “brainwork” in thinking about the problem of resource allocation, leaving the humans to worry about the menial tasks necessary to keep EPICAC XIV running. There is collaboration, but the human involvement in it is token and marginal at best.
And, as a result, humanity suffers from an inferiority complex, going so far as to implicitly trust the machine’s choices…
President Lynn was explaining what EPICAC XIV would do for the millions of plain folks, and Khashdrahr was translating for the Shah. Lynn declared that EPICAC XIV was, in effect, the greatest individual in history, that the wisest man that had ever lived was to EPICAC XIV as a worm was to that wisest man.
For the first time the Shah of Bratpuhr seemed really impressed, even startled. He hadn’t thought much of EPICAC XIV’s physical size, but the comparison of the worm and the wise man struck home. He looked about himself apprehensively, as though the tubes and meters on all sides were watching every move.
And so, the Shah of Bratpuhr decides to test the machine’s capabilities by asking it a question…
The Shah turned to a glowing bank of EPICAC’s tubes and cried in a piping singsong voice:
“Allakahi baku billa,
Moumi a fella nam;
Serani assu tilla,
Touri serin a sam.”
“The crazy bastard’s talking to the machine,” whispered Lynn.
“Ssssh!” said Halyard, strangely moved by the scene.
“Siki?” cried the Shah. He cocked his head, listening. “Siki?” The word echoed and died - lonely, lost.
“Mmmmmm,” said EPICAC softly. “Dit, dit. Mmmmm. Dit.”
The Shah sighed and stood, and shook his head sadly, terribly let down. “Nibo,” he murmured. “Nibo.”
“What’s he say?” said the President.
“‘Nibo’ - ‘nothing.’ He asked the machine a question, and the machine didn’t answer,” said Halyard. “Nibo.”
“Nuttiest thing I ever heard of,” said the President. “You have to punch out the questions on that thingamajig, and the answers come out on tape from the whatchamacallits. You can’t just talk to it.” A doubt crossed his fine face. “I mean, you can’t, can you?”
“No sir,” said the chief engineer of the project. “As you say, not without the thingamajigs and whatchamacallits.”
It is almost irrelevant what question the Shah actually asked[1]. The problem here is that EPICAC XIV couldn’t handle arbitrary input. That’s perfectly fine, because it wasn’t designed to handle that input. It was, after all, built by fallible human beings (who didn’t even anticipate that such input could be given).
But the fact that flawed humans built EPICAC means that EPICAC itself is not perfect, and therefore flawed. Which, again, is perfectly fine. It was built to handle the problem of resource allocation…answering riddles from the Shah is simply not suited to its talents. You don’t judge a fish by how fast it can climb trees.
The problem is with the humans. They praised EPICAC as the “greatest individual in history”, as being “dead right about everything”, etc., etc. They glorified vacuum tubes for deciding how many refrigerators should be built for the next fiscal year. Resource allocation is a very important job, true, but the hype that humans have built around EPICAC is excessive. In fact, the hype so disgusted the Shah that he called EPICAC a “baku” (false god).
This hype is bad for a very simple reason: it blinds humanity to any mistakes that EPICAC can make. If you believe that EPICAC is supremely wise, then you will not question EPICAC’s choices. In fact, the humans don’t even bother asking EPICAC why it made the choices it did, making it hard to know whether the reasons for those choices were valid to begin with. EPICAC may be much better at resource allocation than humans, but that does not mean it will make the right choice every single time.
And even if EPICAC makes the right choices, it doesn’t mean the outcomes will be what we want. EPICAC’s economic planning is unbiased, but it does have end-goals in mind, and those end-goals reflect the values of EPICAC’s programmers. The engineers believe in maximizing the standard of living of the average citizen through economic growth and the mass production of material goods. EPICAC has succeeded at this task admirably. However, EPICAC was not focused on making people happy. It wasn’t programmed to do that.
Due to the proliferation of machine labor, the vast majority of Americans are unemployed…and unemployable. EPICAC bears some of the blame for this, due to its strict enforcement of “I.Q. and aptitude levels” to separate the wheat from the chaff, but most of the blame lies with its human collaborators, who strongly supported technological progress regardless of the social costs. Society has remained somewhat stable, thanks to the make-work jobs created by the Army and the Reconstruction and Reclamation Corps (funded by taxes on private enterprises), but the average citizen nurses a grudge against the machinery that has taken away his dignity and sense of purpose. The Ghost Shirt Society, a radical terrorist group, sought to capitalize on those grievances in its manifesto:
“Man has survived Armageddon in order to enter the Eden of eternal peace, only to discover that everything he had looked forward to enjoying there, pride, dignity, self-respect, work worth doing, has been condemned as unfit for human consumption.”
Later on in the novel, the Ghost Shirt Society launched a nationwide uprising against machinery. The uprising successfully destroyed three American cities and came very close to eliminating EPICAC itself. It’s safe to say that mistakes were made.
Could EPICAC and its human collaborators have prevented this disaster? Yes…but only if they had had a different end-goal in mind: maximizing “human dignity” instead of mere “standard of living”. But while that goal might have stopped this specific uprising, it could have caused other problems as well. How would you measure human dignity, anyway? What sorts of trade-offs must be made in maximizing it? Are these trade-offs we are really prepared to make? Do we even want to maximize human dignity, or do we instead want a balance between a variety of different, even contradictory, goals? You could imagine a different terrorist group, the Adam Smith Society, writing a new manifesto and planning a new nationwide uprising:
“Man has survived Armageddon in order to enter the Eden of eternal dignity, only to discover that everything he had looked forward to enjoying there, wealth, freedom, economic growth, material goods worth owning, has been condemned as unfit for human consumption.”
At the end of the day, humans…fallible humans…have to make choices when building their machines and delivering their data. EPICAC did nothing more than carry out those choices to their logical, chilling conclusions. And EPICAC will never be able to please everyone. Sacrifices must be made.
If we can’t agree on how to fix the machine collaborator in this creative partnership, could we fix the human collaborators instead? The human collaborators could acknowledge the possibility of machine error and try their best to minimize it. Rather than blindly trust the machine, humans can evaluate the machine’s advice and decide whether to accept it, modify it to fit human whims, or decline it outright. Of course, this approach has its own faults. Humans are fallible, and they may unwisely reject the advice of a machine.
In the real world, an algorithm designed to hire employees did a much better job than managers who treated the algorithm as an advisory tool and used “discretion” to override it, yet human-machine teams can win chess games more reliably than either a human or a machine alone (though this advantage is shrinking). Is the problem of “resource allocation” like hiring (in which case it may still be better to blindly trust the machine, because the human is unable to add any value) or like chess (in which case we do need humans at the helm, ready to engage in collaboration)? Or are we willing to accept a little inefficiency because the consequences of too much efficiency may be too great?
Obviously, EPICAC is just an idea in a science-fiction novel. But the broader questions EPICAC raises about human-machine collaboration are still relevant today. These are hard questions, but nobody ever said collaboration was easy. Human-machine collaboration is not a sure-fire path to victory or prosperity, nor should it be a mere buzzword used to justify AI investment. There are many questions that a human collaborator has to wrestle with, and the odds of getting those questions wrong are high. The last thing today’s human collaborators should do, though, is behave like EPICAC’s human collaborators – blindly trusting the machine and obeying its every whim. To do so would be to abdicate their responsibilities and to claim that humanity itself is obsolete.
“At no expense whatsoever to you,” said Halyard, “America will send engineers and managers, skilled in all fields, to study your resources, blueprint your modernization, get it started, test and classify your people, arrange credit, set up the machinery.”
The Shah shook his head wonderingly. “Prakka-fut takki sihn,” he said at last, “souli, sakki EPICAC, siki Kanu pu?”
“Shah says,” said Khashdrahr, “ ‘Before we take this first step, please, would you ask EPICAC what people are for?’ “
[1] If you are curious, the Shah asked EPICAC a riddle. This is the translation of the riddle, as given by Khashdrahr.
“Silver bells shall light my way, And nine times nine maidens fill my day, And mountain lakes will sink from sight, And tigers’ teeth will fill the night.”
The answer to this riddle is posted in this Reddit thread.