Narrative Science's AI Beliefs

Revealing Hopes (and Fears)

22 August 2015

“Practical Artificial Intelligence for Dummies: Narrative Science Edition” is a free e-book that provides a quick introduction to the AI field. It is also a treatise outlining Narrative Science’s approach to AI.

One of the taglines of this e-book is that it will “demystify the hype, fear, and confusion about AI”. Though the book primarily focused on explaining AI, it did pay some attention to the ‘fears’ within popular culture, and attempted to address them in a “roundabout” way, without necessarily rebutting any specific fear.

Narrative Science believed that AI only needs to display intelligent behavior; it does not necessarily need to think like a human being. As a result, Narrative Science concluded that technologies humans already take for granted are AI. Voice recognition, recommendation engines, autocomplete, and the like are all commonly accepted within society, and yet all of these algorithms display intelligent behavior.

But these AI programs do not seek to take over the world or render people unemployed. Instead, these programs are just tools, there to help humans accomplish their day-to-day tasks. People are still in control and still receiving regular paychecks; all the AI did was make their lives easier. Narrative Science concluded that this trend would continue, and that future AI programs would simply help humans instead of displacing them.

If that was the crux of Narrative Science’s argument, then I don’t think this book’s philosophy would have been interesting enough to blog about. But Narrative Science also, surprisingly, expressed some concern about AI proliferation. Narrative Science fears black boxes: AI programs that are able to give answers but fail to provide explanations for why they came up with those answers.

Black boxes are not hypothetical concerns. Self-learning entities are being built and used already, such as “evidence engines” (IBM’s Watson) and deep-learning networks (‘neural networks’). These entities are able to learn about the world, but they cannot tell other people how they learned it. You ask them a question, and all they give you is an answer…with no context or reasoning behind it.

Black boxes ruin trust. If you do not know how the AI came up with the answer, you cannot trust whether the answer is actually correct[1]. Without trust, AI loses its potential as a useful tool for humanity. A calculator that gives the wrong answer every 30 minutes is not a very useful calculator.

Narrative Science claims that the best way to preserve trust in AI (and to keep its status as a useful tool) is to enable the AI to communicate its internal thought process to human beings. That way, we can evaluate the AI’s thought process and decide whether it is correct or incorrect. Narrative Science has already implemented a “logic trace” within its own AI program, “Quill”, allowing people to understand why the program made the choices that it did. Quill is currently being used to write newspaper articles and financial reports.
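To make the idea concrete, here is a minimal sketch (in Python) of what a “logic trace” might look like in principle. This is not Narrative Science’s Quill; the function, thresholds, and templates below are invented for illustration. The point is simply that the program returns its reasoning steps alongside its answer, so a human can audit how the conclusion was reached.

```python
# Hypothetical sketch of a "logic trace": the program records each rule it
# applies while producing an answer, so a human can audit the reasoning.
# All names, thresholds, and templates here are illustrative assumptions,
# not Narrative Science's actual implementation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TracedAnswer:
    answer: str
    trace: List[str] = field(default_factory=list)  # human-readable reasoning steps

def summarize_quarter(revenue: float, prior_revenue: float) -> TracedAnswer:
    """Pick a headline for a financial summary, logging why each choice was made."""
    result = TracedAnswer(answer="")
    change = (revenue - prior_revenue) / prior_revenue
    result.trace.append(
        f"Computed revenue change: ({revenue} - {prior_revenue}) / {prior_revenue} = {change:.1%}"
    )

    if change >= 0.10:
        result.answer = "Revenue grew sharply this quarter."
        result.trace.append("Change >= 10%, so the 'sharp growth' template was selected.")
    elif change >= 0.0:
        result.answer = "Revenue was roughly flat this quarter."
        result.trace.append("Change between 0% and 10%, so the 'flat' template was selected.")
    else:
        result.answer = "Revenue declined this quarter."
        result.trace.append("Change < 0%, so the 'decline' template was selected.")
    return result

if __name__ == "__main__":
    report = summarize_quarter(revenue=120.0, prior_revenue=100.0)
    print(report.answer)
    for step in report.trace:
        print(" -", step)  # the "logic trace": every decision comes with its reason
```

A reader who disagrees with the headline can point to the exact step where the logic went wrong, which is precisely the kind of accountability a black box cannot offer.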

As black boxes are used in more and more critical industries, Narrative Science’s apprehension about them will only grow. Narrative Science recommends that other companies also implement “logic traces” in their own AI programs as a way to counteract the possibility of ‘black boxes’. Already, Google has tried visualizing how its own black boxes work through its “Google Dreams”. More work will have to be done to deal with the dangers of black boxes.

[1]The alternative is to implicitly trust the AI’s thought process…but when the AI inevitably makes mistakes, you will not know how to prevent it from making more in the future.

Correction - On January 17, 2016, this article was updated to correct mistaken impressions about Narrative Science.
