Artificial intelligence (AI) is one element that many SF writers like to work into their stories, particularly in robot-human interactions. The bad news is that this element is probably the hardest to get right and the furthest from becoming reality. On the third day of Worldcon 75, which ended last week, authors Anthony Eichenlaub and S.B. Divya, together with research scientist and software engineer Greg Hullender, formerly of Amazon and Microsoft, poured ice-cold water on the notion that human-like robots will populate the world any time soon.
Hullender probably had the most to say on the subject. He calls himself a computational linguist, and feels that the way readers talk about computer programs is wrong.
“IBM’s Watson is just a computer program for language, it’s not a person, a guy. In SF we talk about ‘Data’ [from Star Trek], like a person, a guy, but in AI it’s nothing like that. In ten to fifteen years’ time we could build something that talks, to which people would react the way we react to children or animals, in other words, at a very basic level. If you ask whether robots like those would need protection from people [who feel threatened by them], the answer is no. Someone who’d try to destroy a tool like that would be a sociopath.”
Would there really be human-like AIs in the future?
The panellists agreed that it is impossible to predict when the world will have AIs in great numbers, like “Data” or those in films such as Ex Machina and Westworld, or in other futuristic novels. Their own view: probably never.
“We cannot predict the future, so we do not know what information or materials we would need to build it, and therefore cannot say when we could build it. Every time we find a way around a real-world problem in AI, we are faced with a wall of unknowns, and no one so far has made a dent in real intelligence…Copying human neural networks is not possible and is perhaps not the way to go.” – Greg Hullender
As for making robots and machines react or perform in a way that makes them indistinguishable from humans, Hullender explained:
“I’ve seen how much work it takes to make software work, so doing that is just work on an impossible scale – it’s like believing if you mix all the chemicals that exist together, out would pop a human. It doesn’t work like that. Human ‘consciousness’ or ‘sentience’ cannot be mapped, located or recreated.”
So what does it take to write convincingly about AIs?
An author who got it right and followed the rules set out below is Jay Posey, in his novel Sungrazer, about an AI weapons system that goes off the rails. According to the panel, the rules are straightforward:
- Set the novel at least 1000 years in the future. By then something might have been created.
- Just state the facts of the plot, and do not try to explain how the AI works. Any explanation will be bad and implausible.
- Add real-world experiences…AIs fail in the most wonderful ways. Throw any real-world problem at one, and it will just fall over. Even in the future, there will be some little “quirk of language” that throws it off.
- Consider using something like Amazon’s Alexa, which gives limited responses and results and acts like an assistant. Any writer who portrays an AI as a person definitely gets it wrong.
- Do not infuse your AI character with feelings, even though SF needs to draw on emotions to work.
- Avoid the term “Artificial Intelligence”; use “Augmented Intelligence” instead, since every project being developed in the industry aims to produce a tool, not a person.
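The panel’s points about limited responses and brittleness on quirks of language can be sketched in a few lines. The snippet below is a hypothetical illustration, not any real product’s code: a toy assistant that only handles utterances it has seen verbatim, and falls over on any rephrasing.

```python
# Toy keyword-matching "assistant" (all intents and phrasings invented
# for illustration). It gives limited responses, like a real voice
# assistant, and fails on any rewording of a known request.

def respond(utterance: str) -> str:
    """Look up an utterance against a few hard-coded intents."""
    # Normalise lightly: lowercase and strip trailing punctuation/spaces.
    text = utterance.lower().strip("?!. ")
    intents = {
        "what time is it": "It is 12:00.",
        "turn on the lights": "Lights on.",
    }
    # Exact-match lookup: any paraphrase falls through to the fallback.
    return intents.get(text, "Sorry, I don't understand.")

print(respond("What time is it?"))  # matched intent: "It is 12:00."
print(respond("Got the time?"))     # same meaning, but it falls over
```

Even this trivial normalisation step hints at the problem: every “quirk of language” a user produces needs its own special handling, which is why real systems stay tools rather than people.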