Artificial intelligence and other advanced learning machines have grown dramatically more sophisticated and widely used over the last decade. Some development efforts have focused on perfecting machines’ ability to complete tasks or fill roles prone to a high degree of human error, while others have focused on building AIs that “learn” and/or communicate in ways indistinguishable from humans. In recent years, though, these two divergent efforts have found common ground: creative works. And those efforts to train learning machines to produce written and visual output that mimics the creative work of humans have taught us a valuable lesson: AIs do well, intentionally or otherwise, at crafting horror.
In 2016, MIT’s Media Lab developed the Nightmare Machine, a deep-learning algorithm that transforms mundane images of world landmarks into creepy, otherworldly sights. Some of its output bears the telltale signs of AI creation (a few images feature what appear to be the silhouettes of bare trees in urban settings or in the sky, for instance). Some of the images are more fantastical than nightmarish. But overall, the Nightmare Machine produced convincing and genuinely scary horror artwork.
The next year, the Media Lab introduced the world to its Shelley AI, named for Mary Shelley and similarly intended to write horror stories. Shelley (the AI, that is) was a collaborative partner; anyone could feed it a prompt via Twitter, and the AI would respond by furthering the story. Like the Nightmare Machine before it, the results weren’t always frightening or convincingly human. But more often than not, Shelley’s output read very much like the work of a human being intentionally writing horror stories.
Finally, in late 2019, OpenAI released the full version of its GPT-2 model, an AI intended to answer questions, translate documents, and generate original text in human-like fashion. Writer Mike Pearl, though, found that what it really excels at is writing horror stories, even when it isn’t explicitly prompted to do so. This suggests there may be something in the quality of AI writing or storytelling uniquely suited to crafting tales that scare us. Maybe it’s the otherworldly or slightly inhuman nuances in the storytelling. Maybe it’s the surreal directions that AIs with an imperfect sense of human logic tend to take. Whatever the reason, AIs seem consistently good at making us feel uneasy.
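For readers curious to see this tendency for themselves, GPT-2 is publicly available and easy to experiment with. The snippet below is a minimal sketch, assuming the Hugging Face transformers library and its hosted “gpt2” checkpoint; the ominous prompt text is purely illustrative, and the settings are just common defaults for open-ended sampling rather than anything Pearl or OpenAI prescribed.

```python
# A minimal sketch of open-ended story continuation with GPT-2,
# assuming the Hugging Face "transformers" package is installed.
from transformers import pipeline, set_seed

# Load the publicly released GPT-2 checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")
set_seed(13)  # fix the random seed so the sampled continuation is repeatable

# A hypothetical horror-flavored opening line for the model to continue.
prompt = "The house at the end of the street had been empty for years, until"

results = generator(
    prompt,
    max_length=120,          # total length in tokens, prompt included
    do_sample=True,          # sample tokens instead of greedy decoding
    temperature=0.9,         # mild randomness; higher values drift further afield
    num_return_sequences=1,  # generate a single continuation
)

print(results[0]["generated_text"])
```

Run a few seeds and prompts and the unsettling, slightly off-kilter quality described above tends to show up on its own; the model was never trained specifically on horror, which is rather the point.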