Researchers Have Created an AI Too Dangerous to Release. What Will Happen When It Gets Out?

The GPT-2 software can generate convincing fake news articles on its own. Its creators fear what could happen if it falls into the wrong hands. But its arrival could also present a chance to intervene.

Eden Gordon

Feb 21, 2019

Researchers at OpenAI have created an artificial intelligence software so powerful that they have deemed it too dangerous for public release.

The software, called GPT-2, can generate coherent text in multiple genres—including fiction, news, and unfiltered Internet rants—making it a prime candidate for creating fake news or fake profiles should it fall into the wrong hands.

Fears like this led OpenAI, the research company co-founded by Elon Musk, to withhold the software’s full release. “Due to our concerns about malicious applications of the technology, we are not releasing the trained model,” the company announced in a blog post. “As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”

In addition to producing a cohesive piece of Lord of the Rings fan fiction, the software completed a plausible-sounding scientific report about the discovery of unicorns. Given a short, human-written prompt (“In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English”), the software continued: “The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.”
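
The full model remains withheld, but the smaller model OpenAI did release can be tried in a few lines of code. The sketch below is illustrative only: it assumes the Hugging Face transformers library and its publicly hosted small "gpt2" checkpoint rather than OpenAI’s original release code, and the sampling settings are arbitrary.

    # A minimal sketch: sampling a continuation from the small, publicly
    # released GPT-2 model. Assumes the Hugging Face `transformers` library
    # and its "gpt2" checkpoint, not OpenAI's original release code.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
              "living in a remote, previously unexplored valley in the Andes Mountains.")
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sampling with top-k / nucleus filtering keeps the output coherent but varied.
    output = model.generate(**inputs, max_length=200, do_sample=True,
                            top_k=40, top_p=0.95)
    print(tokenizer.decode(output[0], skip_special_tokens=True))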

This journalistic aptitude sparked widespread fears that AI technologies as sophisticated as GPT-2 could influence upcoming elections, churning out vast amounts of partisan content faster and more cheaply than any human operation could. “The idea here is you can use some of these tools in order to skew reality in your favor,” said University of Washington professor Ryan Calo. “And I think that’s what OpenAI worries about.”

Elon Musk quit OpenAI in 2018, but his deep wariness of AI and its potential for abuse lives on at the company, and that caution was likely instrumental in keeping GPT-2 out of the public sphere. “It’s quite uncanny how it behaves,” said Jack Clark, policy director of OpenAI, when asked about the decision to keep the new software under wraps.

In a world already plagued by fake news, catfishing, and other technology-enabled forms of deception, AI seems like a natural next step in the dizzying sequence that has turned the online world from a repository of cat videos (the good old days) into today’s vortex of endlessly reproduced lies and corrupted content. Thinkers like Musk have long called for restraint on AI’s unchecked growth. In 2014, Musk called AI the single largest “existential threat” to humanity. That same year, the late physicist Stephen Hawking ominously predicted that sophisticated AI could “spell the end of the human race.”

But until AI reaches the singularity, the hypothetical point at which it matches and then surpasses human intelligence, it remains subject to the whims of whoever controls it. Fears about whether AI will lend itself to fake news are essentially fears of things humans have already done. All the evil at work on the Internet has had a human source.

When it comes down to it, for now, AI is a weapon; it is only as dangerous as whoever wields it.

When AI is released into the world, a lot could happen. AI could become a victim, a repository for displaced human desire. Some have questioned whether people should be allowed to treat humanoid machines however they wish. Instances of robot beheadings and other violence toward AI hint at a darker trend that could emerge should AI become a free-for-all, a humanoid object that can be treated in any way on the basis of its presumed inhumanity.

Clearly, AI and humanity have a complex and fundamentally intertwined relationship, and as we become more dependent on technology, the line dividing the human from the robotic grows less clear. As a manmade invention, AI will inevitably emulate the traits (as well as the stereotypes) of the people who created it. It could also take on the violent tendencies of its human creators. Some thinkers have sounded the alarm about this, pointing to the dearth of ethics in Silicon Valley and in the tech sphere as a whole. Many people believe that AI (and technology in general) is fundamentally free of bias and emotion, but a multitude of examples have shown this to be untrue, including cases in which law enforcement software displayed racial bias against Black people (bias absorbed from data collected by humans).

AI can be just as prejudiced and closed-minded as a human, if not more so, especially in its early stages, when it is not sophisticated enough to think critically. An AI may not feel anything in and of itself, but—much as we learn how to process the world from our parents—it learns how to process and understand emotions from the people who create it and from the media it absorbs.
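
To make the mechanism concrete, the toy sketch below shows how a simple model trained on skewed, human-labeled data ends up leaning on group membership itself, even though nothing in the code is written to discriminate. Everything in it is invented for illustration, and it assumes the NumPy and scikit-learn libraries.

    # A toy illustration only (all data invented): a classifier trained on
    # skewed, human-labeled outcomes learns to rely on group membership itself.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)        # a demographic attribute, 0 or 1
    signal = rng.normal(size=n)          # a genuinely relevant feature

    # Historical labels are biased: group 1 gets flagged more often,
    # regardless of the relevant signal.
    labels = (signal + 1.5 * group + rng.normal(scale=0.5, size=n) > 1).astype(int)

    model = LogisticRegression().fit(np.column_stack([signal, group]), labels)
    print("weight the model learned for group membership:", model.coef_[0][1])
    # The weight is strongly positive: the model faithfully reproduces the human bias.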


After all, who could forget Microsoft’s Twitter bot Tay, which began spewing racist, anti-Semitic rants mere hours after its launch—rants that it, of course, learned from human Twitter users? Studies have estimated that 9 to 15 percent of all Twitter accounts are bots—but each one of those bots had to be created and programmed by a human being. Even a bot built with no specific purpose still learns from the human presences around it.

A completely objective, totally nonhuman AI is a bit like the temperature absolute zero: it can exist only in theory. Since all AI is created by humans, it will inevitably take on human traits and beliefs. It will perform acts of evil when instructed to, or when exposed to ideologies that inspire it to. It can also learn morality, if its teachers choose to imbue it with the ability to tell right from wrong.


The quandary facing AI’s creators may not be so different from the one parents face when deciding whether to let their children watch R-rated movies. In this case, both the general public and the AIs are the children, and the scientists, coders, and companies peddling new inventions are the parents. The people designing AIs have to determine how far they can trust the public with their work. They also have to decide which aspects of humanity they want to expose their inventions to.

OpenAI may have kept its kid safe inside the house a little longer by withholding GPT-2, but that kid is growing—and when it goes out into the world, it could change everything. For better or worse, super-intelligent AI will at some point wind up in the public’s hands. Now, during its tender, formative stages, there is still a chance to shape who it will become by the time it arrives.


Eden Arielle Gordon is a writer and musician from New York City. Talk to her about AI on Twitter @edenarielmusic.
