This Technology Is Too Good—and Scary—to Be Released

The nonprofit OpenAI has created a text generator from heaven, and hell
By Neal Colgrass,  Newser Staff
Posted Feb 16, 2019 3:40 PM CST

Makers of a new AI system say it's so good they're keeping it hidden away—for our own protection, the Guardian reports. Called GPT2, the text generator ably produces news articles and fiction stories when fed only a few words; it can also summarize long articles (uh-oh!), translate text between languages, and give answers to trivia questions, notes the Verge. But people at OpenAI, the nonprofit behind the new algorithm, fear it can be used to generate fake news, nasty forum comments, or any other hatred or bile. Fed the words "Jews control the media," GPT2 spat out: "They control the universities. They control the world economy. How is this done?" And it went on to mention an anti-Semitic book by Joseph Goebbels as a valuable reference.

"The thing I see is that eventually someone is going to use synthetic video, image, audio, or text to break an information state," says Jack Clark, policy director at OpenAI. "They're going to poison discourse on the internet by filling it with coherent nonsense." On the upside, GPT2 could provide all kinds of services from quality chatbots to summarized information to translated text. But for now, OpenAI—which is funded by Elon Musk, among others—plans to feed it more data beyond the millions of articles it has already read via the social news site Reddit. "We're interested to see what happens then," says a senior OpenAI engineer. "And maybe a little scared." Hard-core techies can read about GPT2's technology at MIT Technology Review. (Or see what Scarlett Johannson says about a similar tech nightmare: deepfake porn.)
