OpenAI: There Is No ChatGPT Without Copyrighted Content

AI company, citing fair use doctrine, accuses 'NYT' of deception in lawsuit response
By Arden Dier, Newser Staff
Posted Jan 9, 2024 8:56 AM CST
A ChatGPT logo is seen on a smartphone in West Chester, Pa., Wednesday, Dec. 6, 2023.   (AP Photo/Matt Rourke)

OpenAI says the New York Times is "not telling the full story" about the AI company's use of Times content to fuel its chatbot ChatGPT. The Times is seeking billions of dollars in damages from OpenAI and its leading investor Microsoft over what it calls the unlawful use of its copyrighted content to train the large language model (LLM) that powers ChatGPT. In a lawsuit filed late last month, the newspaper said the chatbot produced "near-verbatim excerpts" from its copyrighted articles. But in its response Monday, OpenAI said the lawsuit is "without merit" and that the Times appears to have "intentionally manipulated prompts" to force the chatbot to regurgitate the text, a practice that violates its terms of use, per TechCrunch.

OpenAI did acknowledge a "rare bug" that can cause the "regurgitation" described, per Fortune. Systems may memorize text "when particular content appears more than once in training data, like if pieces of it appear on lots of different public websites," but it's something the company is "working to drive to zero," it said in a Monday blog post. It also said "intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use." Though the Times takes issue with the use of its content to train the LLM behind ChatGPT, OpenAI and other artificial intelligence companies argue they can reproduce some copyrighted work, without seeking the owner's permission, under the "fair use" doctrine.

Indeed, in a submission to the UK House of Lords communications and digital select committee, OpenAI claims it would be "impossible" to train LLMs without access to copyrighted material "because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents," per the Guardian. The company, which has agreed to work with governments to safety test its models, adds that "limiting training data to public domain books and drawings created more than a century ago ... would not provide AI systems that meet the needs of today's citizens." It has vowed to develop tools to allow copyright holders to exclude their works from LLMs, per the Telegraph. (More OpenAI stories.)
