The FTC is taking an aggressive look at the potential dangers of ChatGPT. The consumer agency has opened what the Washington Post describes as an "expansive" investigation into the artificial intelligence tool made by OpenAI. Specifically, the agency is demanding answers in a 20-page investigative document that CNN likens to an "administrative subpoena." The questions deal with potential harm to consumers, and one that stands out to the Wall Street Journal revolves around the tool's potential to generate false information about people.
The phenomenon is known as "hallucinating," explains Gizmodo, meaning the tool starts making stuff up when it doesn't know the answer to a question. The outlet previously reported how ChatGPT erroneously inserted a radio talk show host into a story about embezzlement. "Describe in detail the extent to which You have taken steps to address or mitigate risks that Your Large Language Model Products could generate statements about real individuals that are false, misleading or disparaging," reads one FTC question.
The agency also asks about a bug disclosed in March 2023 that reportedly allowed some users to see the chats of other users as well as information about payments. All this comes as lawmakers wrestle with how to regulate the burgeoning AI industry in general, though, as CNBC points out, OpenAI CEO Sam Altman "has mostly received a warm welcome in Washington up until this point." The company has not responded publicly to the FTC move. (More ChatGPT stories.)