ChatGPT’s Hallucination Problem Under Investigation

The Federal Trade Commission is investigating OpenAI to determine whether its ChatGPT AI chatbot and other products violate consumer protection laws by putting people’s personal reputations and data at risk. CEO Sam Altman, who has so far managed to smile and nod his way past most governmental roadblocks, may face his first real opposition from the openly combative consumer protection agency.

The FTC asked OpenAI to hand over a lengthy list of documents dating back to June 1, 2020, including details on how it assesses risks in its AI systems and how it safeguards against AI making false statements about real people, according to a 20-page letter obtained by the Washington Post. Investigators are asking for a broad range of information on how OpenAI trains its large language models, including the exact types of data they’re trained on, how it obtained that data, and the extent to which that data was collected by scraping the open web. Despite the company’s name, OpenAI has been tight-lipped about the precise origins of the data used to train its models.

Similarly, the FTC letter demands OpenAI provide a description of the complaints it has received so far about its systems making “false, misleading, disparaging or harmful” statements about people. Researchers and journalists have repeatedly shown examples of ChatGPT and other LLMs “hallucinating” fabricated information about real people. In an example first reported by Gizmodo, ChatGPT allegedly inserted a radio host into an embezzlement court case he had nothing to do with. That radio host is now suing OpenAI for libel. The FTC says it wants to know what steps OpenAI took to filter or anonymize personal information included in its training data, and what steps it has taken to decrease the chance its models conjure up fabricated statements about people.

The FTC also wants to know more about OpenAI’s policies and procedures for assessing safety and risk before releasing its products to the public. In addition to documents detailing the steps taken prior to its systems’ releases, the agency called on OpenAI to provide examples of times when the company opted not to launch an LLM over safety concerns. While most of the demands in the FTC letter were broad in nature, the agency homed in specifically on a March security incident in which a bug in OpenAI’s system allowed some users to view other people’s chat logs and payment-related information. OpenAI had to briefly take ChatGPT offline to address that issue.

OpenAI and the FTC did not immediately respond to Gizmodo’s request for comment.

Tech watchdog groups concerned by the rapid rollout of OpenAI’s models, like the Tech Oversight Project, welcomed the FTC’s increased action. In a statement sent to Gizmodo, Tech Oversight Project Deputy Executive Director Kyle Morse called OpenAI’s history of rushing new products out to the public “reckless and irresponsible.”

“Big Tech behemoths are playing hide the ball on AI with Congress and the American people by painting an apocalyptic future, while right now aggressively cutting off competition, abusing people’s privacy, and allowing scams to run rampant,” Morse said.

FTC could present a roadblock to Sam Altman’s charm offensive

The FTC’s investigation could mark the biggest regulatory test for OpenAI in the U.S. to date. So far, CEO Altman has managed to step out in front of many of the justified, and sometimes sensational, fears around AI parroted by lawmakers from both sides of the political spectrum. Altman testified before a Senate Judiciary subcommittee earlier this year and mostly nodded in agreement with lawmakers expressing anxiety over his systems. The CEO said he too was worried and welcomed the idea of new regulations, even going as far as to advocate for new AI testing and licensing requirements for developers. Altman politely declined a request by one senator to lead a new agency overseeing those standards.

“I love my current job,” Altman said.

Altman has reportedly left a lasting impression on dozens of lawmakers in closed-door meetings and was among several participants who met with Vice President Kamala Harris at the White House to discuss responsible AI. Outside of Congress, Altman has signed letters warning of ways unchecked AI could threaten humanity.

All of those outreach efforts have painted Altman and OpenAI as responsible actors just as lawmakers consider drafting a slew of new bills meant to rein in AI. The FTC, on the other hand, has seemed less receptive to Altman’s charm offensive. In recent months, the agency has released multiple blog posts warning companies against overzealously hyping up their AI systems’ abilities and warning consumers about scammers using AI to commit fraud. FTC chair Lina Khan hasn’t minced words either, and even published an op-ed in the New York Times with the blunt title “We Must Regulate AI Now.”

“Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” Khan wrote.

