Former OpenAI Board Member Says Sam Altman Created a Culture of ‘Psychological Abuse’

More than six months after Sam Altman was fired and then rehired, one of OpenAI’s former board members is finally spilling the tea on what happened behind closed doors. Helen Toner, one of four people responsible for firing OpenAI’s CEO, says Altman’s incessant lying created a toxic culture that executives described as “psychological abuse.”

In her first long-form interview since Sam Altman’s firing, Toner tells The Ted AI Show that executives came to OpenAI’s board in October 2023 with serious allegations against the company’s CEO. According to Toner, two executives said they couldn’t trust Altman and showed the board screenshots of his manipulation and lying. These executives reportedly said they did not believe Altman could or would change, and their testimony pushed the board to fire the CEO weeks later. The interview, released on Tuesday, comes after weeks of public backlash against OpenAI in which the company’s truthfulness has been called into question by Scarlett Johansson and former employees.

“For any individual case, Sam could always come up with some kind of innocuous-sounding explanation of why it wasn’t a big deal or why it was misinterpreted or whatever,” Toner said in the interview. “After years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn’t believe things that Sam was telling us.”

But the writing had been on the wall about Altman’s purported lying for years, according to Toner. She says the board was not informed in advance when ChatGPT came out in November 2022 and “learned about ChatGPT on Twitter.” Toner also says Altman gave the board inaccurate information about OpenAI’s safety processes. In the weeks before Altman was fired, Toner claims he lied to other board members in an attempt to get her removed after she wrote a research paper that was critical of OpenAI’s safety practices.

Ultimately, Toner says the board told no one except OpenAI’s legal team about the plan to fire Altman, because they knew the CEO would try to undermine them if he caught wind of it. But even after all this, Altman was back as CEO just a few days later, with 95% of the company’s employees signing an open letter calling for his reinstatement.

Toner says the decision was presented to employees within the company as black and white: either bring Altman back or OpenAI would be destroyed. The security and valuation of the company were especially important, according to Toner, because OpenAI employees stood to make a lot of money from their equity in the $US86 billion company via a tender offer a few months later.

“The second thing that is really important to know, that has really gone underreported, is how scared people are to go against Sam,” Toner said. “They experienced him retaliating against people, retaliating against them, for past instances of being critical. They were really afraid of what might happen to them.”

Lastly, Toner noted that this is not the first company where Altman has run into this problem. The former OpenAI board member brought up that Altman was fired from Y Combinator in 2019, which the Washington Post reported in the wake of his firing from OpenAI. Toner also said the management team at Loopt, Altman’s first startup, went to the company’s board twice and asked them to fire Altman for “deceptive and chaotic behavior.”

Toner, Tasha McCauley, Ilya Sutskever, and Adam D’Angelo were the board members responsible for firing Sam Altman last November. Toner and McCauley immediately left the OpenAI board when Altman returned to power later that month. Sutskever just announced his departure this month, after reportedly being absent from OpenAI’s office for about six months.

In response to this flurry of allegations, the podcast included a statement from OpenAI board chair Bret Taylor. “We are disappointed that Ms. Toner continues to revisit these issues,” Taylor said, before pointing to the independent investigation into the matter by the law firm WilmerHale. “The review concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

This interview comes after weeks of turmoil for OpenAI, during which the company’s trustworthiness has increasingly been called into question. OpenAI has also come under fire for strict exit contracts that muzzled former employees and threatened to claw back their equity (the company has since withdrawn these contracts in light of public backlash). Lastly, OpenAI has seen the departure of several high-ranking AI safety researchers, many of whom issued warnings about the company as they left. Six months after the Altman firing debacle, OpenAI’s trust issues do not seem to be going away anytime soon.

