Former OpenAI Executive Raises Safety Concerns
Parting shot. Former head of OpenAI’s safety team criticises safety practices and OpenAI’s focus on ‘shiny products’
OpenAI has suffered another high-level departure, with a safety executive leaving the AI pioneer just days after the launch of its latest AI model, GPT-4o.
Jan Leike, alongside Ilya Sutskever, had been co-leader of OpenAI’s “superalignment” project, which was launched in 2023 to steer and control AI systems. At the time, OpenAI said it would commit 20 percent of its computing power to the initiative over four years.
But last week Leike announced he was departing OpenAI, and publicly criticised senior management after a disagreement over the company’s core priorities reached “breaking point”.
Safety claims
Last week also saw the departure of OpenAI’s co-founder and chief scientist Ilya Sutskever, months after he had played a role in the shock firing and rehiring of CEO Sam Altman last November.
Jan Leike tweeted last Friday (17 May) on X (formerly Twitter) that “yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.”
“Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us,” he wrote in a series of tweets.
“I joined because I thought OpenAI would be the best place in the world to do this research,” he wrote. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
He claimed that “over the past years, safety culture and processes have taken a backseat to shiny products” at OpenAI.
He also raised concerns that the company was not devoting enough resources to preparing for a possible future “artificial general intelligence” (AGI) that could be smarter than humans.
OpenAI response
OpenAI at the weekend defended its safety practices, and Sam Altman tweeted that he appreciated Leike’s commitment to “safety culture” and was sad to see him leave.
“I’m super appreciative of @janleike’s contributions to openai’s alignment research and safety culture, and very sad to see him leave,” Altman tweeted. “He’s right we have a lot more to do; we are committed to doing it. i’ll have a longer post in the next couple of days.”
On Saturday, OpenAI co-founder Greg Brockman tweeted a statement attributed to both himself and Altman on X, in which they asserted that OpenAI has “raised awareness of the risks and opportunities of AGI so that the world can better prepare for it.”
But CNBC has reported, citing a person familiar with the situation, that OpenAI has disbanded the team focused on the long-term risks of artificial intelligence just one year after the company announced the group.
The source said some of the team members are being reassigned to multiple other teams within the company.