OpenAI researcher left the company, says safety took a backseat

When it comes to developing AI, it's not just about making products powerful; it's about making them safe. When a company deprioritizes safety, problems can follow. An OpenAI researcher named Jan Leike has left the company, saying that safety has taken a back seat.

Leike isn't the only person to leave the company. Recently, Ilya Sutskever, OpenAI's former Chief Scientist, resigned from his post. He left to pursue a project that was personally important to him, though he hasn't said what that project is. In any case, the parting appeared to be on good terms, as other OpenAI members, including Sam Altman, expressed their sadness about his departure.

An OpenAI researcher has left the company

Jan Leike was a key researcher at the company, and he announced his resignation just a few days ago. He posted an extended thread on X explaining why he left.

In one of the posts, he wrote, "Over the past years, safety culture and processes have taken a backseat to shiny products." He also said, "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time until we finally reached a breaking point."

OpenAI once had a team dedicated to addressing the long-term risks of AI, called the Superalignment team. It was formed last July, and Leike led it. When dealing with AI, every safety team is a welcome addition.

Recently, however, OpenAI disbanded that team. That's never a good sign. Even before being disbanded, the team had reportedly been heavily deprioritized, receiving diminishing resources and compute for its work.

This latest resignation points to a larger issue within OpenAI. There appears to be growing tension between employees, the CEO, and the overall direction of the company. Whether this will affect OpenAI's goal of achieving AGI remains to be seen.