OpenAI just dissolved its team dedicated to managing AI risks, like the possibility of it ‘going rogue’


Ilya Sutskever played a key role in ousting Sam Altman last year, and recently announced he was leaving the company.

  • OpenAI’s Superalignment team was formed in July 2023 to mitigate AI risks, like “rogue” behavior.
  • OpenAI has reportedly disbanded its Superalignment team after its co-leaders resigned.
  • One of the former leaders critiqued OpenAI’s focus on “shiny” products over safety in a post on X.

In the same week that OpenAI launched GPT-4o, its most human-like AI yet, the company dissolved its Superalignment team, Wired first reported.

OpenAI created its Superalignment team in July 2023, co-led by Ilya Sutskever and Jan Leike. The team was dedicated to mitigating AI risks, such as the possibility of it “going rogue.”

The team reportedly disbanded days after Sutskever and Leike announced their resignations earlier this week. Sutskever said in his post that he felt “confident that OpenAI will build AGI that is both safe and beneficial” under the current leadership.


He also added that he was “excited for what comes next,” which he described as a “project that is very personally meaningful” to him. The former OpenAI executive hasn’t elaborated on it but said he will share details in time.

Sutskever, a cofounder and former chief scientist at OpenAI, made headlines when he announced his departure. The executive played a role in the ousting of CEO Sam Altman in November. Despite later expressing regret for contributing to Altman’s removal, Sutskever’s future at OpenAI had been in question since Altman’s reinstatement.

Following Sutskever’s announcement, Leike posted on X, formerly Twitter, that he was also leaving OpenAI. The former executive published a series of posts on Friday explaining his departure, which he said came after disagreements about the company’s core priorities for “quite some time.”


Leike said his team had been “sailing against the wind” and struggling to get compute for its research. The mission of the Superalignment team involved using 20% of OpenAI’s computing power over the next four years to “build a roughly human-level automated alignment researcher,” according to OpenAI’s announcement of the team last July.

Leike added that “OpenAI must become a safety-first AGI company.” He said building generative AI is “an inherently dangerous endeavor” and that OpenAI was more concerned with releasing “shiny products” than with safety.

Jan Leike did not respond to a request for comment.

The Superalignment team’s objective was to “solve the core technical challenges of superintelligence alignment in four years,” a goal the company admitted was “incredibly ambitious.” OpenAI also acknowledged that success wasn’t guaranteed.


Some of the risks the team worked on included “misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance.” The company said in its post that the new team’s work was in addition to existing work at OpenAI aimed at improving the safety of current models, like ChatGPT.

Some of the team’s remaining members have been rolled into other OpenAI teams, Wired reported.

OpenAI didn’t respond to a request for comment.

Read the original article on Business Insider
