
OpenAI dissolved its team dedicated to preventing rogue AI

The ‘superalignment team’ existed for less than a year at OpenAI. Credit: Deposit Photos

OpenAI has disbanded its “superalignment team” tasked with staving off the potential existential risks of artificial intelligence, less than a year after first announcing its creation. News of the dissolution was first confirmed earlier today by Wired and other outlets, alongside a lengthy thread posted to X by Jan Leike, the company’s former superalignment team co-lead. Prior to today’s explanation, Leike had simply tweeted “I resigned” on May 15, without offering any further elaboration.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X today. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

OpenAI formed its superalignment project in July 2023. In an accompanying blog post, the company claimed superintelligent AI “will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

“Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment,” the company continued, while announcing Leike and OpenAI chief scientist and cofounder Ilya Sutskever as superalignment team co-leads. Sutskever has also since departed OpenAI, allegedly over similar concerns, while the group’s remaining members have reportedly been absorbed into other research teams.

The company’s top executives and developers, including CEO Sam Altman, have repeatedly warned of the supposed threat of “rogue AI” that could circumvent human safeguards if designed improperly. Meanwhile, OpenAI, alongside the likes of Google and Meta, regularly touts its latest AI product advancements, some of which can now produce nearly photorealistic media and convincingly human-sounding audio. Earlier this week, OpenAI announced the release of GPT-4o, a multimodal generative AI system capable of lifelike, albeit occasionally still stilted, responses to human prompts. A day later, Google revealed similar progress of its own. Despite the allegedly grave risks, these companies routinely claim to be precisely the experts capable of tackling the issues, all while lobbying to dictate industry regulations.

[Related: Sam Altman: Age of AI will require an ‘energy breakthrough’]

Although the exact details behind Leike’s departure and the superalignment team’s shutdown remain unclear, recent internal power struggles hint at stark differences of opinion on how to move the industry forward in a safe, equitable way. Some critics maintain that the AI industry is quickly approaching an era of diminishing returns, prompting tech leaders to temper product expectations and shift goalposts. Others, such as Leike, appear convinced that AI may still soon pose a dire threat to humanity, and that companies like OpenAI are not taking the danger seriously enough.

But as many critics point out, generative AI remains far from self-aware, much less capable of “going rogue.” Sentient chatbots aside, however, the existing technology is already affecting areas such as misinformation, content ownership, and human labor rights. And as companies continue to push nascent AI systems into web searches, social media, and news publications, society is left to deal with the consequences.



