OpenAI Disbands AGI Safety Team as Advisor for ‘AGI Readiness’ Exits
OpenAI has disbanded its “AGI Readiness” team, a unit tasked with assessing the company’s preparedness to manage artificial general intelligence (AGI) and its potential societal impacts. The decision follows the resignation of Miles Brundage, the senior advisor for AGI Readiness, who said he believes his research will have greater impact outside of OpenAI.
Background on AGI Readiness and Brundage's Departure
Artificial general intelligence (AGI) refers to highly advanced AI that could match or exceed human intelligence across a wide range of tasks. As this technology continues to evolve, experts like Brundage have focused on the readiness of both OpenAI and the world to manage its far-reaching implications. Announcing his departure in a Substack post, Brundage said he intends to pursue policy research independently, arguing that his work could be more effective outside the corporate sphere.
Brundage also emphasized that, in his view, neither OpenAI nor any other frontier lab is fully prepared to manage AGI. “AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so,” he noted, indicating plans to start a nonprofit focused on AI policy research.
Former Team Members and OpenAI’s Statement
Following Brundage’s exit, OpenAI announced that former members of the AGI Readiness team would be reassigned to other projects. OpenAI expressed support for Brundage’s decision, stating that his move to independent research offers him “the opportunity to have an impact on a wider scale, and we are excited to learn from his work and follow its impact. We’re confident that in his new role, Miles will continue to raise the bar for the quality of policymaking in industry and government.”
Previous Disbanding of OpenAI’s Superalignment Team
This disbandment echoes a similar move earlier this year, when OpenAI dissolved its Superalignment team, a group focused on “steering and controlling” advanced AI to ensure it doesn’t “go rogue.” The Superalignment team, co-led by OpenAI co-founder Ilya Sutskever and Jan Leike, was shut down after only a year when both leaders left OpenAI, with Leike voicing concerns that safety processes had taken a backseat to product development.
Additional Safety and Structural Changes
OpenAI’s decision to disband these teams comes amid a series of significant internal changes and external scrutiny:
Leadership Changes: In recent months, several high-profile executives, including CTO Mira Murati and research chief Bob McGrew, exited the company.
Safety Committee Overhaul: OpenAI’s Safety and Security Committee, established to oversee security protocols, recently completed a 90-day review and has now transitioned to an independent oversight board.
Policy Shifts and Funding: OpenAI recently raised $6.6 billion in new funding, pushing its valuation to $157 billion, but it expects financial losses as it pursues the development of advanced AI.
Growing External Concerns and Government Involvement
OpenAI’s restructuring efforts and leadership changes align with growing industry-wide concerns about AGI safety. In June, a group of current and former employees published a letter calling for greater oversight of AI development, warning that corporate interests might outweigh safety considerations. They noted that AI companies possess “substantial non-public information” about their technology’s capabilities, the safety measures they have implemented, and the levels of risk the technology poses for different kinds of harm. The letter sparked inquiries from government agencies, including the Federal Trade Commission (FTC) and the Department of Justice (DOJ), into AI firms’ alliances with cloud providers and questions of market fairness.
Looking Ahead: The Path to Safe AGI Development
OpenAI’s latest team disbandments and executive departures reveal a complex balancing act between rapid innovation and responsible development. As OpenAI continues to expand its AI capabilities, the spotlight on AGI safety and ethical governance will likely intensify, with former advisors like Brundage advocating from outside the company. The future of AGI development may depend not only on technological breakthroughs but also on frameworks that ensure safety and transparency.