
OpenAI to Give U.S. AI Safety Institute Early Model Access

Illustration: the logos of OpenAI and the U.S. AI Safety Institute connected by a digital data stream, surrounded by icons representing AI safety, generative AI models, and federal oversight.


OpenAI CEO Sam Altman announced that OpenAI will give the U.S. AI Safety Institute early access to its next major generative AI model for safety testing. The agreement is intended to allow risks to be identified before the model's release and to demonstrate OpenAI's commitment to AI safety.

Collaboration with U.S. AI Safety Institute

The announcement, made on X late Thursday, comes alongside a similar agreement with the U.K.’s AI safety body. These moves are seen as efforts to counter criticism that OpenAI prioritizes the development of powerful generative AI technologies over safety.

Past Controversies and Safety Measures

In May, OpenAI disbanded a unit dedicated to developing controls for "superintelligent" AI systems. The team's safety research was reportedly sidelined in favor of launching new products, leading to the resignations of co-leads Jan Leike and Ilya Sutskever. In response to the ensuing criticism, OpenAI pledged to eliminate its non-disparagement clauses, create a safety commission, and dedicate 20% of its compute to safety research. However, the disbanded safety team never received this compute allocation.

Senate Scrutiny and Regulatory Moves

Despite these measures, concerns persist. Five senators, including Brian Schatz of Hawaii, recently questioned OpenAI’s policies in a letter to Altman. OpenAI’s chief strategy officer, Jason Kwon, responded by affirming the company’s dedication to rigorous safety protocols.

Potential Regulatory Influence

OpenAI's agreement with the U.S. AI Safety Institute coincides with its endorsement of the Future of Innovation Act, a proposed Senate bill that would establish the Safety Institute as an executive body responsible for setting AI standards and guidelines. Critics may view the combination as an attempt by OpenAI to exert influence over federal AI policymaking.

Federal Involvement and Lobbying Efforts

Altman’s involvement with the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board further highlights OpenAI’s influence in AI policy. Additionally, OpenAI has significantly increased its federal lobbying expenditures, spending $800,000 in the first half of 2024 compared to $260,000 in all of 2023.

Role of the U.S. AI Safety Institute

The U.S. AI Safety Institute, part of the Commerce Department’s National Institute of Standards and Technology, collaborates with companies including Anthropic, Google, Microsoft, Meta, Apple, Amazon, and Nvidia. The institute works on guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content, as outlined in President Joe Biden’s October AI executive order.