US Lawmakers Demand AI Safety Commitments from OpenAI in New Letter
Senate Democrats, joined by an independent lawmaker, have sent a letter to OpenAI CEO Sam Altman raising concerns about the company's safety standards and its treatment of whistleblowers. The letter, first obtained by The Washington Post, poses a pointed question: “What percentage of computing resources is OpenAI dedicating to AI safety research?”
Government Testing Request
A key question in the letter is whether OpenAI will allow U.S. government agencies to conduct pre-deployment testing, review, analysis, and assessment of its next foundation model. The question forms part of a broader inquiry into the company's practices.
Additional Safety Concerns
The lawmakers’ letter outlines 11 additional requests, including a commitment from OpenAI to allocate 20% of its computing power to safety research and the establishment of protocols to prevent the theft of its AI products by malicious actors or foreign adversaries.
Whistleblower Allegations
The letter follows whistleblower reports alleging that safety standards for GPT-4 Omni were relaxed to avoid delaying the product's market release. Whistleblowers have claimed that raising safety concerns with management led to retaliation and illegal non-disclosure agreements, allegations that prompted a complaint to the U.S. Securities and Exchange Commission in June 2024.
Board Resignations
Amid increased regulatory scrutiny, Microsoft and Apple gave up their observer roles on OpenAI’s board in July. Microsoft's withdrawal came despite its substantial $13-billion investment in OpenAI in 2023.
Former Employee’s Warning
Former OpenAI employee William Saunders recently revealed that he left the company over concerns that its research could pose an existential threat, likening OpenAI’s trajectory to the Titanic disaster of 1912. He emphasized that his concerns were not about the current iteration of ChatGPT but about future versions and the potential development of superhuman intelligence. Saunders argued that employees in the AI sector have a right to warn the public about the dangers posed by the technology's rapid advancement.