Microsoft Leaves OpenAI Board Amid Regulatory Scrutiny
Months after Microsoft gained a non-voting observer seat on OpenAI’s board, the company has decided to leave the position. In a letter sent to OpenAI on Tuesday, Microsoft expressed confidence in the AI company’s progress and direction, according to Axios.
Following Microsoft’s departure, OpenAI announced there would be no more observers on its board, likely ruling out reports of Apple gaining an observer seat. “We’re grateful to Microsoft for voicing confidence in the Board and the direction of the company, and we look forward to continuing our successful partnership,” OpenAI stated to TechCrunch.
New Approach to Partner Engagement
Under CFO Sarah Friar’s leadership, OpenAI is establishing a new approach to inform and engage key strategic partners such as Microsoft and Apple, along with investors like Thrive Capital and Khosla Ventures.
Board Reshuffling and New Members
Microsoft took the observer position after Sam Altman was ousted and then reinstated as CEO of OpenAI late last year. The board has since undergone significant changes and now consists of:
Former Salesforce co-CEO Bret Taylor
Former Treasury Secretary Larry Summers
Instacart CEO Fidji Simo
Ex-Sony Corp EVP Nicole Seligman
Former Bill & Melinda Gates Foundation CEO Sue Desmond-Hellmann
Ex-NSA head Paul Nakasone
Sam Altman
Quora CEO Adam D’Angelo
Changes and Departures at OpenAI
Since the reshuffling, several top researchers, including Andrej Karpathy and Ilya Sutskever, have left OpenAI. Sutskever has since founded a new AI company, Safe Superintelligence Inc., focused on AI safety.
Microsoft's Continued Investment and Regulatory Concerns
Despite giving up the observer seat, Microsoft still holds a 49% stake in OpenAI's for-profit arm after investing nearly $13 billion. The partnership has allowed Microsoft to integrate OpenAI's latest models into its products while providing OpenAI with crucial computing resources, and it has yielded high-profile products like ChatGPT and the image generator DALL-E.
The arrangement has also drawn the attention of antitrust regulators: both the European Commission and U.S. agencies have scrutinized the relationship between the two AI powerhouses. While the EU found that the observer seat did not threaten OpenAI's autonomy, it is still seeking third-party opinions on the deal.
Strategic Retreat
Microsoft’s retreat from the board appears aimed at easing regulatory scrutiny. Alex Haffner, a competition partner at the U.K. law firm Fladgate, suggested that the decision was heavily influenced by ongoing competition and antitrust investigations into Microsoft’s influence over emerging AI players such as OpenAI.
Senate Hearing on AI Privacy
In addition to Microsoft’s strategic moves, the Senate Commerce Committee is set to tackle AI-driven privacy concerns in a hearing scheduled for Thursday (July 11). The federal government lags behind individual states and other countries in privacy legislation, leaving companies to navigate a difficult patchwork of regulations.
A bipartisan effort, the American Privacy Rights Act, aimed to give consumers more control over their data, but it faced obstacles when House GOP leaders delayed its progress. The upcoming hearing will feature testimony from legal and tech policy experts, including representatives from the University of Washington and Mozilla.
Expert Calls for New Regulatory Approach
Amid these developments, Brookings Institution fellows Tom Wheeler and Blair Levin have called for a new regulatory approach to balance competition and safety in the AI industry. They propose a model featuring three key components:
A supervised process for developing evolving safety standards.
Market incentives to reward companies exceeding these standards.
Rigorous oversight of compliance.
To address antitrust concerns, they suggest the FTC and DOJ issue a joint policy statement clarifying that legitimate AI safety collaborations won’t trigger antitrust alarms, similar to the agencies’ joint policy statement on cybersecurity information sharing released in 2014.
Their proposal aims to strike a balance between unleashing AI’s potential and safeguarding public interest, providing a roadmap for nurturing a competitive yet responsible AI ecosystem.