OpenAI Appoints AI Safety Expert Zico Kolter to Board of Directors
OpenAI has announced the appointment of Zico Kolter, a prominent professor and director of the machine learning department at Carnegie Mellon University, to its board of directors. Kolter’s expertise in AI safety makes him a valuable addition to OpenAI’s leadership at a time when the company faces heightened scrutiny over its handling of safety concerns.
A Focus on AI Safety
Kolter’s research has primarily focused on AI safety, a critical issue for OpenAI as it continues to develop advanced AI systems. His appointment follows the departure of several key executives and employees from OpenAI’s safety-focused teams, including co-founder Ilya Sutskever. Many of these departures were from the “Superalignment” team, which was tasked with finding ways to govern “superintelligent” AI systems. According to sources, the team faced significant obstacles, including being denied the computing resources it had initially been promised.
Joining the Safety and Security Committee
In addition to his role on the board, Kolter will join OpenAI’s Safety and Security Committee, where he will collaborate with other directors, including Bret Taylor, Adam D’Angelo, Paul Nakasone, Nicole Seligman, CEO Sam Altman, and OpenAI’s technical experts. This committee is charged with making safety and security recommendations for all of OpenAI’s projects. However, the committee has faced criticism for being composed mostly of insiders, leading to questions about its effectiveness in overseeing such a crucial aspect of the company’s operations.
Zico Kolter’s Background and Expertise
Zico Kolter brings a wealth of experience to his new role at OpenAI. He previously served as chief data scientist at C3.ai, completed his PhD in computer science at Stanford University in 2010, and held a postdoctoral fellowship at MIT from 2010 to 2012. His research includes demonstrating how existing AI safeguards can be bypassed through automated optimization techniques, work that underscores his deep technical understanding of AI safety.
Kolter is also actively involved in industry collaborations, serving as the “chief expert” at Bosch and the chief technical advisor at AI startup Gray Swan. His extensive experience and focus on AI safety will provide OpenAI with critical insights as the company navigates the challenges of developing safe and beneficial AI systems.
Strategic Timing
Kolter’s appointment comes at a pivotal moment for OpenAI, as the company confronts mounting criticism of its safety practices, particularly following the recent resignations and reports from former employees. By bringing in an expert like Kolter, OpenAI is making a strategic move to strengthen its commitment to AI safety and address the concerns of its critics.