
OpenAI Loses Key Safety Researcher Lilian Weng Amid Leadership Changes

Image Source: ChatGPT-4o


Lilian Weng, a key safety researcher and OpenAI’s VP of Research and Safety, announced she will be leaving the company on November 15. Weng, who has been with OpenAI since 2018, recently led the startup’s safety systems team, overseeing critical research efforts to build AI safeguards.

Weng’s Legacy at OpenAI

In a post on X, Weng expressed mixed emotions about her departure, writing, “After 7 years at OpenAI, I feel ready to reset and explore something new.” She added, “Looking at what we have achieved, I’m so proud of everyone on the Safety Systems team and I have extremely high confidence that the team will continue thriving.” Weng did not disclose her next steps.

Over her years with OpenAI, Weng played an integral role in several key initiatives:

She first joined the company in 2018, working on its robotics team and contributing to a project that built a robotic hand capable of solving a Rubik's Cube, an effort that took two years to complete, according to her post.

As OpenAI’s focus shifted to large language models, Weng transitioned in 2021 to build the applied AI research team, and in 2023, she was appointed to lead a dedicated safety systems team.

Today, OpenAI’s safety team includes over 80 scientists, researchers, and policy experts—a significant increase under Weng’s leadership.

Rising Concerns Over AI Safety at OpenAI

Weng’s departure is the latest in a series of exits from OpenAI’s safety and research leadership, highlighting potential internal concerns over the balance between commercial goals and AI safety. Several former employees, including Weng’s former colleagues Ilya Sutskever and Jan Leike—who led OpenAI’s now-dissolved Superalignment team—have left to work on AI safety at other organizations.

Notably, former policy researcher Miles Brundage left in October following the dissolution of OpenAI’s AGI Readiness team, which he had advised. On the same day, The New York Times profiled former OpenAI researcher Suchir Balaji, who said he departed due to concerns that OpenAI’s technology posed more risks than benefits to society.

OpenAI’s Response and Future Plans

OpenAI has acknowledged Weng’s departure and expressed gratitude for her work. “We deeply appreciate Lilian’s contributions to breakthrough safety research and building rigorous technical safeguards,” said an OpenAI spokesperson in an emailed statement to TechCrunch. “We are confident the Safety Systems team will continue playing a key role in ensuring our systems are safe and reliable, serving hundreds of millions of people globally.”

The company’s safety team is working on a transition plan to replace Weng, ensuring continued focus on AI safety in her absence.

An Ongoing Exodus of Talent

Weng’s departure is part of a broader trend of high-profile exits at OpenAI. Other executives who have recently left include:

  • CTO Mira Murati

  • Chief Research Officer Bob McGrew

  • Research VP Barret Zoph

  • Prominent researcher Andrej Karpathy

  • Co-founder John Schulman, who departed in August

Some of these researchers, including Leike and Schulman, have joined Anthropic, an OpenAI competitor focused on responsible AI development. Others have gone on to pursue independent ventures, adding to concerns that OpenAI's leadership changes could affect its future direction.

What This Means

As more of OpenAI’s top safety and research experts leave the company, questions arise about its ability to maintain rigorous safety standards while advancing cutting-edge technology. For OpenAI, retaining safety-focused talent will be essential to sustaining public trust in its technology as it competes in the fast-evolving AI market. With the departure of Weng and other key figures, OpenAI may face challenges in balancing commercial ambitions with its commitment to AI safety—a priority increasingly scrutinized by regulators, consumers, and the tech community alike.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.