
OpenAI’s Former Chief Scientist Launches Safe Superintelligence Inc


Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched Safe Superintelligence Inc. (SSI), an AI startup dedicated to ensuring the safety of AI systems. Announced on Wednesday, SSI aims to develop a powerful and safe AI system, prioritizing safety over commercial pressures.

Safety Over Commercial Pressures

In his announcement, Sutskever emphasized that SSI will tackle safety and capabilities concurrently, enabling rapid advancements in AI technology while prioritizing safety. He highlighted the unique business model of SSI, which insulates the company from the typical commercial pressures faced by AI teams at larger companies like OpenAI, Google, and Microsoft.

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” Sutskever stated. “This way, we can scale in peace.”

A Strong Team and Focus

SSI is co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, previously a technical staff member at OpenAI. The trio aims to create a focused environment free from management overhead or product cycles, allowing the team to concentrate solely on building a safe superintelligence.

Sutskever's departure from OpenAI followed his involvement in the board's attempt to remove CEO Sam Altman. After his exit in May, other key figures at OpenAI, including AI researcher Jan Leike and policy researcher Gretchen Krueger, also left, citing concerns that safety processes were being overshadowed by product development.

Future Prospects

While OpenAI continues to forge partnerships with tech giants like Apple and Microsoft, SSI will maintain its exclusive focus on developing safe superintelligence. Sutskever indicated in an interview with Bloomberg that SSI’s first and only product will be a safe superintelligence system, with no plans to diversify until this goal is achieved.

SSI is set to operate out of Palo Alto and Tel Aviv, leveraging its deep connections to recruit top-tier technical talent. The company is assembling a lean team of the world's best engineers and researchers, all dedicated to the mission of creating safe superintelligence.

By keeping safety at the forefront, Safe Superintelligence Inc. aims to tackle one of the most significant technical challenges of our time, ensuring that advancements in AI do not come at the cost of safety and security.