OpenAI Co-Founder Sutskever’s AI Startup SSI Raises $1B for Safe AI

An abstract digital brain surrounded by protective shields, symbolizing AI safety and Safe Superintelligence (SSI). The background features secure servers and advanced technology, emphasizing the focus on developing safe AI systems. Cool blue and green tones evoke a sense of security and innovation.

Image Source: ChatGPT-4o

Safe Superintelligence (SSI), a new AI startup co-founded by former OpenAI Chief Scientist Ilya Sutskever, has raised $1 billion in funding to build AI systems that surpass human capabilities while ensuring safety. The company, currently valued at $5 billion according to sources, aims to focus on foundational AI research, acquiring computing power, and hiring top talent.

Investors Backing SSI’s Mission

SSI, which has only 10 employees so far, will split its operations between Palo Alto, California, and Tel Aviv, Israel. Major investors in this early funding round include Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel, and the investment firm NFDG, co-led by SSI CEO Daniel Gross. The funds will support research and development over the next few years before the company brings its products to market.

Gross emphasized the importance of surrounding the company with investors who understand SSI’s mission of achieving safe superintelligence, stating that it’s crucial “to spend a couple of years doing R&D on our product before bringing it to market.”

The significant funding demonstrates that, despite a broader decline in investor enthusiasm for AI startups—many of which can remain unprofitable for years—there is still a strong willingness to back exceptional talent focused on groundbreaking AI research. This commitment comes at a time when several startup founders have shifted to larger tech companies due to the challenges of securing sustained funding.

Focus on AI Safety and Talent Acquisition

AI safety—preventing AI from causing harm—has become a critical issue, especially as discussions about rogue AI potentially threatening humanity continue to grow. While some companies, such as OpenAI and Google, oppose safety regulations in California, others, like Anthropic and xAI, support legislative efforts to address AI safety concerns.

SSI's small team of experts includes Sutskever as Chief Scientist, Daniel Levy as Principal Scientist, and Gross leading fundraising and computing operations. The company’s recruitment efforts focus on finding candidates who align with SSI’s values, placing character and passion above credentials. “One thing that excites us is when you find people that are interested in the work, that are not interested in the scene, in the hype,” Gross explained.

Sutskever’s Transition from OpenAI to SSI

Sutskever, one of AI's most influential figures, co-founded SSI in June after departing from OpenAI. His departure came after the dismantling of OpenAI’s "Superalignment" team, which worked on keeping AI aligned with human values. Sutskever expressed that his move to SSI made sense as he had identified new challenges to tackle, saying, “I identified a mountain that’s a bit different from what I was working on.”

SSI has adopted a traditional for-profit structure, differentiating itself from OpenAI’s unique hybrid structure. The startup plans to collaborate with cloud providers and chip companies to meet its computing needs, though it has yet to decide which firms to partner with.

A New Approach to AI Scaling

Sutskever, an early advocate of scaling AI models with massive computing power, remains committed to the scaling hypothesis, the idea that AI models improve in performance as they are given more computing power. He hinted, however, that SSI will take a different approach from his former employer. “Some people can work really long hours and they’ll just go down the same path faster. It’s not so much our style. But if you do something different, then it becomes possible for you to do something special,” Sutskever remarked.