Eric Schmidt Warns Against ‘Manhattan Project’ for AGI Development

Image: A conceptual illustration of a high-stakes AI arms race, balancing a glowing AI brain (rapid AGI development) against a globe (global stability), with background elements suggesting cybersecurity, deterrence strategies, and global cooperation in AI policy. (Image Source: ChatGPT-4o)

In a newly published policy paper, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks argue that the United States should avoid a Manhattan Project-style initiative to develop artificial general intelligence (AGI).

Titled “Superintelligence Strategy,” the paper warns that an aggressive push for exclusive U.S. control over AGI could provoke hostile responses from rival nations—particularly China—potentially escalating to cyberattacks and destabilizing global security.

The Risks of an AGI Arms Race

Schmidt and his co-authors caution against viewing AGI development as a zero-sum competition. The paper suggests that if the U.S. were to seek overwhelming dominance in superintelligent AI, other nations might perceive this as an existential threat and take preemptive countermeasures.

“A Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it,” the co-authors write. “What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure.”

The argument runs counter to recent calls by U.S. policymakers and industry leaders for a government-backed AGI program modeled on the 1940s effort to build the atomic bomb. Just months ago, a U.S. congressional commission proposed such an approach, and Secretary of Energy Chris Wright recently referred to the country’s AI development as “the start of a new Manhattan Project” while speaking at a supercomputing facility alongside OpenAI co-founder Greg Brockman.

A Defensive Approach: AI Deterrence Through MAIM

Rather than racing to AGI dominance, Schmidt, Wang, and Hendrycks propose a defensive strategy that prioritizes deterrence. They introduce the concept of Mutual Assured AI Malfunction (MAIM), arguing that instead of trying to “win” the AGI race, the U.S. should focus on:

  • Cyber-defense and countermeasures to prevent adversaries from weaponizing AGI.

  • Restricting access to key AI inputs, such as high-end chips and open-source model weights.

  • Developing cyberattack capabilities to disable foreign AGI projects deemed threatening.

This measured approach contrasts with what the authors describe as a stark divide in AI policy thinking:

  • On one side are the “doomers,” who believe AI poses catastrophic risks and advocate slowing down development.

  • On the other side are the “ostriches,” who push for rapid AI advancement and assume risks can be managed later.

The authors propose a third path—one that acknowledges the risks of AGI while focusing on safeguards and strategic deterrence rather than outright acceleration.

Mutual Assured AI Malfunction (MAIM): A Third Path for AGI Strategy

Rather than pursuing a Manhattan Project-style race for AGI dominance, the MAIM strategy Schmidt, Wang, and Hendrycks propose draws on Cold War-era deterrence models, particularly Mutual Assured Destruction (MAD), which prevented nuclear powers from launching first strikes by ensuring catastrophic retaliation.

In the case of AGI, the authors argue that nations will not sit idly by while a rival races toward an AI monopoly. If one country gains a significant AGI advantage, its adversaries may fear either global destabilization (if the AI goes rogue) or strategic dominance (if the AI remains under control). Rather than risk either outcome, other nations may intervene preemptively—not necessarily through military force, but through cyberattacks, sabotage, and intelligence operations designed to disable AGI projects before they become a threat.

How MAIM Works: Cyber-Based Deterrence Instead of an Arms Race

The authors suggest that MAIM will emerge naturally as the default geopolitical stance if nations recognize that AGI dominance is an unacceptable risk. To deter adversaries from developing destabilizing AI systems, they outline several strategic measures:

  • Cyber Espionage & Sabotage – Intelligence agencies would monitor rival AI projects, using insiders or hackers to degrade AI models, corrupt training data, or introduce subtle weaknesses that make AGI less effective.

  • Covert & Overt Cyberattacks – If espionage fails, nations could escalate to direct cyberattacks on data centers, disrupting cooling systems or power supplies to slow AGI development.

  • Restricting AI Hardware Access – Governments could limit access to high-performance AI chips, preventing rivals from acquiring the necessary computing power for AGI breakthroughs.

  • AI Transparency Agreements – Just as nuclear treaties required verification, the authors suggest global AI transparency measures to ensure that AI projects comply with safety standards without revealing proprietary technology.

While extreme cases could include physical attacks on data centers, the authors believe that cyber-based deterrence—such as Stuxnet-style attacks that subtly sabotage AI training—would be sufficient to maintain global balance.

Stabilizing AI Development Without a Race to the Finish

By establishing a global framework for deterrence, MAIM aims to delay the emergence of destabilizing AGI until better safety measures, oversight, and international agreements can be put in place. The paper suggests that nations can cooperate on AI safeguards, just as they once agreed on nuclear nonproliferation, ensuring that AGI development proceeds cautiously rather than chaotically.

The MAIM strategy reframes AGI development as a shared global responsibility rather than a competitive arms race. Instead of an all-or-nothing push for AI dominance, the paper suggests a balance-of-power approach, in which nations deter reckless AGI projects while investing in controlled, transparent AI progress.

A Shift in Schmidt’s AI Stance

Schmidt’s endorsement of a cautious AGI strategy marks a shift from his earlier positions. A long-time advocate of U.S. AI supremacy, he has warned about China’s growing AI capabilities and, in a recent op-ed, described the launch of China's DeepSeek as a major turning point in the AI race.

The authors argue, however, that America’s AGI decisions do not exist in isolation. A reckless pursuit of superintelligence could provoke dangerous geopolitical consequences, making restraint and strategic deterrence the smarter path forward.

What This Means

The Manhattan Project led to the creation of nuclear weapons, but it also triggered a global arms race that reshaped geopolitics for decades. Schmidt and his co-authors warn that applying the same approach to AGI could lead to similar destabilization, provoking cyberattacks, preemptive countermeasures, or an accelerated AI arms race with unpredictable consequences.

America’s decisions around AGI don’t exist in a vacuum. If the U.S. pushes ahead unilaterally, other nations—especially China—may see this as an existential threat and respond aggressively, heightening global tensions and making it harder to implement meaningful safety measures. Instead of ensuring security, a race for AGI dominance could undermine stability and increase risks for everyone.

The paper advocates for international collaboration rather than confrontation. Instead of seeking absolute control, the U.S. and its allies should work to establish global safeguards, deterrence strategies, and ethical AI policies. By securing AI supply chains, improving oversight, and using measured deterrence strategies like MAIM, nations can mitigate AI risks without escalating geopolitical conflict.

This approach aims to stabilize AGI development, ensuring that when superintelligent AI eventually emerges, it is built under conditions that prioritize security, cooperation, and long-term global safety.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, research, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s.