OpenAI Invests in Deepfake Cybersecurity Startup Adaptive Security

Image Source: ChatGPT-4o
Adaptive Security, a cybersecurity company focused on defending against AI-powered attacks, has raised $43 million in a funding round co-led by OpenAI and Andreessen Horowitz. The investment, announced Wednesday, marks OpenAI’s first financial backing of a cybersecurity firm.
Adaptive Security specializes in simulating sophisticated phishing and deepfake attacks, helping organizations prepare for threats that leverage artificial intelligence to mimic real individuals using voice, likeness, and personal details scraped from public sources.
“The technology is getting better and better every day,” said CEO Brian Long during an appearance on CNBC’s Squawk Box. “It’s not just voice and likeness—it’s trained on all of the open-source information out there about you.”
According to Long, the use of AI in social engineering attacks has grown significantly in the past year. He called the rise of AI-powered threats “one of the most urgent cybersecurity threats of our time.”
Who Joined the Round
In addition to OpenAI and Andreessen Horowitz, the funding round included participation from:
- Abstract Ventures
- Eniac Ventures
- CrossBeam Ventures
- K5 Ventures
- Executives from Google, Workday, Shopify, Plaid, and others
What Adaptive Security Does
The company uses AI to simulate real-world attacks that go beyond simple impersonation. These simulations help businesses train their teams to recognize and defend against evolving threats.
“Adaptive is building exactly what the industry needs — an AI-native defense platform that evolves as fast as the attackers,” said Ian Hathaway, partner at the OpenAI Startup Fund.
Customers and Future Plans
Adaptive Security's clients include the Dallas Mavericks, First State Bank, and BMC. The company said the new funding will accelerate the development of its engineering solutions to help companies and their employees counter increasingly complex cyberattacks.
What This Means
OpenAI's investment in Adaptive Security signals a strategic expansion beyond AI development into AI defense. As the creator of powerful generative models like ChatGPT, OpenAI is now acknowledging the risks these tools can pose in the wrong hands—particularly in deepfake and social engineering attacks.
For OpenAI, this move could mark the beginning of a broader effort to support a safer AI ecosystem, where innovation is matched by investment in protective technologies. It also reinforces the company's commitment to responsible AI deployment, especially as regulators and the public scrutinize how AI can be used maliciously.
For OpenAI users, especially those integrating AI tools into business operations, this investment may lead to improved security features, awareness training, or even partnerships with platforms like Adaptive Security. It shows that OpenAI is not just advancing AI capabilities, but also actively working to mitigate the new wave of risks these capabilities introduce.
In a broader sense, this partnership underscores the growing need for AI-native cybersecurity solutions—tools built from the ground up to keep pace with the evolving threat landscape powered by AI.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, idea-generation, and research support from ChatGPT, an AI assistant. The final perspective and editorial choices are solely Alicia Shapiro’s.