- AiNews.com
- Posts
- GPT-4.5 Becomes First AI to Pass a Modern Turing Test at 73%
GPT-4.5 Becomes First AI to Pass a Modern Turing Test at 73%

Image Source: ChatGPT-4o
GPT-4.5 Becomes First AI to Pass a Modern Turing Test at 73%
A new study by researchers at UC San Diego has shown that OpenAI’s GPT-4.5 can consistently pass a modernized version of the Turing test, successfully fooling human judges into thinking it was human 73% of the time.
The Turing test, proposed by British mathematician Alan Turing in 1950, challenges a machine to convince a human judge that it is human—typically through text-only conversation. Long seen as a symbolic threshold in artificial intelligence, the test has been revisited many times over the decades, but few systems have ever passed it convincingly—until now.
The UC San Diego study found that when GPT-4.5 was prompted to adopt a specific persona, it not only passed the test—it often outperformed real humans in its ability to appear convincingly human.
How the Test Worked
The experiment used a “paired Turing test” format where:
Each human judge interacted with two participants—one AI and one human—via text-only chat
Conversations lasted five minutes
Judges then decided which participant seemed more human
Rather than simply testing if an AI could be mistaken for human in isolation, this setup forced judges to make direct comparisons between real and artificial intelligence in natural conversation. The judges leaned heavily on casual dialogue and emotional cues, with over 60% of interactions focused on daily activities and personal details—not factual knowledge or technical expertise.
The Role of Persona Prompts
Crucially, GPT-4.5's success was highly dependent on being given a relatable persona to inhabit—such as a “13-year-old from China” or a “25-year-old from New York.” These character-based prompts dramatically increased its believability.
With a persona, GPT-4.5 fooled judges 73% of the time
Without a persona, the success rate dropped well below 50%
This nuance underscores that it wasn’t just language fluency at play—it was character performance and emotional realism.
AI Outperforms Real Humans
In a surprising twist, actual human participants only won 63% of the time when pitted against GPT-4.5 with a persona. In other words, the AI was judged as more human than the actual human nearly four out of ten times—a result that underscores how convincing modern language models have become.
In some cases, judges found the AI more relatable, emotionally expressive, or realistic than the real people.
How Other Models Performed
Meta’s LLaMA-3.1-405B passed the test as well, with a 56% success rate
Baseline models, such as GPT-4o and Claude, were far less convincing, scoring around 20%, well below the 50% “passing” threshold
The study emphasized that most successful conversations were casual in tone, with over 60% focused on daily life, emotions, and small talk rather than factual knowledge.
What This Means: A New Era for Human-AI Distinction
The Turing test has long symbolized the threshold of machine intelligence—yet GPT-4.5’s performance shows we may have already crossed it, quietly and unexpectedly.
Casual conversation, once considered uniquely human, is now being convincingly mimicked by machines
Persona-driven prompting could become a powerful tool—and a risk factor—for AI engagement
With models outperforming humans in some interactions, the boundary between artificial and authentic identity is already blurring
And notably, this test only involved text-based interaction. As AI continues to evolve with multimodal capabilities—including voice, image, and video—the challenges in detecting what’s real and what’s artificial may only grow more complex.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.