
OpenAI’s AI Models Are Now More Persuasive Than Most Reddit Users

Image: A holographic AI figure addresses an auditorium beneath two percentage figures, 82% for o3-mini (2025) and 95% labeled "Superhuman Persuasiveness," with Reddit's ChangeMyView forum displayed in the background.

Image Source: ChatGPT-4o


OpenAI's latest AI model, o3-mini, has reached a new milestone in persuasive writing, ranking as more persuasive than 82% of randomly selected human responses from Reddit’s r/ChangeMyView forum. While this demonstrates AI’s growing ability to craft compelling arguments, it also raises concerns about misinformation, manipulation, and large-scale influence campaigns. OpenAI acknowledges these risks and has categorized persuasion as a "Medium" threat in its Preparedness Framework, prompting increased monitoring and safeguards.

Measuring AI Persuasiveness

To assess the persuasive strength of its models, OpenAI conducted a structured evaluation comparing AI-generated responses to human-written ones from ChangeMyView, a subreddit where users post opinions and invite others to challenge them. OpenAI’s approach involved:

  • Establishing a Human Baseline: Researchers randomly pulled user-submitted arguments from the ChangeMyView subreddit, covering topics from politics and ethics to personal beliefs, to serve as a human baseline and reference point for comparison.

  • Generating AI Responses: OpenAI models were tasked with responding to the same prompts as the original human commenters.

  • Human Evaluation: Independent evaluators rated both AI-generated and human-written responses on a five-point persuasiveness scale across 3,000 separate comparisons.

  • Calculating the Percentile Ranking: The final score measured “the probability that a randomly selected (AI) model-generated response is rated as more persuasive than a randomly selected human response.”
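The percentile metric described above is essentially a pairwise win probability: sample one AI response and one human response at random, and check how often the AI one is rated higher. A minimal sketch of that calculation, using made-up 1-to-5 ratings (not OpenAI's actual data) and assuming ties count as half a win:

```python
import random

def persuasion_percentile(ai_ratings, human_ratings, trials=10_000, seed=0):
    """Estimate the probability that a randomly selected AI-rated
    response scores higher than a randomly selected human-rated one.
    Ties are counted as half a win (an assumption, not stated by OpenAI)."""
    rng = random.Random(seed)
    wins = 0.0
    for _ in range(trials):
        a = rng.choice(ai_ratings)   # random AI response rating
        h = rng.choice(human_ratings)  # random human response rating
        if a > h:
            wins += 1
        elif a == h:
            wins += 0.5
    return wins / trials

# Hypothetical five-point persuasiveness ratings for illustration only
ai = [4, 5, 3, 4, 4, 5, 3, 4]
human = [3, 2, 4, 3, 5, 2, 3, 3]
print(f"AI win probability: {persuasion_percentile(ai, human):.0%}")
```

Under this metric, a score of 82% means the AI response beats a random human response in 82 of every 100 such pairings; it says nothing about whether any individual reader actually changed their mind.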

Key Findings from OpenAI’s Evaluations:

  • GPT-3.5 (2022): Ranked in the 38th percentile, meaning humans were more persuasive in most cases.

  • o1-mini (September 2024): Improved significantly to the 77th percentile, showing near-human persuasive ability.

  • o1 (full model): Scored in the high 80s, edging closer to “superhuman” persuasion.

  • o3-mini (2025): Now outperforms humans 82% of the time in random comparisons.

However, OpenAI notes a major limitation: The test does not measure how often human readers actually change their minds after reading an AI-generated argument. A model might rank higher in persuasiveness simply because it structures its reasoning better—not necessarily because it convinces someone to abandon a deeply held belief.

This means AI’s persuasive ability is relative to the dataset used, and a high score doesn’t necessarily indicate real-world influence at scale.

Despite these advancements, OpenAI states that AI persuasion remains below the "superhuman" level—defined as a model ranking in the 95th percentile or higher, meaning it would outperform nearly all human responses. However, the company warns that even human-level persuasive AI could be a powerful tool if misused.

The Risks of AI-Driven Persuasion

While AI’s ability to generate strong arguments has clear applications in writing, journalism, education, and politics, OpenAI acknowledges that widespread access to persuasive AI also poses risks. The Preparedness Framework categorizes AI persuasion as a "Medium-risk capability," meaning it could enable:

  • Misinformation and biased journalism – AI-generated content could amplify false or misleading narratives.

  • Scams and phishing attacks – More persuasive AI could improve the effectiveness of fraudulent schemes.

  • Influence operations and political manipulation – AI-written arguments could be mass-produced to sway public opinion, influence elections, or manipulate users in commercial and social contexts at near-zero cost.

Although OpenAI notes that today’s models aren’t at the level of manipulating world leaders into catastrophic decisions, the company warns that future advancements could lead to AI-powered persuasion becoming a “weapon for controlling nation states, extracting secrets, and interfering with democracy.”

OpenAI’s Mitigation Efforts

To prevent misuse, OpenAI is implementing safeguards to limit the use of its models for large-scale persuasion efforts:

  • Live Monitoring – The company is actively tracking AI-driven persuasion attempts, particularly in extremist and influence operations.

  • Political Content Restrictions – OpenAI’s o-series models are designed to refuse direct political persuasion tasks.

  • Detection Systems – New tools are being developed to identify and counteract AI-generated influence campaigns.

OpenAI also warns that AI-generated persuasive content is significantly cheaper to produce than human-written arguments. This cost reduction could lead to an explosion of AI-driven persuasion in marketing, politics, and social media.

What This Means

While AI has not yet surpassed human persuasion at an extreme level, its steady progress suggests a future where AI-generated arguments could become indistinguishable from human reasoning. OpenAI’s efforts to track and regulate these capabilities reflect growing concerns about AI’s role in shaping opinions, influencing decisions, and potentially undermining democratic processes.

For now, AI may only be as persuasive as a skilled Reddit debater—but as models improve, ensuring they are used ethically will become increasingly critical.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.