
OpenAI & MIT Study ChatGPT’s Effect on Emotional Well-Being

Image: Two people interacting with ChatGPT—one typing on a laptop, the other using voice mode on a smartphone. (Image Source: ChatGPT-4o)


As AI chatbots like ChatGPT become part of daily life, researchers are asking a key question: how do these tools influence users’ emotional and social well-being? To explore this question, OpenAI partnered with the MIT Media Lab, conducting two key studies to analyze how people engage emotionally with ChatGPT—and how those interactions may impact them.

Two-Pronged Research Approach

OpenAI and MIT took a two-pronged approach: an observational study of real-world, on-platform usage patterns, paired with a controlled interventional study designed to measure how specific types of engagement affect users’ emotional well-being.

  • Observational Study: OpenAI conducted a large-scale, automated analysis of nearly 40 million real-world ChatGPT interactions; to protect user privacy, no individual conversations were reviewed by humans. This analysis was paired with targeted user surveys, helping researchers correlate users’ self-reported sentiments with conversation patterns and identify emotional engagement trends.

  • Controlled Study: At the same time, MIT Media Lab carried out a randomized controlled trial (RCT) with nearly 1,000 participants over four weeks. The study explored how specific ChatGPT features—such as voice modes and conversation types—affected participants’ self-reported feelings of loneliness, emotional dependence, and social engagement. Participants used ChatGPT daily under assigned conditions, including text-only or voice interactions, allowing researchers to isolate variables and examine how different types of engagement affect users’ well-being and potential problematic use.

Key Findings

  • Emotional Engagement is Rare: Most ChatGPT interactions lack emotional cues like empathy or support. Even among heavy users, emotionally expressive conversations were limited to a small subset. This subset of heavy users was also significantly more likely to agree with statements like, “I consider ChatGPT to be a friend.” Because this type of affective use is concentrated within a small portion of the user base, its impact can be difficult to detect when looking at overall platform trends.

  • Voice Mode’s Mixed Effects: In the controlled study, users interacting with ChatGPT via text displayed more affective cues in their conversations compared to those using voice, when averaged across messages. However, results showed mixed impacts on emotional well-being overall—voice interactions were linked to better outcomes when used briefly, but prolonged daily use was associated with less positive effects. Notably, using a more engaging voice did not result in worse outcomes for users compared to neutral voice or text-based interactions over the course of the study.

  • Conversation Type Matters: The studies revealed that personal conversations—where users and ChatGPT engaged in more emotionally expressive exchanges—were linked to increased feelings of loneliness, particularly at higher usage levels. However, these same conversations were associated with reduced emotional dependence and lower likelihood of problematic use when engagement was moderate. In contrast, non-personal, task-oriented conversations often led to greater emotional reliance on ChatGPT, especially among heavy users who interacted frequently and for longer durations.

  • Personal Factors Play a Role: Participants’ individual emotional tendencies influenced how they were affected by ChatGPT. Those with a stronger predisposition toward attachment in relationships, or who viewed ChatGPT as a friend or companion, were more likely to report negative emotional outcomes. Extended daily use further amplified these effects, increasing emotional dependence and feelings of loneliness among these users.

    The study also revealed that participants’ perceptions of ChatGPT’s qualities significantly influenced their emotional outcomes. Users who viewed the chatbot as trustworthy, socially attractive, or empathetic were more likely to experience increased emotional dependence and problematic use.

    One positive trend emerged: users who perceived ChatGPT as empathetic reported higher levels of real-world social interaction, suggesting that certain AI qualities might encourage healthier engagement outside of the platform.

  • Gender and Voice Interaction Insights: The research uncovered notable gender-based differences. Female participants, after prolonged use of ChatGPT, reported slightly lower levels of real-world social interaction compared to male participants. Additionally, participants who chose to interact with ChatGPT’s voice mode using a voice gender that differed from their own experienced higher levels of loneliness and emotional dependency by the end of the four-week study period, suggesting that voice characteristics may subtly influence users' emotional responses.

Beyond individual factors, the researchers also identified broader patterns in how users engaged with ChatGPT, offering deeper insights into how different usage styles correlate with emotional outcomes.

User Profiles Reveal Distinct Interaction Patterns

The controlled study identified four distinct patterns of how users engaged with ChatGPT, each linked to different psychosocial outcomes:

  • Socially Vulnerable Pattern: Users in this group reported high levels of loneliness and low real-world socialization. They often viewed ChatGPT as a friend and engaged in emotionally supportive conversations, heightening their emotional connection to the chatbot.

  • Technology-Dependent Pattern: This group showed higher emotional dependence and problematic usage behaviors, especially among users with prior chatbot experience. Their interactions focused heavily on non-personal, task-oriented conversations, yet their frequent use increased reliance on the platform.

  • Dispassionate Pattern: Marked by low emotional engagement, these users primarily sought factual information and displayed little emotional dependence. They also reported low loneliness and high social interaction with others.

  • Casual Pattern: Users in this category engaged lightly, often participating in small talk or personal conversations without deep emotional disclosure. They reported low emotional dependence and minimal problematic usage.

These patterns highlight the diversity of user behaviors and suggest that individual engagement styles play a key role in determining emotional outcomes.

Study Limitations

While the findings provide valuable insights, the researchers noted several limitations:

  • The study lacked a control group of users who entirely abstained from chatbot use, making it difficult to compare outcomes to non-users.

  • The duration of the trial was limited to four weeks, leaving long-term effects unexamined.

  • Participants were assigned specific conditions (e.g., text or voice mode), which may not fully reflect the varied ways people naturally engage with ChatGPT.

These limitations underscore the need for continued research to explore the broader and long-term impacts of AI chatbot interactions.

Why It Matters

While AI chatbots aren’t designed to replace human relationships, their conversational style and accessibility mean people may choose to use them in emotionally significant ways. Understanding these patterns is crucial to building safer, healthier platforms.

Kate Devlin, professor of AI and society at King’s College London, noted: “ChatGPT has been set up as a productivity tool, but we know people are using it like a companion app anyway.”

MIT and OpenAI’s research highlights the challenges of measuring emotional engagement accurately, particularly when relying on self-reported data. Devlin added, “You can’t divorce being a human from your interactions [with technology]. We use these emotion classifiers that we have created to look for certain things—but what that actually means to someone’s life is really hard to extrapolate.”

“A lot of what we’re doing here is preliminary, but we’re trying to start the conversation with the field about the kinds of things that we can start to measure, and to start thinking about what the long-term impact on users is,” said Jason Phang, an OpenAI safety researcher who worked on the project.

Editor's Note

While the research offers important insights into varying emotional outcomes of chatbot use, individual experiences differ widely. As someone who interacts with ChatGPT daily, I’ve personally found it to be a positive, supportive tool—enhancing connection and creativity rather than replacing human relationships.

It’s also worth keeping in mind how much user intent and framing matter. People who turn to AI chatbots as a substitute for real relationships, or who may already feel emotionally vulnerable, might experience less benefit over time. In contrast, others—like myself—may feel more supported and engaged because they see AI as an enhancement to their daily routine, a collaborative tool rather than a replacement for human connection. This underscores the nuanced, personal nature of how AI engagement affects emotional well-being.

Recommendations for AI Literacy and Safety Measures

The researchers emphasized the importance of integrating guardrails and educational efforts into AI platforms to promote healthier use. They recommend:

  • Implementing safeguards to prevent excessive or problematic use, particularly in text-based interactions where dependency risk is higher.

  • Designing platform interventions specifically for users prone to emotional attachment.

  • Broadening AI literacy initiatives to cover not just technical skills, but also awareness of the social and psychological impacts of chatbot use.

These steps could help ensure users engage with AI in safe, balanced ways, especially as chatbot technology becomes more embedded in daily life.

Looking Ahead

This research marks an early but essential step in understanding how advanced AI models influence emotional well-being. It also underscores the growing need for AI developers to think critically about long-term social and psychological effects.

As AI continues to evolve, future studies will need to dig deeper—examining not just how users engage emotionally, but how these interactions shape real-life relationships and mental health over time. For developers, the findings offer guidance: thoughtful design and transparency can help ensure that AI tools support, rather than undermine, human well-being.

You can review the full research reports from both OpenAI and the MIT Media Lab for more detailed insights.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.