OpenAI’s New Image Policy Raises Questions About Ethics and AI

Image Source: ChatGPT-4o
This week, OpenAI rolled out a major update to ChatGPT’s image capabilities with the launch of GPT-4o’s native image generator. The upgrade brought viral excitement with its ability to create whimsical, Studio Ghibli-style art, but behind the aesthetics lies a deeper shift in policy. For the first time, OpenAI is loosening its rules to let users generate images of public figures like Donald Trump and Elon Musk, as well as sensitive symbols and physical features once considered too controversial to depict. The move raises important questions about free expression, ethical boundaries, and the evolving role of AI in our cultural landscape.
From Blanket Bans to Precise Guardrails
In a blog post, Joanne Jang, who leads model behavior at OpenAI, explained the thinking behind the change: "We’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm." Rather than rejecting all potentially risky prompts outright, OpenAI is implementing contextual safeguards that weigh intent, use case, and user feedback.
Jang also emphasized the importance of humility. OpenAI acknowledges that it can’t anticipate every use case or social impact. By launching with more permissive policies and iterating based on real-world feedback, the company aims to balance innovation with responsibility.
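To make the shift concrete, consider a short sketch of the two approaches. The Python below is purely illustrative: OpenAI has not published its moderation logic, and every rule, term list, and category name here is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical request model; OpenAI's actual moderation pipeline is not public.
@dataclass
class ImageRequest:
    prompt: str
    context: str  # e.g., "education", "satire", "unspecified"

SENSITIVE_TERMS = {"swastika", "public figure"}  # invented for illustration

def blanket_refusal(req: ImageRequest) -> bool:
    """Old approach, as described: refuse any prompt touching a sensitive area."""
    return any(term in req.prompt.lower() for term in SENSITIVE_TERMS)

def contextual_refusal(req: ImageRequest) -> bool:
    """New approach, as described: weigh context and likely real-world harm."""
    touches_sensitive = any(term in req.prompt.lower() for term in SENSITIVE_TERMS)
    if not touches_sensitive:
        return False
    # Allow sensitive subjects in neutral or educational contexts;
    # block contexts that suggest glorification or targeted harm.
    return req.context in {"glorification", "harassment"}

req = ImageRequest("a swastika in a Holocaust museum exhibit", context="education")
print(blanket_refusal(req))     # True:  the old policy would refuse outright
print(contextual_refusal(req))  # False: the new policy could permit, with safeguards
```

The specific rules are beside the point; what changes is the shape of the decision. The old path keys only on subject matter, while the new one also asks what the image is for.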
What Has Changed?
Under OpenAI’s updated image generation policy, ChatGPT can now:
Generate and edit images of public figures (unless they opt out)
Fulfill requests involving physical characteristics (e.g., ethnicity, weight) in neutral or respectful contexts
Render hate symbols, such as swastikas, in educational or cultural settings, provided the content does not glorify extremist ideologies
Mimic animation styles from studios like Pixar or Studio Ghibli (but not those of individual living artists)
How Public Figures Can Opt Out
OpenAI has introduced an opt-out process for individuals who do not want their likeness to be generated by ChatGPT’s image tools. While details are not fully public, public figures or their authorized representatives can contact OpenAI directly to request exclusion. This approach is intended to offer individuals more control over their digital representation, though it also places the responsibility on them to initiate the request.
Public Figures, Pop Culture, and Personal Rights
Perhaps the most visible change is OpenAI’s new stance on public figure imagery. Previously, prompts asking to generate or modify images of well-known individuals were blocked. Now, under the updated policy, such images are allowed—unless the person explicitly opts out.
This change opens the door for political commentary, satire, and educational use. But it also blurs the line between public interest and personal rights. Just because someone is famous doesn’t mean they should lose control over how they’re portrayed by AI. The introduction of an opt-out list is an attempt to strike this balance, but it puts the onus on individuals to protect their digital likeness.
The shift also has implications for pop culture. ChatGPT can now mimic popular animation styles, such as Pixar or Studio Ghibli, although it still avoids replicating the style of individual living artists. These capabilities touch on ongoing debates about fair use, artistic integrity, and copyright in the age of AI.
Controversial Content and Cultural Sensitivity
OpenAI is also redefining how it treats so-called "offensive" content. Rather than refuse prompts based on potentially uncomfortable attributes, the new system evaluates whether the request promotes real-world harm. For example, ChatGPT previously declined prompts like "make this person look more Asian" or "make this person heavier" to avoid suggesting that these characteristics were inherently negative. Now, those prompts are permitted in neutral or non-harmful contexts.
Additionally, GPT-4o can generate hate symbols like swastikas in educational or cultural contexts (e.g., in a Holocaust museum exhibit, a history class, or even certain Eastern religious contexts), but it still blocks content that glorifies extremist ideologies. While the intent is to support historical and intellectual exploration, this policy introduces difficult questions about where to draw the line. In today’s polarized political climate, even a safeguarded reproduction of such a symbol can be repurposed, shared widely, or manipulated beyond its original context, raising concerns that these visuals could be weaponized by extremist groups or folded into propaganda and misinformation campaigns.
Political and Regulatory Pressure
These policy shifts come amid increased scrutiny of tech companies over AI "censorship." In early March, Congressman Jim Jordan sent inquiries to OpenAI, Google, and others regarding potential coordination with the U.S. government to suppress AI-generated content. Though OpenAI denies any political motivation behind the changes, the timing aligns with broader trends: other platforms like Meta and X are also loosening content restrictions.
For OpenAI, the message is clear: the technology is now sophisticated enough to handle more nuanced moderation. With precision tools and opt-out mechanisms, the company believes it can support free expression without compromising safety.
The Role of Responsible Innovation
Jang's blog post frames this transition as part of OpenAI’s philosophy of iterative deployment: launching, learning, and adjusting in real time. "Ships are safest in the harbor," she wrote, invoking a familiar adage. "The safest model is the one that refuses everything. But that’s not what ships or models are for."
In other words, risk is inherent in innovation. But when guided by ethics, community input, and technical safeguards, those risks can be managed. The decision to allow more user freedom is not a lowering of safety standards, Jang argues, but a maturation of them. Still, only time will tell how resilient those ethics are when tested at scale—especially in an era where AI-generated content can be co-opted for political messaging, disinformation, or worse.
Final Thoughts: Can We Handle This Power?
This update marks a turning point in AI's integration into politics and pop culture. With great creative freedom comes great responsibility. OpenAI’s policy changes show a willingness to trust users—but that trust must be earned and upheld.
AI-generated portrayals of public figures, cultural symbols, and physical traits can enlighten or deceive, empower or exploit. As this technology becomes more accessible, the responsibility for ethical use will be shared among creators, platforms, and everyday users.
Used wisely, these tools can enrich public discourse and artistic expression. Used recklessly, they can cause confusion, offense, or harm. The question is not just what AI can do—but what we will choose to do with it.
Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.