
OpenAI Rolls Out Advanced Voice Mode for ChatGPT Plus and Teams

An illustration of OpenAI's Advanced Voice Mode: a blue animated sphere surrounded by sound-wave, language, and microphone icons, with the ChatGPT logo and an "Advanced Voice Mode" text overlay.

Image Source: ChatGPT-4o


OpenAI has begun rolling out Advanced Voice Mode (AVM) to a broader group of ChatGPT users, specifically targeting paying customers in the Plus and Teams tiers. This new feature, which aims to make interactions with ChatGPT feel more natural, will also be available to Enterprise and Edu customers starting next week.

Enhanced Design and Features for AVM

As part of this rollout, AVM is receiving a design update. The feature is now represented by a blue animated sphere, replacing the animated black dots first introduced during OpenAI’s showcase in May. Users will be notified via a pop-up next to the voice icon in the ChatGPT app when AVM becomes available to them.

New Voices and Improved Functionality

In addition to the new design, ChatGPT now includes five new voices: Arbor, Maple, Sol, Spruce, and Vale, bringing the total number of available voices to nine. These voices, inspired by nature, are intended to enhance the user experience by making interactions more fluid and natural. However, one voice, Sky, is notably absent from the lineup. This omission follows a legal challenge from actress Scarlett Johansson, who claimed that Sky’s voice was too similar to her own. OpenAI removed the voice after the incident, stating that any resemblance was unintentional.

Missing Features and Future Updates

The rollout does not include the video and screen-sharing capabilities that OpenAI announced during its spring update. These features, which would allow GPT-4o to process visual and audio information simultaneously, still have no release timeline. In the meantime, OpenAI has made several improvements to AVM, including better understanding of accents and smoother, faster conversations.

Expanded Customization Options

OpenAI is also extending ChatGPT's customization options to AVM. Custom Instructions, which let users tailor how ChatGPT responds, and Memory, which lets the AI draw on past conversations, now work in voice interactions as well. These features aim to further personalize the user experience and improve continuity across conversations.

Regional Availability and Restrictions

Currently, AVM is not available in certain regions, including the EU, the U.K., Switzerland, Iceland, Norway, and Liechtenstein. An OpenAI spokesperson confirmed that the company is working on making the feature accessible in these areas but did not provide a specific timeline.

OpenAI’s Vision for AVM

With the addition of AVM, OpenAI is striving to create a more interactive and natural experience for ChatGPT users. As the company continues to refine this feature and expand its availability, it remains committed to enhancing the overall usability and accessibility of its AI tools.