- OpenAI is retiring ChatGPT's standard voice mode in September
- Only the faster, more expressive advanced voice mode will remain available
- Many users are frustrated by the change, preferring the sound and style of the original voice
The voice that people came to associate with ChatGPT is being retired on September 9, and not everyone is happy about it. ChatGPT's "standard" voice is disappearing in favor of the "advanced" voice option first rolled out to a limited selection of ChatGPT users last year. Rebranded simply as "ChatGPT Voice," it will be the only choice going forward.
The original "standard" voice mode made its debut in 2023, built on a simple pipeline: you speak, OpenAI's servers transcribe your input, generate a response using a GPT model, then read it back in a relatively neutral synthetic voice.
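For a rough sense of how that kind of cascaded pipeline fits together, here is a minimal sketch using the openai Python SDK. The model names, voice, and file handling are illustrative placeholders, and this is not OpenAI's actual production implementation of the old standard mode.

```python
# Minimal sketch of a cascaded speak -> transcribe -> generate -> read-back loop,
# in the spirit of the original standard voice mode. Models and voice are
# illustrative placeholders, not OpenAI's production stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def voice_turn(audio_path: str, reply_audio_path: str = "reply.mp3") -> str:
    # 1. Transcribe the user's recorded speech to text.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )

    # 2. Generate a written reply with a chat model.
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": transcript.text}],
    )
    reply_text = completion.choices[0].message.content

    # 3. Synthesize that exact reply text, so it is read back verbatim.
    speech = client.audio.speech.create(
        model="tts-1", voice="alloy", input=reply_text
    )
    with open(reply_audio_path, "wb") as out:
        out.write(speech.content)

    return reply_text
```

Because each stage hands a finished text artifact to the next, the spoken answer in this kind of setup is simply the written answer read aloud.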
ChatGPT's advanced voice mode is designed to respond more quickly, sound more human in its tone and manner of speaking, and generally operate at a higher level than its predecessor. Many people, however, think the switch is a mistake.
"The standard voice offers a warmth, depth, and natural connection that the advanced voice simply doesn't match," wrote one user in a post on the OpenAI forum. "The advanced voice comes across as robotic and detached, devoid of the emotive, understanding tone that I appreciate."
More than one person has described the new voice as less engaging to talk to. There have also been complaints that the new model speaks too quickly, as if it were trying to rush through the interaction.
"The standard voice is thoughtful and has a natural, comforting tone and cadence," posted one Reddit user. "The advanced voice doesn't have the same characteristics, doesn't give thoughtful answers, has restrictive content limits, and always seems like it's trying to rush through a mediocre response."
Advanced
Even if you don't mind how the new voice sounds, some ChatGPT users are annoyed because they have found that it doesn't even work the same way the previous voice did.
The advanced voice mode integrates your speech, the AI's response, and its vocal delivery into a single real-time process. That integration means the AI doesn't recite the written answer verbatim. Instead, it expresses ideas more conversationally, sometimes skipping sentences, condensing clauses, or adjusting its tone based on context. Technically impressive, but not what some ChatGPT users want.
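To illustrate how an integrated, streaming session differs from the three-step pipeline sketched earlier, here is a minimal example using the openai Python SDK's beta Realtime interface. For brevity it streams a text-only exchange; the actual advanced voice mode exchanges audio over the same kind of continuous connection, and none of this is OpenAI's production implementation.

```python
# Minimal sketch of an integrated, event-driven session in the style of the
# advanced mode. Text-only for brevity; the real feature streams audio both ways.
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

    async with client.beta.realtime.connect(
        model="gpt-4o-realtime-preview"
    ) as connection:
        # One persistent session handles input, generation, and output together.
        await connection.session.update(session={"modalities": ["text"]})

        await connection.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello briefly."}],
            }
        )
        await connection.response.create()

        # The reply arrives as a stream of incremental events rather than as one
        # finished block handed from stage to stage.
        async for event in connection:
            if event.type == "response.text.delta":
                print(event.delta, end="", flush=True)
            elif event.type == "response.done":
                print()
                break


asyncio.run(main())
```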
"The standard voice would literally read the exact answer that ChatGPT would normally give you. It was a direct line, you know?" reads one post on Reddit. "But this new one? It feels like it's paraphrasing or summarizing it instead. It skips the small details and makes the whole conversation feel more disconnected."
This may seem minor in the grand scheme of AI progress, but it echoes a broader trend in tech, where people get upset by a big change even when it's an upgrade.
Of course, it's not that no one likes the new voice option. Some love its realism and speed, and the way it makes for a more fluid conversation. OpenAI has also promised more improvements to come. But, given that complaints about the removal of GPT-4o when GPT-5 debuted led to the older model's return, I wouldn't be too surprised to see the standard voice mode stage a comeback as well.