
OpenAI has delayed the arrival of the advanced voice mode for ChatGPT

Published on 2024-06-26

The update, which integrates the GPT-4o model into the chatbot, aims to enable more natural conversations with "emotions and non-verbal signals"; OpenAI wants to ensure the feature meets its "high standards of security and reliability" before release.

Introduced in May, GPT-4o is designed to make interactions more natural by rapidly processing text, audio, and image inputs and generating corresponding responses. It also features an advanced Voice Mode that lets users choose from several voices to personalize their conversations with the chatbot. One of these voices, 'Sky', sparked controversy and was removed because it resembled the voice of Scarlett Johansson, who played the AI assistant in the 2013 film Her.

Originally slated for a small-scale user trial in July, the launch has been postponed by OpenAI by an additional month to meet its self-imposed standards. The decision was announced in a statement on X (formerly Twitter).

During this period, work will focus on improving the model's ability to detect and refuse inappropriate content, as well as refining the user experience and preparing the infrastructure to handle real-time usage at scale.

The current roadmap anticipates a fall rollout of the voice mode to ChatGPT Plus subscribers, though the exact timing may still change. Future updates may also add video and screen-sharing capabilities.

"ChatGPT's advanced voice mode can understand and respond with emotions and non-verbal cues, moving closer to real-time, natural conversations with AI," asserts OpenAI. Consequently, availability timelines hinge on meeting their "high standards of security and reliability".
