ChatGPT gains the power to see, hear, and speak

WHY THIS MATTERS IN BRIEF

Giving ChatGPT multimodal capabilities makes it even more powerful, and once again changes the AI playing field.

 

ChatGPT has a new upgrade that lets the viral Artificial Intelligence (AI) tool “see, hear, and speak”, according to OpenAI. The update will allow users to have voice conversations with the AI chatbot and interact with it using images as well, the firm said in a blog post on Monday.

 

“ChatGPT can now see, hear, and speak,” the firm also said in a post on X.

The features will be rolled out “over the next two weeks” and enable users to “use voice to engage in a back-and-forth conversation” with the AI assistant.

With the new features, ChatGPT can be used to “request a bedtime story for your family, or settle a dinner table debate,” according to the company, bringing it closer to the services offered by Amazon’s Alexa or Apple’s Siri AI assistants.

Providing an example of how the feature works, OpenAI shared a demo in which a user asks ChatGPT to come up with a story about “the super-duper sunflower hedgehog named Larry.”

The chatbot replies to the query with a human-like voice and also responds to questions such as “What was his house like?” and “Who is his best friend?”

OpenAI said the voice capability is powered by a new text-to-speech model that generates human-like synthetic audio from nothing more than text and a few seconds of sample speech.
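The voice feature itself lives in the ChatGPT mobile apps, but OpenAI also exposes text-to-speech models of the same family through its developer API. As a rough sketch of what generating spoken audio from text looks like with the official Python SDK – the "tts-1" model and "alloy" voice names here are illustrative of the documented API rather than the exact model behind the app:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Generate spoken audio from a short piece of text using a preset voice.
speech = client.audio.speech.create(
    model="tts-1",   # illustrative text-to-speech model name
    voice="alloy",   # one of the preset, voice-actor-derived voices
    input="Once upon a time there was a super-duper sunflower hedgehog named Larry.",
)

# Save the returned audio bytes to an MP3 file.
with open("larry.mp3", "wb") as f:
    f.write(speech.read())
```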

 

“We collaborated with professional voice actors to create each of the voices. We also use Whisper, our open-source speech recognition system, to transcribe your spoken words into text,” the company said.
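Whisper is also published as an open-source Python package, so the speech-to-text half of that pipeline can be reproduced locally. A minimal sketch, assuming ffmpeg is installed and with the audio file name as a placeholder:

```python
# pip install openai-whisper   (ffmpeg must be available on the system)
import whisper

# Load one of the published checkpoints; "base" trades accuracy for speed.
model = whisper.load_model("base")

# Transcribe a local recording of the spoken question into text.
result = model.transcribe("spoken_question.m4a")
print(result["text"])
```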

The AI firm believes the new voice technology is capable of crafting realistic-sounding synthetic voices from just a few seconds of real speech, and could open the door to many creative applications.

However, the company also cautioned that the new capabilities present new risks, “such as the potential for malicious actors to impersonate public figures or commit fraud” – a caveat that now accompanies almost everything it releases to the public.

Another major update to the AI chatbot allows users to upload an image and ask ChatGPT about it.

“Troubleshoot why your grill won’t start, explore the contents of your fridge to plan a meal, or analyze a complex graph for work-related data,” OpenAI explained.

 

This new feature, according to the company, also lets users focus on a specific part of the image using a drawing tool in the ChatGPT mobile app.

This kind of multimodal recognition by the chatbot has been forecast for a while, and its new understanding of images is powered by multimodal GPT-3.5 and GPT-4. These models can apply their language reasoning skills to a range of images, including photographs, screenshots and documents.
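The same kind of image-plus-text prompt can also be expressed through OpenAI's Chat Completions API for vision-capable models. A rough sketch along the lines of the grill example above – the model name and image URL are placeholders, and it assumes a vision-capable model is enabled for the account:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask a question about an image by mixing text and image parts in one user message.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Why won't this grill start?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photos/grill.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)
```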

OpenAI said the new features will roll out within the next two weeks in the app for paying subscribers of ChatGPT’s Plus and Enterprise services.

“We’re excited to roll out these capabilities to other groups of users, including developers, soon after,” the AI firm said.
