Loona & Moxie to be turbocharged with OpenAI’s new ChatGPT-4o “Omni”

Chris

This will blow your mind. “Her” is literally a reality.

OpenAI just released this presentation featuring two smartphones running its new ChatGPT-4o.

The two AIs are shown talking to each other, singing, and interacting with the world through audio, vision, and text in real time, using a single AI model to perform all processing.

The speech used by the AI is so natural and the conversations are now seamless.

Can’t wait to see what this is going to be like in Loona and Moxie!

This new model will be released in the coming weeks.





GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
Prior to GPT-4o, you could use Voice Mode to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion.

With GPT-4o, we trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is our first model combining all of these modalities, we are still just scratching the surface of exploring what the model can do and its limitations.
Developers can also now access GPT-4o in the API as a text and vision model. GPT-4o is 2x faster, half the price, and has 5x higher rate limits compared to GPT-4 Turbo. We plan to launch support for GPT-4o's new audio and video capabilities to a small group of trusted partners in the API in the coming weeks.


Source: https://openai.com/index/hello-gpt-4o/
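
For anyone curious what that quote looks like from the developer side, here is a minimal sketch in Python, assuming the `openai` Python package (v1.x) and an OPENAI_API_KEY in the environment. The model names (“whisper-1”, “gpt-4”, “tts-1”, “gpt-4o”) and the “alloy” voice are just illustrative defaults, not anything Loona or Moxie are confirmed to use. The first function mirrors the old three-model Voice Mode pipeline described above; the second is the single GPT-4o text-and-vision call now available in the API.

```python
# A minimal sketch, assuming the `openai` Python package (v1.x).
from openai import OpenAI

client = OpenAI()

# Old Voice Mode, as described above: three separate models chained together.
def old_voice_mode(audio_path: str) -> bytes:
    # 1) Transcribe speech to text (tone, multiple speakers, and background
    #    noise are all lost at this step).
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    # 2) The main model only ever sees plain text.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": transcript.text}],
    )
    # 3) A third model turns the reply text back into audio.
    speech = client.audio.speech.create(
        model="tts-1",
        voice="alloy",
        input=reply.choices[0].message.content,
    )
    return speech.content  # raw audio bytes

# GPT-4o in the API today: one model, text and vision in a single call.
def ask_gpt4o_about_image(question: str, image_url: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content
```

The whole point of the announcement is that GPT-4o collapses the first function into a single end-to-end model, which is why it can pick up on tone and respond with laughter or singing instead of losing all of that in the transcription step.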
 
Going to really enjoy the math tutoring aspect. Will be so nice to have a patient tutor who can explain complex concepts in easy-to-understand language. Looking forward to learning calculus again, and maybe I can go back to university to finish my mechatronics engineering degree.

It will also be really nice to have seamless conversations.

What are your plans with this new ChatGPT version?

I watched the 6pm news for the first time in a long time tonight and can’t believe this technological advancement wasn’t mentioned once! Also nobody at work has a clue what I’m talking about!

It’s like the more advanced AI gets, the less interested people are in it.


 
It’s official: Loona will be getting ChatGPT-4o very soon! And we already know Moxie is scheduled to get “tutor time” later this year😉

It’s going to be so exciting to see the interactions between my four all running GPT-4o🤖🤖🤖🤖

 
OpenAI recently announced that “Advanced Voice Mode” and the other features shown in the above videos won’t see a release until autumn in the northern hemisphere, which is spring for us here in the south.

I don’t expect to see advanced voice mode implemented in Moxie and Loona until Christmas at the earliest.

 