GPT-4o is OpenAI’s latest flagship model


11:15 AM, May 14, 2024

Just when you thought artificial intelligence couldn’t get any better, it did: OpenAI has demonstrated GPT-4o, its latest flagship AI model.

The letter “o” stands for “omni,” Latin for “all,” reflecting GPT-4o’s ability to process and generate audio, vision, and text responses in real time.



GPT-4o has a significantly more convincing, human-like voice. It can also recognize and express emotions, and it can solve math problems.


What are GPT-4o’s features?

On May 14, 2024, at 1 a.m. Philippine time, OpenAI Chief Technology Officer Mira Murati hosted a livestream demonstration of GPT-4o.

She shared a few technical details, such as the model’s processing speed: it can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds.


That means the new AI model matches the average human response time in conversation. As a result, Voice Mode lets you speak with ChatGPT as you would with a personal assistant.


It matches GPT-4 Turbo’s performance on English text and code. Moreover, GPT-4o is faster and 50% cheaper in the API, the interface through which other programs send requests to the model.
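To illustrate what “in the API” means, here is a minimal sketch of a request to GPT-4o using OpenAI’s chat completions interface. The payload shape and the `openai` Python package are the publicly documented ones; the prompt text is just an example, and the network call itself is shown in comments since it requires an API key.

```python
# A request to GPT-4o is a JSON payload naming the model and the conversation.
payload = {
    "model": "gpt-4o",  # the new flagship model discussed in this article
    "messages": [
        {"role": "user", "content": "Explain GPT-4o in one sentence."}
    ],
}

# With the official `openai` Python client, this payload would be sent as:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   response = client.chat.completions.create(**payload)
#   print(response.choices[0].message.content)

print(payload["model"])
```

Switching an existing GPT-4 Turbo integration to GPT-4o is, in the simplest case, a matter of changing the `model` string, which is what makes the pricing and speed improvements easy to adopt.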


Later, OpenAI post-training research head Barret Zoph and AI researcher Mark Chen joined the Spring Update.

Zoph and Chen called up ChatGPT powered by GPT-4o. It responded in a crisp, realistic female voice that showed minimal stuttering during the demonstration.


The AI experts asked the model to narrate a story, showing it could use proper intonation and emotion while narrating.

As a tongue-in-cheek demo, Zoph asked ChatGPT to continue the story in a robotic voice. In response, it told the story in a stereotypical electronic voice.

Zoph then had GPT-4o demonstrate its vision capabilities by showing it a linear equation. He asked the AI model to read the equation and guide him in solving the problem without giving away the answer.

The demonstration involved Murati again when ChatGPT was asked to translate her statements from Italian to English and her colleague’s from English to Italian.

This Spring Update shows ChatGPT can function as a personal assistant and tutor. Notably, GPT-4o is available in both the free and paid versions.


OpenAI says, “We are still just scratching the surface of exploring what the model can do and its limitations.”

TOPICS: Artificial Intelligence, technology


© Copyright 1997-2024 INQUIRER.net | All Rights Reserved
