
OpenAI’s New Voice Models Want to Do More Than Talk Back


OpenAI is pushing deeper into voice.

The company just launched three new realtime audio models in its API: GPT-Realtime-2 for conversational reasoning, GPT-Realtime-Translate for live multilingual translation, and GPT-Realtime-Whisper for streaming speech transcription.

GPT-Realtime-2 can now handle longer conversations, recover from interruptions more naturally, use tools while someone is still talking, and respond with different reasoning levels depending on the task. OpenAI says the model is designed for things like customer support, scheduling, travel assistance, and other workflows where the AI actually has to keep track of context instead of just replying quickly.
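For developers, the shape of a session looks roughly like this. The sketch below opens a Realtime API connection and turns on server-side turn detection; the model identifier "gpt-realtime-2" is taken from the announcement, and the connection URL, headers, and event names follow OpenAI's existing Realtime API conventions, so treat the details as illustrative rather than exact.

```python
# Minimal sketch, assuming the new model is exposed under an identifier like
# "gpt-realtime-2" (name taken from the announcement, not verified) and that
# the WebSocket URL and event shapes match OpenAI's existing Realtime API.
import asyncio
import json
import os

import websockets  # pip install websockets


async def main():
    url = "wss://api.openai.com/v1/realtime?model=gpt-realtime-2"
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",  # header used during the Realtime beta
    }
    # Note: older websockets releases call this argument extra_headers.
    async with websockets.connect(url, additional_headers=headers) as ws:
        # Configure the session: audio in and out, plus server-side voice
        # activity detection so the model decides when the caller is done.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "modalities": ["audio", "text"],
                "instructions": "You are a scheduling assistant.",
                "turn_detection": {"type": "server_vad"},
            },
        }))
        async for message in ws:
            event = json.loads(message)
            print(event["type"])  # e.g. session.updated, response.audio.delta


asyncio.run(main())
```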

OpenAI is no longer treating voice as a side feature attached to chatbots. It’s starting to position voice as the interface itself. That means live translation during conversations. Real-time transcription while meetings are still happening. AI agents that can check your calendar, pull information from apps, or complete actions while the conversation keeps moving.

Voice models are starting to behave more like agents

Image: Three audio models in the OpenAI API

The most interesting part of this launch is not the voices themselves. It’s the fact that OpenAI keeps framing these systems around actions and workflows instead of conversations.

The company highlighted examples like Zillow building voice agents that can search for homes and schedule tours, Deutsche Telekom testing multilingual customer support, and Priceline exploring trip planning that happens conversationally from start to finish.

That points to a shift happening across AI right now. Voice assistants used to exist mainly to answer questions. These new systems are being designed to stay active while tasks are unfolding: checking calendars, updating bookings, pulling information from apps, translating conversations live, or handling interruptions without restarting the interaction.

That’s also why OpenAI focused heavily on realtime reasoning and tool use in this launch. A voice assistant that simply sounds natural is no longer enough. The hard part is making the system useful while the conversation is still moving.
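Here is a rough idea of what "using tools while someone is still talking" looks like in practice. The check_calendar tool below is invented for illustration, and the event names follow OpenAI's published Realtime API function-calling flow, so verify them against the current docs before relying on this.

```python
# Hedged sketch: responding to a tool call mid-conversation. The tool name and
# its implementation are hypothetical; event names are based on the Realtime
# API's function-calling flow as documented for earlier realtime models.
import json

CALENDAR_TOOL = {
    "type": "function",
    "name": "check_calendar",
    "description": "Return free slots for a given date.",
    "parameters": {
        "type": "object",
        "properties": {"date": {"type": "string", "description": "YYYY-MM-DD"}},
        "required": ["date"],
    },
}


def check_calendar(date: str) -> list[str]:
    # Placeholder: a real agent would query a calendar backend here.
    return ["10:00", "14:30"]


async def handle_event(ws, event: dict):
    # The model emits a function call while the audio conversation continues;
    # we run the tool and stream the result back so the reply can keep going.
    if event.get("type") == "response.function_call_arguments.done":
        args = json.loads(event["arguments"])
        slots = check_calendar(args["date"])
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "function_call_output",
                "call_id": event["call_id"],
                "output": json.dumps({"free_slots": slots}),
            },
        }))
        # Ask the model to continue its spoken response using the tool output.
        await ws.send(json.dumps({"type": "response.create"}))
```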

Voice changes how people use software

Typing creates natural pauses. People send a prompt, wait for a response, then move on.

Voice interactions work differently. Conversations keep moving even when requests change halfway through or multiple things happen at once.

That creates a much harder problem for AI systems. The model has to listen continuously, decide when to respond, remember context across longer sessions, and sometimes use tools without interrupting the flow of the conversation itself.
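A minimal sketch of what that interruption handling can look like, assuming the Realtime API's server-side voice activity detection events; the audio_player object is a stand-in for whatever playback layer an app actually uses.

```python
# Rough sketch of barge-in handling, assuming server-side VAD events from the
# Realtime API. When the caller starts talking over the assistant, the client
# cancels the in-flight response instead of restarting the whole session.
import base64
import json


async def on_event(ws, event: dict, audio_player):
    etype = event.get("type")
    if etype == "input_audio_buffer.speech_started":
        # The user interrupted: stop local playback and cancel the response
        # the model is still generating, keeping the conversation context.
        audio_player.stop()  # audio_player is a hypothetical playback object
        await ws.send(json.dumps({"type": "response.cancel"}))
    elif etype == "response.audio.delta":
        # Audio deltas arrive base64-encoded; decode before playing.
        audio_player.play_chunk(base64.b64decode(event["delta"]))
```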

And that’s probably why companies like OpenAI are suddenly investing so heavily in realtime infrastructure.


GPT-Realtime-Translate may end up being the sleeper feature

The reasoning upgrades will get most of the attention, but the realtime translation model could end up having the bigger commercial impact.

OpenAI says GPT-Realtime-Translate can handle more than 70 input languages and translate into 13 output languages while keeping pace with live conversations. That opens the door for customer support, meetings, events, travel assistance, and sales calls where people no longer need to speak the same language fluently to communicate smoothly.
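To make that concrete, a translation session might be configured roughly like this. The "gpt-realtime-translate" identifier mirrors the announcement, and whether the target language is set through instructions or a dedicated session field is an assumption, so treat this as a shape rather than the exact API.

```python
# Illustrative only: configuring a realtime session for live translation.
# The model itself would be selected when opening the connection, e.g.
# ?model=gpt-realtime-translate (identifier taken from the announcement).
# Controlling the output language via instructions is an assumption.
import json

session_update = {
    "type": "session.update",
    "session": {
        "modalities": ["audio", "text"],
        # Input can be any supported spoken language; here we ask for Spanish
        # output so both sides of a support call stay in sync.
        "instructions": (
            "Translate everything the caller says into Spanish, preserving "
            "tone, and speak the translation aloud."
        ),
        "turn_detection": {"type": "server_vad"},
    },
}
print(json.dumps(session_update, indent=2))
```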

And unlike older translation pipelines, this model is clearly designed to let conversations continue naturally while the translation happens in the background.

The company also highlighted testing from BolnaAI, which said the model handled regional Indian languages like Hindi, Tamil, and Telugu with lower word error rates and fewer fallback failures compared to other systems they tested.

Vimeo is experimenting with the model as well. The company says it’s using GPT-Realtime-Translate for live translation during broadcasts so creators can reach global audiences while streaming in real time. According to Vimeo, one of the biggest improvements was how well the system handled multilingual conversations without breaking flow mid-interaction.

Multilingual voice AI becomes much more useful once it starts handling accents, interruptions, and regional speech patterns reliably in real time.


Pricing and availability

All three models are available through OpenAI’s Realtime API.

GPT-Realtime-2 is priced at $32 per million audio input tokens and $64 per million audio output tokens, while GPT-Realtime-Translate costs $0.034 per minute and GPT-Realtime-Whisper costs $0.017 per minute.
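A quick back-of-envelope comparison using those prices. The per-minute models are straightforward; for GPT-Realtime-2 the tokens-per-minute rate below is a placeholder assumption, since the announcement prices it per token rather than per minute.

```python
# Back-of-envelope cost sketch from the listed prices. Only the flat
# per-minute models are exact; the audio-token rate for GPT-Realtime-2 is a
# placeholder assumption, not an official figure.
MINUTES = 30  # length of a hypothetical support call

# Flat per-minute models
translate_cost = MINUTES * 0.034   # GPT-Realtime-Translate
whisper_cost = MINUTES * 0.017     # GPT-Realtime-Whisper

# Token-priced model: $32 per 1M audio input tokens, $64 per 1M output tokens
ASSUMED_TOKENS_PER_MIN = 800       # placeholder assumption
input_tokens = MINUTES * ASSUMED_TOKENS_PER_MIN
output_tokens = MINUTES * ASSUMED_TOKENS_PER_MIN
realtime2_cost = input_tokens / 1e6 * 32 + output_tokens / 1e6 * 64

print(f"Translate: ${translate_cost:.2f}, Whisper: ${whisper_cost:.2f}, "
      f"GPT-Realtime-2 (assumed token rate): ${realtime2_cost:.2f}")
```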

Developers can also test the models through OpenAI’s Playground before integrating them into apps and workflows.
