
Google’s Gemini Omni Can Write Math on a Chalkboard. AI Video’s Hardest Problem May Be Getting Easier


Google hasn’t announced Gemini Omni. A Reddit user just found it anyway.

Someone opened their Gemini app, got a pop-up for a model they’d never heard of, and started generating video. What came out has been making the rounds on Reddit for the past few days, mostly because of one clip.

The chalkboard video is the reason.

A professor writes a full mathematical proof in chalk, narrating as he goes: the text is legible, the delivery natural, the physics mostly convincing. AI video has never handled text well. This one does. And it’s not just the text. The audio, the movement, and the realism all hit at a level that makes people genuinely uncomfortable in a way the usual AI video demos don’t.

It’s a leak. Google hasn’t said a word. I/O is next week. But whatever Omni is, the early results suggest something shifted.

What we know about Omni

Not a lot, officially. The user got a pop-up in the Gemini app prompting them to “create with Gemini Omni”, described as a new video generation model that can remix videos, edit directly in chat, and use templates. That’s the entire official description. Google hasn’t confirmed the model even exists.

Max Weinbach dug into the metadata and found that Omni appears to be an extension of Veo rather than something built from scratch. Which tracks. Google has been developing Veo for a while and the output here looks like a significant step forward from what Veo was producing, not a completely different direction.

I/O is next week. That’s almost certainly when this becomes official and we get actual details on what Omni is and how it fits into the broader Gemini lineup.

Where it still falls short

The chalkboard result is impressive. The spaghetti test is a different story.

The original Will Smith prompt got blocked by Omni’s guardrails, so the user rewrote it: two men at a seaside restaurant, white tablecloth, approaching the table and eating spaghetti while having a conversation.

Spaghetti appears from nowhere on plates that were empty seconds earlier. Chewing doesn’t line up with the bites. The inconsistencies the chalkboard video mostly avoids stack up quickly here. Another Reddit user ran the same prompt through ByteDance’s Seedance 2 and got a noticeably more consistent result.

So Omni isn’t uniformly ahead of everything. The text handling is genuinely new. Physical realism on complex interactions still has the rough edges you’d find elsewhere.


The usage question most people ignore

Those two generations, the chalkboard and the spaghetti, consumed 86% of this user’s daily quota on a Google AI Pro plan. Some Gemini Flash usage the same day means it isn’t a perfectly clean number, but the direction is clear. Video generation on Omni is expensive in quota terms, and that will become the conversation the moment Google makes this official and people hit their limit within their first two prompts. Nobody is having that conversation right now.

What comes next

Google said video is here to stay after OpenAI shut down Sora earlier this year. Omni looks like the proof of that commitment. The chalkboard result alone suggests something real has shifted on the text rendering problem, even if the model isn’t consistent across all prompts yet.

We’ll know the full picture next week. Until then, the chalkboard video is the thing worth watching twice.
