
LTX 2.3 Is Here: The AI Video Generator That Runs on Your PC and Challenges Veo 3.1


Two years ago, if you wanted to generate a decent AI video, the only real option was a subscription. Pick a tool, pay monthly, generate on their servers. That was just how AI video worked.

Open source models eventually closed the gap on quality, but running them locally meant terminals, dependency errors, and a lot of patience. Not everyone wanted that headache. Most people didn’t.

Now that's changed. On March 5th, Lightricks dropped two things at once: LTX 2.3, a major upgrade to its open source video model, and LTX Desktop, a proper video editor built entirely on top of it. It's open source, and you install it like any other app on your computer.

If you have the hardware, and we're talking 32GB of VRAM for the full experience, you genuinely don't need a subscription for video generation anymore. Just your GPU doing the work.

And if you’re not there on hardware yet, LTX still offers an API. Paid, but flexible. The point is you have options now.

So What Did Lightricks Actually Build?

Lightricks has been building LTX for a while now. LTX 2 was already turning heads in the open source community, and 2.3 is the most complete version they've shipped yet. It's a diffusion-based video model that generates both video and audio together in a single pass. The sound, the motion, and the visuals all come from the same model at the same time. That alone separates it from most of what's out there.

The base model is 22B parameters and uses Google’s Gemma 3 12B as its text encoder, which is the part that reads your prompt and figures out what to generate. It’s a capable brain for a capable model.
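For a rough sense of scale, here's the naive weights-only VRAM arithmetic for a 22B-parameter model. This is illustrative only; it says nothing about how LTX actually lays out or quantizes memory at runtime:

```python
def weight_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed to hold the model weights alone, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# Weights-only footprint of a 22B-parameter model at common precisions.
# Actual VRAM use is higher: activations, the VAE, and the Gemma 3 text
# encoder all sit on top of this.
for name, bpp in [("fp16/bf16", 2), ("fp8", 1), ("int4", 0.5)]:
    print(f"{name}: ~{weight_gb(22, bpp):.0f} GiB")
```

The takeaway: at full 16-bit precision the weights alone outgrow a 32GB card, which is why lower-precision variants matter so much for local use.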

Here’s what actually changed in 2.3.

Sharper output: Lightricks rebuilt the VAE, the part responsible for encoding and decoding visuals. Previous versions softened fine details like hair and edges, especially at lower resolutions. Cleaner textures now, less fixing in post.

Better prompt following: Complex prompts with multiple subjects or specific spatial relationships used to drift from what you asked for. If you’ve been dumbing down your prompts to get consistent results, you can stop.

Image to video that actually moves: Previous versions would often produce a slow pan or freeze entirely when animating a still image. They reworked the training specifically to fix this.

Native portrait video: Vertical resolution up to 1080×1920, trained on actual vertical data. First time in LTX. Relevant if you’re making content for TikTok, Reels, or Shorts.

Cleaner audio: New vocoder, cleaner training data. The audio actually aligns with what’s on screen instead of feeling tacked on.

LTX-Desktop: AI Video Without the Setup

[Image: LTX Desktop AI video editor]

This is where it gets better. LTX-Desktop is a proper video editor built on LTX 2.3, and you install it like any other app.

If you have at least 24GB of VRAM, everything runs fully local: your GPU does the work, and there's no cost per generation. Don't have the hardware? LTX Desktop connects to their API instead. Generation happens on their servers, and it's paid.

Either way you’re working inside one app. And if one small detail looks off, the Retake tool lets you fix just that area without regenerating the whole clip. On a paid cloud tool you’d re-roll the entire thing and pay for it. Here you just fix it.

LTX 2.3 vs Veo 3.1

Veo 3.1 is probably the most talked-about AI video model right now, so that's the one worth comparing directly.

| Metric | LTX 2.3 (Local) | LTX 2.3 (API) | Veo 3.1 (API) |
| --- | --- | --- | --- |
| Price | Free | $0.04–$0.24/sec | $0.15–$0.60/sec |
| 4K pricing | Free (your GPU) | $0.16–$0.24/sec | $0.35–$0.60/sec |
| Audio | Native, included | Native, included | Native, included |
| Privacy | 100% local | Cloud | Cloud |
| Retake/Edit | Yes, in Desktop app | Yes | No |
| Open source | Yes | Yes | No |
| Runs offline | Yes | No | No |

Pricing is based on official pages as of March 2026 and may change.
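To make those per-second rates concrete, here's a quick back-of-envelope cost comparison for a single clip, using the published ranges above (which, again, may have changed):

```python
def clip_cost(seconds: float, rate_low: float, rate_high: float) -> tuple:
    """Dollar cost range for a clip billed per second of output video."""
    return (seconds * rate_low, seconds * rate_high)

# A 10-second 4K clip at the listed per-second API rates:
ltx = clip_cost(10, 0.16, 0.24)   # LTX 2.3 API, 4K tier
veo = clip_cost(10, 0.35, 0.60)   # Veo 3.1 API, 4K tier
print(f"LTX 2.3 API: ${ltx[0]:.2f}-${ltx[1]:.2f}")   # $1.60-$2.40
print(f"Veo 3.1 API: ${veo[0]:.2f}-${veo[1]:.2f}")   # $3.50-$6.00
```

Local generation sidesteps this entirely, at the cost of electricity and the upfront GPU.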

Before You Start Using LTX 2.3

Lightricks recommends a Windows machine with a CUDA GPU and at least 32GB VRAM to run this locally. That’s the honest requirement. Not everyone has that sitting on their desk right now, and that’s fine.
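If you're not sure whether your card clears the bar, a small check like this works. The 32GB threshold comes from Lightricks' recommendation; the commented-out lines assume you have PyTorch installed with a CUDA GPU visible:

```python
MIN_VRAM_GB = 32  # Lightricks' recommended minimum for full local generation

def meets_local_requirement(total_vram_bytes: int, min_gb: int = MIN_VRAM_GB) -> bool:
    """True if the GPU's total memory meets the recommended minimum."""
    return total_vram_bytes / 1024**3 >= min_gb

# On a machine with PyTorch and a CUDA GPU, feed it real numbers:
#   import torch
#   total = torch.cuda.get_device_properties(0).total_memory
#   print(meets_local_requirement(total))
print(meets_local_requirement(24 * 1024**3))  # 24 GB card -> False
print(meets_local_requirement(48 * 1024**3))  # 48 GB card -> True
```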

If you’re not there on hardware yet, the API route through LTX Desktop is still a solid option. You get the same editor, the same Retake feature, just without the local generation. When you go through the setup you’ll see the pricing plans and you can pick what works for your usage.

The local route is where the real freedom is. But the API keeps it accessible while you get there.

So, Is LTX 2.3 Worth It?

If you have the hardware, honestly yes. A free, open source video model with native audio, portrait support, a proper desktop app, and no subscription attached to it — that’s not a small thing. Six months ago this combination didn’t exist in open source.

If you don’t have the hardware yet, it’s still worth keeping an eye on. The model is only going to get faster, the community is already building on it, and the API option means you can start using it today without waiting on a GPU upgrade.

Paid tools aren’t going away tomorrow. But they’re going to have a harder time justifying their price tags every time something like this drops.

LTX 2.3 is one of those releases that quietly shifts what people expect for free.
