
4 Open-Source TTS Models That Can Clone Voices and Actually Sound Human

Voice cloning used to mean expensive studio software, proprietary APIs with per-character pricing, or models so heavy they needed server infrastructure just to run. That changed quietly over the last few months.

Four open-source models exist right now that do something the previous generation struggled with. They do not just generate speech. They clone a voice from a short audio sample and produce output that is genuinely difficult to distinguish from the original speaker.

The gap between open source and commercial TTS has been closing for a while. These four models suggest it has effectively closed for voice cloning specifically. Here is what each one actually does and who it is for.

1. OmniVoice

OmniVoice TTS
OmniVoice Demo

OmniVoice supports over 600 languages. That is the broadest language coverage of any open-source zero-shot TTS model released to date.

Built on a diffusion language model architecture with Qwen3-0.6B as the text encoder, OmniVoice is surprisingly small for what it does. Give it a short reference audio clip and it clones the voice and generates speech in whatever language you need. It also supports voice design, where you describe speaker attributes like gender, age, pitch, accent, or even whisper mode, and the model constructs a voice matching those characteristics without needing a reference clip at all.

Inference speed sits at a real-time factor (RTF) of 0.025, where RTF is compute time divided by audio duration. That means it generates 40 seconds of audio for every one second of compute. For a model covering this many languages, that number is genuinely impressive.
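The RTF arithmetic is worth spelling out, since the metric is inverted from what you might expect (lower is faster):

```python
# Real-time factor: time spent generating divided by audio duration.
# RTF < 1.0 means faster than real time.
def rtf(compute_seconds: float, audio_seconds: float) -> float:
    return compute_seconds / audio_seconds

# The reported RTF of 0.025 works out to 1 / 0.025 = 40 seconds
# of audio per second of compute.
def audio_per_compute_second(rtf_value: float) -> float:
    return 1.0 / rtf_value
```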

I would include this for any project touching multilingual voice generation. Nothing else at this size comes close to the language coverage.

Try it on HuggingFace Spaces

Limitations

  • Voice design feature requires familiarity with attribute prompting
  • Best results with clean reference audio, background noise affects cloning quality

2. LongCat-AudioDiT

LongCat-AudioDiT
LongCat-AudioDiT Demo

Every TTS pipeline you have ever used converts text to a spectrogram first. Then it converts that spectrogram to audio. Two steps, two places for errors to compound, two stages of quality loss baked into the architecture by design.

LongCat skips the spectrogram entirely. It works directly in the waveform latent space, which means what you hear is one generation step closer to what the model actually learned. The result shows in the benchmarks. On the Seed benchmark, the hardest voice cloning evaluation available, LongCat-AudioDiT-3.5B achieved a speaker similarity score of 0.818 on Seed-ZH and 0.797 on Seed-Hard. Both beat the previous state of the art.

It comes in two sizes. The 1B variant is fast and capable for most use cases. The 3.5B variant is where the benchmark numbers above come from. Voice cloning works by passing a reference audio clip alongside your target text. No fine tuning or training, just inference.

It is MIT licensed, and both sizes are available on HuggingFace.

Limitations

  • 3.5B variant needs a capable GPU for comfortable inference
  • Currently stronger on Chinese than English based on benchmark scores

3. FireRedTTS-2

FireRedTTS-2
FireRedTTS-2 Demo

Every other model in this list can clone a single voice and generate a sentence. FireRedTTS-2 does something none of them do. It generates multi-speaker conversations with natural speaker switching, context-aware prosody, and first-packet latency as low as 140ms on an L20 GPU.

That is a different use case entirely. If you are building a podcast generator, a chatbot with realistic dialogue, or a voice interface where two speakers need to interact naturally over minutes of audio, FireRedTTS-2 is one of the best open-source options doing this reliably right now. It supports up to four speakers in a single generation run, up to three minutes of dialogue, and handles cross-lingual voice cloning, so Speaker 1 can be in English and Speaker 2 in Japanese without breaking the output.

Language support covers English, Chinese, Japanese, Korean, French, German, and Russian. The streaming architecture means you do not wait for the full audio to generate before playback starts. It streams sentence by sentence.
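The sentence-by-sentence streaming pattern can be sketched in a few lines. This is a generic sketch, not FireRedTTS-2's actual implementation: `synthesize` stands in for the model's per-sentence generation call, and the regex splitter is a naive stand-in for a language-aware tokenizer.

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive split on sentence-ending punctuation followed by whitespace.
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

def stream_sentences(text: str, synthesize):
    # Yield audio chunks as each sentence finishes, so playback can
    # begin before the full dialogue has been generated.
    for sentence in split_sentences(text):
        yield synthesize(sentence)
```

The win is latency: the listener hears the first sentence while later ones are still being generated, which is what makes a sub-second first packet possible.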

At 20.9GB it is the heaviest model in this list. Apache 2.0 licensed. Weights on HuggingFace under FireRedTeam.

Worth knowing

  • Strongest on Chinese and English, other languages less thoroughly evaluated
  • Voice cloning intended for academic research per the model’s own disclaimer, use responsibly
  • Dialogue generation currently capped at three minutes and four speakers

4. Fish Audio S2 Pro

Fish Audio S2 Pro Demo

On the Audio Turing Test, human listeners correctly identified S2 Pro as AI only 48.5% of the time. Essentially a coin flip. That single number tells you more about where this model sits than any other benchmark.

Fish Audio S2 Pro is a 4B parameter model trained on over 10 million hours of audio across 80-plus languages. Voice cloning works from a 10 to 30 second reference sample, no fine-tuning required. But what separates S2 Pro from everything else in this list is granular emotional control. Using simple bracket tags you can embed emotional instructions anywhere in the text: whisper, excited, laughing, angry, inhale, pause, emphasis. Over 15,000 unique tags are supported, and they are not fixed presets but free-form descriptions the model actually understands.
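To make the tagging concrete, here is a minimal sketch of how bracket-tagged input might be assembled. The delimiter syntax shown is an assumption, and `tag` and `script` are illustrative helpers, not part of the Fish Audio API:

```python
def tag(text: str, emotion: str) -> str:
    # Prefix a line with an inline emotion tag. Parentheses are used
    # here for illustration; the exact syntax S2 Pro expects may differ.
    return f"({emotion}) {text}"

def script(*lines: tuple[str, str]) -> str:
    # Join several (text, emotion) pairs into one tagged input string.
    return " ".join(tag(text, emotion) for text, emotion in lines)

# e.g. script(("Come closer.", "whisper"), ("Now run!", "angry"))
#      -> "(whisper) Come closer. (angry) Now run!"
```

Because the tags are free-form descriptions rather than an enum, a phrase like "tired but relieved" is as valid an instruction as a single word like "angry".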

On Seed-TTS Eval it achieved the lowest word error rate among all evaluated models including closed source systems like Seed-TTS and MiniMax Speech.

Limitations

  • Fish Audio Research License: free for personal and research use; commercial use requires a separate paid license from Fish Audio
  • Requires HuggingFace access and local GPU for self hosting
  • SGLang recommended for best streaming performance

Audio that sounds natural

A year ago the gap between open source and commercial TTS was obvious the moment you hit play. Robotic cadence, clipped consonants, speaker similarity that fooled nobody. These four models do not sound like that.

OmniVoice covers more languages than any other model at its size. LongCat beats previous state of the art on speaker similarity by skipping the spectrogram entirely. FireRedTTS-2 handles multi-speaker conversations in a way nothing else in open source does. Fish Audio S2 Pro passed a human Turing test.

The hardware requirements vary and the licenses are not all equal. But the output quality across all four is at a level that would have seemed unrealistic in open source twelve months ago.
