
VibeVoice AI Voice & Podcast Generator Download and Install Locally Using ComfyUI


File Information

Name: VibeVoice with ComfyUI Integration
Version: Latest Release
License: MIT License (Free & Open Source)
Platforms: Windows, macOS, Linux
File Types: Source code, Python dependencies
Category: Text-to-Speech & Conversational AI

Description

VibeVoice, developed by Microsoft, is a cutting-edge open source framework for generating expressive, multi-speaker conversational audio. By integrating it into ComfyUI’s modular workflow, you can now build natural, podcast-like dialogue with up to 4 speakers in one audio file. Whether you want to produce lifelike conversations, narrations, or long-form content, VibeVoice excels at delivering clarity, consistency & realism.

Unlike traditional TTS systems, VibeVoice allows zero-shot voice cloning – simply provide a short audio sample in .wav or .mp3 format, and it instantly recreates that speaker’s timbre. With advanced attention mechanisms like eager, sdpa, flash_attention_2 & the new high-performance SageAttention, developers have complete control over speed, memory usage & compatibility.

ComfyUI manages the heavy lifting by automatically downloading & optimizing models, so you don’t need to worry about manual setup. With optional 4-bit quantization, even GPUs with limited VRAM can run large VibeVoice models efficiently.

This combination makes VibeVoice with ComfyUI one of the best free alternatives to commercial AI speech tools, giving you the power to create professional-grade audio locally on your own machine, with full privacy & no vendor lock-in. You can also try a demo of the large model on Hugging Face Spaces.

Scroll down, follow the installation steps, & start creating expressive multi-speaker dialogues today.

Features of VibeVoice by Microsoft

| Feature | Description | Benefit |
|---|---|---|
| Multi-Speaker TTS | Generate conversations with up to 4 unique voices in one audio output. | Perfect for podcasts, dialogues & storytelling. |
| Zero-Shot Voice Cloning | Clone any voice instantly from a .wav or .mp3 file. | No training required, highly natural results. |
| Advanced Attention Modes | Choose from eager, sdpa, flash_attention_2, or sage for optimized performance. | Flexibility between speed, memory efficiency & stability. |
| 4-Bit Quantization | Run large models in 4-bit mode with optimized configurations. | Save VRAM, run large models on mid-range GPUs. |
| Automatic Model Management | ComfyUI handles model download & VRAM management automatically. | Hassle-free setup, faster experimentation. |
| Fine-Grained Control | Adjust CFG scale, temperature, top_k, top_p & inference steps. | Customize speech style & performance easily. |
| Robust Compatibility | Works across eager, sdpa, & SageAttention with smart fallbacks. | Stable performance across different hardware. |
| Emergent Creativity | May generate music, spontaneous sounds, or expressive tones. | Adds natural, human-like spontaneity to generated audio. |

System Requirements

| Component | Minimum Requirement | Recommended Requirement |
|---|---|---|
| Operating System | Windows 10 or later, macOS 11+, Linux (64-bit) | Latest Windows 11, macOS Ventura, Ubuntu 22.04 |
| Processor | Intel i5 / AMD Ryzen 5 | Intel i7 / Ryzen 7 or higher |
| RAM | 8 GB | 16 GB or more |
| Storage | 4 GB free space | SSD for faster processing |
| GPU | 6 GB VRAM (NVIDIA recommended) | 12 GB+ VRAM for large models |
| Python | Version 3.10+ | Latest stable Python |

How to Download & Install VibeVoice with ComfyUI?

Before installation, download a supported version of ComfyUI.

1. Install via ComfyUI Manager

  1. Open ComfyUI Manager.
  2. Search for ComfyUI-VibeVoice.
  3. Click Install.
  4. Restart ComfyUI & find the new VibeVoice TTS node under audio/tts.

2. Manual Installation

  1. Navigate to your ComfyUI/custom_nodes/ directory.
  2. Open a terminal & clone the repository: git clone https://github.com/wildminder/ComfyUI-VibeVoice.git
  3. Navigate into the folder: cd ComfyUI-VibeVoice
  4. Install dependencies: pip install -r requirements.txt
  5. (Optional) Install SageAttention for advanced performance: pip install sageattention
  6. Restart ComfyUI. The VibeVoice TTS node will now be available.
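The manual steps above can be run as one shell session. The `~/ComfyUI` path here is an assumption; substitute the directory where your ComfyUI checkout actually lives:

```shell
# Move into ComfyUI's custom_nodes directory (adjust the path as needed)
cd ~/ComfyUI/custom_nodes

# Fetch the node & install its Python dependencies
git clone https://github.com/wildminder/ComfyUI-VibeVoice.git
cd ComfyUI-VibeVoice
pip install -r requirements.txt

# Optional: SageAttention for advanced performance on supported GPUs
pip install sageattention
```

If ComfyUI runs inside its own virtual environment, activate that environment first so `pip` installs the dependencies where ComfyUI can see them.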

3. First Use

  • Load reference audio files with ComfyUI’s Load Audio node.
  • Connect them to the speaker inputs on the VibeVoice TTS node.
  • Write your dialogue script in the text field (e.g., Speaker 1: Hello, Speaker 2: Hi).
  • Queue the workflow to generate your conversation.
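A minimal script for the text field might look like the fragment below. Speaker labels are matched to the connected reference-audio inputs, and the wording here is purely illustrative:

```
Speaker 1: Welcome back to the show. Today we're talking about open source text-to-speech.
Speaker 2: Thanks for having me. I've been experimenting with voice cloning all week.
Speaker 1: Great, let's start with how zero-shot cloning actually works.
```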

If you like open source AI tools, you may also enjoy our Open Source AI Tool Collection.
