File Information
File | Details |
---|---|
Name | ComfyUI |
Version | 2025.08 (Latest Build) |
License | Free & Open Source (GPL-3.0) |
Platform | Windows, macOS, Linux |
Developer | ComfyUI Community Developers |
File Size | 300 MB (may vary with models) |
Last Updated | August 2025 |
Category | AI Image Generator, Workflow UI, Open Source Tools |
Description
ComfyUI is a powerful, free & open-source node-based user interface designed for creating and managing complex AI image generation workflows. It primarily supports Stable Diffusion and its extensions like LoRA, ControlNet, T2I-Adapter, and custom models, offering one of the most flexible and transparent AI art generation environments available today.
ComfyUI focuses on visual workflows, allowing users to see, build, debug, and customize the flow of their prompts, models, samplers, and pre/post-processing steps in real-time. It’s modular, fast, and incredibly lightweight, and gives you granular control over every stage of image generation.
Built with Python and running in a local environment, ComfyUI puts privacy first and performance at the center. It’s highly extensible with community-developed custom nodes and supports advanced tasks like batch processing, video frame interpolation, inpainting, depth maps, and even ControlNet chaining, all with minimal resource overhead.
Whether you’re a beginner trying to learn how diffusion models work, or a power user looking to optimize complex generative pipelines, ComfyUI is one of the best tools available in the open-source AI art community.
Key Features of ComfyUI
Node-Based Workflow Editing
At the heart of ComfyUI is a modular node editor where every operation (model loading, conditioning, prompt parsing, etc.) is handled via customizable nodes. This gives you complete visibility and control over how your images are generated.
Supports Stable Diffusion, LoRA, ControlNet & More
ComfyUI isn’t limited to one model. It supports Stable Diffusion 1.5, 2.1, and SDXL, along with LoRA fine-tuned models, ControlNet for pose and depth-map guidance, and community plugins that add support for T2I-Adapter, CLIPVision, FaceID, and more.
Real-Time Debugging & Optimization
You can preview intermediate outputs, reuse loaded models across batches, and even see memory & GPU usage for each operation. This makes ComfyUI an efficient choice for professionals looking to optimize generation workflows.
Extensive Custom Node Ecosystem
The community around ComfyUI is constantly evolving. Dozens of custom nodes are available for features like image segmentation, prompt scheduling, CLIP interrogation, upscale chains, and video outputs. Just plug them in and expand functionality instantly.
Completely Offline
ComfyUI runs 100% locally. Your prompts, images, and generations never leave your machine, ensuring both privacy & better performance.
Beginner-Friendly Yet Advanced
While it may look intimidating at first, ComfyUI offers ready-made workflow files you can load and modify. New users can get started quickly, while advanced users can dive deep into customizing every detail of their generation process.
System Requirements
Component | Minimum Requirement |
---|---|
OS | Windows 10/11 or Linux (Ubuntu preferred) |
CPU | Quad-core processor |
RAM | 8 GB (16 GB recommended) |
Storage | ~5 GB (varies with model size) |
GPU | NVIDIA GPU with 4GB+ VRAM (CUDA support) |
Python | Python 3.10+ |
Note: AMD GPUs may require additional setup and performance may vary.
How to Install?
Step 1: Download the Files
Scroll to the Download Links section at the bottom to get the latest build of ComfyUI for Windows or macOS.
Installation on Windows (.exe)
- Download the official ComfyUI-Setup.exe file.
- Double-click to run the installer.
- Follow the installation wizard to complete the setup.
- Once installed, launch ComfyUI from the desktop or Start menu.
- ComfyUI will open in your default browser at http://localhost:8188.
Installation on macOS (.dmg)
- Download the ComfyUI.dmg file from the links below.
- Open the .dmg and drag ComfyUI to your Applications folder.
- Double-click to launch the app (you may need to allow it in System Preferences > Security & Privacy).
- ComfyUI will open automatically in your browser.
Installation on Linux
ComfyUI does not provide a one-click installer for Linux. You’ll need to set it up manually using Python. Follow this installation guide:
Step 1: Install Python (Recommended: 3.12)
ComfyUI supports Python 3.13, but it is recommended to use Python 3.12 because some community custom nodes may not yet support 3.13.
You can use pyenv or your system package manager to install Python 3.12.
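For example, a pyenv-based setup might look like this (a sketch only; the exact 3.12.x release does not matter):
pyenv install 3.12   # installs the latest 3.12.x release
pyenv global 3.12    # or run "pyenv local 3.12" inside the ComfyUI folder after cloning it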
Step 2: Clone the Repository
Open a terminal and run:
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
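Although this guide does not require it, a common optional step is to create a virtual environment inside the cloned directory so ComfyUI's dependencies stay separate from your system Python (a sketch; any environment manager works):
python3 -m venv venv          # create an isolated environment
source venv/bin/activate      # activate it before running the pip commands below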
Step 3: Prepare Model Directories
Make sure you place your models in the correct folders:
Model Type | Location |
---|---|
Checkpoints | models/checkpoints/ |
VAE Files | models/vae/ |
LoRA / LyCORIS | models/loras/ (optional) |
ControlNet | models/controlnet/ (optional) |
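For example, copying downloaded models into place might look like this (the file names are purely illustrative):
cp ~/Downloads/sd_xl_base_1.0.safetensors models/checkpoints/
cp ~/Downloads/my_style_lora.safetensors models/loras/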
Step 4: Install PyTorch Based on Your GPU
NVIDIA GPUs
Run:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu128
Or, for latest (nightly) version:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu129
AMD GPUs (Linux Only)
For stable ROCm 6.3:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.3
For ROCm 6.4 Nightly (latest):
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.4
Intel GPUs (Windows & Linux)
Option 1 – For Intel Arc (Nightly):
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
Option 2 – For Intel GPUs using IPEX:
- Create a Conda environment (recommended).
- Install dependencies:
conda install libuv
pip install torch==2.3.1.post0+cxx11.abi torchvision==0.18.1.post0+cxx11.abi torchaudio==2.3.1.post0+cxx11.abi intel-extension-for-pytorch==2.3.110.post0+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
For more GPU compatibility details, see the official PyTorch installation page or Intel’s documentation.
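Whichever backend you installed, a quick sanity check confirms that PyTorch can actually see your GPU (ROCm builds also report through torch.cuda; the second line applies only to torch builds that include the Intel xpu backend):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import torch; print(torch.xpu.is_available())"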
Step 5: Install ComfyUI Dependencies
In the ComfyUI directory, run:
pip install -r requirements.txt
Step 6: Launch ComfyUI
Finally, run:
python main.py
Then open your browser and go to:
http://localhost:8188
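The launcher also accepts command-line options; for example, the flags below change the listening address, port, and VRAM behavior (run python main.py --help to see the full list for your build):
python main.py --listen 0.0.0.0 --port 8888   # expose the UI on your network on a custom port
python main.py --lowvram                      # reduce VRAM usage on smaller GPUs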