
HeartMuLa: An Open-Source Suno-Style AI Music Generator You Can Run Locally with ComfyUI


If you’ve been playing with AI music tools lately, here’s some genuinely good news.

HeartMuLa has released an open-source AI music foundation model that gets surprisingly close to what tools like Suno AI can do, but with a very different philosophy. It gives you something many creators actually want: full control.

With this model, you can generate music directly on your own PC, offline, with no usage limits. What you run, you own, and once it’s set up, you can generate as much music as your hardware allows.

In this guide, I’ll show you exactly how to run HeartMuLa on your PC, step by step, without skipping the confusing parts.

Before we get into the setup, let’s quickly look at what this model can do and why it’s worth trying in the first place.

Demo of HeartMuLa

Below is a demo video of the HeartMuLa music model showcasing some of its generations in different styles and languages.

Features of HeartMuLa

| Feature | What It Does | Why It Matters |
| --- | --- | --- |
| Open-Source (Apache 2.0) | Fully open code and model weights | Open-Source Freedom: No subscriptions |
| Suno-Style Music Scripting | Supports [Verse], [Chorus], [Bridge], etc. | Structure Control: Custom song generation |
| 12.5 Hz HeartCodec | Ultra-efficient audio encoding & decoding | High Fidelity: Pro-level sound on consumer GPUs |
| ComfyUI Integration | Visual node-based workflow | Creator Friendly: No scripts, easy experimentation |
| Full-Length Music Output | Generates tracks up to ~6 minutes | Long-Form Ready: Songs, not just short clips |
| Multilingual Engine | Supports EN, ZH, JP, KR, ES | Global Reach: Localized music & ads |
| Expressive Vocal Control | Lyrics formatting affects vocal style | More Emotion: Singing, spoken, and hybrid vocals |
| HeartTranscriptor | Whisper-tuned audio-to-text model | Sync-Ready: Lyrics, subtitles, karaoke |
| Local & Offline Execution | Runs 100% on your PC | Data Sovereignty: Prompts never leave your system |
| VRAM-Optimized Loading | Lazy loading + BF16 pipeline | Accessible Power: Works on 12–16 GB GPUs |

Before You Start

To keep things simple, this guide assumes you’re using ComfyUI’s portable Windows build. If you’re new to ComfyUI, this is the easiest and safest way to get started.

Recommended ComfyUI Version (Windows)

Use the portable ComfyUI Windows build that ships with the CUDA 12.6 (CU126) runtime.

Why CU126?
It’s more widely compatible and tends to be more stable with custom nodes and AI audio models right now.

Minimum System Requirements

  • GPU: NVIDIA GPU
  • VRAM: 12 GB minimum, 16 GB recommended (best audio quality)
  • Disk: enough free space for the model downloads

If ComfyUI runs on your system, you’re good to continue.

Check if Hugging Face CLI Is Installed

HeartMuLa uses Hugging Face to download model files.

  1. Open Command Prompt or Terminal
  2. Navigate to your ComfyUI folder
  3. Run one of the following commands:
hf --help

or

huggingface-cli --help

What to expect:

  • If you see a list of commands → you’re ready
  • If you see command not found → install it

Install Hugging Face CLI (If Needed)

Run this inside the same Python environment ComfyUI uses:

pip install huggingface-hub
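
Note: with the portable build, a bare pip can point at a different system Python. A safer sketch, assuming the standard ComfyUI_windows_portable folder layout (run from the portable root; this is the same embedded interpreter used later in Step 1):

.\python_embeded\python.exe -m pip install huggingface-hub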

This ComfyUI workflow and custom node integration were created by Benji, and they’re an excellent contribution to the open-source community. His work makes it possible to run HeartMuLa directly inside ComfyUI with a clean, minimal workflow, and it’s the workflow we’ll use to install and run HeartMuLa locally.

Step 1: Install HeartMuLa ComfyUI Custom Nodes

HeartMuLa uses custom nodes in ComfyUI for music generation and lyrics/audio transcription. Follow these steps:

  1. Open your ComfyUI folder, type cmd in the address bar, and press Enter. Then, in the Command Prompt, run:
cd custom_nodes
  2. Download the custom nodes from GitHub:
git clone https://github.com/benjiyaya/HeartMuLa_ComfyUI
  3. Install the required Python dependencies. Stay in the custom_nodes folder and run:
..\..\python_embeded\python.exe -m pip install -r .\HeartMuLa_ComfyUI\requirements.txt

This ensures all the libraries needed for HeartMuLa nodes are installed in your ComfyUI environment.

  4. Check that everything is ready:
  • Start ComfyUI by double-clicking the file named run_nvidia_gpu.bat
  • Watch the console for messages confirming the custom nodes loaded successfully

File Structure

ComfyUI/custom_nodes/HeartMuLa_ComfyUI/
├── __init__.py            <-- provided by the repo
├── util/                  <-- create this folder
│   └── heartlib/          <-- paste the heartlib SOURCE CODE here
│       ├── __init__.py
│       ├── pipelines.py
│       ├── models.py
│       └── … (other Python files)
└── requirements.txt       (torch, transformers, torchaudio, etc.)

You’re now ready for Step 2.

Step 2: Download the HeartMuLa Model Files

HeartMuLa has multiple model components: the generation config (HeartMuLaGen), the 3B music model, the HeartCodec audio codec, and the HeartTranscriptor. We’ll use the Hugging Face CLI to download them directly into the correct folder.

1. Go to your ComfyUI models folder


ComfyUI\models

2. Look for the HeartMuLa folder; if it doesn’t exist, create it:

Create a folder named HeartMuLa and don’t open it yet.

3. Download the model files using Hugging Face CLI

Open a Command Prompt in the ComfyUI\models folder and run these commands one by one:

hf download HeartMuLa/HeartMuLaGen --local-dir ./HeartMuLa
hf download HeartMuLa/HeartMuLa-oss-3B --local-dir ./HeartMuLa/HeartMuLa-oss-3B
hf download HeartMuLa/HeartCodec-oss --local-dir ./HeartMuLa/HeartCodec-oss
hf download HeartMuLa/HeartTranscriptor-oss --local-dir ./HeartMuLa/HeartTranscriptor-oss

These commands automatically place the files into the correct subfolders inside ComfyUI\models\HeartMuLa.

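If the newer hf command isn’t recognized in your environment, the older huggingface-cli entry point (mentioned in the check earlier) takes the same arguments; a sketch of the equivalent commands:

huggingface-cli download HeartMuLa/HeartMuLaGen --local-dir ./HeartMuLa
huggingface-cli download HeartMuLa/HeartMuLa-oss-3B --local-dir ./HeartMuLa/HeartMuLa-oss-3B
huggingface-cli download HeartMuLa/HeartCodec-oss --local-dir ./HeartMuLa/HeartCodec-oss
huggingface-cli download HeartMuLa/HeartTranscriptor-oss --local-dir ./HeartMuLa/HeartTranscriptor-oss
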

Step 3: Verify the folder structure

ComfyUI
└── models
    └── HeartMuLa
        ├── HeartMuLa-oss-3B
        ├── HeartCodec-oss
        ├── HeartTranscriptor-oss
        ├── gen_config.json
        └── tokenizer.json

This structure is required for the custom nodes to find the models correctly.
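
A quick way to double-check is from the Command Prompt; dir simply lists the folder contents (run from inside your ComfyUI folder):

dir models\HeartMuLa

You should see the three model subfolders plus gen_config.json and tokenizer.json.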


Tip

  • If your GPU has 12 GB VRAM, lazy loading will help manage memory.
  • The 7B model isn’t released yet, so stick with 3B for now.

Also Read: Forget AI Videos Yume 1.5 Creates Interactive AI Worlds on Your PC

Step 4: Run Your First Music Generation in ComfyUI

  1. Run ComfyUI
  2. In the HeartMuLa custom nodes folder, you’ll find example workflows:
    • Generate Music.json → Music generation
    • Lyrics Transcriber.json → Audio-to-text transcription
  3. Drag & drop the workflow into ComfyUI.
  4. For music generation:
    • In the lyrics node, type your lyrics (see the example lyric sheet after this list)
    • Below it, type music styles as keywords/tags (e.g., piano, happy, wedding)
    • Adjust any settings you want and run → enjoy your generated song
  5. For lyrics transcription:
    • Import Lyrics Transcriber.json
    • Load any audio into the input node
    • Run → get a transcribed text output
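
For reference, here’s a minimal example lyric sheet using the Suno-style structure tags HeartMuLa supports ([Verse], [Chorus], [Bridge]); the words and tags are just illustrative:

[Verse]
Morning light on an empty street
Coffee steam and a quiet beat
[Chorus]
We sing it loud, we sing it slow
Wherever this melody wants to go
[Bridge]
Hold the note and let it ring

Style tags: piano, happy, wedding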

That’s it! Play around with the nodes, tweak lyrics or styles, and see what your AI can create.

Also Read: Run TRELLIS 2 Locally: Generate High-Quality 3D Models from Images

Need Help or Have Questions?

If you run into any issues, get stuck, or just want tips on better results, drop a comment below and I’ll do my best to help you out.

Wrapping Up

HeartMuLa brings Suno-style AI music generation fully offline, open-source, and ComfyUI-friendly. With portable ComfyUI, drag-and-drop workflows, and simple lyric + style inputs, you can go from idea to full track in minutes.

Install it once, experiment freely, tweak the settings, and let the model do the heavy lifting. If this guide helped you, try pushing the limits: different genres, structures, and languages.

Happy creating!
