
This Free Tool Let Me Run AI Video, Image and Music Models Locally Without ComfyUI


I’ve used ComfyUI multiple times. It’s powerful, no question. But installing every model in it feels unnecessarily complicated: some require specific dependencies, version conflicts are tricky to fix, and one wrong install can break models that were already working fine.

I wanted something simpler. Portable. Something I could move between drives and use offline anytime. That’s where Stability Matrix came in.

In simple terms, it’s an open-source package manager for AI models. No terminal setup, no Python conflicts. You pick what you want, it installs it, and you use it.

My preferred setup is WAN2GP: it supports image, video, audio and music generation all in one place, which covers pretty much everything I care about. But you can install whatever fits your workflow.

To show you how simple this actually is, let me walk you through one real example. I wanted to generate music locally. Completely offline. For free. Here’s exactly what happened.

Here’s Exactly How I Run AI Models

I’ll use HeartMuLA, a free local music generation model, as the example. The same process works for video, image, TTS, everything.

Step 1: Install & Launch Stability Matrix


Download it for your OS and run the installer. On first launch it asks where you want to store everything. I checked the Portable Mode option, which stores all your data and models in the same folder as the app itself. That means you can move the entire thing to a different drive or computer anytime without reinstalling anything. Genuinely useful.

Step 2: Add WAN2GP Package


Once inside, hit the Add Package button at the bottom. It shows you a list of available packages. Search for WAN2GP and click Install. No terminal or Python needed; it handles everything automatically.

Step 3: Launch WAN2GP


Go to All Packages, find WAN2GP, and hit the Launch button. Give it 20-30 seconds. You’ll see a local URL appear in the terminal, something like localhost:7860. That’s your app running locally.
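If you’d rather not stare at the terminal while it starts, a small script can poll that URL until the server answers. This is a generic sketch of my own, not part of Stability Matrix or WAN2GP; `localhost:7860` is just the default port the app reported on my machine.

```python
import time
import urllib.error
import urllib.request


def wait_for_url(url: str, timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Poll `url` until it answers any HTTP response, or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=interval)
            return True
        except urllib.error.HTTPError:
            # The server is up; it just returned a non-2xx status.
            return True
        except (urllib.error.URLError, OSError):
            # Not listening yet (connection refused or timed out); retry.
            time.sleep(interval)
    return False
```

After hitting Launch, calling `wait_for_url("http://localhost:7860")` returns `True` as soon as the page responds, so you know it’s safe to open the browser.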


Step 4: Open in browser and pick your model


Open that URL in any browser. Make sure your internet is on for this part (first time only), because it needs to download the model. I selected HeartMuLA-3b from the model list. It downloaded automatically, no manual setup needed.

After that first download it’s yours. Completely offline, anytime.

Step 5: Generate


I pasted in some lyrics, hit Generate, and waited. On my 8GB VRAM GPU it took around 1-2 minutes. Not instant, but for fully local, completely free music generation that never touched a server, I’ll take it. The output was a proper track.

What Else Stability Matrix Can Do

Music is just one thing. That’s what surprised me most about this tool: the scope of what it actually covers.

Through WAN2GP alone you can run image generation, video generation, text to speech, and music — all locally, all free, all from the same browser interface. Pick a model, download it once, use it offline forever. The process is identical every time.

But WAN2GP is just one package. Stability Matrix supports a long list of others including ComfyUI, Automatic1111, Fooocus, InvokeAI, and more. So if you’ve already been using any of those, you can manage them all from one place instead of juggling separate installs.

The model browser is also worth mentioning. It connects directly to CivitAI and HuggingFace, so you can browse, download, and organize models without leaving the app. No manual folder management, no hunting for the right file location.

And because it runs in portable mode, your entire setup — models, settings, everything — lives in one folder you can move to a new computer or external drive anytime.

Closing Thoughts

There are plenty of ways to run AI models locally. ComfyUI is great — powerful, flexible, and has a massive community behind it. But if you want something more straightforward to get started with, Stability Matrix is worth a look.
