
Run TRELLIS 2 Locally: Generate High-Quality 3D Models from Images


Imagine a tool that can transform a single image into a fully realized 3D model in just seconds. Microsoft has released something that's turning heads in the 3D and AI world: it's called TRELLIS.2.

It is an open-source AI model that can take any image and turn it into a high-quality, fully textured 3D mesh. We're talking about a complete 3D asset with physically based rendering (PBR) materials: color, roughness, metallic surfaces, even transparency, all generated automatically.

What makes TRELLIS 2 truly special is its innovative approach. While most 3D AI models struggle with complex geometry, Microsoft's researchers developed a groundbreaking voxel-based representation called O-Voxel that handles intricate details with ease. Thin structures, open surfaces, hidden details, the kinds of things that typically make 3D modeling software break down, are now handled smoothly and elegantly.

But here's the catch: if you've peeked at the official setup, you've probably noticed it's not exactly beginner-friendly. Between the complex research repository, massive model files, and multi-step installation, getting TRELLIS 2 running can feel like solving a complicated puzzle, especially if you're not a hardcore tech enthusiast with enterprise-level hardware.

That’s exactly why guides like this exist. In this guide, we’ll walk through the easiest way to install and run TRELLIS 2 locally using ComfyUI, step by step, so you can start generating 3D models from images without unnecessary complexity.

Before You Start (Quick Checklist)

Make sure your system meets these requirements:

  • OS: Windows or Linux
  • Python: 3.10 or newer
  • GPU: NVIDIA CUDA-compatible GPU
    • Minimum: 8GB VRAM (will work but slow)
    • Recommended: 16GB+ VRAM
  • PyTorch: 2.0+ (handled automatically by ComfyUI)
  • Disk Space: ~20–25GB free (models download on first run)

TRELLIS.2 currently requires NVIDIA GPUs. AMD is not supported.
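If you want a quick sanity check before downloading anything, a short script like this can verify your Python version and free disk space. This is just a convenience sketch, not part of the official setup; for the GPU, run `nvidia-smi` in a terminal and confirm an NVIDIA card with enough VRAM shows up.

```python
import shutil
import sys

def check_requirements(min_python=(3, 10), min_free_gb=25, path="."):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    if sys.version_info[:2] < min_python:
        problems.append(
            f"Python {sys.version_info.major}.{sys.version_info.minor} found, "
            f"need {min_python[0]}.{min_python[1]}+"
        )
    free_gb = shutil.disk_usage(path).free / 1e9
    if free_gb < min_free_gb:
        problems.append(f"only {free_gb:.0f} GB free, need ~{min_free_gb} GB")
    return problems

for problem in check_requirements():
    print("WARNING:", problem)
```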

To keep things simple and avoid breaking any existing setups, we’ll use the portable version of ComfyUI. This runs in its own folder and won’t interfere with your current Python environment or workflows.

Choose the Right Portable Build

The ComfyUI releases page offers a stable portable build (recommended for most users) and a nightly build (advanced users only). Unless you need bleeding-edge features, pick the stable build.

Important: Backup Your Existing ComfyUI (If Any)

If you already have ComfyUI installed:

Extract the portable version into a separate folder. Do not overwrite your existing installation.

Step 1: Install and Launch

  1. Download the selected ComfyUI portable archive
  2. Extract it to a new folder (7-Zip is recommended)
  3. Run ComfyUI once to make sure everything works
  4. Open your browser and go to:
    http://localhost:8188

Wait until ComfyUI fully loads before moving on to the next step.
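If you'd rather script the "wait until it loads" part, a small poll against the local server works. This is a hypothetical helper, assuming ComfyUI's default port 8188:

```python
import time
import urllib.request
from urllib.error import URLError

def wait_for_comfyui(url="http://localhost:8188", timeout_s=120):
    """Poll the ComfyUI web UI until it responds, or give up after timeout_s."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (URLError, OSError):
            pass  # server not up yet; keep polling
        time.sleep(2)
    return False
```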

Step 2: Install ComfyUI Manager (Portable Version)


To install TRELLIS 2 easily, we’ll use ComfyUI Manager, which allows you to install custom nodes directly from the UI. Since we’re using the portable version of ComfyUI, the manager needs to be installed slightly differently.

2.1 Install Git (Required)

ComfyUI Manager relies on Git, so make sure it’s installed first.

  1. Download Git for Windows from:
    https://git-scm.com/download/win
  2. Choose the Standalone Installer
  3. During installation, select:
    “Use Windows default console window”
  4. Complete the installation with default options

Once Git is installed, restart your system if prompted.
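To confirm Git is actually on your PATH (which is what ComfyUI Manager needs), you can check from a terminal with `git --version`, or with a small generic helper like this sketch:

```python
import shutil
import subprocess

def tool_version(name):
    """Return `<name> --version` output if the tool is on PATH, else None."""
    exe = shutil.which(name)
    if exe is None:
        return None
    out = subprocess.run([exe, "--version"], capture_output=True, text=True)
    return out.stdout.strip() or None

print(tool_version("git"))  # a version string if installed, None otherwise
```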

2.2 Download the Manager Install Script (Portable Only)

[Screenshot: ComfyUI Manager installation]
  1. Locate the Manager install script for the portable build:
    scripts/install-manager-for-portable-version.bat
  2. Don’t left-click the file above to open it
  3. Instead, right-click the file → Save As…
  4. Save it directly inside your portable ComfyUI folder
    (for example: ComfyUI_windows_portable/)

Make sure the .bat file is placed in the root of the ComfyUI portable directory, not inside a subfolder (see the screenshot above).

2.3 Run the Installer

  1. Double-click install-manager-for-portable-version.bat and wait for it to finish
  2. Launch ComfyUI again
  3. Open http://localhost:8188
  4. You should now see a Manager button in the interface

Once the Manager is visible, you’re ready to install TRELLIS 2 in the next step.

Step 3: Install ComfyUI-TRELLIS2

  1. Open ComfyUI Manager -> Custom Nodes Manager
  2. In the search bar, type:
    ComfyUI-TRELLIS2
  3. Click Install
  4. Select the latest version
  5. Wait for the installation to complete
  6. Restart ComfyUI

That’s it. It’s that simple!

Step 4: Allow Dependency Installation (Important)

During installation or first launch:

  • Windows may ask permission to run scripts
  • Click “Allow” when prompted

Behind the scenes, the installer:

  • Detects your CUDA version
  • Installs required 3D libraries
  • Automatically installs Flash Attention (if compatible)
  • Selects the correct wheels for your GPU

This step is crucial; let it finish.
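The installer figures out your CUDA version for you, but if you're curious what it sees, you can parse the banner that `nvidia-smi` prints. This is purely illustrative; the actual wheel-selection logic lives inside the installer:

```python
import re
import subprocess

def cuda_version_from_smi(smi_output):
    """Extract the CUDA version string from nvidia-smi's header text."""
    m = re.search(r"CUDA Version:\s*([\d.]+)", smi_output)
    return m.group(1) if m else None

# On a machine with an NVIDIA driver installed:
# out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
# print(cuda_version_from_smi(out))
```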

Step 5: Refresh ComfyUI and Load Example Workflows

After restarting ComfyUI:

  1. Right-click -> New Workflow
  2. Open the TRELLIS.2 example workflows
  3. Drag one into the canvas
    (e.g. geometry_only or geometry_texture)

The custom node includes ready-to-use demo workflows.

Step 6: First Run: Model Downloads

On the first execution, TRELLIS.2 will:

  • Download large model files from HuggingFace
  • Fetch files based on your selected resolution
    (e.g. 512 or 1024 cascade)
  • Show download progress in the terminal window

This is normal and can take several minutes.

Models are downloaded automatically; manual downloads are only needed if something fails.
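If you want to see what has already been fetched, the Hugging Face hub caches each repository under `~/.cache/huggingface/hub` in a folder named `models--<org>--<name>`. A small helper can map a repo id to that folder (the repo id below is illustrative; check the node's terminal logs for the real one):

```python
from pathlib import Path

def hf_cache_dir(repo_id, cache_root="~/.cache/huggingface/hub"):
    """Return the local folder the Hugging Face hub uses to cache a repo."""
    folder = "models--" + repo_id.replace("/", "--")
    return Path(cache_root).expanduser() / folder

# Hypothetical repo id, for illustration only:
print(hf_cache_dir("microsoft/TRELLIS-image-large"))
```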

Step 7: Fix Common First-Run Errors (If Any)

If you encounter an error like ModuleNotFoundError: No module named 'pyrender'

This is a known first-run issue. It may or may not occur, so you can skip this step if it didn’t.

Fix (in the portable build, run pip through the embedded Python from the ComfyUI_windows_portable folder):

python_embeded\python.exe -m pip install pyrender

Then restart ComfyUI and rerun the workflow.
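If you keep hitting ModuleNotFoundError for different packages, a quick check like this lists everything missing in one pass. The module list is just an example; extend it with whatever names appear in your error messages:

```python
import importlib.util

def missing_modules(names):
    """Return which of the given modules cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Example modules the 3D workflows may touch; adjust to your errors.
for name in missing_modules(["pyrender", "trimesh"]):
    print(f"pip install {name}")
```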

Missing 3D Preview or Render Node

Some example workflows rely on extra render nodes.

Fix:

  1. Open ComfyUI Manager
  2. Click Install Missing Custom Nodes
  3. Install the suggested node pack
  4. Restart ComfyUI

After this, red error boxes should disappear.

Step 8: Run Your First Image to 3D Generation

[Screenshot: image-to-3D generation with TRELLIS 2 in ComfyUI]

Once everything is installed:

  1. Load an example image (or use the provided ones)
  2. Run the workflow
  3. Watch the terminal for:
    • Flash Attention activation
    • Shape model sampling
    • Texture model processing
  4. Preview the generated 3D mesh inside ComfyUI
  5. You can also save the mesh

You can use the other workflows too; on first run they will download their dependencies, and then you can generate your 3D models.
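Once a workflow runs from the UI, you can also queue it from a script: ComfyUI exposes an HTTP API whose /prompt endpoint accepts a workflow exported with "Save (API Format)". A minimal sketch (the file name is hypothetical):

```python
import json
import urllib.request

def build_payload(workflow, client_id="trellis-demo"):
    """Wrap an API-format workflow in the JSON body /prompt expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode()

def queue_workflow(workflow, server="http://localhost:8188"):
    """POST a workflow to ComfyUI's /prompt endpoint and return its response."""
    req = urllib.request.Request(
        server + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Export a workflow via "Save (API Format)" in ComfyUI, then:
# with open("trellis2_workflow_api.json") as f:  # hypothetical file name
#     print(queue_workflow(json.load(f)))
```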

Conclusion

TRELLIS 2 is seriously impressive. Microsoft just made 3D design far more accessible: with a single picture, you can now create a full 3D model that looks amazing, with no more hours spent building assets by hand.

If you’ve got a capable graphics card and a bit of tech spirit, you can turn any image into a 3D asset in minutes. It’s like magic for creators, designers, and anyone who loves making things. The best part? It’s free and open source for everyone to use.
