NVIDIA NemoClaw runs OpenClaw inside a secure sandbox and setup takes one command

Earlier this year OpenClaw became the most starred project on GitHub faster than anything before it. Developers loved it. Companies started deploying it. Then a senior AI safety researcher at Meta let it run on her machine and it deleted her emails. A security researcher hijacked a running instance in under two hours.

Enterprises wanted AI agents. They just did not want ones they could not control.

NVIDIA’s answer to that is NemoClaw. One curl command and you have OpenClaw running inside a sandbox where it cannot touch your files, cannot make unauthorized network calls, and cannot escalate privileges without your approval.

What NemoClaw actually is

NemoClaw is an open source reference stack built by NVIDIA that runs OpenClaw inside a secure sandboxed environment. Think of it as a controlled container where your AI agent can work freely without being able to touch anything it should not.

It is not a replacement for OpenClaw. It is a secure wrapper around it. When you install NemoClaw it actually creates a fresh OpenClaw instance inside the sandbox automatically. The agent still does everything OpenClaw does. It just cannot go rogue while doing it.

NVIDIA released it on March 16 as an early alpha preview under the Apache 2.0 license. It is not production ready yet, and NVIDIA is upfront about that: interfaces and APIs may change as they iterate. But it is available now for developers and enterprises who want to start experimenting with safe agent deployment.

What makes it different from just running OpenClaw directly

Running OpenClaw directly means trusting the agent completely. It can read your files, make network requests, install things, cross into systems it probably should not touch. That is exactly what happened at Meta.

NemoClaw puts three walls around that.

The first is network control. Every outbound connection the agent tries to make is blocked by default unless you have explicitly allowed it. If the agent tries to reach an unlisted host, the OpenShell gateway blocks the request and surfaces it in the interface for you to approve or deny in real time. You see exactly what your agent is trying to connect to before it connects.

The second is filesystem isolation. The agent only has access to the sandbox and tmp directories. Everything else on your machine is locked. It cannot read your documents, cannot touch your code outside the sandbox, cannot access anything you have not deliberately placed inside its reach.

The third is process protection. Privilege escalation is blocked. Dangerous system calls are blocked. The agent cannot quietly give itself more permissions than it started with.

What makes this genuinely useful is that all three layers are declarative. You define the policy in a YAML file. You can update network rules while the sandbox is running without restarting anything. You are in control of exactly what the agent can and cannot do at all times. That is the thing OpenClaw never had.
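NVIDIA has not published the policy schema in the material above, so the field names here are invented for illustration. But a declarative YAML policy covering all three layers could plausibly look something like this sketch:

```yaml
# Hypothetical policy sketch -- field names are illustrative,
# not NemoClaw's actual schema.
sandbox:
  name: my-assistant
network:
  default: deny                        # block all outbound traffic unless listed
  allow:
    - host: integrate.api.nvidia.com   # assumed inference endpoint, for illustration
      port: 443
filesystem:
  writable:
    - /sandbox
    - /tmp                             # everything else on the host stays locked
process:
  allow_privilege_escalation: false    # no quiet permission upgrades
```

The point of the declarative shape is that the file is the single source of truth: change the network allowlist, and the running sandbox picks it up without a restart.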

What you actually get when you install it

Before you install, check your hardware. You need at least 4 vCPU, 8GB RAM and 20GB free disk space. NVIDIA recommends 16GB RAM and 40GB disk for comfortable use. The sandbox image alone is around 2.4GB compressed and during setup Docker, k3s and the OpenShell gateway all run simultaneously, so machines with less than 8GB RAM can run into memory issues.
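If you want to check those numbers before kicking off the installer, a few standard Linux commands will do it. This preflight loop is my own sketch against NVIDIA's stated minimums, not something the installer runs for you:

```shell
# Preflight check against the stated minimums: 4 vCPU, 8GB RAM, 20GB free disk.
# Linux only; uses standard coreutils, nothing from NemoClaw itself.
cpus=$(nproc)
mem_gb=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') / 1048576 ))
disk_gb=$(df -BG --output=avail . | tail -n 1 | tr -dc '0-9')

echo "vCPUs: $cpus  RAM: ${mem_gb}GB  free disk: ${disk_gb}GB"
[ "$cpus"    -ge 4  ] || echo "warning: fewer than 4 vCPUs"
[ "$mem_gb"  -ge 8  ] || echo "warning: less than 8GB RAM"
[ "$disk_gb" -ge 20 ] || echo "warning: less than 20GB free disk"
```

Note that integer division rounds the RAM figure down, so a 16GB machine may report 15GB; treat the output as a sanity check rather than an exact reading.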

For software you need Ubuntu 22.04 or later on Linux, Node.js 20 or later, npm 10 or later, and Docker installed and running. On Apple Silicon Macs, Colima and Docker Desktop are the recommended runtimes. Windows users need Docker Desktop with the WSL backend. Podman on macOS is not supported yet.

Once installed you get a running sandbox environment with a fresh OpenClaw instance inside it, the OpenShell gateway managing all security policies, and NVIDIA’s Nemotron model connected via the NVIDIA Endpoint API. You will need a free NVIDIA API key, which the installer prompts you for during setup.

After setup your terminal confirms everything is running with a summary showing your sandbox name, the connected model, and the commands to get started.

How to get it running

This one is genuinely simple compared to most open source setups. If you are comfortable with the terminal you will be up and running in minutes.

Before you start, make sure you have:

  • Ubuntu 22.04 or later, macOS on Apple Silicon, or Windows with WSL and Docker Desktop
  • Node.js 20 or later
  • npm 10 or later
  • Docker installed and running
  • A free NVIDIA API key (get this before you start; the installer will ask for it)
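A quick way to verify the software side of that checklist before running the installer. This loop is my own sketch, not part of NemoClaw; it only confirms the tools are on your PATH, so the version thresholds (Node.js 20+, npm 10+) still need a manual glance at the printed versions:

```shell
# Check that the required tools are installed before running the installer.
for cmd in node npm docker; do
  if command -v "$cmd" >/dev/null 2>&1; then
    # Print the tool's own version string next to it for a manual check.
    echo "$cmd: found ($("$cmd" --version 2>/dev/null | head -n 1))"
  else
    echo "$cmd: missing"
  fi
done
```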

Step 1: Install NemoClaw

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

Get your free NVIDIA API key before Step 2. The onboard wizard asks for it and you cannot complete setup without it.

Step 2: Run the onboard wizard

nemoclaw onboard

This is where everything gets configured. Sandbox creation, inference setup, security policies and API key entry all happen here.

Step 3: Connect to your sandbox

nemoclaw my-assistant connect

This drops you into the sandbox shell where your OpenClaw agent is running. Here my-assistant is the sandbox name chosen during onboarding; swap in whatever you named yours.

Step 4: Start the agent

For interactive chat open the TUI:

openclaw tui

For a quick single message test:

openclaw agent --agent main --local -m "hello" --session-id test

If something goes wrong:

nemoclaw my-assistant status
openshell sandbox list

Those two commands tell you exactly what is happening at both the NemoClaw and OpenShell level.

To uninstall completely:

curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash

Should you install it today

NemoClaw is alpha software. NVIDIA says it plainly and they mean it. Do not run this in production.

But if you are a developer curious about safe agent deployment, install it today. The one command setup works, the sandbox is real and getting comfortable with this before it matures is worth your time.

Enterprises evaluating AI agents should watch this closely. The security model is exactly what the industry has been asking for. Just wait for a stable release before committing.
