
NVIDIA NemoClaw runs OpenClaw inside a secure sandbox and setup takes one command


Earlier this year OpenClaw became the most starred project on GitHub faster than anything before it. Developers loved it. Companies started deploying it. Then a senior AI safety researcher at Meta let it run on her machine and it deleted her emails. A security researcher hijacked a running instance in under two hours.

Enterprises wanted AI agents. They just did not want ones they could not control.

NVIDIA’s answer to that is NemoClaw. One curl command and you have OpenClaw running inside a sandbox where it cannot touch your files, cannot make unauthorized network calls, and cannot escalate privileges without your approval.

What NemoClaw actually is

NemoClaw is an open source reference stack built by NVIDIA that runs OpenClaw inside a secure sandboxed environment. Think of it as a controlled container where your AI agent can work freely without being able to touch anything it should not.

It is not a replacement for OpenClaw. It is a secure wrapper around it. When you install NemoClaw it actually creates a fresh OpenClaw instance inside the sandbox automatically. The agent still does everything OpenClaw does. It just cannot go rogue while doing it.

NVIDIA released it on March 16 as an early alpha preview under the Apache 2.0 license. It is not production ready yet, and NVIDIA is upfront about that: interfaces and APIs may change as they iterate. But it is available now for developers and enterprises who want to start experimenting with safe agent deployment.

What makes it different from just running OpenClaw directly

Running OpenClaw directly means trusting the agent completely. It can read your files, make network requests, install things, cross into systems it probably should not touch. That is exactly what happened at Meta.

NemoClaw puts three walls around that.

The first is network control. Every outbound connection the agent tries to make is blocked by default unless you have explicitly allowed it. If the agent tries to reach an unlisted host, the OpenShell gateway, the component that enforces NemoClaw's security policies, blocks the request and surfaces it in the interface for you to approve or deny in real time. You see exactly what your agent is trying to connect to before it connects.

The second is filesystem isolation. The agent only has access to the sandbox and tmp directories. Everything else on your machine is locked. It cannot read your documents, cannot touch your code outside the sandbox, cannot access anything you have not deliberately placed inside its reach.

The third is process protection. Privilege escalation is blocked. Dangerous system calls are blocked. The agent cannot quietly give itself more permissions than it started with.

What makes this genuinely useful is that all three layers are declarative. You define the policy in a YAML file. You can update network rules while the sandbox is running without restarting anything. You are in control of exactly what the agent can and cannot do at all times. That is the thing OpenClaw never had.
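To make the declarative idea concrete, a policy file might look something like the sketch below. The field names and layout here are purely illustrative assumptions, not NemoClaw's actual schema; check the project's documentation for the real format.

```yaml
# Illustrative sketch of a declarative sandbox policy.
# Field names are hypothetical; consult the NemoClaw docs for the real schema.
sandbox: my-assistant
network:
  default: deny                # block all outbound traffic unless listed
  allow:
    - host: api.nvidia.com     # inference endpoint
      ports: [443]
filesystem:
  writable:
    - /sandbox                 # the agent's working area
    - /tmp
process:
  allow_privilege_escalation: false
```

Because the policy is data rather than code, network rules can be edited and reloaded while the sandbox is running, which is the behavior the article describes.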

What you actually get when you install it

Before you install, check your hardware. You need at least 4 vCPUs, 8 GB of RAM and 20 GB of free disk space. NVIDIA recommends 16 GB of RAM and 40 GB of disk for comfortable use. The sandbox image alone is around 2.4 GB compressed, and during setup Docker, k3s and the OpenShell gateway all run simultaneously, so machines with less than 8 GB of RAM can run into memory issues.
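A quick way to sanity-check a Linux host against those minimums before installing is a rough script over standard tools. The thresholds come from the article; nothing in this sketch is NemoClaw-specific, and it only works on Linux because it reads /proc/meminfo.

```shell
#!/usr/bin/env bash
# Rough pre-flight check against NemoClaw's stated minimums:
# 4 vCPUs, 8 GB RAM, 20 GB free disk. Linux-only.
cpus=$(nproc)
# MemTotal is reported in kB; convert to whole GiB (rounds down slightly).
mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1048576 ))
# Free space on the root filesystem, in GB.
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')

[ "$cpus"   -ge 4 ]  || echo "warning: only $cpus vCPUs (minimum 4)"
[ "$mem_gb" -ge 8 ]  || echo "warning: only ${mem_gb} GB RAM (minimum 8)"
[ "$disk_gb" -ge 20 ] || echo "warning: only ${disk_gb} GB free disk (minimum 20)"
echo "check complete"
```

Machines that pass silently still benefit from the recommended 16 GB RAM and 40 GB disk, since Docker, k3s and the gateway all run at once during setup.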

For software you need Ubuntu 22.04 or later, Node.js 20 or later, npm 10 or later, and Docker installed and running. On Apple Silicon macOS, Colima and Docker Desktop are the recommended runtimes. Windows users need Docker Desktop with the WSL backend. Podman on macOS is not supported yet.

Once installed you get a running sandbox environment with a fresh OpenClaw instance inside it, the OpenShell gateway managing all security policies, and NVIDIA’s Nemotron model connected via the NVIDIA Endpoint API. You will need a free NVIDIA API key, which the installer prompts you for during setup.

After setup your terminal confirms everything is running with a summary showing your sandbox name, the connected model and the commands to get started.

How to get it running

This one is genuinely simple compared to most open source setups. If you are comfortable in a terminal you will be up and running in minutes.

Before you start, make sure you have:

  • Ubuntu 22.04 or later, Apple Silicon macOS, or Windows (WSL with Docker Desktop)
  • Node.js 20 or later
  • npm 10 or later
  • Docker installed and running
  • A free NVIDIA API key (get this before you start; the installer will ask for it)

Step 1: Install NemoClaw

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

Get your free NVIDIA API key before Step 2. The onboard wizard asks for it and you cannot complete setup without it.

Step 2: Run the onboard wizard

nemoclaw onboard

This is where everything gets configured. Sandbox creation, inference setup, security policies and API key entry all happen here.

Step 3: Connect to your sandbox

nemoclaw my-assistant connect

This drops you into the sandbox shell where your OpenClaw agent is running. Replace my-assistant with whatever sandbox name you chose during onboarding.

Step 4: Start the agent

For interactive chat open the TUI:

openclaw tui

For a quick single message test:

openclaw agent --agent main --local -m "hello" --session-id test

If something goes wrong:

nemoclaw my-assistant status
openshell sandbox list

Those two commands tell you exactly what is happening at both the NemoClaw and the OpenShell level.

To uninstall completely:

curl -fsSL https://raw.githubusercontent.com/NVIDIA/NemoClaw/refs/heads/main/uninstall.sh | bash

Should you install it today

NemoClaw is alpha software. NVIDIA says it plainly and they mean it. Do not run this in production.

But if you are a developer curious about safe agent deployment, install it today. The one-command setup works, the sandbox is real, and getting comfortable with this before it matures is worth your time.

Enterprises evaluating AI agents should watch this closely. The security model is exactly what the industry has been asking for. Just wait for a stable release before committing.
