
I Use Claude Code But Not for My Personal Projects — Here’s What I Use Instead


I like Claude Code. But for some of my personal projects, the last thing I want is my code touching a cloud server I don’t control. So I went looking for an open source alternative and found this absolute beast.

It’s called Goose. Honestly surprised it took me this long to find it.

So what is Goose exactly?


Think of it as an AI agent that lives on your machine. Not a chatbot that gives you code suggestions. An actual agent that can create files, edit code, run commands, debug errors, and work through multi-step tasks on its own.

The part that makes it different from most AI coding tools is the model flexibility. Goose doesn’t care what LLM you use. Connect it to Claude, GPT-4, Gemini, or Groq. Or, if you want everything fully local with zero internet, plug in an Ollama model like GLM-5 or Kimi K2. Your choice, your data.
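For the fully local route, the setup is roughly: pull a model with Ollama, then point Goose at it. Here’s a sketch of what that configuration looks like — the file path and keys (`GOOSE_PROVIDER`, `GOOSE_MODEL`, `OLLAMA_HOST`) are my assumptions from memory of Goose’s docs, so verify them against the current documentation:

```yaml
# ~/.config/goose/config.yaml — path and key names are assumptions; check the Goose docs
GOOSE_PROVIDER: ollama       # use the local Ollama provider instead of a cloud API
GOOSE_MODEL: qwen2.5-coder   # illustrative: any model you've already run `ollama pull` for
OLLAMA_HOST: localhost       # Ollama's default local endpoint
```

With a config like this, nothing leaves your machine: Goose talks to the Ollama server running locally, and the model weights live on your disk.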

How it’s different from Claude Code

Claude Code is built around Anthropic’s own models and needs an account to run even with local models. Goose has none of that: it’s fully open source, and no account is needed. The difference seems small; in practice, for personal projects, it changes everything.

Feature            | Claude Code | Goose
-------------------|-------------|-----------------------
Requires account   | Yes         | No
Fully local option | Partial     | Yes
Model flexibility  | Limited     | Any LLM or Ollama
Open source        | No          | Yes (Apache 2.0)
Approval gates     | Yes         | Minimal
Internet required  | Yes         | Only with cloud models
Desktop app        | Yes         | Yes
CLI support        | Yes         | Yes
Cost               | Paid        | Free

Your model, your choice

Connect it to Claude, GPT-4, Gemini, or Groq when you want cloud-level performance, or point it at a local Ollama model when you want everything to stay on your machine. Both work; which you pick depends on your hardware and what you actually need from the session.

I personally use GLM-5 for most of my personal projects. Is it as good as Claude Opus? No. But it’s good enough for what I’m building, it runs locally, and my code never leaves my machine. That tradeoff works for me.

The part I actually appreciate, though, is how easy it is to switch. If I hit something complex that needs heavier reasoning, I just change the model in settings and keep going. No reinstalling, no reconfiguring, no starting over. Same session, different model.

That kind of flexibility is rare in coding tools. Most lock you in. Goose just gets out of the way.
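In practice, switching models is a one-line config change. A hedged sketch of what that might look like — the key names and the model identifier below are illustrative assumptions, not copied from Goose’s docs:

```yaml
# Same config file, one line changed mid-project (keys and model name are illustrative):
GOOSE_PROVIDER: anthropic        # swap the local provider for a cloud one when you need heavier reasoning
GOOSE_MODEL: claude-sonnet-4     # illustrative model name; use whatever your provider actually offers
# Cloud providers typically read an API key from the environment, e.g. ANTHROPIC_API_KEY
```

Switch back to the Ollama provider when the hard part is done, and you’re local again.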

Getting started is simpler than you think

Goose installs like any normal app: download the binary for your system, open it, connect your model of choice, and you’re running. Use the desktop app if you prefer a visual interface, or the CLI if you like staying in the terminal. Both work the same way.
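On the CLI side, a first run looks something like this. The subcommands (`configure`, `session`) are from my recollection of Goose’s CLI, so treat this as a sketch and check `goose --help` for the real surface:

```
# Interactive setup: pick a provider (cloud API or local Ollama) and a model
goose configure

# Start an agent session in the current project directory
goose session
```

From there you describe the task in plain language and the agent creates files, edits code, and runs commands in that directory.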

Why I chose Goose for personal projects

Simple reason. I didn’t want my code leaving my system. I just have projects where the idea itself is something I want to keep to myself until it’s ready. That’s it.

Goose gives me that. My code stays on my machine, my ideas stay in my head, and I still get an AI agent that can actually work through problems autonomously.

Everyone has their own reasons for caring about this. Maybe it’s a client project with an NDA. Maybe it’s something you’re building that you don’t want anyone seeing yet. Maybe you just like knowing exactly where your data goes.

Whatever the reason, if you’ve been looking for an open source alternative to Claude Code that actually works, this one is worth trying.
