
I Thought Figma Was Untouchable — Until This Open-Source AI Tool Designed My UI


I’ve used Figma. I’ve used Adobe XD. And for most design work they do the job fine — if you’re okay with paying for them and okay with your files living on someone else’s server.

I wasn’t looking for a replacement. I stumbled across OpenPencil while browsing GitHub one evening, and the thing that caught my attention wasn’t the canvas or the components. It was the MCP server built directly into the tool.

An AI agent that can read, create and modify your design files from the terminal. That’s not a plugin. That’s a different way of thinking about design tools entirely.

I installed it, connected it to Claude Code, created a sample design and spent some time with it. Here’s what I actually found.

What is OpenPencil (In a Nutshell)

OpenPencil is an open-source design tool built for developers who’d rather type than click. Describe what you want, and the AI breaks it into spatial chunks and generates them in parallel on a canvas; you watch the layout appear rather than building it by hand.

The MCP server is probably the most interesting part. It’s built in, which means Claude Code, Codex, or any MCP-compatible agent can read and modify your design files.
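To make that concrete, MCP servers are typically registered with Claude Code through a `.mcp.json` file in the project root. The server name and command below are hypothetical placeholders, not OpenPencil's documented invocation; check the project's README for the real one:

```json
{
  "mcpServers": {
    "openpencil": {
      "command": "openpencil",
      "args": ["mcp-server"]
    }
  }
}
```

Once registered, the agent can call the server's tools to read and edit the open design file, the same way it would call a filesystem or database server.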

The part that actually got my attention

[Screenshot: the OpenPencil app]

You can use it completely offline as a regular design tool — import your Figma files, tweak layouts, adjust components, work with shapes and UI elements manually. No internet needed for that part. It works like any other desktop design app.

But connect it to Claude Code, Codex, Gemini or OpenCode CLI and it becomes something else entirely. You describe what you want — a login form, a dashboard layout, a settings page — and the AI agent generates it directly on the canvas. You can then manually adjust whatever it gets wrong, which in my experience is always something small.

The part I found genuinely interesting is how it saves files. Everything exports as a .op file. Open it in any text editor and it’s just JSON — clean, readable, Git-friendly. No proprietary binary format, no locked ecosystem. Your designs are just data you actually own.

That combination of offline capability, agent power when you want it, and an open file format is what makes it compelling.
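To illustrate why a plain-JSON format matters, here is what a .op-style document might look like and how easily a script can read it. The field names below are invented for this sketch, not OpenPencil's actual schema:

```typescript
// Hypothetical .op-style document: plain JSON, so any ordinary script can read it.
interface OpElement {
  id: string;
  type: "frame" | "text" | "rect";
  name: string;
  x: number;
  y: number;
}

const opFile = `{
  "version": 1,
  "elements": [
    { "id": "e1", "type": "frame", "name": "LoginForm", "x": 0, "y": 0 },
    { "id": "e2", "type": "text", "name": "Title", "x": 24, "y": 24 }
  ]
}`;

// Parse and list element names, the way a lint script or pre-commit hook could.
const doc = JSON.parse(opFile) as { version: number; elements: OpElement[] };
const names = doc.elements.map((e) => e.name);
console.log(names.join(", ")); // LoginForm, Title
```

Because the file is text, Git diffs show exactly which element changed between commits, which is impossible with an opaque binary format.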

How it compares to Figma

I went further and actually compared it against Figma since that’s what most people use.

| Feature | Figma | OpenPencil |
| --- | --- | --- |
| Price | $16–90/month per seat | Free to use |
| Open source | No | Yes (MIT) |
| AI built in natively | Paid AI credits only | Yes, via CLI agents |
| MCP server | No | Built in |
| Figma file import | N/A | Yes (.fig files) |
| Code export | Dev seat required ($12–35/mo) | React, Tailwind, HTML free |
| File format | Proprietary | .op (JSON, Git-friendly) |
| Works offline | Yes | Yes, for manual design |
| Self hostable | No | Yes |
| Desktop app | Yes | Yes (Mac, Windows, Linux) |

The number that stood out to me was Figma’s Dev seat: $12/month on Professional and $35/month on Organization, just to inspect code and hand off to developers. OpenPencil generates React, Tailwind and HTML directly at no extra cost.
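As a rough idea of what code export means in practice, a tool like this turns canvas elements into markup with Tailwind utility classes. The function below is a hand-written illustration of that kind of output, not OpenPencil's actual exporter:

```typescript
// Illustrative only: render a hypothetical button element to Tailwind-styled HTML.
interface ButtonSpec {
  label: string;
  variant: "primary" | "secondary";
}

function exportButton(spec: ButtonSpec): string {
  const base = "px-4 py-2 rounded-lg font-medium";
  const variant =
    spec.variant === "primary"
      ? "bg-blue-600 text-white"
      : "bg-gray-200 text-gray-900";
  return `<button class="${base} ${variant}">${spec.label}</button>`;
}

const html = exportButton({ label: "Sign in", variant: "primary" });
console.log(html);
```

The point is that the export is ordinary markup you can paste into a codebase, rather than a spec sheet a developer has to re-implement by hand.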

One thing worth being honest about: OpenPencil isn’t completely free if you want the AI features. You’ll need an active subscription or API access with Claude, Codex, Gemini or OpenCode to use the agent capabilities. But if you already have a plan with any of those providers, you’re not paying anything extra. You’re just plugging in what you already use.

For solo developers or small teams who just need a design tool that talks to their existing AI setup, the cost difference is still hard to ignore.

A few things to know before you try it

The first thing you’ll hit: the AI agent features don’t work out of the box. You need the relevant CLI installed separately first — Claude Code CLI, Codex CLI, Gemini CLI or OpenCode CLI, depending on which provider you use. The app tells you clearly when something’s missing, but it’s worth knowing before you go in expecting instant AI generation.

The AI features also need internet. If you’re using Claude or Codex for generation you’re still making API calls to cloud servers. The offline part is only for manual design work — shapes, layouts, components, Figma imports. Fully local AI generation isn’t there yet.

Collaborative editing is still on the roadmap. So if you’re expecting a drop-in Figma replacement for a team that works together in real time, this isn’t that yet. It’s currently better suited for solo developers or designers working independently.

Boolean operations — union, subtract, intersect — are also not available yet. For complex vector work that’s a real gap.

It’s a young project: MIT licensed and still actively being built. The bones are solid, but some features that feel standard in mature design tools are still coming.

None of this is a dealbreaker depending on what you need it for. Just worth knowing upfront.

Closing Thoughts

If you’re a developer who already has Claude or Codex in your workflow, the idea of a design tool that speaks the same language — MCP, agents, code export — is genuinely interesting. I hadn’t seen that combination in an open source tool before.

Import your Figma files, connect your existing AI setup, export clean React and Tailwind. For personal projects or solo work that’s a pretty compelling package.

It’s early. But early is when it’s worth paying attention.
