<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI Agents &#8211; Firethering</title>
	<atom:link href="https://firethering.com/tag/ai-agents/feed/" rel="self" type="application/rss+xml" />
	<link>https://firethering.com</link>
	<description>Firethering is Your Hub for AI, Open Source and Tech That Actually Matters</description>
	<lastBuildDate>Thu, 23 Apr 2026 18:16:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.5</generator>

<image>
	<url>https://firethering.com/wp-content/uploads/2024/10/cropped-firethering-FTR-favicon-32x32.png</url>
	<title>AI Agents &#8211; Firethering</title>
	<link>https://firethering.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>MiMo-V2.5-Pro: A Coding Model Taking On Claude Opus 4.6 and GPT-5.4</title>
		<link>https://firethering.com/mimo-v2-5-pro-xiaomi-coding-model/</link>
					<comments>https://firethering.com/mimo-v2-5-pro-xiaomi-coding-model/#respond</comments>
		
		<dc:creator><![CDATA[Mohit Geryani]]></dc:creator>
		<pubDate>Thu, 23 Apr 2026 18:16:12 +0000</pubDate>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[AI Models]]></category>
		<category><![CDATA[Trends]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<guid isPermaLink="false">https://firethering.com/?p=6338</guid>

					<description><![CDATA[Peking University gives its computer science students a compiler project every semester. Build a complete SysY compiler in Rust, including a lexer, parser, abstract syntax tree, IR code generation, an assembly backend, and performance optimization. The whole thing. Students typically need several weeks.

MiMo-V2.5-Pro finished it in 4.3 hours. Perfect score. 233 out of 233 tests passed on a hidden test suite it had never seen. That's a real university project and a model that scored higher than most students who spent weeks on it. Xiaomi built this, which is still a sentence that takes a moment to process.

V2.5-Pro is the next step up from MiMo-V2-Flash, and it's closed source for now, but Xiaomi has confirmed open source is coming for the V2.5 series. What V2.5-Pro adds over Flash is meaningful: better long-horizon coherence, stronger agentic capabilities, and the ability to sustain complex tasks across more than a thousand tool calls without losing the thread.]]></description>
		
					<wfw:commentRss>https://firethering.com/mimo-v2-5-pro-xiaomi-coding-model/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Kimi K2.6: Turn Your Documents Into Reusable Skills and Let 50+ Agents Execute Them</title>
		<link>https://firethering.com/kimi-k2-6-document-skills-agent-swarm/</link>
					<comments>https://firethering.com/kimi-k2-6-document-skills-agent-swarm/#respond</comments>
		
		<dc:creator><![CDATA[Mohit Geryani]]></dc:creator>
		<pubDate>Wed, 22 Apr 2026 13:11:57 +0000</pubDate>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[AI Models]]></category>
		<category><![CDATA[Trends]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<guid isPermaLink="false">https://firethering.com/?p=6323</guid>

					<description><![CDATA[There's a particular kind of frustration that comes with doing great work and then starting from scratch the next time you need to do it again.

You wrote a brilliant research report last month. The structure was tight, the sourcing was solid, the tone was exactly right. Now a client wants something similar and you're staring at a blank page again. The previous report is sitting in a folder somewhere, useful as a reference but not as a tool.

Kimi K2.6 is trying to fix that specific problem. And the way it goes about it is different enough from what other models are doing that it's worth paying attention to.

The model itself is a 1T parameter MoE released under a Modified MIT license; more on what that means practically in a moment. But the architecture is almost secondary to what Moonshot AI built around it. Document to Skills, Agent Swarm, full stack generation from a single prompt. It's a system designed around the idea that one person should be able to operate like a team.]]></description>
		
					<wfw:commentRss>https://firethering.com/kimi-k2-6-document-skills-agent-swarm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>GLM 5.1: The open source model that gets better the longer you run it</title>
		<link>https://firethering.com/glm-5-1-open-source-agentic-model/</link>
					<comments>https://firethering.com/glm-5-1-open-source-agentic-model/#respond</comments>
		
		<dc:creator><![CDATA[Mohit Geryani]]></dc:creator>
		<pubDate>Wed, 08 Apr 2026 16:23:40 +0000</pubDate>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[AI Models]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<guid isPermaLink="false">https://firethering.com/?p=6149</guid>

					<description><![CDATA[Give an AI agent a hard problem and it usually figures out the easy wins fast. After that, more time does not help. It just sits there, trying the same things.

ZhipuAI ran GLM-5.1 on a vector database optimization problem and let it go for 600 iterations. It did not run out of ideas. At iteration 50 it was sitting at roughly the same performance as the best single-session result any model had achieved. By iteration 600 it had reached 21,500 queries per second. The previous best was 3,547.

That gap is not incremental improvement. It is a different category of result. GLM-5.1 is open source, MIT licensed, and the weights are on HuggingFace right now. It works with Claude Code, vLLM, and SGLang. If you are building anything that runs agents over long tasks, this one is worth understanding.]]></description>
		
					<wfw:commentRss>https://firethering.com/glm-5-1-open-source-agentic-model/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Trinity-Large-Thinking: the open source brain your AI agents have been missing</title>
		<link>https://firethering.com/trinity-large-thinking-open-source-agent-model/</link>
					<comments>https://firethering.com/trinity-large-thinking-open-source-agent-model/#respond</comments>
		
		<dc:creator><![CDATA[Mohit Geryani]]></dc:creator>
		<pubDate>Sat, 04 Apr 2026 09:47:46 +0000</pubDate>
				<category><![CDATA[AI Models]]></category>
		<category><![CDATA[Tech]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<guid isPermaLink="false">https://firethering.com/?p=6016</guid>

					<description><![CDATA[Most open source models that claim agentic capability are really just instruction-tuned models with tool calling bolted on. They can call a function. They cannot think across ten steps, remember what they decided three tool calls ago, and course correct when something breaks mid-task.

This is where Trinity-Large-Thinking comes into the picture. Arcee AI released it this week. 398 billion total parameters, but only 13 billion active during inference. That MoE architecture means it runs closer to a 13B model in practice while carrying the knowledge of something nearly 30 times larger. And unlike most models where reasoning stops between steps, Trinity keeps its thinking tokens alive across the entire agent loop. Every decision it makes is informed by everything it reasoned through before it.]]></description>
		
					<wfw:commentRss>https://firethering.com/trinity-large-thinking-open-source-agent-model/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>EmDash: Cloudflare rebuilt WordPress for the agent-first web</title>
		<link>https://firethering.com/emdash-cloudflare-rebuilt-wordpress/</link>
					<comments>https://firethering.com/emdash-cloudflare-rebuilt-wordpress/#respond</comments>
		
		<dc:creator><![CDATA[Mohit Geryani]]></dc:creator>
		<pubDate>Fri, 03 Apr 2026 15:00:43 +0000</pubDate>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[Trends]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[Cloudflare]]></category>
		<guid isPermaLink="false">https://firethering.com/?p=6003</guid>

					<description><![CDATA[WordPress has a problem it cannot fix from the inside. Not a performance problem. Not a features problem. A structural one. 96% of its security vulnerabilities come from plugins, and the reason is simple. Every plugin gets access to everything. The database, the filesystem, the entire execution context. That is how it was built in 2003 and that is how it still works today.

Cloudflare looked at that and decided patching was the wrong answer. EmDash is their attempt to start over. Built in TypeScript, it is serverless, powered by Astro, and MIT licensed. No PHP, no legacy architecture, no plugins that can silently access your entire database.

I want to be straight about what this is right now. It is a v0.1.0 developer preview. You are not migrating your production site today. But the architecture decisions behind it are serious enough that if you build on WordPress, run a plugin business, or host WordPress sites for clients, you should understand what Cloudflare just shipped.]]></description>
		
					<wfw:commentRss>https://firethering.com/emdash-cloudflare-rebuilt-wordpress/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>5 open source AI agentic models built for real autonomous work</title>
		<link>https://firethering.com/best-open-source-ai-agent-models/</link>
					<comments>https://firethering.com/best-open-source-ai-agent-models/#respond</comments>
		
		<dc:creator><![CDATA[Mohit Geryani]]></dc:creator>
		<pubDate>Tue, 31 Mar 2026 09:03:06 +0000</pubDate>
				<category><![CDATA[AI Picks]]></category>
		<category><![CDATA[Tech]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[AI Models]]></category>
		<guid isPermaLink="false">https://firethering.com/?p=5929</guid>

					<description><![CDATA[Getting an AI agent to start a task is easy. Getting it to finish one properly is a different story. Most agents fall apart somewhere in the middle. A tool returns unexpected output, the model misreads it, and everything that follows builds on that mistake. By step thirty you are looking at something that has completely lost track of what it was supposed to do.

The five AI models here were built with that specific problem in mind. They handle complex multi-step tasks, real browser control, deep research and coding workflows. All are open source and self-hostable.]]></description>
		
					<wfw:commentRss>https://firethering.com/best-open-source-ai-agent-models/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>MiroThinker 1.7 Finally Brings Deep Research AI Agents to Open Source</title>
		<link>https://firethering.com/mirothinker-1-7-open-source-deep-research-ai-agent/</link>
					<comments>https://firethering.com/mirothinker-1-7-open-source-deep-research-ai-agent/#respond</comments>
		
		<dc:creator><![CDATA[Mohit Geryani]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 13:52:00 +0000</pubDate>
				<category><![CDATA[AI Models]]></category>
		<category><![CDATA[Tech]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<guid isPermaLink="false">https://firethering.com/?p=5875</guid>

					<description><![CDATA[For deep research tasks, the options are mostly proprietary. Perplexity, ChatGPT DeepResearch, paid tools that do the job but keep your data on their servers and charge you monthly for the privilege. Yes, you can use open source reasoning models like DeepSeek-R1 or Qwen3 for complex analysis, and they are genuinely capable. But they are not built specifically for agentic deep research. They reason well. They do not orchestrate.

That gap is exactly what MiroThinker 1.7 is designed to fill. An open source model built from the ground up for long horizon research tasks, step by step verification and up to 300 sequential tool calls without losing the plot.

If you handle sensitive research and cannot pipe it through a third party server, this is worth paying close attention to.]]></description>
		
					<wfw:commentRss>https://firethering.com/mirothinker-1-7-open-source-deep-research-ai-agent/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Nvidia Is Building NemoClaw, an Open Source AI Agent Platform That Runs on Any Chip</title>
		<link>https://firethering.com/nvidia-nemoclaw-open-source-ai-agent-platform/</link>
					<comments>https://firethering.com/nvidia-nemoclaw-open-source-ai-agent-platform/#respond</comments>
		
		<dc:creator><![CDATA[Mohit Geryani]]></dc:creator>
		<pubDate>Wed, 11 Mar 2026 13:55:27 +0000</pubDate>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[Trends]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Agents]]></category>
		<category><![CDATA[Nvidia]]></category>
		<guid isPermaLink="false">https://firethering.com/?p=5568</guid>

					<description><![CDATA[The company that sells the chips just built software that runs on everyone else's chips.

Nvidia is reportedly preparing to launch an open source AI agent platform called NemoClaw at GTC 2026 next week in San Jose. People familiar with the plans say the platform will let enterprise companies deploy AI agents across their workforces regardless of whether they run on Nvidia hardware or not.

Nvidia hasn't confirmed anything publicly yet. But the conversations with companies like Salesforce, Cisco, Google, Adobe and CrowdStrike are apparently already happening.]]></description>
		
					<wfw:commentRss>https://firethering.com/nvidia-nemoclaw-open-source-ai-agent-platform/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
