
Just After Launching Qwen3.5, Qwen’s Core Team Walked Out. Is This the Last Great Qwen Model?


Yesterday I was testing Qwen3.5-4B on my machine, genuinely impressed by what a 4B model was doing with images and reasoning. Then I opened X and saw a five-word post from Junyang Lin, the man who built Qwen from the ground up: “bye my beloved qwen.”

That was it. No explanation, just a goodbye.

Within hours the replies were flooding in. Developers, researchers, open source contributors all asking the same thing — what just happened? And then Elon Musk’s comment on Qwen3.5 calling it “impressive intelligence density” surfaced, and Lin replied with a simple “thx elon.” People in the comments started connecting the dots — was he already gone when he replied? Did he know? Nobody is quite sure what to make of that exchange but it made the whole thing feel even stranger.

Lin wasn’t alone. Yu Bowen, who led post-training for Qwen, resigned the same day. Hui Binyuan, a core contributor focused on coding, had already left in January. Three of the most important people behind one of the best open source AI model families in the world, gone within months of each other.

I had just tested the model. I had just written about why it was worth your attention. And now the people who built it had walked out.

How Qwen Lost Its Soul

Qwen Team Lead Stepped Down

Junyang Lin wasn’t just a team lead. He had been building Qwen since 2022, turning it from an internal Alibaba project into one of the most downloaded open source model families in the world. Over 400 models released, over a billion downloads. That’s not a department head, that’s the person the whole thing was built around.

His resignation came on Wednesday, two days after Qwen3.5 launched, announced with nothing more than that five-word post on X and silence since.

Yu Bowen, who ran post-training, left the same day; Hui Binyuan had quietly resigned in January. Three people who understood Qwen at its deepest level, all gone within two months.

What pushed them out isn’t entirely clear. Lin himself said at a Beijing forum in January that his team was stretched thin, spending most of their resources just meeting delivery demands rather than doing the kind of research that actually moves things forward.

Alibaba CEO Eddie Wu responded with a brief statement thanking Lin and announcing a task force to coordinate future AI model development. Reading between the lines, that sounds like exactly the kind of corporate restructuring that makes passionate researchers leave: separate teams, separate goals, less room to just build something great.

The open source community noticed immediately. Zhipu AI’s CEO was already publicly trying to recruit the departing engineers within hours of Lin’s post.

I tested it right before everything fell apart

The timing is strange to think about. While Lin was probably writing that five-word goodbye, I was running Qwen3.5-4B on my machine, genuinely surprised by what it was doing.

Vision works better than you’d expect from a 4B model. I dropped in images and it described them accurately — screenshots, diagrams, general scenes. It stumbled on location and landmark identification occasionally, giving confident answers that were just wrong. But for everyday image understanding it holds up.

Text and reasoning are where it genuinely impressed me. It thinks before it answers, works through problems rather than guessing. For something running on 16GB of RAM and 6GB of VRAM, that’s not what you expect.
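If it seems surprising that a 4B model fits on a 6GB card at all, a back-of-envelope calculation makes it plausible. This sketch only counts raw weight storage and assumes common quantization levels; real-world usage also spends memory on activations, the KV cache, and runtime overhead.

```python
# Rough memory footprint of a ~4B-parameter model's weights at
# different precisions. Illustrative numbers only: actual local
# runs also need room for activations and the KV cache.

PARAMS = 4_000_000_000  # ~4 billion weights

def weights_gb(bytes_per_param: float) -> float:
    """Raw weight storage in GB (1 GB = 1e9 bytes)."""
    return PARAMS * bytes_per_param / 1e9

fp16 = weights_gb(2.0)   # 16-bit floats: 2 bytes per weight
int8 = weights_gb(1.0)   # 8-bit quantization
int4 = weights_gb(0.5)   # 4-bit quantization

print(f"fp16: {fp16:.1f} GB, int8: {int8:.1f} GB, int4: {int4:.1f} GB")
# fp16: 8.0 GB, int8: 4.0 GB, int4: 2.0 GB
```

Full-precision fp16 weights alone (~8 GB) would overflow a 6 GB card, but a 4-bit quant (~2 GB) leaves headroom for the KV cache and activations, which is roughly why this class of model runs comfortably on modest consumer hardware.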

The model is good. That’s what makes this whole situation harder to sit with.

What happens to the Qwen models we already have?

The models already out there aren’t going anywhere. They’re open source, already downloaded over a billion times, and the community will keep building on them regardless of what Alibaba does next.

But that’s where the reassurance ends.

The real concern is what comes next. Qwen’s release pace existed because the people behind it genuinely cared. Those people just left. Whether Alibaba keeps the same momentum, or even keeps future models open source, nobody knows right now. A few months ago nobody expected Lin to leave either.

Use it while you can

Maybe Alibaba figures it out. Maybe the new team surprises everyone. But right now the safest thing you can do is download the Qwen3.5 weights locally and keep them. We don’t know if the next version will be this good or this open.
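If you want to keep a local copy, one common route is the Hugging Face CLI (installed with the `huggingface_hub` package). The repository name below is an assumption for illustration; substitute whatever the actual Qwen3.5 repo is called on the Hub.

```shell
# Install the Hugging Face Hub CLI
pip install -U huggingface_hub

# Download the full model repo to a local folder you control.
# NOTE: "Qwen/Qwen3.5-4B" is a placeholder repo id, not verified.
huggingface-cli download Qwen/Qwen3.5-4B --local-dir ./qwen3.5-4b
```

Once the weights are on disk they are yours regardless of what happens upstream, which is the whole point of the advice above.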

What Lin and his team built was rare: a full open source AI stack, genuinely competitive, freely available. That doesn’t come around often. And right now it feels like we’re watching the end of something without quite knowing what comes next.

