Most AI models are what they appear to be. A 12B parameter model uses 12B parameters. What you see is what runs.
Marco MoE does not work that way. Alibaba built two models, Marco Nano and Marco Mini, that carry billions of parameters but wake up only a tiny fraction of them for each request. Marco Nano activates 0.6 billion out of 8 billion. Marco Mini activates 0.86 billion out of 17.3 billion. That works out to under 8% of Nano and about 5% of Mini actually working at any moment.
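The activation ratios fall straight out of those numbers; a two-line check:

```python
# Active / total parameters, in billions, from the model specs above.
nano_ratio = 0.6 / 8.0      # Marco Nano
mini_ratio = 0.86 / 17.3    # Marco Mini

print(f"{nano_ratio:.1%}, {mini_ratio:.1%}")  # → 7.5%, 5.0%
```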
The part that makes this worth paying attention to is what that small active fraction manages to do against models running at full capacity.
How MoE sparsity actually works
MoE stands for Mixture of Experts. The idea is that instead of one large neural network handling every token, you build many smaller specialized networks called experts and route each token to only a few of them. Marco Nano has 232 experts total and activates 8 per token. Marco Mini has 256 experts and also activates 8.
The result is a model that is large but cheap to run in practice. You get the knowledge capacity of a big model because all those experts exist and were trained. You get the inference cost of a small model because only a handful of them fire at once.
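The routing step itself is simple: a small gating network scores every expert for the current token, the top k scores win, and their outputs are blended by softmax weights. A minimal sketch of that top-k selection (the expert count matches Marco Nano's 232; the gating function here is illustrative, not Alibaba's actual router):

```python
import numpy as np

def moe_route(router_logits: np.ndarray, k: int = 8):
    """Pick the top-k experts for one token and softmax their gate scores."""
    topk = np.argsort(router_logits)[-k:][::-1]   # indices of the k highest-scoring experts
    scores = router_logits[topk]
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # normalized blend weights over the chosen experts
    return topk, weights

# One token's router scores over 232 experts (Marco Nano's expert count).
rng = np.random.default_rng(0)
logits = rng.normal(size=232)
experts, weights = moe_route(logits, k=8)
print(len(experts))  # → 8; the other 224 experts stay idle for this token
```

Every token repeats this selection independently, which is why the per-token compute tracks the active parameter count, not the total.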
Alibaba took this further by building both models through a process called upcycling, starting from Qwen3-0.6B and expanding it into a sparse MoE structure rather than training from scratch. That kept compute costs down while building something that behaves like a much larger model at inference time.
Meet Marco Nano and Mini
Two models, one architecture, different points on the same curve.
Marco Nano is 8 billion parameters total, 0.6 billion active. It is the lighter option, easier to deploy, lower memory requirements, and still competitive with models running three times as many active parameters. If you need something that fits on modest hardware without sacrificing too much capability, Nano is the starting point.
Marco Mini steps up to 17.3 billion total parameters with 0.86 billion active. The activation ratio actually drops to 5% here, even sparser than Nano despite being a larger model overall. The benchmark gains over Nano are meaningful, particularly on harder reasoning tasks and translation quality. If you have slightly more headroom and need stronger multilingual performance, Mini is the one to reach for.
Both share identical post training pipelines, the same language support across 29 languages, and the same Apache 2.0 license. The choice between them is mostly about your hardware situation and how much multilingual depth you need.
If you are looking to fine tune rather than use the instruct versions directly, both Marco Nano Base and Marco Mini Base are available on HuggingFace as well. Same architecture, same sparsity, without the post training applied. Useful if you want to build your own instruction tuning pipeline on top.
The multilingual story
Most models at this size are built for English first and everything else is an afterthought. The multilingual numbers are there on the benchmark table but the gaps tell the real story. Performance drops sharply once you move outside English and a handful of other high-resource languages.
Marco MoE was built differently. Thirty percent of the post training data mixture was explicitly multilingual, covering translation pairs, cultural content, and region specific knowledge across 29 languages. The results show it. On GlobalMMLU, which tests general knowledge across languages, Marco Nano scores 58.7 against Qwen3-1.7B’s 46.3 and LFM2’s 49.0. Marco Mini pushes that to 73.3, ahead of Gemma3-12B at 69.2 despite running a fraction of the active parameters.
The cultural benchmark numbers are where it gets more interesting. TurkishMMLU, KazakhMMLU, GreekMMLU, Indonesian cultural benchmarks, Marco Mini leads on nearly all of them. These are not languages that show up in most small model training pipelines as a priority. Alibaba built something that actually works across regions most Western labs do not think about until after launch.
What the benchmarks actually show
These numbers all come from Alibaba’s own evaluations so treat them as directional. That said the comparisons are internally consistent and the models they benchmark against are well known.
Marco Nano at 0.6B active parameters scores 62.8 average on English benchmarks. LFM2 with 1.5B active parameters scores 62.5. Ministral3 with 3.84B active scores 59.2. The gap on MMLU-Pro is particularly clear: Marco Nano scores 54.5 against Ministral3’s 49.5.
Marco Mini’s English average of 75.5 sits above Qwen3-4B at 73.3 and well above Gemma3-12B at 65.8. On GSM8K math it scores 93.1, the highest in its comparison group. MMLU-Pro comes in at 70.7 against Qwen3-4B’s 66.9.
The one honest gap worth knowing is GPQA-Diamond, which tests graduate level scientific reasoning. Marco Nano scores 22.2 there, below every model in its comparison group. Marco Mini recovers to 50.3 which is competitive, but Nano’s weakness on deep reasoning tasks is worth considering if that matters for your use case.
How to run them today
Both models are on HuggingFace. They load through standard Transformers with a straightforward setup, no custom kernels or special dependencies required. All instructions are available on the repo.
For serious deployment vLLM is the recommended inference engine. SGLang has a known compatibility issue with MoE models using tied embeddings so if your workflow depends on SGLang, the GitHub repo points to a specific build that works. Worth checking before you start.
Both models use bfloat16 precision by default. Standard Transformers loading with device_map auto handles placement across available hardware without manual configuration.
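Putting those defaults together, loading looks like any other Transformers causal LM. A sketch under assumptions: the repo id below is a placeholder, so check the Marco MoE page on HuggingFace for the exact name before running this.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual Marco Nano/Mini id from HuggingFace.
model_id = "AIDC-AI/Marco-Nano-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the models ship in bfloat16 by default
    device_map="auto",           # places layers across available hardware automatically
)

messages = [{"role": "user", "content": "Translate 'good morning' into Turkish."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For production traffic, swap the same model id into a vLLM server instead; the loading pattern above is mainly useful for quick local testing.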
Who it is for
If you are building multilingual applications and have been frustrated by how quickly small model performance collapses outside English, Marco MoE is genuinely worth testing. The 29 language coverage is not surface level and the cultural benchmark results back that up across languages that rarely get serious attention from model developers.
Developers working on resource constrained deployments will find the active parameter story compelling. You get the knowledge capacity of a much larger model at the inference cost of a small one. That trade is real and the benchmarks support it.
Researchers and teams who need commercial flexibility will appreciate the Apache 2.0 license on both models. No restrictions, no conversations about terms, build what you want with it.
Marco Nano is the starting point if hardware is a constraint. Marco Mini is the step up if you need stronger reasoning and translation quality. One honest limitation is that Marco Nano’s GPQA-Diamond score of 22.2 is a real weakness. If your use case depends on deep scientific or graduate level reasoning, Nano is not the right tool. Mini recovers that gap significantly at 50.3 but again, Nano users should know going in.
vLLM is essentially required for smooth deployment. The SGLang compatibility issue is a pain point if your existing stack depends on it.
Big but efficient
Marco MoE makes a simple argument and backs it up with numbers. You do not need to run every parameter to get the most out of a model. You need the right parameters firing at the right time.
For multilingual work especially, these are among the most capable models at this active parameter count available right now. The 29 language coverage with genuine cultural depth is not something you find easily at this size and this license.