GLM-5 Drops: The Largest Open-Source Model Yet Signals a New Era of “Agentic Engineering”
At 754 billion parameters and 1.51TB, Z.ai’s latest release isn’t just big—it’s a statement about where AI development is heading.
The open-source AI landscape just got significantly more interesting. Z.ai has released GLM-5, a massive MIT-licensed model that doubles the size of its predecessor and takes a strong position on what professional AI-assisted software development should be called.
The Core Insight
GLM-5 is a beast: 754 billion parameters, weighing in at 1.51TB on Hugging Face. That’s roughly twice the size of GLM-4.7 (368B parameters, 717GB) and represents a significant leap in open-source model capability.
But the more interesting signal isn’t the size—it’s the framing. Z.ai is explicitly positioning GLM-5 as a tool for “Agentic Engineering,” distinguishing serious AI-assisted development from the more casual “vibe coding” that’s become popular on social media.
This terminology isn’t arbitrary. Andrej Karpathy and Addy Osmani have both started using “Agentic Engineering” to describe a new paradigm: software development where AI agents don’t just assist but actively participate in the engineering process.
Why This Matters
The open-source vs. proprietary gap may be narrowing faster than expected.
Simon Willison tested GLM-5 with his standard “pelican riding a bicycle” SVG prompt—a deceptively difficult test that reveals how well a model handles spatial reasoning and creative interpretation. The result: “a very good pelican on a disappointing bicycle frame.”
That’s not a passing grade, but it’s progress. More importantly, the fact that we can even run 754B-parameter models through services like OpenRouter means the infrastructure for open-source frontier models is maturing.
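To make “run it through OpenRouter” concrete, here is a minimal sketch against the OpenAI-compatible endpoint that OpenRouter exposes, using Willison’s pelican prompt. The model slug `z-ai/glm-5` is an assumption based on OpenRouter’s usual vendor/model naming; check the live catalog for the actual identifier.

```python
# Minimal sketch: calling GLM-5 through OpenRouter's OpenAI-compatible API.
# Assumptions: the "openai" package is installed, OPENROUTER_API_KEY is set in the
# environment, and the model is listed under the slug "z-ai/glm-5" (hypothetical).
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="z-ai/glm-5",  # hypothetical slug; check OpenRouter's catalog for the real ID
    messages=[
        # Simon Willison's standard drawing benchmark
        {"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"},
    ],
)

# The SVG comes back as ordinary chat text, ready to save and render.
print(response.choices[0].message.content)
```

The notable thing isn’t the snippet itself; it’s that a 754B-parameter open model is reachable with the same handful of lines as any proprietary hosted model.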
The terminology battle signals a maturing field. When thought leaders start arguing about what to call something, it has usually become real enough to need precise language. “Vibe coding” captures the playful, experimental side of LLM-assisted development. “Agentic Engineering” captures the serious, production-grade work that enterprises need.
Key Takeaways
- MIT license matters enormously — Unlike many frontier models, GLM-5 can be downloaded and deployed without licensing headaches (see the sketch after this list)
- Scale isn’t everything, but it helps — Roughly doubling the parameter count from 368B to 754B brings a substantial jump in capability
- “Agentic Engineering” is becoming standard terminology — Karpathy, Osmani, and now Z.ai are converging on this framing
- Benchmark results remain mixed — GLM-5 excels at some tasks while struggling with others, such as the spatial reasoning the pelican test probes
- The barrier to entry for frontier-class models is falling — Running 754B models via API is now straightforward
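As a rough illustration of what MIT-licensed open weights mean in practice, the sketch below pulls the published files straight from Hugging Face. The repository ID is a placeholder assumption, and the filtered download is there because committing to the full 1.51TB transfer is not a casual decision.

```python
# Minimal sketch: pulling GLM-5's open weights from Hugging Face.
# Assumption: the repository ID "zai-org/GLM-5" is hypothetical -- confirm it on the Hub.
from huggingface_hub import snapshot_download

# Fetch only config and tokenizer files first, to inspect the architecture
# before committing to the full ~1.51TB weight transfer.
snapshot_download(
    repo_id="zai-org/GLM-5",
    local_dir="glm-5",
    allow_patterns=["*.json", "tokenizer*"],
)

# Dropping allow_patterns downloads everything; serving a 754B-parameter model
# then needs multi-GPU (or multi-node) inference infrastructure, but the MIT
# license means no legal gatekeeping on any of these steps.
```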
Looking Ahead
The release of GLM-5 raises fascinating questions about the future of AI development:
Will open-source catch up to closed models? The compute costs of training 754B+ parameter models remain enormous, but the trend is clear: open weights are getting bigger and better, faster.
Will “Agentic Engineering” become a recognized discipline? We may be witnessing the birth of a new field—one that requires understanding both traditional software engineering and AI agent orchestration.
What does MIT-licensed frontier AI mean for competition? When anyone can deploy a model this powerful, the competitive advantage shifts from model access to model application. Companies that figure out how to productize agentic engineering will win, regardless of whether they trained the underlying models.
GLM-5 isn’t just a model release. It’s a signal that the open-source AI community is serious about competing at the frontier—and about defining what that frontier looks like.
Based on analysis of Simon Willison’s coverage of GLM-5