- Add glm-4.6 model to both international and mainland Z.AI configurations
- Set GLM-4.6 as the default model for both regions
- Configure 200K context window (upgraded from 131K in GLM-4.5)
- Add tiered pricing for mainland China (32K, 128K, 200K+ contexts)
- Support 355B-parameter MoE architecture with improved capabilities
- Enable prompt caching support for cost optimization
GLM-4.6 represents Zhipu's latest SOTA model with significant
improvements in coding, reasoning, search, writing, and agent
applications across 8 authoritative benchmarks.
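A minimal sketch of what the added entries might look like is shown below. The map and field names (`internationalZAiModels`, `mainlandZAiModels`, `contextWindow`, `supportsPromptCache`, `tiers`) are assumptions for illustration, not the repository's actual schema, and per-tier prices are deliberately omitted.

```typescript
// Hypothetical sketch of the glm-4.6 model entries; names and shapes are
// placeholders, not the real configuration schema of this repository.
interface ModelInfo {
	contextWindow: number
	supportsPromptCache: boolean
	description: string
	// Optional pricing tiers keyed by prompt context size (mainland China only).
	tiers?: { upToContext: number | "200K+"; note: string }[]
}

export const internationalZAiModels: Record<string, ModelInfo> = {
	"glm-4.6": {
		contextWindow: 200_000, // upgraded from 131K in GLM-4.5
		supportsPromptCache: true,
		description:
			"GLM-4.6 is Zhipu's latest SOTA model for reasoning, coding, and agents. ...",
	},
}

export const mainlandZAiModels: Record<string, ModelInfo> = {
	"glm-4.6": {
		contextWindow: 200_000,
		supportsPromptCache: true,
		description:
			"GLM-4.6 is Zhipu's latest SOTA model for reasoning, coding, and agents. ...",
		// Tiered pricing for mainland China: 32K, 128K, and 200K+ contexts.
		// Actual per-tier input/output prices live in the real config and are omitted here.
		tiers: [
			{ upToContext: 32_000, note: "price omitted" },
			{ upToContext: 128_000, note: "price omitted" },
			{ upToContext: "200K+", note: "price omitted" },
		],
	},
}
```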
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",
"GLM-4.5 is Zhipu's latest featured model. Its comprehensive capabilities in reasoning, coding, and agent reach the state-of-the-art (SOTA) level among open-source models, with a context length of up to 128k.",
"GLM-4.5 is Zhipu's previous flagship model. Its comprehensive capabilities in reasoning, coding, and agent are excellent among open-source models, with a context length of up to 128k.",
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",
"GLM-4.5 is Zhipu's latest featured model. Its comprehensive capabilities in reasoning, coding, and agent reach the state-of-the-art (SOTA) level among open-source models, with a context length of up to 128k.",
"GLM-4.5 is Zhipu's previous flagship model. Its comprehensive capabilities in reasoning, coding, and agent are excellent among open-source models, with a context length of up to 128k.",