
Conversation


@jues jues commented Sep 30, 2025

  • Add glm-4.6 model to both international and mainland Z.AI configurations
  • Update model to GLM-4.6 as default for both regions
  • Configure 200K context window (upgraded from 131K in GLM-4.5)
  • Add tiered pricing for mainland China (32K, 128K, 200K+ contexts)
  • Support 355B-parameter MoE architecture with improved capabilities
  • Enable prompt caching support for cost optimization

GLM-4.6 represents Zhipu's latest SOTA model with significant improvements in coding, reasoning, search, writing, and agent applications across 8 authoritative benchmarks.

Related GitHub Issue

Closes: #8406

Roo Code Task Context (Optional)

Description

Add support for GLM-4.6, Zhipu AI's latest SOTA model, to the Z.AI provider. This update makes GLM-4.6 the default model for both international and mainland China configurations.

Test Procedure

[x] All existing tests pass (15/15 tests green)
[x] Type checking passes
[x] Linting passes
[x] Build succeeds
[x] Extension packaged successfully

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue (see "Related GitHub Issue" above).
  • Scope: My changes are focused on the linked issue (one major feature/fix per PR).
  • Self-Review: I have performed a thorough self-review of my code.
  • Testing: New and/or updated tests have been added to cover my changes (if applicable).
  • Documentation Impact: I have considered if my changes require documentation updates (see "Documentation Updates" section below).
  • Contribution Guidelines: I have read and agree to the Contributor Guidelines.

Documentation Updates

Additional Notes


Important

Add support for GLM-4.6 model to Z.AI provider, updating default model, context window, pricing, and enabling prompt caching.

  • Model Support:
    • Add GLM-4.6 model to internationalZAiModels and mainlandZAiModels in zai.ts.
    • Set GLM-4.6 as default model for both international and mainland China.
  • Configuration Changes:
    • Increase context window to 200K from 131K in GLM-4.5.
    • Add tiered pricing for mainland China: 32K, 128K, 200K+ contexts (see the sketch after this list).
    • Support 355B-parameter MoE architecture.
    • Enable prompt caching for cost optimization.
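
For reference, here is a rough sketch of what the mainland GLM-4.6 entry in zai.ts might look like. The field names mirror the snippets quoted in the review comments below (contextWindow, cacheWritesPrice, cacheReadsPrice, tiered entries); the maxTokens value, the supportsPromptCache flag, and the per-tier prices are illustrative assumptions, not the merged values.

// Hypothetical sketch only -- field names follow the review snippets below;
// maxTokens, supportsPromptCache, and the per-tier prices are placeholders.
"glm-4.6": {
    maxTokens: 98_304, // assumption: not stated in this PR
    contextWindow: 200_000, // upgraded from 131K in GLM-4.5
    supportsPromptCache: true, // prompt caching enabled for cost optimization
    cacheWritesPrice: 0,
    cacheReadsPrice: 0.057,
    // Tiered pricing for mainland China: 32K, 128K, and 200K+ contexts.
    tiers: [
        { contextWindow: 32_000, cacheReadsPrice: 0.057 }, // placeholder price
        { contextWindow: 128_000, cacheReadsPrice: 0.057 }, // placeholder price
        { contextWindow: 200_000, cacheReadsPrice: 0.057 }, // placeholder price
    ],
    description:
        "GLM-4.6 is Zhipu's latest SOTA model for reasoning, coding, and agents. With a 355B-parameter MoE architecture and a 200K context window, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",
},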

This description was created by Ellipsis for 1339c7c.

@jues jues requested review from cte, jr and mrubens as code owners September 30, 2025 12:00
@dosubot dosubot bot added the size:M (This PR changes 30-99 lines, ignoring generated files.) and enhancement (New feature or request) labels Sep 30, 2025
cacheWritesPrice: 0,
cacheReadsPrice: 0.11,
description:
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",

Typo: In the GLM-4.6 description, 'agentsUpgraded' is missing a separator (e.g. a space or punctuation).

Suggested change
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agents, upgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",

cacheWritesPrice: 0,
cacheReadsPrice: 0.057,
description:
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",

Typo: In the GLM-4.6 description for the Mainland model, 'agentsUpgraded' is missing a separator.

Suggested change
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agents, Upgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",

@roomote roomote bot left a comment

I found some issues that need attention:

  • Fix GLM-4.6 description typos and grammar (two places)
  • Avoid using Infinity for contextWindow tier; use explicit 200_000

cacheWritesPrice: 0,
cacheReadsPrice: 0.11,
description:
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",

P2: Typo/grammar. Missing space in 'agentsUpgraded' and plural 'models' should be singular. Add 'window' for clarity.

Suggested change
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",
\"GLM-4.6 is Zhipu's latest SOTA model for reasoning, coding, and agents. Upgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and a 200K context window, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.\",

cacheWritesPrice: 0,
cacheReadsPrice: 0.057,
description:
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",

P2: Same copy issue as above.

Suggested change
"GLM-4.6 is Zhipu's latest SOTA models for reasoning, code, and agentsUpgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and 200K context, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.",
\"GLM-4.6 is Zhipu's latest SOTA model for reasoning, coding, and agents. Upgraded across 8 authoritative benchmarks. With a 355B-parameter MoE architecture and a 200K context window, it surpasses GLM-4.5 in coding, reasoning, search, writing, and agent applications.\",

cacheReadsPrice: 0.057,
},
{
contextWindow: Infinity,

P2: Using Infinity for a tier contextWindow can cause issues in some consumers (JSON serialization, numeric comparisons). Given the model's max supported context is 200K, use an explicit numeric value.

Suggested change
contextWindow: Infinity,
contextWindow: 200_000,
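
A quick standalone illustration of the serialization concern (not code from this PR): JSON has no representation for Infinity, so JSON.stringify turns it into null, and any consumer that round-trips the tier data loses the intended "unbounded" meaning.

// Standalone sketch: why Infinity is risky in serialized model metadata.
const tier = { contextWindow: Infinity, cacheReadsPrice: 0.057 }
console.log(JSON.stringify(tier))
// -> {"contextWindow":null,"cacheReadsPrice":0.057}
const roundTripped = JSON.parse(JSON.stringify(tier))
// null coerces to 0 in numeric comparisons, so this tier would never match.
console.log(150_000 <= roundTripped.contextWindow) // false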

@hannesrudolph hannesrudolph added the Issue/PR - Triage (New issue. Needs quick review to confirm validity and assign labels.) label Sep 30, 2025
@mrubens
Collaborator

mrubens commented Sep 30, 2025

Thank you for the PR! Didn't have permission to make edits so went with #8408, but let me know if you see anything that got missed.

@mrubens mrubens closed this Sep 30, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Sep 30, 2025
@github-project-automation github-project-automation bot moved this from Triage to Done in Roo Code Roadmap Sep 30, 2025
