
Conversation

@JustinTong0323 (Collaborator) commented on Sep 26, 2025

Motivation

Close #10785
This PR enables deterministic inference for models using TP > 1.

Modifications

When deterministic inference is enabled and TP > 1, set NCCL_ALGO="allreduce:tree" and disable custom all-reduce.
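The modification above can be sketched as follows. This is a minimal illustration, not SGLang's actual code: the attribute names `tp_size`, `enable_deterministic_inference`, and `disable_custom_all_reduce` are illustrative stand-ins for the real ServerArgs fields.

```python
import os
from types import SimpleNamespace

def configure_deterministic_tp(server_args):
    """Sketch of the settings this PR applies for deterministic TP > 1."""
    if server_args.enable_deterministic_inference and server_args.tp_size > 1:
        # NCCL's tree algorithm reduces in a fixed order, so the
        # floating-point result of all-reduce is reproducible across runs.
        os.environ["NCCL_ALGO"] = "allreduce:tree"
        # Custom all-reduce kernels do not guarantee a fixed reduction
        # order, so fall back to NCCL's implementation.
        server_args.disable_custom_all_reduce = True
    return server_args

args = SimpleNamespace(tp_size=2,
                       enable_deterministic_inference=True,
                       disable_custom_all_reduce=False)
configure_deterministic_tp(args)
print(os.environ["NCCL_ALGO"], args.disable_custom_all_reduce)
```

The key point is that both knobs are needed: tree all-reduce pins NCCL's reduction order, and disabling custom all-reduce ensures NCCL is actually the path taken.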

Accuracy Tests

| Model Type  | TP size | Backend          | Status |
| ----------- | ------- | ---------------- | ------ |
| Dense & MoE | 2, 4, 8 | FlashInfer       | ✅ Passed |
| Dense & MoE | 2, 4, 8 | Triton           | ✅ Passed |
| Dense & MoE | 4, 8    | FlashAttention-3 | ✅ Passed |
| Dense       | 2       | FlashAttention-3 | ✅ Passed |
| MoE         | 2       | FlashAttention-3 | ❌ Partially failed (corner case)¹ |

Tests passed for both Dense and MoE models (TP = 2, 4, 8) across all backends (fa3, flashinfer, triton) using sglang.test.test_deterministic.

¹ One exception: the prefix test failed for the MoE model Qwen3-30B-A3B with TP=2 and the fa3 backend, generating two unique outputs for prefix length 1. (mistralai/Mixtral-8x7B-v0.1, however, passed.)

We use Qwen3-32B as the dense model and Qwen3-30B-A3B as the MoE model.

Reproduce

  1. Launch server:

```bash
python3 -m sglang.launch_server \
    --model Qwen/Qwen3-30B-A3B \
    --port 30000 \
    --enable-deterministic-inference \
    --attention-backend fa3 \
    --tp-size 2
```

  2. Run test:

```bash
python3 -m sglang.test.test_deterministic --test-mode prefix
```

You should see output like:

Prompt 0 with prefix length 1: total samples: xxx, Unique samples: 2
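A hypothetical illustration of what this output line reports: the test samples the same prompt many times and counts distinct completions. Deterministic inference requires exactly one unique output; the failing corner case above produced two. The completion string and sample count below are made up.

```python
# Sample the same prompt repeatedly and count distinct completions.
completions = ["The capital of France is Paris."] * 50
unique = len(set(completions))
print(f"total samples: {len(completions)}, Unique samples: {unique}")
# A deterministic server must yield Unique samples: 1.
```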


Co-authored-by: yhyang201 <[email protected]>
Co-authored-by: Yangmin Li <[email protected]>
Co-authored-by: Yuan Luo <[email protected]>
@gemini-code-assist (bot, Contributor) commented:
Summary of Changes

Hello @JustinTong0323, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the SGLang framework by extending deterministic inference capabilities to models that utilize Tensor Parallelism (TP) greater than one. Previously, this feature was limited to single-TP setups. The change involves programmatically setting NCCL's all-reduce algorithm and disabling custom all-reduce when TP is active, thereby removing a critical constraint. Additionally, a minor adjustment was made to a test file to ensure robust path handling for test data.

Highlights

  • Deterministic Inference for Tensor Parallelism (TP) > 1: The pull request enables deterministic inference for models when Tensor Parallelism (TP) size is greater than 1, by configuring NCCL_ALGO and disabling custom all-reduce.
  • Removal of TP Size Restriction: A previous ValueError that restricted deterministic inference to only TP size 1 has been removed, allowing the feature to be used with larger TP configurations.
  • NCCL Configuration for Determinism: When tp_size > 1 and deterministic inference is enabled, the NCCL_ALGO environment variable is now set to "allreduce:tree", and custom all-reduce is explicitly disabled.
  • Updated Test File Path Handling: The test_deterministic.py file has been updated to use os.path.join(dirpath, "long_prompt.txt") for reading the long prompt, improving path resolution and portability.
  • Accuracy Test Results: Accuracy tests passed for Dense/MoE models (TP=2,4,8) across various backends (fa3, flashinfer, triton), with a noted exception for the prefix test with MoE, TP=2, and the fa3 backend.
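The path-handling fix mentioned in the highlights can be sketched as follows. This is an illustration of the pattern, not the exact code in test_deterministic.py; it assumes `dirpath` is derived from the test module's own location.

```python
import os

# Resolve long_prompt.txt relative to this module file instead of the
# current working directory, so the test works regardless of where it
# is launched from.
dirpath = os.path.dirname(os.path.abspath(__file__))
prompt_path = os.path.join(dirpath, "long_prompt.txt")
print(prompt_path)
```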

@gemini-code-assist (bot) left a comment:

Code Review

This pull request enables deterministic inference for tensor parallelism greater than 1 by setting NCCL_ALGO to allreduce:tree and disabling custom all-reduce. The changes look reasonable, but I've identified a minor issue in a warning message that could be misleading. The change in the test file to use a relative path is a good improvement for robustness.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@Fridge003 (Collaborator) commented:

Interesting result~ Have you tried other MoE models, such as Mixtral?

@JustinTong0323 (Collaborator, Author) replied:

> Interesting result~ Have you tried other MoE models, such as Mixtral?

mistralai/Mixtral-8x7B-v0.1 passed all tests on TP=2 with fa3.

@merrymercy merged commit 62e2e99 into sgl-project:main on Sep 27, 2025 (104 of 137 checks passed).
Development

Successfully merging this pull request may close these issues.

[Feature] Support deterministic inference for MoE in large TP