
Conversation

@DomBrown commented Jul 2, 2025

An illegal memory access occurred in the MoE routing kernel when using autotuning with the TRTLLM Gen fp8 block scale MoE.

  • Fixed by adding additional autotuning constraints (see the sketch below)
  • Added assertions to the torch op to catch this bug in the future

This bug is present both here and in main. Main will get the fix in the next mass integration.

There is no test waiver to remove: a workaround was previously in place, so no waiver was ever added.
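
To make the failure mode concrete, below is a minimal Python sketch of the invariant involved. All names here are illustrative, not the actual TRT-LLM API: the routing kernel reads several tensors that must agree on their token dimension, so the autotuner may only propose shape combinations where they match, and the op should reject mismatches up front.

```python
import torch

# Hypothetical sketch of the invariant the fix enforces (not TRT-LLM code):
# every tensor the routing kernel reads must agree on num_tokens, otherwise
# the kernel indexes past the end of the smaller tensor.
def check_moe_routing_inputs(routing_logits: torch.Tensor,
                             hidden_states: torch.Tensor) -> None:
    # Assumed layouts: routing_logits [num_tokens, num_experts],
    # hidden_states [num_tokens, hidden_size].
    assert routing_logits.dim() == 2, "routing_logits must be 2D"
    assert routing_logits.shape[0] == hidden_states.shape[0], (
        "num_tokens mismatch: autotuning must constrain all inputs to the "
        "same token count")
```

In the actual change, the Python side expresses this as constraint specs consumed by the autotuner (trtllm_gen_custom_ops.py), and the C++ op (fp8BlockScaleMoe.cpp) enforces the equivalent shape checks with TORCH_CHECK.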

Test Coverage

tests/unittest/_torch/thop/test_moe.py

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.
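
For example, combining the flags documented above, an invocation that disables fail-fast and runs only one test stage would be:

/bot run --disable-fail-fast --stage-list "A10-1"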

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can break the top of tree.

@DomBrown requested review from hlu1 and hyukn July 2, 2025 11:30
@DomBrown self-assigned this Jul 2, 2025
@DomBrown requested review from a team as code owners July 2, 2025 11:30
@DomBrown added the bug label Jul 2, 2025
@DomBrown commented Jul 2, 2025

/bot run

@tensorrt-cicd

PR_Github #10629 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #10629 [ run ] completed with state FAILURE
/LLM/release-0.21/L0_MergeRequest_PR pipeline #116 completed with status: 'FAILURE'

@DomBrown commented Jul 2, 2025

/bot run

@tensorrt-cicd

PR_Github #10635 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #10635 [ run ] completed with state FAILURE
/LLM/release-0.21/L0_MergeRequest_PR pipeline #117 completed with status: 'FAILURE'

@DomBrown requested a review from Copilot July 2, 2025 14:17
Copilot AI left a comment

Pull Request Overview

This PR fixes an illegal memory access in the fp8 block scale MoE routing kernel by tightening autotuning constraints and adding runtime checks.

  • Replace manual bucket list with dynamic power-of-2 bucket generation and clamp rule (sketched after this list)
  • Introduce ConstraintSpec definitions to align token counts across tensors
  • Enhance C++ kernel with extra TORCH_CHECK validations for shape consistency
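
As a rough illustration of the bucket change in the first bullet, here is a generic power-of-2 bucket generator with a clamp rule. Names are assumed for illustration; the real helper and its values live in trtllm_gen_custom_ops.py.

```python
# Illustrative only -- not the actual TRT-LLM helper. Generates candidate
# token-count buckets 1, 2, 4, ... and clamps a runtime token count to the
# nearest bucket so every shape the kernel sees was actually profiled.
def power_of_2_buckets(max_num_tokens: int) -> tuple:
    buckets, m = [], 1
    while m < max_num_tokens:
        buckets.append(m)
        m *= 2
    buckets.append(m)  # first power of 2 >= max_num_tokens
    return tuple(buckets)

def clamp_to_bucket(num_tokens: int, buckets: tuple) -> int:
    # Round up to the smallest bucket that fits; clamp at the largest.
    for b in buckets:
        if num_tokens <= b:
            return b
    return buckets[-1]
```

The idea is that clamping the runtime token count to a generated bucket keeps every tuned shape within the set the autotuner profiled.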

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File | Description
tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py | Add get_constraint_specs, replace m_values tuple with helper API
cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp | Add dimension checks for routing_logits and hidden_states_scale
Comments suppressed due to low confidence (2)

tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py:141

  • [nitpick] Add a docstring explaining what _constrain_to_num_tokens does, including the expected shape layout and which tensor’s token count is being extracted.
        def _constrain_to_num_tokens(shapes: Tuple[torch.Size]) -> int:

tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py:164

  • [nitpick] Consider adding a unit test for get_constraint_specs to verify that the returned ConstraintSpec objects enforce matching token counts as intended during autotuning.
        return constraint_specs_tuple
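
A minimal sketch of the kind of unit test the second suppressed comment asks for. The helper body and shape layout here are hypothetical stand-ins; the real definitions live in trtllm_gen_custom_ops.py.

```python
import torch

# Hypothetical stand-in mirroring the reviewed helper: extracts num_tokens
# from the first shape, assumed to be routing_logits [num_tokens, num_experts].
def _constrain_to_num_tokens(shapes) -> int:
    return shapes[0][0]

def test_constraint_extracts_matching_token_count():
    shapes = (torch.Size([128, 8]),      # routing_logits
              torch.Size([128, 4096]))   # hidden_states
    assert _constrain_to_num_tokens(shapes) == 128
```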

@DomBrown commented Jul 2, 2025

/bot run

@tensorrt-cicd

PR_Github #10661 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #10661 [ run ] completed with state FAILURE
/LLM/release-0.21/L0_MergeRequest_PR pipeline #120 completed with status: 'FAILURE'

@nv-guomingz

/bot run

@tensorrt-cicd

PR_Github #10675 [ run ] triggered by Bot

@DomBrown commented Jul 2, 2025

/bot run

@tensorrt-cicd

PR_Github #10675 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #121 completed with status: 'FAILURE'

@tensorrt-cicd

PR_Github #10687 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #10687 [ run ] completed with state FAILURE
/LLM/release-0.21/L0_MergeRequest_PR pipeline #123 completed with status: 'ABORTED'

@DomBrown commented Jul 2, 2025

/bot run

@tensorrt-cicd

PR_Github #10691 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #10691 [ run ] completed with state FAILURE
/LLM/release-0.21/L0_MergeRequest_PR pipeline #125 completed with status: 'ABORTED'

@DomBrown commented Jul 2, 2025

/bot run

@tensorrt-cicd

PR_Github #10704 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #10704 [ run ] completed with state FAILURE
/LLM/release-0.21/L0_MergeRequest_PR pipeline #128 completed with status: 'FAILURE'

@DomBrown commented Jul 3, 2025

/bot run

@tensorrt-cicd

PR_Github #10776 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #10776 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #145 completed with status: 'FAILURE'

@DomBrown commented Jul 3, 2025

/bot run

@tensorrt-cicd

PR_Github #10838 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #10838 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #150 completed with status: 'SUCCESS'

@hyukn requested a review from kaiyux July 4, 2025 00:57
@kaiyux merged commit 2aacdba into NVIDIA:release/0.21 Jul 4, 2025
5 checks passed
dc3671 pushed commits to dc3671/TensorRT-LLM that referenced this pull request (Jul 10–14, 2025)
dc3671 pushed a commit that referenced this pull request Jul 14, 2025
@DomBrown deleted the dev/nvbug_5356427 branch August 21, 2025 15:06