[TRTLLM-6100] fix: Nvbug 5356427: autotuned TRTLLM Gen fp8 block scale MoE illegal memory access #5676
Conversation
…memory access

- Fixed by adding additional autotuning constraints
- Added assertions to torch op to catch this bug in the future

Signed-off-by: Dom Brown <[email protected]>
/bot run
PR_Github #10629 [ run ] triggered by Bot
PR_Github #10629 [ run ] completed with state
/bot run
PR_Github #10635 [ run ] triggered by Bot
PR_Github #10635 [ run ] completed with state
Pull Request Overview
This PR fixes an illegal memory access in the fp8 block scale MoE routing kernel by tightening autotuning constraints and adding runtime checks.
- Replace manual bucket list with dynamic power-of-2 bucket generation and clamp rule
- Introduce ConstraintSpec definitions to align token counts across tensors
- Enhance C++ kernel with extra TORCH_CHECK validations for shape consistency
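To make the first two bullets concrete, here is a minimal, self-contained sketch of the ideas involved: power-of-2 bucket generation with a clamp, and a constraint that derives one tensor's token dimension from another's shape. The helper names, the cap value, and the shape ordering are illustrative assumptions and do not mirror the actual autotuner/ConstraintSpec API.

```python
# Illustrative sketch only -- not the TensorRT-LLM autotuner API.
from typing import List, Sequence, Tuple


def generate_pow2_buckets(max_num_tokens: int, cap: int = 8192) -> List[int]:
    """Power-of-2 token buckets, clamped so the last bucket is min(max_num_tokens, cap)."""
    limit = min(max_num_tokens, cap)
    buckets, m = [], 1
    while m < limit:
        buckets.append(m)
        m *= 2
    buckets.append(limit)  # clamp: never tune for more tokens than the limit
    return buckets


def constrain_to_num_tokens(shapes: Sequence[Tuple[int, ...]]) -> int:
    """Read the shared token count from hidden_states (index 1, dim 0 by assumption)."""
    return shapes[1][0]


if __name__ == "__main__":
    print(generate_pow2_buckets(3000))                          # [1, 2, 4, ..., 2048, 3000]
    print(constrain_to_num_tokens([(3000, 8), (3000, 4096)]))   # 3000
```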
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
File | Description
---|---
tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py | Add get_constraint_specs, replace m_values tuple with helper API
cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp | Add dimension checks for routing_logits and hidden_states_scale
Comments suppressed due to low confidence (2)
tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py:141
- [nitpick] Add a docstring explaining what _constrain_to_num_tokens does, including the expected shape layout and which tensor’s token count is being extracted.
def _constrain_to_num_tokens(shapes: Tuple[torch.Size]) -> int:
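A docstring along the lines the reviewer suggests might read as below; the shape ordering it describes (token count taken from hidden_states, dim 0) is an assumption for illustration, not a statement about the op's actual inputs.

```python
def _constrain_to_num_tokens(shapes: Tuple[torch.Size]) -> int:
    """Return the token count shared by the op's input tensors.

    Assumes `shapes` is ordered (routing_logits, hidden_states, ...) with the
    token count on dim 0 of hidden_states; the autotuner uses this value to
    force the token dimensions of the other tensors to match.
    """
    ...
```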
tensorrt_llm/_torch/custom_ops/trtllm_gen_custom_ops.py:164
- [nitpick] Consider adding a unit test for get_constraint_specs to verify that the returned ConstraintSpec objects enforce matching token counts as intended during autotuning.
return constraint_specs_tuple
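A test along those lines might look like the sketch below. The zero-argument call to get_constraint_specs, the infer_shape attribute on the returned specs, and the shape tuple are all assumptions about the interface and would need to be adapted to the real signatures.

```python
import torch

from tensorrt_llm._torch.custom_ops.trtllm_gen_custom_ops import get_constraint_specs


def test_constraint_specs_align_token_counts():
    # Hypothetical interface: assumes each returned spec exposes infer_shape(shapes),
    # which should resolve to the shared token count for every constrained tensor.
    num_tokens = 96
    shapes = (torch.Size([num_tokens, 8]), torch.Size([num_tokens, 4096]))
    for spec in get_constraint_specs():
        assert spec.infer_shape(shapes) == num_tokens
```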
/bot run
PR_Github #10661 [ run ] triggered by Bot
PR_Github #10661 [ run ] completed with state
/bot run
PR_Github #10675 [ run ] triggered by Bot
/bot run
PR_Github #10675 [ run ] completed with state
PR_Github #10687 [ run ] triggered by Bot
PR_Github #10687 [ run ] completed with state
/bot run
PR_Github #10691 [ run ] triggered by Bot
PR_Github #10691 [ run ] completed with state
/bot run
PR_Github #10704 [ run ] triggered by Bot
PR_Github #10704 [ run ] completed with state
/bot run
PR_Github #10776 [ run ] triggered by Bot
PR_Github #10776 [ run ] completed with state
/bot run
PR_Github #10838 [ run ] triggered by Bot
PR_Github #10838 [ run ] completed with state
…e MoE illegal memory access (#5676) Signed-off-by: Dom Brown <[email protected]>
There was an illegal memory access occurring in the MoE routing kernel when using autotuning with TRTLLM Gen fp8 block scale MoE.
This bug is present both here and in main. Main will get the fix in the next mass integration.
There is no test waive to remove: a workaround was previously in place, so no waive exists.
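The added assertions themselves live in the C++ op (TORCH_CHECKs in fp8BlockScaleMoe.cpp), but a Python-side sketch of the kind of consistency they enforce might look like this; the tensor names and the scale-tensor layout are assumptions for illustration.

```python
import torch


def check_moe_input_shapes(routing_logits: torch.Tensor,
                           hidden_states: torch.Tensor,
                           hidden_states_scale: torch.Tensor) -> None:
    """Reject mismatched token counts before launching the kernel (illustrative only)."""
    num_tokens = hidden_states.shape[0]
    # routing_logits must describe exactly the tokens present in hidden_states
    assert routing_logits.shape[0] == num_tokens, (
        f"routing_logits has {routing_logits.shape[0]} tokens, expected {num_tokens}")
    # the block scales must cover the same tokens (last-dim layout assumed here)
    assert hidden_states_scale.shape[-1] == num_tokens, (
        f"hidden_states_scale covers {hidden_states_scale.shape[-1]} tokens, expected {num_tokens}")
```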
Test Coverage
tests/unittest/_torch/thop/test_moe.py
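To run that coverage locally, invoking pytest on the listed file should be enough (no particular test filter is assumed here):

pytest tests/unittest/_torch/thop/test_moe.py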
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run
/bot [-h|--help]
to print this help message. See details below for each supported subcommand.
run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]
Launch build/test pipelines. All previously running jobs will be killed.
--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.
--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".
For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.
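As an example, a PR comment that launches only one documented test stage while disabling fail-fast could look like the following line (the stage name is the example value from the flag description above, not necessarily a real stage):

/bot run --stage-list "A10-1" --disable-fail-fast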
kill
kill
Kill all running builds associated with pull request.
skip
skip --comment COMMENT
Skip testing for latest commit on pull request.
--comment "Reason for skipping build/test"
is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
reuse-pipeline
Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.