
Conversation

@stnie (Collaborator) commented Jul 10, 2025

Description

An out-of-bounds access occurred when the batch size changed during generation: the initial estimate of the beam search workspace size can sometimes be lower than the estimate for a smaller batch size, because a different algorithm is selected.

This change always uses the function with the larger workspace demand as an upper-bound estimate.
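
To make the failure mode concrete, here is a minimal, self-contained C++ sketch of the sizing bug and of the fix. The per-variant byte counts and the batch-size threshold are invented for illustration and do not reflect the actual AIR TopK formulas:

    #include <cstddef>
    #include <cstdio>

    // Toy stand-ins for the two AIR TopK variants' workspace queries. The real
    // logic lives in cpp/tensorrt_llm/kernels/topkLastDim.cu; these formulas only
    // model the one property that matters here: for the same batch size, the
    // multi-block variant needs more scratch memory per row.
    static std::size_t singleBlockBytes(std::size_t batch) { return batch * 1024; }  // 1 KiB per row
    static std::size_t multiBlockBytes(std::size_t batch) { return batch * 32768; }  // 32 KiB per row

    // The variant is chosen per launch: small batches use the multi-block version
    // so that enough blocks are in flight to fill the GPU (illustrative threshold).
    static std::size_t requiredBytes(std::size_t batch)
    {
        return batch < 16 ? multiBlockBytes(batch) : singleBlockBytes(batch);
    }

    int main()
    {
        std::size_t const maxBatch = 64;
        // Buggy: size the workspace for whichever variant maxBatch selects.
        std::size_t const buggyEstimate = requiredBytes(maxBatch); // 64 KiB (single-block)
        // Later, a smaller batch switches variants and needs more than that.
        std::size_t const laterNeed = requiredBytes(4); // 128 KiB -> out-of-bounds access
        // Fixed: always estimate with the variant that demands more workspace.
        std::size_t const safeEstimate = multiBlockBytes(maxBatch); // 2 MiB, a true upper bound
        std::printf("buggy=%zu later=%zu safe=%zu\n", buggyEstimate, laterNeed, safeEstimate);
        return 0;
    }

Upper-bounding with the more demanding variant trades a somewhat larger allocation for immunity to the runtime batch size dropping below the initial setting.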

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.
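
For example, /bot run --disable-fail-fast --stage-list "A10-1" launches only the A10-1 test stage (the example stage name from the option descriptions above) with fail-fast disabled.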

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

Summary by CodeRabbit

  • Refactor
    • Improved the internal logic for computing top-k elements along the last dimension, enhancing parameter handling and GPU resource utilization. No changes to user-facing functionality.

@stnie (Collaborator, Author) commented Jul 10, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #11563 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #11563 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #222 completed with status: 'FAILURE'

@stnie (Collaborator, Author) commented Jul 11, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #11649 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #11649 [ run ] completed with state FAILURE

@stnie (Collaborator, Author) commented Jul 11, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #11657 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #11657 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #230 completed with status: 'SUCCESS'

@stnie stnie changed the base branch from release/0.21 to main July 14, 2025 08:17
@stnie stnie force-pushed the bugfix/5354884 branch 2 times, most recently from c2a0e1a to b4e4bce July 14, 2025 10:00
@stnie (Collaborator, Author) commented Jul 14, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #11802 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #11802 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8746 completed with status: 'SUCCESS'

@stnie stnie requested review from rakib-hasan and wili-65535 July 14, 2025 15:01
@stnie stnie marked this pull request as ready for review July 14, 2025 15:01
@Funatiq Funatiq requested a review from ChristinaZ July 17, 2025 11:43
@Funatiq (Collaborator) commented Jul 17, 2025

@ChristinaZ not sure if you're familiar with this AirTopK variant. Could you please review?

@Funatiq (Collaborator) commented Jul 17, 2025

@symphonylyh maybe you could also review or reassign since you reviewed the original implementation.

@ChristinaZ (Collaborator)

@Funatiq

Let me introduce some background. AIR TopK actually has two different versions: a single-block and a multi-block implementation. One of our main reasons for choosing between the implementations is to avoid the tail effect.
When the batch size becomes smaller than the initial setting, we may use the multi-block version, which uses more global memory.
As the buffer size of the multi-block version is always larger than that of the single-block version for the same problem size, I think this solution works.
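
A minimal C++ sketch of the dispatch logic described above; the threshold and block counts are assumptions for illustration, not the actual heuristic in topkLastDim.cu:

    // Illustrative only: the real dispatch and buffer sizing live in
    // cpp/tensorrt_llm/kernels/topkLastDim.cu and may differ in detail.
    struct TopkPlan
    {
        bool multiBlock;  // split each row across several blocks?
        int blocksPerRow; // > 1 only in the multi-block variant
    };

    TopkPlan planTopk(int batchSize, int smCount)
    {
        // Tail effect: with one block per row, a small batch leaves most SMs
        // idle, so small batches switch to the multi-block variant.
        if (batchSize >= smCount)
        {
            return {false, 1};
        }
        // Each extra block per row needs its own intermediate buffer in global
        // memory, which is why the multi-block workspace is the larger of the
        // two variants' for the same problem size.
        int blocksPerRow = (smCount + batchSize - 1) / batchSize;
        return {true, blocksPerRow};
    }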

…_stable_radix_topk_ as a better upper bound

Signed-off-by: Stefan Niebler <[email protected]>
@coderabbitai bot (Contributor) commented Jul 18, 2025

Walkthrough

The function invokeComputeTopkLastDimWorkspaceSize in a CUDA kernel implementation was updated to replace a simple call to a radix top-k workspace function with a more explicit and parameterized invocation. The new implementation sets up additional constants, queries GPU hardware properties, computes grid dimensions, and passes more detailed arguments to the kernel launcher.

Changes

File(s) Change Summary
cpp/tensorrt_llm/kernels/topkLastDim.cu Replaced a direct function call with a more explicit setup: added type alias, constants, hardware queries, grid calculation, and a detailed kernel launcher invocation.

Sequence Diagram(s)

sequenceDiagram
    participant Host
    participant GPU

    Host->>GPU: Query number of streaming multiprocessors (sm_cnt)
    Host->>Host: Compute grid_dim using sm_cnt
    Host->>GPU: Call standalone_stable_radix_topk_ with detailed parameters (workspace, grid_dim, flags, etc.)
    GPU-->>Host: Return workspace size/buffer size
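
A hedged host-side sketch of that sequence in CUDA C++. The call shape of standalone_stable_radix_topk_ matches the review diff further down; the block_dim constant, the grid_dim formula, and the assumption that a null workspace pointer makes the launcher only report the required size (the usual CUB-style convention) are illustrative guesses, not confirmed details of the implementation:

    #include <algorithm>
    #include <cstddef>
    #include <cuda_runtime.h>

    // standalone_stable_radix_topk_ is internal to
    // cpp/tensorrt_llm/kernels/topkLastDim.cu; this sketch assumes it is visible.
    template <typename T>
    std::size_t topkWorkspaceBytesSketch(T const* in, int batchSize, int inputLength, int k)
    {
        using idxT = int;              // type alias mirroring the change summary
        constexpr int block_dim = 256; // illustrative block size

        // Query the number of streaming multiprocessors on the current device.
        int device = 0;
        int sm_cnt = 0;
        cudaGetDevice(&device);
        cudaDeviceGetAttribute(&sm_cnt, cudaDevAttrMultiProcessorCount, device);

        // Illustrative grid sizing derived from the SM count.
        int grid_dim = std::max(1, sm_cnt / std::max(1, batchSize));

        // With a null workspace pointer, the launcher is assumed to fill in
        // buf_size with the multi-block requirement instead of running kernels.
        std::size_t buf_size = 0;
        standalone_stable_radix_topk_<T, idxT, 11, block_dim>(nullptr, buf_size, in,
            static_cast<idxT*>(nullptr), batchSize, inputLength, k,
            /*out_val=*/nullptr, /*out_idx=*/nullptr, /*select_min=*/false,
            /*fused_last_filter=*/false, grid_dim, /*stream=*/0, /*sorted=*/false);
        return buf_size;
    }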

Poem

In the land of CUDA, kernels spin and twirl,
Grid dims are counted, SMs in a whirl.
No longer so simple, the setup is keen—
With flags and new types, efficiency is seen.
A hop, a skip, a top-k leap—
This rabbit codes while you’re asleep! 🐇✨


@stnie (Collaborator, Author) commented Jul 18, 2025

/bot run

@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
cpp/tensorrt_llm/kernels/topkLastDim.cu (1)

1477-1478: Consider adding explanatory comment for future maintainers.

While the implementation correctly ensures upper bound workspace estimation, consider adding a comment explaining why we explicitly call standalone_stable_radix_topk_ instead of standalone_stable_radix_11bits. This would help future maintainers understand the reasoning behind this approach.

+    // Explicitly call the multi-block version to ensure workspace size represents
+    // the upper bound, preventing underestimation when batch size changes
     standalone_stable_radix_topk_<T, idxT, 11, block_dim>(workspace, buf_size, in, static_cast<idxT*>(nullptr),
         batchSize, inputLength, k, out_val, out_idx, !is_largest, fused_last_filter, grid_dim, 0, sorted);
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 519a211 and e40b7f8.

📒 Files selected for processing (1)
  • cpp/tensorrt_llm/kernels/topkLastDim.cu (1 hunks)
🔇 Additional comments (2)
cpp/tensorrt_llm/kernels/topkLastDim.cu (2)

1462-1462: Good use of type alias for consistency.

The introduction of idxT type alias improves code readability and maintains consistency throughout the function.


1470-1478: Effective solution for workspace size underestimation.

The explicit setup and call to standalone_stable_radix_topk_ ensures that workspace size is calculated using the multi-block version parameters, which provides the upper bound needed to prevent out-of-bounds access. This directly addresses the PR objective of fixing workspace underestimation when batch size changes.

The approach of querying SM count and calculating grid dimensions ensures we get the workspace size for the more resource-intensive multi-block algorithm, preventing the underestimation issue described in the PR.

@tensorrt-cicd (Collaborator)

PR_Github #12300 [ run ] triggered by Bot

@stnie stnie enabled auto-merge (squash) July 18, 2025 13:09
@tensorrt-cicd (Collaborator)

PR_Github #12300 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9136 completed with status: 'FAILURE'

@stnie (Collaborator, Author) commented Jul 18, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #12325 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #12325 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9156 completed with status: 'SUCCESS'

@stnie stnie merged commit d475c97 into NVIDIA:main Jul 18, 2025
3 checks passed
reasonsolo pushed a commit to reasonsolo/TensorRT-LLM that referenced this pull request Jul 21, 2025
timlee0212 pushed a commit to timlee0212/TensorRT-LLM that referenced this pull request Jul 21, 2025
NVShreyas pushed a commit to NVShreyas/TensorRT-LLM that referenced this pull request Jul 28, 2025
…upper bound (NVIDIA#5926)

Signed-off-by: Stefan Niebler <[email protected]>
Signed-off-by: Shreyas Misra <[email protected]>
@stnie stnie deleted the bugfix/5354884 branch September 9, 2025 08:37