Conversation

@PerkzZheng (Collaborator) commented Jul 28, 2025

This PR fixes an illegal shared memory access when chunked attention is used in MMHA: the shared memory offset was not calculated correctly.
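
As a minimal illustration of this class of bug (the names below are invented for the sketch, not the kernel's actual code): a buffer sized for the attention window must never be indexed by an offset derived from the raw timestep.

```cpp
#include <algorithm>

// Invented names for illustration only; the real kernel operates on CUDA
// shared memory inside decoderMaskedMultiheadAttentionTemplate.h.
float read_logit(float const* smem_row, int timestep, int window_size)
{
    // Buggy pattern: smem_row[timestep] runs past the allocation once
    // timestep >= window_size, e.g. when chunked attention reuses the buffer.
    // Fixed pattern: clamp the offset to the space actually allocated.
    int const offset = std::min(timestep, window_size - 1);
    return smem_row[offset];
}
```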

Summary by CodeRabbit

  • Bug Fixes
    • Improved memory handling in masked multihead attention to better respect chunked and cyclic attention window sizes, which may enhance stability and performance for certain attention configurations.

coderabbitai bot (Contributor) commented Jul 28, 2025

Walkthrough

The masked multihead attention CUDA kernel was updated to introduce a new parameter, chunked_attention_size, which is now used alongside cyclic_attention_window_size to determine the maximum timestep for shared memory operations. The calculation of max_timesteps was changed to use the minimum of timestep, cyclic_kv_cache_len, and chunked_attention_size.
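
In code terms, the clamp described above looks roughly like this (a sketch assembled from the names in the walkthrough, not a verbatim excerpt of the header):

```cpp
#include <algorithm>

// Sketch only: shared-memory offsets are derived from max_timesteps, so it
// must not exceed the cyclic KV-cache window or the chunked-attention size.
inline int compute_max_timesteps(int timestep, int cyclic_kv_cache_len,
                                 int chunked_attention_size)
{
    // Before the fix, chunked_attention_size was not part of this clamp, so
    // offsets computed from max_timesteps could index past the shared-memory
    // buffer when chunked attention was enabled.
    return std::min(timestep,
                    std::min(cyclic_kv_cache_len, chunked_attention_size));
}
```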

Changes

Cohort / File(s): Masked Multihead Attention Kernel Update (cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h)
Change Summary: Introduced the chunked_attention_size parameter from params and used it to restrict the max_timesteps calculation within the CUDA kernel. No changes to exported or public entity declarations.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~7 minutes

Poem

In kernels deep where data flows,
A chunked size now gently grows.
Timesteps capped with care and grace,
To keep the memory in its place.
Rabbit hops with code so bright,
Optimizing day and night! 🐇✨


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 97b26ae and 29c8870.

📒 Files selected for processing (1)
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h

@PerkzZheng PerkzZheng force-pushed the user/perkzz/chunked-attention-fix2 branch from 2644f13 to 97b26ae Compare July 28, 2025 09:35
@coderabbitai coderabbitai bot requested review from lfr-0531 and yweng0828 July 28, 2025 09:35
@PerkzZheng (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #13184 [ run ] triggered by Bot

@PerkzZheng PerkzZheng force-pushed the user/perkzz/chunked-attention-fix2 branch from 97b26ae to 4aab6c3 Compare July 28, 2025 10:02
@PerkzZheng PerkzZheng requested review from a team as code owners July 28, 2025 10:02
@PerkzZheng PerkzZheng changed the base branch from main to release/0.21 July 28, 2025 10:02
@PerkzZheng PerkzZheng requested a review from a team as a code owner July 28, 2025 10:02
@PerkzZheng (Collaborator Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #13187 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #13184 [ run ] completed with state ABORTED

@juney-nvidia (Collaborator)

@litaotju @nvbrantz for this MMHA fix, do we want to use it as an opportunity for Pengbo to ramp up by doing the code review and validation on his side as well? :)

June

@tensorrt-cicd (Collaborator)

PR_Github #13187 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #261 completed with status: 'FAILURE'

@PerkzZheng PerkzZheng force-pushed the user/perkzz/chunked-attention-fix2 branch from 4aab6c3 to 29c8870 Compare July 29, 2025 01:46
@PerkzZheng (Collaborator Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #13264 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #13264 [ run ] completed with state SUCCESS
/LLM/release-0.21/L0_MergeRequest_PR pipeline #262 completed with status: 'FAILURE'

@schetlur-nv (Collaborator) commented Jul 29, 2025

x86 tests passed, re-triggered SBSA tests here:

https://nv/trt-llm-cicd/job/release-0.21/job/L0_Test-SBSA/263

@chzblych (Collaborator)

/bot skip --comment "The previous failed SBSA test stage passed"

@tensorrt-cicd (Collaborator)

PR_Github #13463 [ skip ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #13463 [ skip ] completed with state SUCCESS
Skipping testing for commit f8ea106

@chzblych chzblych merged commit 9239747 into NVIDIA:release/0.21 Jul 30, 2025
3 checks passed
dc3671 pushed a commit to dc3671/TensorRT-LLM that referenced this pull request Aug 1, 2025
…th chunked attention (NVIDIA#6401)

Signed-off-by: Perkz Zheng <[email protected]>
Co-authored-by: Sharan Chetlur <[email protected]>
dc3671 pushed a commit to dc3671/TensorRT-LLM that referenced this pull request Aug 4, 2025
…th chunked attention (NVIDIA#6401)

Signed-off-by: Perkz Zheng <[email protected]>
Co-authored-by: Sharan Chetlur <[email protected]>
dc3671 pushed a commit that referenced this pull request Aug 4, 2025
…th chunked attention (#6401)

Signed-off-by: Perkz Zheng <[email protected]>
Co-authored-by: Sharan Chetlur <[email protected]>
lancelly pushed a commit to lancelly/TensorRT-LLM that referenced this pull request Aug 6, 2025
…th chunked attention (NVIDIA#6401)

Signed-off-by: Perkz Zheng <[email protected]>
Co-authored-by: Sharan Chetlur <[email protected]>
Signed-off-by: Lanyu Liao <[email protected]>
jain-ria pushed a commit to jain-ria/TensorRT-LLM that referenced this pull request Aug 7, 2025
…th chunked attention (NVIDIA#6401)

Signed-off-by: Perkz Zheng <[email protected]>
Co-authored-by: Sharan Chetlur <[email protected]>