
Conversation

@shaharmor98 (Collaborator) commented Aug 3, 2025

Summary by CodeRabbit

  • New Features

    • Added support for configuring the data type used for the Mamba SSM cache in the PyTorch backend via a new user-facing option.
    • Users can specify the Mamba SSM cache data type through configuration fields, command-line arguments, and server launch options, with "auto" as the default to infer the dtype from the model (see the usage sketch after this list).
    • Benchmarking tools for latency and throughput now include a command-line option to set the Mamba SSM cache data type.
    • Model configuration and runtime settings now expose and apply the Mamba SSM cache data type for improved customization.
  • Tests

    • Enhanced tests to validate model correctness with different Mamba SSM cache data type settings.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast behavior on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

@shaharmor98 requested review from a team as code owners on August 3, 2025 08:34
coderabbitai bot (Contributor) commented Aug 3, 2025

📝 Walkthrough

This change introduces a new configuration option for specifying the data type used in the Mamba SSM cache throughout the codebase. It adds new fields and parameters to configuration and model classes, updates resource managers to handle the new dtype, and propagates this option through the cache management and model execution pipeline. Associated tests are updated to parameterize and verify this new option.

Changes

  • Mamba2Mixer dtype propagation (tensorrt_llm/_torch/modules/mamba/mamba2_mixer.py): Adds an instance variable _mamba_ssm_cache_dtype initialized from the config and passes mamba_ssm_cache_dtype to mamba_chunk_scan_combined in forward.
  • Mamba SSM cache dtype in scan functions (tensorrt_llm/_torch/modules/mamba/ssd_combined.py): Adds an optional mamba_ssm_cache_dtype parameter to _mamba_chunk_scan_combined_fwd and mamba_chunk_scan_combined, uses it as out_dtype in _state_passing_fwd when provided, and updates the docstring accordingly.
  • PyExecutor utility and config handling (tensorrt_llm/_torch/pyexecutor/_util.py, tensorrt_llm/_torch/pyexecutor/config.py): Adds a mamba_ssm_cache_dtype string field (default "auto") to PyTorchConfig and updates _create_kv_cache_manager to extract the dtype from the model config, convert the string to a torch dtype, and pass it to MambaHybridCacheManager.
  • Resource manager dtype handling (tensorrt_llm/_torch/pyexecutor/resource_manager.py): Updates the MambaCacheManager and MambaHybridCacheManager constructors to accept and store mamba_ssm_cache_dtype, uses it for ssm_states tensor allocation, and adds a getter get_mamba_ssm_cache_dtype() (see the sketch after this list).
  • LLM API and config propagation (tensorrt_llm/llmapi/llm_args.py): Adds a mamba_ssm_cache_dtype field (default "auto") to KvCacheConfig and updates TorchLlmArgs.get_pytorch_backend_config() to include this field in the returned PyTorchConfig.
  • Nemotron-H model test parameterization (tests/unittest/_torch/modeling/test_modeling_nemotron_h.py): Adds an optional mamba_ssm_cache_dtype argument to create_nemotron_h_llm, parameterizes test_nemotron_h_correctness to run with None and "float32", and updates test calls and imports accordingly.
  • Benchmark CLI option additions (tensorrt_llm/bench/benchmark/low_latency.py, tensorrt_llm/bench/benchmark/throughput.py): Adds a new CLI option --mamba_ssm_cache_dtype with choices "auto", "float16", "bfloat16", "float32" and default "auto" to the benchmark commands.
  • Benchmark config and dataclass updates (tensorrt_llm/bench/benchmark/utils/general.py, tensorrt_llm/bench/build/dataclasses.py): Adds mamba_ssm_cache_dtype handling to benchmark config loading and model config setting, and adds a field and setter for mamba_ssm_cache_dtype to the NemotronHybridConfig dataclass.
  • Benchmark tuning adjustment (tensorrt_llm/bench/build/tuning.py): Modifies calc_engine_setting to adjust bytes_per_elem based on mamba_ssm_cache_dtype in NemotronHybridConfig, overriding the default bytes per element when the dtype is float32.
  • Serve command integration (tensorrt_llm/commands/serve.py): Adds a mamba_ssm_cache_dtype parameter and CLI option to the serve command and the get_llm_args function, passing the dtype through to KvCacheConfig.
  • Model engine validation (tensorrt_llm/_torch/pyexecutor/model_engine.py): Adds a validate_and_set_mamba_ssm_cache_dtype function to interpret and set mamba_ssm_cache_dtype in the model quant config during model loading.
  • QuantConfig dataclass update (tensorrt_llm/models/modeling_utils.py): Adds an optional mamba_ssm_cache_dtype string field to the QuantConfig dataclass, defaulting to None.
  • Docker Compose adjustment (.devcontainer/docker-compose.yml): Comments out the Hugging Face cache volume mount in the devcontainer configuration.
  • API stability test reference update (tests/unittest/api_stability/references/quant_config.yaml): Adds an optional mamba_ssm_cache_dtype parameter with default null to the quant config class constructor in the test reference YAML.
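To make the resource-manager entries above concrete, here is a minimal sketch of the dtype plumbing. Only the class and getter names (MambaCacheManager, get_mamba_ssm_cache_dtype) are taken from the summary; the string-to-dtype helper, tensor shape, and constructor signature are illustrative assumptions, not the actual implementation.

import torch

# Hypothetical helper: maps the user-facing string to a torch dtype;
# "auto" falls back to the model's own dtype, as described above.
_STR_TO_DTYPE = {
    "float16": torch.float16,
    "bfloat16": torch.bfloat16,
    "float32": torch.float32,
}

def resolve_mamba_ssm_cache_dtype(name: str, model_dtype: torch.dtype) -> torch.dtype:
    return model_dtype if name == "auto" else _STR_TO_DTYPE[name]

class MambaCacheManager:  # simplified stand-in, not the real class
    def __init__(self, max_batch_size: int, num_heads: int, head_dim: int,
                 d_state: int, mamba_ssm_cache_dtype: torch.dtype,
                 device: str = "cpu"):
        self._mamba_ssm_cache_dtype = mamba_ssm_cache_dtype
        # The SSM state cache is allocated in the requested dtype instead of
        # always following the model/KV-cache dtype.
        self.ssm_states = torch.zeros(
            (max_batch_size, num_heads, head_dim, d_state),
            dtype=mamba_ssm_cache_dtype, device=device)

    def get_mamba_ssm_cache_dtype(self) -> torch.dtype:
        return self._mamba_ssm_cache_dtype

# Example: bf16 model with an fp32 SSM cache.
manager = MambaCacheManager(8, 8, 64, 128,
                            resolve_mamba_ssm_cache_dtype("float32", torch.bfloat16))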

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant LLM API
    participant PyTorchConfig
    participant ResourceManager
    participant Mamba2Mixer
    participant ScanCombined

    User->>LLM API: Set mamba_ssm_cache_dtype (via config/args)
    LLM API->>PyTorchConfig: Pass mamba_ssm_cache_dtype
    PyTorchConfig->>ResourceManager: Pass mamba_ssm_cache_dtype to cache manager
    ResourceManager->>Mamba2Mixer: Expose mamba_ssm_cache_dtype via get_mamba_ssm_cache_dtype
    Mamba2Mixer->>ScanCombined: Call mamba_chunk_scan_combined(..., mamba_ssm_cache_dtype)
    ScanCombined->>ScanCombined: Use mamba_ssm_cache_dtype for SSM state allocation
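The last hop in the diagram boils down to choosing an output dtype for the SSM states. The toy function below only illustrates that effect under plain torch semantics; it is not the real _state_passing_fwd.

import torch

def state_passing_out_dtype(states: torch.Tensor,
                            mamba_ssm_cache_dtype=None) -> torch.Tensor:
    # With no configured cache dtype ("auto"), keep the compute dtype;
    # otherwise materialize the states in the requested dtype.
    out_dtype = mamba_ssm_cache_dtype or states.dtype
    return states.to(out_dtype)

# bf16 activations, fp32 SSM cache:
states = torch.randn(2, 8, 64, 128, dtype=torch.bfloat16)
print(state_passing_out_dtype(states, torch.float32).dtype)  # torch.float32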

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

Community want to contribute

Suggested reviewers

  • pcastonguay
  • litaotju
  • nv-guomingz
  • Superjomn
  • yilin-void



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6ca4a86 and 32da275.

📒 Files selected for processing (1)
  • .devcontainer/docker-compose.yml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .devcontainer/docker-compose.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@shaharmor98 (Collaborator, Author) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #13865 [ run ] triggered by Bot

@shaharmor98 requested a review from tomeras91 on August 3, 2025 11:31
@tensorrt-cicd (Collaborator) commented:

PR_Github #13865 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10432 completed with status: 'FAILURE'

@Superjomn requested a review from QiJune on August 4, 2025 01:53
@shaharmor98 force-pushed the feat/enable-fp32-mamba-cache branch from a722381 to cea6d82 on August 5, 2025 14:03
@shaharmor98 requested a review from a team as a code owner on August 5, 2025 14:03
@shaharmor98 requested a review from nv-yilinf on August 5, 2025 14:03
@shaharmor98 (Collaborator, Author) commented:

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator) commented:

PR_Github #14155 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #14155 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10681 completed with status: 'FAILURE'

@shaharmor98 (Collaborator, Author) commented:

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator) commented:

PR_Github #14219 [ run ] triggered by Bot

@shaharmor98 force-pushed the feat/enable-fp32-mamba-cache branch from bb5c6fd to 72f98fa on August 6, 2025 06:56
@shaharmor98 requested review from a team as code owners on August 6, 2025 06:56
@shaharmor98 requested reviews from tomeras91 and 2ez4bz on August 6, 2025 06:56
@shaharmor98 (Collaborator, Author) commented:

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator) commented:

PR_Github #14261 [ run ] triggered by Bot

@shaharmor98 (Collaborator, Author) commented:

/bot run --disable-fail-fast

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
.devcontainer/docker-compose.yml (1)

24-27: Commenting-out the Hugging Face cache mount may slow dev workflows – consider a conditional mount instead

With the host cache disabled every container rebuild/download will fetch models from scratch, which can be several GB and noticeably slow CI & local iterations.
If the original issue was hosts that don’t have a GPU/HF cache, you can keep the performance benefit while retaining portability:

-      #- ${LOCAL_HF_HOME}:/huggingface  # HF cache
+      # Mount the HF cache only when the env var is set
+      - ${LOCAL_HF_HOME:-/nonexistent}:/huggingface:ro

With the :-/nonexistent fallback, the mount still resolves when LOCAL_HF_HOME is unset but points at an empty placeholder path; alternatively, wrap the mount in a separate profiles: entry to skip it entirely.
Please verify that repeated model downloads are acceptable for all users/CI runners before merging.

tests/unittest/api_stability/references/quant_config.yaml (1)

19-21: Consider narrowing the type annotation to the supported dtypes.

Optional[str] gives no compile-time guidance and may hide typos ("fp32" vs "fp32 ").
If only a small, closed set of dtypes is valid (e.g. "fp16" | "fp32" | "bf16"), consider updating the source QuantConfig dataclass to:

from typing import Literal, Optional

mamba_ssm_cache_dtype: Optional[Literal["fp16", "fp32", "bf16"]] = None

That will automatically propagate to this reference file and tighten API contracts.

Please confirm whether additional dtypes are expected; if so, enumerate them or keep str intentionally.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d4467df and 656b478.

📒 Files selected for processing (17)
  • .devcontainer/docker-compose.yml (1 hunks)
  • tensorrt_llm/_torch/modules/mamba/mamba2_mixer.py (2 hunks)
  • tensorrt_llm/_torch/modules/mamba/ssd_combined.py (6 hunks)
  • tensorrt_llm/_torch/pyexecutor/_util.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/config.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (3 hunks)
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py (5 hunks)
  • tensorrt_llm/bench/benchmark/low_latency.py (1 hunks)
  • tensorrt_llm/bench/benchmark/throughput.py (1 hunks)
  • tensorrt_llm/bench/benchmark/utils/general.py (5 hunks)
  • tensorrt_llm/bench/build/dataclasses.py (2 hunks)
  • tensorrt_llm/bench/build/tuning.py (2 hunks)
  • tensorrt_llm/commands/serve.py (5 hunks)
  • tensorrt_llm/llmapi/llm_args.py (2 hunks)
  • tensorrt_llm/models/modeling_utils.py (1 hunks)
  • tests/unittest/_torch/modeling/test_modeling_nemotron_h.py (4 hunks)
  • tests/unittest/api_stability/references/quant_config.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • tensorrt_llm/_torch/pyexecutor/config.py
🚧 Files skipped from review as they are similar to previous changes (14)
  • tensorrt_llm/_torch/pyexecutor/_util.py
  • tensorrt_llm/models/modeling_utils.py
  • tensorrt_llm/bench/benchmark/low_latency.py
  • tensorrt_llm/_torch/modules/mamba/ssd_combined.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
  • tensorrt_llm/bench/benchmark/throughput.py
  • tests/unittest/_torch/modeling/test_modeling_nemotron_h.py
  • tensorrt_llm/_torch/modules/mamba/mamba2_mixer.py
  • tensorrt_llm/bench/build/tuning.py
  • tensorrt_llm/bench/benchmark/utils/general.py
  • tensorrt_llm/_torch/pyexecutor/resource_manager.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/bench/build/dataclasses.py
  • tensorrt_llm/commands/serve.py
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: in tensorrt-llm, test files (files under tests/ directories) do not require nvidia copyright headers...
Learnt from: galagam
PR: NVIDIA/TensorRT-LLM#6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • .devcontainer/docker-compose.yml
📚 Learning: in tensorrt-llm, examples directory can have different dependency versions than the root requirement...
Learnt from: yibinl-nvidia
PR: NVIDIA/TensorRT-LLM#6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • .devcontainer/docker-compose.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tests/unittest/api_stability/references/quant_config.yaml (1)

19-21: API-stability baseline updated – ensure downstream tests are refreshed.

Adding a new parameter means any existing serialized configs or golden API-stability snapshots must be regenerated. Verify that:

  1. All existing YAML baselines were updated (not only this one).
  2. CI includes at least one test case exercising mamba_ssm_cache_dtype="fp32" to avoid regressions.

Failing to do so will cause silent drift between implementation and reference data.

@tensorrt-cicd (Collaborator) commented:

PR_Github #14394 [ run ] triggered by Bot

@shaharmor98 (Collaborator, Author) commented:

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator) commented:

PR_Github #14418 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #14394 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator) commented:

PR_Github #14418 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10898 completed with status: 'FAILURE'

@tomeras91 (Collaborator) left a comment

Nice! I like it better in the quant config. Makes much more sense.
Thanks for the changes in all the entrypoints and in max batch size tuning as well.

Generally approved; just uncomment the forgotten commented-out line.

@tomeras91 (Collaborator) left a comment

Uncommented that line. Approved.

@tomeras91 (Collaborator) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #14516 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #14516 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #10964 completed with status: 'SUCCESS'

Signed-off-by: Shahar Mor <[email protected]>

refactor mamba ssm cache dtype init

Signed-off-by: Shahar Mor <[email protected]>
@shaharmor98 force-pushed the feat/enable-fp32-mamba-cache branch from 9df6d72 to 32da275 on August 10, 2025 06:17
@shaharmor98 (Collaborator, Author) commented:

/bot run

@shaharmor98 enabled auto-merge (squash) on August 10, 2025 06:17
@tensorrt-cicd (Collaborator) commented:

PR_Github #14699 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #14699 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11095 completed with status: 'SUCCESS'

@shaharmor98 disabled auto-merge on August 10, 2025 13:13
@Naveassaf (Collaborator) left a comment

LGTM

@shaharmor98 (Collaborator, Author) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #14711 [ run ] triggered by Bot

@shaharmor98 enabled auto-merge (squash) on August 10, 2025 17:15
@tensorrt-cicd (Collaborator) commented:

PR_Github #14711 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11103 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@shaharmor98 merged commit 14b36e0 into NVIDIA:main on Aug 10, 2025
4 checks passed