[None][chore] remove CLI support for mamba cache dtype setting #7119
Conversation
Signed-off-by: Shahar Mor <[email protected]>
📝 Walkthrough

Removed CLI support for setting the mamba SSM cache dtype: the mamba_ssm_cache_dtype option is dropped from the serve command and the bench low_latency/throughput commands.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant User
    participant CLI as serve CLI
    participant Server as serve()
    participant Args as get_llm_args()
    participant KV as KvCacheConfig
    User->>CLI: Invoke serve (no mamba_ssm_cache_dtype)
    CLI->>Server: Parse options and call serve(...)
    Server->>Args: get_llm_args(..., kv_cache_free_gpu_memory_fraction)
    Args->>KV: KvCacheConfig(free_gpu_memory_fraction)
    KV-->>Args: Config object
    Args-->>Server: LLM args (no dtype)
    Server-->>User: Service starts
    note over Server,KV: mamba_ssm_cache_dtype removed from flow
```
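For concreteness, a minimal Python sketch of the flow the diagram describes; the signature is heavily simplified and the import path is an assumption, not the verbatim implementation:

```python
# Simplified sketch of the post-change flow; the real get_llm_args takes
# many more parameters (import path assumed).
from tensorrt_llm.llmapi import KvCacheConfig


def get_llm_args(model, free_gpu_memory_fraction=None, **llm_args_extra_dict):
    # KvCacheConfig is built without any mamba_ssm_cache_dtype argument.
    kv_cache_config = KvCacheConfig(
        free_gpu_memory_fraction=free_gpu_memory_fraction)
    llm_args = {"model": model, "kv_cache_config": kv_cache_config}
    return llm_args, llm_args_extra_dict
```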
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
/bot run --disable-fail-fast

PR_Github #16037 [ run ] triggered by Bot
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tensorrt_llm/commands/serve.py (3)
1-1: Add required NVIDIA copyright header (2025).

Per the repository guidelines, prepend the current-year NVIDIA header to all source files.
Apply at the very top of the file:

```diff
+# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
```
313-332: Bug: cluster_size is silently dropped (moe_cluster_parallel_size not accepted by get_llm_args; extras ignored here).

serve() passes moe_cluster_parallel_size=cluster_size to get_llm_args, but get_llm_args doesn't accept this kwarg, so it lands in the function's **llm_args_extra_dict and is returned as the second tuple value. However, serve() discards that second return (uses "_"), so cluster_size never makes it into llm_args. Net effect: cluster parallelism is ignored in the non-disaggregated serve path.
Two minimal, robust fixes:
- Accept moe_cluster_parallel_size in get_llm_args and place it into llm_args.
- Preserve and merge the extra kwargs returned by get_llm_args with YAML overrides (so future unknown-but-supported args aren’t lost).
Proposed diffs:
- Accept and wire moe_cluster_parallel_size in get_llm_args.
```diff
@@ def get_llm_args(model: str,
                  tokenizer: Optional[str] = None,
                  backend: str = "pytorch",
                  max_beam_width: int = BuildConfig.max_beam_width,
                  max_batch_size: int = BuildConfig.max_batch_size,
                  max_num_tokens: int = BuildConfig.max_num_tokens,
                  max_seq_len: int = BuildConfig.max_seq_len,
                  tensor_parallel_size: int = 1,
                  pipeline_parallel_size: int = 1,
-                 moe_expert_parallel_size: Optional[int] = None,
+                 moe_expert_parallel_size: Optional[int] = None,
+                 moe_cluster_parallel_size: Optional[int] = None,
                  gpus_per_node: Optional[int] = None,
                  free_gpu_memory_fraction: Optional[float] = None,
                  num_postprocess_workers: int = 0,
                  trust_remote_code: bool = False,
                  reasoning_parser: Optional[str] = None,
                  fail_fast_on_attention_window_too_large: bool = False,
                  **llm_args_extra_dict: Any):
@@
     llm_args = {
@@
         "pipeline_parallel_size": pipeline_parallel_size,
         "moe_expert_parallel_size": moe_expert_parallel_size,
+        "moe_cluster_parallel_size": moe_cluster_parallel_size,
         "gpus_per_node": gpus_per_node,
```
- Preserve extras captured by get_llm_args and merge with YAML overrides.
```diff
@@
-    llm_args, _ = get_llm_args(
+    llm_args, cli_extra_kwargs = get_llm_args(
         model=model,
@@
-    llm_args_extra_dict = {}
-    if extra_llm_api_options is not None:
-        with open(extra_llm_api_options, 'r') as f:
-            llm_args_extra_dict = yaml.safe_load(f)
-    llm_args = update_llm_args_with_extra_dict(llm_args, llm_args_extra_dict)
+    # Start with CLI extras captured by get_llm_args(**)
+    llm_args_extra_dict = dict(cli_extra_kwargs)
+    if extra_llm_api_options is not None:
+        with open(extra_llm_api_options, 'r') as f:
+            yaml_overrides = yaml.safe_load(f) or {}
+        # YAML overrides take precedence over CLI extras
+        llm_args_extra_dict.update(yaml_overrides)
+    llm_args = update_llm_args_with_extra_dict(llm_args, llm_args_extra_dict)
```

This restores cluster_size behavior and future-proofs the serve path against similar issues.
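If it helps to see the failure mode in isolation, here is a self-contained toy version of the kwargs-swallowing pattern (simplified names, not the real signatures):

```python
# Minimal repro of the pattern described above: unknown keyword arguments
# are captured by **extra and returned, so a caller that discards the
# second return value silently loses them.
def get_llm_args(model, moe_expert_parallel_size=None, **extra):
    llm_args = {
        "model": model,
        "moe_expert_parallel_size": moe_expert_parallel_size,
    }
    return llm_args, extra


# moe_cluster_parallel_size is not a named parameter, so it lands in extra...
llm_args, _ = get_llm_args("my-model", moe_cluster_parallel_size=4)
assert "moe_cluster_parallel_size" not in llm_args  # ...and is dropped here
```

Running it shows the option vanishing exactly as described: the value comes back in the second tuple element and is then thrown away.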
205-209: Fix CLI help text for --backend (default is 'pytorch', not 'cpp').

The help string is user-facing and currently contradicts the actual default.
```diff
-@click.option("--backend",
-              type=click.Choice(["pytorch", "trt"]),
-              default="pytorch",
-              help="Set to 'pytorch' for pytorch path. Default is cpp path.")
+@click.option("--backend",
+              type=click.Choice(["pytorch", "trt"]),
+              default="pytorch",
+              help="Set to 'pytorch' for the PyTorch path. Default is 'pytorch'.")
```
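For reference, the same option in a minimal standalone click command (a toy CLI, not the real serve entry point):

```python
# Toy command showing the corrected option in isolation.
import click


@click.command()
@click.option("--backend",
              type=click.Choice(["pytorch", "trt"]),
              default="pytorch",
              help="Set to 'pytorch' for the PyTorch path. Default is 'pytorch'.")
def serve(backend):
    # click supplies the default when the flag is omitted.
    click.echo(f"backend={backend}")


if __name__ == "__main__":
    serve()
```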
🧹 Nitpick comments (4)
tensorrt_llm/commands/serve.py (4)
521-525: Use DisaggLauncherEnvs.<member>.value consistently for env var keys.

A StrEnum member works as a str, but mixing the raw member and .value is inconsistent and confuses readers. Standardize on .value across the file.
```diff
-    if os.environ.get(DisaggLauncherEnvs.
-                      TLLM_DISAGG_RUN_REMOTE_MPI_SESSION_CLIENT) != "1":
+    if os.environ.get(DisaggLauncherEnvs.
+                      TLLM_DISAGG_RUN_REMOTE_MPI_SESSION_CLIENT.value) != "1":
@@
-    os.environ[DisaggLauncherEnvs.TLLM_DISAGG_INSTANCE_IDX] = str(
-        instance_idx)
+    os.environ[DisaggLauncherEnvs.TLLM_DISAGG_INSTANCE_IDX.value] = str(
+        instance_idx)
@@
-    instance_idx = os.environ.get(DisaggLauncherEnvs.TLLM_DISAGG_INSTANCE_IDX)
+    instance_idx = os.environ.get(DisaggLauncherEnvs.TLLM_DISAGG_INSTANCE_IDX.value)
```

(Note: Lines 615–617 already use .value; keep that as-is.)
Also applies to: 561-563, 586-588, 612-617
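A tiny self-contained illustration of why the mix is confusing; the enum member shown here is a stand-in, not the real DisaggLauncherEnvs definition:

```python
import os
from enum import Enum


class DisaggLauncherEnvs(str, Enum):  # stand-in for the real enum
    TLLM_DISAGG_INSTANCE_IDX = "TLLM_DISAGG_INSTANCE_IDX"


# Both spellings address the same environment variable, because a str-Enum
# member hashes and compares like its value, but a reader cannot tell that
# without checking the enum definition:
os.environ[DisaggLauncherEnvs.TLLM_DISAGG_INSTANCE_IDX.value] = "0"
assert os.environ.get(DisaggLauncherEnvs.TLLM_DISAGG_INSTANCE_IDX) == "0"
```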
41-44: Adjust misleading comment about "print" in signal handler.

The handler uses logger.info, not print. Either switch to print or update the comment to reflect reality. Given the handler executes in the main thread in CPython, logging is acceptable.
```diff
-    # Using print for safety in signal handlers
+    # Using logger here; in CPython the signal handler runs on the main thread.
```
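For reference, a minimal sketch of the pattern being described (a hypothetical handler, not the code under review):

```python
# In CPython, handlers registered via signal.signal run on the main thread
# between bytecode instructions, which is why the reviewer considers the
# logging module acceptable here.
import logging
import signal

logger = logging.getLogger(__name__)


def _handle_sigterm(signum, frame):
    # Runs on the main thread, not in true async-signal context.
    logger.info("Received signal %d, shutting down", signum)


signal.signal(signal.SIGTERM, _handle_sigterm)
```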
339-349: Validate the server_role parsing path for helpful error messages.

The ValueError from the enum conversion only catches invalid names; add a guard for None and list the valid roles directly in the exception. The current assert covers None, but the error message could include the allowed values for faster debugging. Optional.
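A hypothetical sketch of the friendlier parsing this suggests; ServerRole and its members here are illustrative assumptions, not the actual enum:

```python
from enum import Enum


class ServerRole(Enum):  # hypothetical roles, for illustration only
    CONTEXT = "context"
    GENERATION = "generation"


def parse_server_role(name):
    """Converts a CLI string to a ServerRole with an actionable error."""
    if name is None:
        raise ValueError("server_role must be provided")
    try:
        return ServerRole(name.lower())
    except ValueError:
        valid = ", ".join(role.value for role in ServerRole)
        raise ValueError(
            f"Unknown server role {name!r}; expected one of: {valid}") from None
```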
373-384: Minor: fix "progressivly" typo and clarify intent in TODO.

"progressivly" → "progressively". If these args are part of the public surface, consider adding docstrings to get_llm_args to meet the docstring guidance.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- tensorrt_llm/bench/benchmark/low_latency.py (0 hunks)
- tensorrt_llm/bench/benchmark/throughput.py (0 hunks)
- tensorrt_llm/commands/serve.py (2 hunks)
💤 Files with no reviewable changes (2)
- tensorrt_llm/bench/benchmark/throughput.py
- tensorrt_llm/bench/benchmark/low_latency.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in __init__
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else
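As a toy illustration of several of the naming and docstring conventions above (all names invented for the example):

```python
G_RETRY_LIMIT = 3  # Global: G_ prefix with UPPER_SNAKE_CASE


class LatencyTracker:  # PascalCase class name; file would be latency_tracker.py
    """Tracks request latencies.

    Attributes:
        k_99th_percentile: Largest latency seen so far, in ms (name starts
            with a digit, hence the k_ prefix).
    """

    def __init__(self):
        # All externally visible members initialized in __init__.
        self.k_99th_percentile = 0.0

    def record(self, latency_ms):
        """Records one latency sample.

        Args:
            latency_ms: Latency of a single request in milliseconds.
        """
        self.k_99th_percentile = max(self.k_99th_percentile, latency_ms)
```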
Files:
tensorrt_llm/commands/serve.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
Prepend NVIDIA copyright header (current year) to all source files
Files:
tensorrt_llm/commands/serve.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tensorrt_llm/commands/serve.py (2)
297-306: serve() signature updated to drop mamba_ssm_cache_dtype — LGTM.

Click binds options by name; dropping the dtype parameter here is fine and keeps the CLI surface consistent with the removal.
726-736: DefaultGroup wiring — LGTM.

Falling back to "serve" as the default command is helpful and works with the updated serve signature.
tomeras91 left a comment
Overall LGTM
Can you add something in the description about the motivation for this change?
PR_Github #16037 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #16293 [ run ] triggered by Bot

PR_Github #16293 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #16303 [ run ] triggered by Bot

PR_Github #16303 [ run ] completed with state

/bot run

PR_Github #16312 [ run ] triggered by Bot

PR_Github #16312 [ run ] completed with state

/bot run --disable-fail-fast

PR_Github #16371 [ run ] triggered by Bot

PR_Github #16371 [ run ] completed with state
Summary by CodeRabbit
Description
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.

run

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

- --reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.