Conversation

@Tabrizian
Member

@Tabrizian Tabrizian commented Jul 14, 2025

[fix] Release slots with spec decode + disagg

Summary by CodeRabbit

  • New Features

    • Added support for new model variants "Llama-3.1-8B-Instruct" and "EAGLE3-LLaMA3.1-Instruct-8B" in disaggregated single-GPU serving tests.
    • Introduced a new test to verify correct batch slot release when using speculative decoding in disaggregated setups.
  • Bug Fixes

    • Improved resource management by ensuring both sequence and speculative resources are properly released for completed context-only requests.
    • Enhanced draft request preparation to include models using n-gram decoding mode.
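
For reference, the shape of the fix (taken from the py_executor.py diffs quoted in the reviews below) is a loop over both resource managers. A minimal sketch follows; the wrapper function is illustrative, and only the loop body mirrors the PR's code:

```python
# Minimal sketch of the fix, assembled from the review diffs below.
# `executor` stands in for the PyExecutor instance; the function wrapper
# is illustrative rather than the actual method.
from tensorrt_llm._torch.pyexecutor.resource_manager import ResourceManagerType

def free_context_only_resources(executor, req) -> None:
    """Release the sequence slot and any speculative-decoding resources
    held by a finished context-only request."""
    for resource_mgr_type in (ResourceManagerType.SEQ_SLOT_MANAGER,
                              ResourceManagerType.SPEC_RESOURCE_MANAGER):
        resource_mgr = executor.resource_manager.resource_managers.get(
            resource_mgr_type)
        if resource_mgr is not None:
            resource_mgr.free_resources(req)
```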

@Tabrizian Tabrizian requested a review from a team as a code owner July 14, 2025 23:58
@Tabrizian
Member Author

/bot run

@Tabrizian Tabrizian force-pushed the user/imant/cherrypicking branch from 33e2508 to 77af18b on July 14, 2025 23:58
@tensorrt-cicd
Collaborator

PR_Github #11854 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #11854 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8785 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

Collaborator

@pcastonguay pcastonguay left a comment

Could we have a test that would have caught this issue? Maybe enqueue enough requests until we run out of batch slots?
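
A runnable toy analog of that suggestion (an illustrative sketch only, not the TensorRT-LLM test ultimately added as test_disaggregated_spec_dec_batch_slot_limit): a fixed-size slot pool leaks unless every finished request frees its slot, so submitting more requests than there are slots catches the bug.

```python
# Toy analog of the suggested test: a fixed-size batch-slot pool.
# Illustrative only; the real test drives disaggregated workers.
class SlotPool:
    def __init__(self, num_slots: int):
        self.free_slots = list(range(num_slots))

    def acquire(self) -> int:
        if not self.free_slots:
            # The failure mode the bug produced: slots were never returned
            # for context-only requests with spec decode enabled.
            raise RuntimeError("out of batch slots")
        return self.free_slots.pop()

    def release(self, slot: int) -> None:
        self.free_slots.append(slot)

def test_slots_are_released():
    pool = SlotPool(num_slots=2)
    # Enqueue more requests than there are slots; this only passes if each
    # completed request releases its slot (the behavior this PR fixes).
    for _ in range(8):
        slot = pool.acquire()
        pool.release(slot)

if __name__ == "__main__":
    test_slots_are_released()
    print("ok")
```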

@Tabrizian Tabrizian force-pushed the user/imant/cherrypicking branch 3 times, most recently from bf6dca6 to f8ae103 on July 15, 2025 22:37
@Tabrizian
Member Author

@pcastonguay Good point, added a test.

@Tabrizian Tabrizian force-pushed the user/imant/cherrypicking branch from f8ae103 to 401b0d3 on July 15, 2025 22:37
@Tabrizian Tabrizian requested a review from pcastonguay July 15, 2025 22:37
@Tabrizian
Member Author

/bot run

@Tabrizian Tabrizian enabled auto-merge (squash) July 15, 2025 22:37
@tensorrt-cicd
Collaborator

PR_Github #11975 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #11975 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #8888 completed with status: 'FAILURE'

@Tabrizian
Member Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #11978 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #11978 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #8890 completed with status: 'FAILURE'

@Tabrizian Tabrizian force-pushed the user/imant/cherrypicking branch from 401b0d3 to b5f4652 on July 16, 2025 15:37
@Tabrizian
Member Author

/bot run

@coderabbitai
Contributor

coderabbitai bot commented Jul 16, 2025

Walkthrough

The updates introduce improved handling for speculative decoding and resource management in the model execution loop, specifically accounting for n-gram decoding modes. A new integration test verifies batch slot release in disaggregated speculative decoding scenarios, and the test list is updated to include this new case for specific GPU configurations.

Changes

| File(s) | Change Summary |
| --- | --- |
| tensorrt_llm/_torch/pyexecutor/py_executor.py | Enhanced speculative decoding logic with an n-gram mode check; expanded resource freeing to include an additional manager. |
| tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py | Added a new test for the disaggregated speculative decoding batch slot limit; included new model paths in test parameters. |
| tests/integration/test_lists/test-db/l0_h100.yml | Registered the new batch slot limit test for the relevant GPU/test configuration. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test as test_disaggregated_spec_dec_batch_slot_limit
    participant MPI as MPIPoolExecutor
    participant Worker1 as ContextWorker
    participant Worker2 as GenerationWorker
    participant Model as ModelEngine

    Test->>MPI: Launch context and generation workers
    MPI->>Worker1: Start context worker
    MPI->>Worker2: Start generation worker
    Test->>Worker1: Send context-only requests
    Worker1->>Model: Process context requests
    Model-->>Worker1: Return context results
    Worker1-->>Test: Return context results
    Test->>Worker2: Send generation-only requests
    Worker2->>Model: Process generation requests (speculative decoding)
    Model-->>Worker2: Return generation results
    Worker2-->>Test: Return generation results
    Test->>MPI: Signal termination
    MPI->>Worker1: Terminate
    MPI->>Worker2: Terminate
```

Poem

In code’s bright warren, slots are freed,
Speculative bunnies hop with speed.
N-gram paths and batch slots clear,
New tests ensure the coast is clear.
With every hop, a bug takes flight—
The future’s fluffy, fast, and bright!
🐇✨


@tensorrt-cicd
Collaborator

PR_Github #12096 [ run ] triggered by Bot

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

1658-1664: Fix line length and improve resource management implementation.

The expansion to iterate over both SEQ_SLOT_MANAGER and SPEC_RESOURCE_MANAGER is a necessary improvement for proper resource cleanup in disaggregated speculative decoding scenarios. However, there's a line length issue that needs to be addressed.

Apply this diff to fix the line length issue:

```diff
-                for resource_mgr_type in (
-                        ResourceManagerType.SEQ_SLOT_MANAGER,
-                        ResourceManagerType.SPEC_RESOURCE_MANAGER):
-                    if resource_mgr_type in self.resource_manager.resource_managers and self.resource_manager.resource_managers[
-                            resource_mgr_type] is not None:
-                        self.resource_manager.resource_managers[
-                            resource_mgr_type].free_resources(req)
+                for resource_mgr_type in (
+                        ResourceManagerType.SEQ_SLOT_MANAGER,
+                        ResourceManagerType.SPEC_RESOURCE_MANAGER):
+                    resource_mgr = self.resource_manager.resource_managers.get(
+                        resource_mgr_type)
+                    if resource_mgr is not None:
+                        resource_mgr.free_resources(req)
```

This refactoring also improves readability by reducing redundant dictionary lookups.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e30d7be and b5f4652.

📒 Files selected for processing (3)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
  • tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3 hunks)
  • tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (6)
tensorrt_llm/_torch/speculative/interface.py (1)
  • is_ngram (37-38)
tensorrt_llm/llmapi/llm_args.py (3)
  • spec_dec_mode (289-296)
  • spec_dec_mode (341-346)
  • spec_dec_mode (443-448)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (7)
  • ResourceManagerType (43-48)
  • free_resources (74-75)
  • free_resources (424-425)
  • free_resources (958-962)
  • free_resources (1055-1057)
  • free_resources (1123-1126)
  • free_resources (1211-1212)
tensorrt_llm/_torch/speculative/eagle3.py (1)
  • free_resources (62-63)
tensorrt_llm/_torch/speculative/mtp.py (1)
  • free_resources (70-74)
tensorrt_llm/_torch/pyexecutor/seq_slot_manager.py (1)
  • free_resources (26-27)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py

1661-1661: Line too long (128 > 120)

(E501)

🔇 Additional comments (6)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)

899-902: LGTM: Well-implemented defensive programming for n-gram mode detection.

The code correctly uses hasattr() to safely check for the spec_config attribute before accessing it, preventing potential AttributeError exceptions. The logic chain properly validates that both the config exists and the mode is n-gram.


925-925: LGTM: Logical extension of draft request preparation condition.

The addition of is_ngram to the condition correctly ensures that draft requests are prepared for n-gram decoding modes even when there's no separate draft model engine, which aligns with the PR objective of fixing spec decode functionality.
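
Putting the two hunks together, the combined guard presumably behaves like the sketch below; the `draft_model_engine` attribute name in the final condition is an assumption inferred from the review text, not quoted code.

```python
# Sketch of the combined guard (assumption: attribute names follow the
# review text; only the is_ngram expression is quoted verbatim in the
# diffs elsewhere in this PR).
def should_prepare_draft_requests(model_engine, draft_model_engine) -> bool:
    is_ngram = (
        hasattr(model_engine, "spec_config")
        and model_engine.spec_config is not None
        and model_engine.spec_config.spec_dec_mode.is_ngram()
    )
    # Draft requests are needed when a separate draft engine exists, or when
    # n-gram decoding drafts tokens without one.
    return draft_model_engine is not None or is_ngram
```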

tests/integration/test_lists/test-db/l0_h100.yml (1)

69-69: LGTM! Test case properly added to H100 test suite.

The new test case for speculative decoding with batch slot limits is correctly added to the H100 GPU test configuration. The parameterization matches the expected test function signature.

tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3)

15-15: LGTM! Import correctly added for the new test.

The EagleDecodingConfig import is needed for the new speculative decoding test function.


37-41: LGTM! Model paths correctly added for new test models.

The new model paths for Llama-3.1-8B-Instruct and EAGLE3-LLaMA3.1-Instruct-8B are properly structured and follow the existing pattern in the function.


322-421: Approved: Model path references for Llama-3.1-8B-Instruct and EAGLE3-LLaMA3.1-Instruct-8B are valid

Model names used in test_disaggregated_spec_dec_batch_slot_limit match directory names referenced throughout integration and unit tests as well as examples under LLM_MODELS_ROOT. The lookup via os.environ["LLM_MODELS_ROOT"] will resolve correctly in CI. No further action required.
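
For illustration, the lookup works roughly as sketched below; the flat directory layout under LLM_MODELS_ROOT is an assumption, and only the model directory names come from this PR.

```python
# Hedged sketch of the model-path resolution; directory layout is assumed.
import os

models_root = os.environ["LLM_MODELS_ROOT"]  # set in CI
target_model_dir = os.path.join(models_root, "Llama-3.1-8B-Instruct")
draft_model_dir = os.path.join(models_root, "EAGLE3-LLaMA3.1-Instruct-8B")
```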

@tensorrt-cicd
Collaborator

PR_Github #12096 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8984 completed with status: 'FAILURE'

@Tabrizian Tabrizian force-pushed the user/imant/cherrypicking branch from b5f4652 to a8d164d on July 16, 2025 17:19
@Tabrizian
Member Author

/bot run

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

899-902: Good defensive programming with a minor formatting issue.

The logic correctly detects n-gram speculative decoding mode with proper null checks. However, the line length exceeds the 120-character limit.

Consider breaking the long line for better readability:

```diff
-        is_ngram = hasattr(
-            self.model_engine, "spec_config"
-        ) and self.model_engine.spec_config is not None and self.model_engine.spec_config.spec_dec_mode.is_ngram(
-        )
+        is_ngram = (
+            hasattr(self.model_engine, "spec_config")
+            and self.model_engine.spec_config is not None
+            and self.model_engine.spec_config.spec_dec_mode.is_ngram()
+        )
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b5f4652 and a8d164d.

📒 Files selected for processing (3)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
  • tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3 hunks)
  • tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • tests/integration/test_lists/test-db/l0_h100.yml
  • tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (6)
tensorrt_llm/_torch/speculative/interface.py (1)
  • is_ngram (37-38)
tensorrt_llm/llmapi/llm_args.py (3)
  • spec_dec_mode (289-296)
  • spec_dec_mode (341-346)
  • spec_dec_mode (443-448)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (7)
  • ResourceManagerType (43-48)
  • free_resources (74-75)
  • free_resources (424-425)
  • free_resources (958-962)
  • free_resources (1055-1057)
  • free_resources (1123-1126)
  • free_resources (1211-1212)
tensorrt_llm/_torch/speculative/eagle3.py (1)
  • free_resources (62-63)
tensorrt_llm/_torch/speculative/mtp.py (1)
  • free_resources (70-74)
tensorrt_llm/_torch/pyexecutor/seq_slot_manager.py (1)
  • free_resources (26-27)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py

1661-1661: Line too long (128 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)

925-925: Correctly extends draft request preparation to include n-gram mode.

The updated condition properly ensures that draft requests are prepared for n-gram speculative decoding scenarios, even when there's no separate draft model engine.


1658-1664: Proper resource management for disaggregated speculative decoding.

The extended resource freeing logic correctly addresses the PR objective by ensuring both sequence slots and speculative decoding resources are properly released for finished context-only requests. This prevents resource leaks in disaggregated serving scenarios.

@tensorrt-cicd
Collaborator

PR_Github #12101 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12101 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8988 completed with status: 'FAILURE'

@Tabrizian
Member Author

/bot run --disable-fail-fast

@Tabrizian Tabrizian force-pushed the user/imant/cherrypicking branch from a8d164d to 5d0e86f on July 16, 2025 21:58
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

1658-1664: LGTM: Comprehensive resource management for disaggregated speculative decoding.

The extension to free resources from both SEQ_SLOT_MANAGER and SPEC_RESOURCE_MANAGER is well-implemented. The code properly checks for the presence and non-nullity of each resource manager before calling free_resources, which prevents potential errors and ensures comprehensive cleanup for context-only requests in disaggregated scenarios.

Note: There's a minor formatting issue flagged by static analysis (line too long), but this doesn't affect functionality:

```diff
-                    if resource_mgr_type in self.resource_manager.resource_managers and self.resource_manager.resource_managers[
-                            resource_mgr_type] is not None:
+                    if (resource_mgr_type in self.resource_manager.resource_managers
+                        and self.resource_manager.resource_managers[resource_mgr_type] is not None):
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a8d164d and 5d0e86f.

📒 Files selected for processing (3)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
  • tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3 hunks)
  • tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • tests/integration/test_lists/test-db/l0_h100.yml
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (6)
tensorrt_llm/_torch/speculative/interface.py (1)
  • is_ngram (37-38)
tensorrt_llm/llmapi/llm_args.py (3)
  • spec_dec_mode (289-296)
  • spec_dec_mode (341-346)
  • spec_dec_mode (444-449)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (7)
  • ResourceManagerType (43-48)
  • free_resources (74-75)
  • free_resources (424-425)
  • free_resources (973-977)
  • free_resources (1070-1072)
  • free_resources (1138-1141)
  • free_resources (1226-1227)
tensorrt_llm/_torch/speculative/eagle3.py (1)
  • free_resources (62-63)
tensorrt_llm/_torch/speculative/mtp.py (1)
  • free_resources (70-74)
tensorrt_llm/_torch/pyexecutor/seq_slot_manager.py (1)
  • free_resources (26-27)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py

1661-1661: Line too long (128 > 120)

(E501)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)

899-902: LGTM: Well-implemented n-gram detection logic.

The logic correctly checks for the presence of spec_config, ensures it's not None, and calls the is_ngram() method to determine if the speculative decoding mode is n-gram. This follows the pattern established in the relevant code snippets and integrates well with the existing condition at line 925.


925-925: LGTM: Proper integration of n-gram condition.

The addition of is_ngram to the existing condition ensures that draft request preparation occurs for n-gram decoding modes, which aligns with the PR objective of fixing slot release with spec decode + disagg.

@tensorrt-cicd
Collaborator

PR_Github #12116 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12116 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9001 completed with status: 'FAILURE'

@Tabrizian Tabrizian force-pushed the user/imant/cherrypicking branch from 5d0e86f to a339a93 on July 17, 2025 02:55
@Tabrizian
Member Author

/bot run --stage-list "H100_PCIe-PyTorch-1"

Tabrizian and others added 2 commits July 16, 2025 19:56
@Tabrizian Tabrizian force-pushed the user/imant/cherrypicking branch from a339a93 to 8926d50 on July 17, 2025 02:56
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)

1658-1664: Correctly extends resource freeing to include speculative resources.

The logic properly iterates over both SEQ_SLOT_MANAGER and SPEC_RESOURCE_MANAGER to ensure all resources are freed for finished context-only requests. This addresses the slot release issue mentioned in the PR objectives.

However, please address the line length issue flagged by static analysis:

```diff
-                    if resource_mgr_type in self.resource_manager.resource_managers and self.resource_manager.resource_managers[
-                            resource_mgr_type] is not None:
+                    resource_mgr = self.resource_manager.resource_managers.get(resource_mgr_type)
+                    if resource_mgr is not None:
                         self.resource_manager.resource_managers[
                             resource_mgr_type].free_resources(req)
```

Or alternatively:

```diff
-                    if resource_mgr_type in self.resource_manager.resource_managers and self.resource_manager.resource_managers[
-                            resource_mgr_type] is not None:
-                        self.resource_manager.resource_managers[
-                            resource_mgr_type].free_resources(req)
+                    resource_mgr = self.resource_manager.resource_managers.get(resource_mgr_type)
+                    if resource_mgr is not None:
+                        resource_mgr.free_resources(req)
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5d0e86f and 8926d50.

📒 Files selected for processing (3)
  • tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
  • tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3 hunks)
  • tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • tests/integration/test_lists/test-db/l0_h100.yml
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (5)
tensorrt_llm/_torch/speculative/interface.py (1)
  • is_ngram (37-38)
tensorrt_llm/llmapi/llm_args.py (3)
  • spec_dec_mode (289-296)
  • spec_dec_mode (341-346)
  • spec_dec_mode (444-449)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (7)
  • ResourceManagerType (43-48)
  • free_resources (74-75)
  • free_resources (424-425)
  • free_resources (973-977)
  • free_resources (1070-1072)
  • free_resources (1138-1141)
  • free_resources (1226-1227)
tensorrt_llm/_torch/speculative/mtp.py (1)
  • free_resources (70-74)
tensorrt_llm/_torch/pyexecutor/seq_slot_manager.py (1)
  • free_resources (26-27)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py

1661-1661: Line too long (128 > 120)

(E501)

🔇 Additional comments (2)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)

899-902: Good defensive programming for n-gram detection.

The implementation properly handles potential attribute errors by checking for the existence of spec_config before accessing it and ensuring it's not None before calling methods on it.


925-925: Correct addition of n-gram condition for draft request preparation.

The logic properly extends the condition to include n-gram decoding mode, ensuring draft requests are prepared when needed for speculative decoding scenarios.

@tensorrt-cicd
Copy link
Collaborator

PR_Github #12144 [ run ] triggered by Bot

@tensorrt-cicd
Copy link
Collaborator

PR_Github #12144 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #9021 (Partly Tested) completed with status: 'SUCCESS'

@Tabrizian
Member Author

/bot reuse-pipeline

@tensorrt-cicd
Collaborator

PR_Github #12153 [ reuse-pipeline ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #12153 [ reuse-pipeline ] completed with state SUCCESS
Reusing PR_Github #12144 (Partly Tested) for commit 8926d50

@Tabrizian Tabrizian merged commit d4d21a1 into NVIDIA:main Jul 17, 2025
4 checks passed
yizhang-nv pushed a commit to yizhang-nv/TensorRT-LLM that referenced this pull request Jul 17, 2025