[fix] Release slots with spec decode + disagg (#5975) #6032
Conversation
/bot run
Force-pushed 33e2508 to 77af18b
PR_Github #11854 [ run ] triggered by Bot
PR_Github #11854 [ run ] completed with state
Could we have a test that would have caught this issue? Maybe enqueue enough requests until we run out of batch slots?
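For reference, the suggestion amounts to something like the sketch below. The `llm` handle and `max_batch_size` parameter are hypothetical; the actual test added in this PR lives in tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py.

```python
# Hypothetical sketch of the reviewer's suggestion: submit more
# requests than there are batch slots, so a slot-release bug would
# hang or fail rather than pass silently.
def test_exhausts_batch_slots(llm, max_batch_size: int):
    prompts = ["Hello, world"] * (max_batch_size * 2)
    # If finished requests never release their slots, the requests
    # beyond max_batch_size can never be scheduled.
    results = [llm.generate(p) for p in prompts]
    assert len(results) == len(prompts)
```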
Force-pushed bf6dca6 to f8ae103
@pcastonguay Good point, added a test.
Force-pushed f8ae103 to 401b0d3
/bot run
PR_Github #11975 [ run ] triggered by Bot
PR_Github #11975 [ run ] completed with state
/bot run
PR_Github #11978 [ run ] triggered by Bot
PR_Github #11978 [ run ] completed with state
Force-pushed 401b0d3 to b5f4652
/bot run
Walkthrough

The updates introduce improved handling for speculative decoding and resource management in the model execution loop, specifically accounting for n-gram decoding modes. A new integration test verifies batch slot release in disaggregated speculative decoding scenarios, and the test list is updated to include this new case for specific GPU configurations.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Test as test_disaggregated_spec_dec_batch_slot_limit
    participant MPI as MPIPoolExecutor
    participant Worker1 as ContextWorker
    participant Worker2 as GenerationWorker
    participant Model as ModelEngine
    Test->>MPI: Launch context and generation workers
    MPI->>Worker1: Start context worker
    MPI->>Worker2: Start generation worker
    Test->>Worker1: Send context-only requests
    Worker1->>Model: Process context requests
    Model-->>Worker1: Return context results
    Worker1-->>Test: Return context results
    Test->>Worker2: Send generation-only requests
    Worker2->>Model: Process generation requests (speculative decoding)
    Model-->>Worker2: Return generation results
    Worker2-->>Test: Return generation results
    Test->>MPI: Signal termination
    MPI->>Worker1: Terminate
    MPI->>Worker2: Terminate
```
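In code, the flow in the diagram looks roughly like the following sketch. It uses mpi4py's `MPIPoolExecutor`, as the test does; the worker bodies are hypothetical placeholders, and the real implementations live in tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py.

```python
# Minimal sketch of the disaggregated flow shown above.
from mpi4py.futures import MPIPoolExecutor

def run_context_worker(prompts):
    # Hypothetical: run context-only requests and return whatever
    # state the generation side needs (e.g. KV-cache handles).
    raise NotImplementedError

def run_generation_worker(context_results):
    # Hypothetical: run generation-only requests with speculative
    # decoding enabled, consuming the context results.
    raise NotImplementedError

def run_disaggregated(prompts):
    # One worker per role; the executor tears both down on exit.
    with MPIPoolExecutor(max_workers=2) as pool:
        ctx_results = pool.submit(run_context_worker, prompts).result()
        return pool.submit(run_generation_worker, ctx_results).result()
```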
PR_Github #12096 [ run ] triggered by Bot
Actionable comments posted: 0
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
1658-1664: Fix line length and improve resource management implementation.

The expansion to iterate over both `SEQ_SLOT_MANAGER` and `SPEC_RESOURCE_MANAGER` is a necessary improvement for proper resource cleanup in disaggregated speculative decoding scenarios. However, there's a line length issue that needs to be addressed.

Apply this diff to fix the line length issue:

```diff
-        for resource_mgr_type in (
-                ResourceManagerType.SEQ_SLOT_MANAGER,
-                ResourceManagerType.SPEC_RESOURCE_MANAGER):
-            if resource_mgr_type in self.resource_manager.resource_managers and self.resource_manager.resource_managers[
-                    resource_mgr_type] is not None:
-                self.resource_manager.resource_managers[
-                    resource_mgr_type].free_resources(req)
+        for resource_mgr_type in (
+                ResourceManagerType.SEQ_SLOT_MANAGER,
+                ResourceManagerType.SPEC_RESOURCE_MANAGER):
+            resource_mgr = self.resource_manager.resource_managers.get(
+                resource_mgr_type)
+            if resource_mgr is not None:
+                resource_mgr.free_resources(req)
```

This refactoring also improves readability by reducing redundant dictionary lookups.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
- tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3 hunks)
- tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (6)
tensorrt_llm/_torch/speculative/interface.py (1)
- is_ngram (37-38)
tensorrt_llm/llmapi/llm_args.py (3)
- spec_dec_mode (289-296)
- spec_dec_mode (341-346)
- spec_dec_mode (443-448)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (7)
- ResourceManagerType (43-48)
- free_resources (74-75)
- free_resources (424-425)
- free_resources (958-962)
- free_resources (1055-1057)
- free_resources (1123-1126)
- free_resources (1211-1212)
tensorrt_llm/_torch/speculative/eagle3.py (1)
- free_resources (62-63)
tensorrt_llm/_torch/speculative/mtp.py (1)
- free_resources (70-74)
tensorrt_llm/_torch/pyexecutor/seq_slot_manager.py (1)
- free_resources (26-27)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py
1661-1661: Line too long (128 > 120)
(E501)
🔇 Additional comments (6)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
899-902: LGTM: Well-implemented defensive programming for n-gram mode detection.

The code correctly uses `hasattr()` to safely check for the `spec_config` attribute before accessing it, preventing potential AttributeError exceptions. The logic chain properly validates that both the config exists and the mode is n-gram.
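As a standalone illustration, the defensive pattern under review reduces to the sketch below. The `model_engine` object and its attributes mirror the review snippets; this is an illustration, not the exact source.

```python
# Minimal sketch of the defensive n-gram check discussed above.
def detect_ngram_mode(model_engine) -> bool:
    return (
        hasattr(model_engine, "spec_config")          # attribute may not exist
        and model_engine.spec_config is not None      # config may be unset
        and model_engine.spec_config.spec_dec_mode.is_ngram()
    )
```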
925-925: LGTM: Logical extension of draft request preparation condition.

The addition of `is_ngram` to the condition correctly ensures that draft requests are prepared for n-gram decoding modes even when there's no separate draft model engine, which aligns with the PR objective of fixing spec decode functionality.

tests/integration/test_lists/test-db/l0_h100.yml (1)

69-69: LGTM! Test case properly added to H100 test suite.

The new test case for speculative decoding with batch slot limits is correctly added to the H100 GPU test configuration. The parameterization matches the expected test function signature.
tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3)
15-15: LGTM! Import correctly added for the new test.

The `EagleDecodingConfig` import is needed for the new speculative decoding test function.

37-41: LGTM! Model paths correctly added for new test models.

The new model paths for Llama-3.1-8B-Instruct and EAGLE3-LLaMA3.1-Instruct-8B are properly structured and follow the existing pattern in the function.

322-421: Approved: Model path references for Llama-3.1-8B-Instruct and EAGLE3-LLaMA3.1-Instruct-8B are valid.

Model names used in `test_disaggregated_spec_dec_batch_slot_limit` match directory names referenced throughout integration and unit tests as well as examples under `LLM_MODELS_ROOT`. The lookup via `os.environ["LLM_MODELS_ROOT"]` will resolve correctly in CI. No further action required.
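The lookup pattern described here is roughly the following; note the exact subdirectory layout under `LLM_MODELS_ROOT` is an assumption for illustration, not taken from the test source.

```python
import os

# Resolve model checkpoints relative to the CI models root.
# Subdirectory names below are assumed, based on the model names above.
models_root = os.environ["LLM_MODELS_ROOT"]
target_model = os.path.join(models_root, "Llama-3.1-8B-Instruct")
draft_model = os.path.join(models_root, "EAGLE3-LLaMA3.1-Instruct-8B")
```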
PR_Github #12096 [ run ] completed with state
Force-pushed b5f4652 to a8d164d
/bot run
Actionable comments posted: 0
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
899-902: Good defensive programming with a minor formatting issue.

The logic correctly detects n-gram speculative decoding mode with proper null checks. However, the line length exceeds the 120-character limit.

Consider breaking the long line for better readability:

```diff
-        is_ngram = hasattr(
-            self.model_engine, "spec_config"
-        ) and self.model_engine.spec_config is not None and self.model_engine.spec_config.spec_dec_mode.is_ngram(
-        )
+        is_ngram = (
+            hasattr(self.model_engine, "spec_config")
+            and self.model_engine.spec_config is not None
+            and self.model_engine.spec_config.spec_dec_mode.is_ngram()
+        )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
- tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3 hunks)
- tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/integration/test_lists/test-db/l0_h100.yml
- tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (6)
tensorrt_llm/_torch/speculative/interface.py (1)
- is_ngram (37-38)
tensorrt_llm/llmapi/llm_args.py (3)
- spec_dec_mode (289-296)
- spec_dec_mode (341-346)
- spec_dec_mode (443-448)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (7)
- ResourceManagerType (43-48)
- free_resources (74-75)
- free_resources (424-425)
- free_resources (958-962)
- free_resources (1055-1057)
- free_resources (1123-1126)
- free_resources (1211-1212)
tensorrt_llm/_torch/speculative/eagle3.py (1)
- free_resources (62-63)
tensorrt_llm/_torch/speculative/mtp.py (1)
- free_resources (70-74)
tensorrt_llm/_torch/pyexecutor/seq_slot_manager.py (1)
- free_resources (26-27)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py
1661-1661: Line too long (128 > 120)
(E501)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
925-925: Correctly extends draft request preparation to include n-gram mode.

The updated condition properly ensures that draft requests are prepared for n-gram speculative decoding scenarios, even when there's no separate draft model engine.

1658-1664: Proper resource management for disaggregated speculative decoding.

The extended resource freeing logic correctly addresses the PR objective by ensuring both sequence slots and speculative decoding resources are properly released for finished context-only requests. This prevents resource leaks in disaggregated serving scenarios.
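A minimal sketch of this freeing pattern, assuming a `resource_managers` dict keyed by `ResourceManagerType` (names mirror the review snippets; the enum values below are invented for illustration and are not the real ones from resource_manager.py):

```python
from enum import Enum

class ResourceManagerType(Enum):
    # Hypothetical values; the real enum lives in
    # tensorrt_llm/_torch/pyexecutor/resource_manager.py (43-48).
    SEQ_SLOT_MANAGER = "seq_slot_manager"
    SPEC_RESOURCE_MANAGER = "spec_resource_manager"

def free_finished_context_request(resource_managers: dict, req) -> None:
    """Release both the batch slot and any speculative-decoding
    resources held by a finished context-only request."""
    for mgr_type in (ResourceManagerType.SEQ_SLOT_MANAGER,
                     ResourceManagerType.SPEC_RESOURCE_MANAGER):
        # dict.get avoids the double lookup flagged in the review.
        mgr = resource_managers.get(mgr_type)
        if mgr is not None:
            mgr.free_resources(req)
```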
PR_Github #12101 [ run ] triggered by Bot
PR_Github #12101 [ run ] completed with state
/bot run --disable-fail-fast
Force-pushed a8d164d to 5d0e86f
Actionable comments posted: 0
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
1658-1664: LGTM: Comprehensive resource management for disaggregated speculative decoding.

The extension to free resources from both `SEQ_SLOT_MANAGER` and `SPEC_RESOURCE_MANAGER` is well-implemented. The code properly checks for the presence and non-nullity of each resource manager before calling `free_resources`, which prevents potential errors and ensures comprehensive cleanup for context-only requests in disaggregated scenarios.

Note: There's a minor formatting issue flagged by static analysis (line too long), but this doesn't affect functionality:

```diff
-            if resource_mgr_type in self.resource_manager.resource_managers and self.resource_manager.resource_managers[
-                    resource_mgr_type] is not None:
+            if (resource_mgr_type in self.resource_manager.resource_managers
+                    and self.resource_manager.resource_managers[resource_mgr_type] is not None):
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
- tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3 hunks)
- tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/integration/test_lists/test-db/l0_h100.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (6)
tensorrt_llm/_torch/speculative/interface.py (1)
- is_ngram (37-38)
tensorrt_llm/llmapi/llm_args.py (3)
- spec_dec_mode (289-296)
- spec_dec_mode (341-346)
- spec_dec_mode (444-449)
tensorrt_llm/_torch/pyexecutor/resource_manager.py (7)
- ResourceManagerType (43-48)
- free_resources (74-75)
- free_resources (424-425)
- free_resources (973-977)
- free_resources (1070-1072)
- free_resources (1138-1141)
- free_resources (1226-1227)
tensorrt_llm/_torch/speculative/eagle3.py (1)
- free_resources (62-63)
tensorrt_llm/_torch/speculative/mtp.py (1)
- free_resources (70-74)
tensorrt_llm/_torch/pyexecutor/seq_slot_manager.py (1)
- free_resources (26-27)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py
1661-1661: Line too long (128 > 120)
(E501)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
899-902: LGTM: Well-implemented n-gram detection logic.

The logic correctly checks for the presence of `spec_config`, ensures it's not None, and calls the `is_ngram()` method to determine if the speculative decoding mode is n-gram. This follows the pattern established in the relevant code snippets and integrates well with the existing condition at line 925.

925-925: LGTM: Proper integration of n-gram condition.

The addition of `is_ngram` to the existing condition ensures that draft request preparation occurs for n-gram decoding modes, which aligns with the PR objective of fixing slot release with spec decode + disagg.
PR_Github #12116 [ run ] triggered by Bot
PR_Github #12116 [ run ] completed with state
Force-pushed 5d0e86f to a339a93
/bot run --stage-list "H100_PCIe-PyTorch-1"
Signed-off-by: Iman Tabrizian <[email protected]>
Force-pushed a339a93 to 8926d50
Actionable comments posted: 0
🧹 Nitpick comments (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (1)
1658-1664: Correctly extends resource freeing to include speculative resources.

The logic properly iterates over both `SEQ_SLOT_MANAGER` and `SPEC_RESOURCE_MANAGER` to ensure all resources are freed for finished context-only requests. This addresses the slot release issue mentioned in the PR objectives.

However, please address the line length issue flagged by static analysis:

```diff
-            if resource_mgr_type in self.resource_manager.resource_managers and self.resource_manager.resource_managers[
-                    resource_mgr_type] is not None:
+            resource_mgr = self.resource_manager.resource_managers.get(resource_mgr_type)
+            if resource_mgr is not None:
                 self.resource_manager.resource_managers[
                     resource_mgr_type].free_resources(req)
```

Or alternatively:

```diff
-            if resource_mgr_type in self.resource_manager.resource_managers and self.resource_manager.resource_managers[
-                    resource_mgr_type] is not None:
-                self.resource_manager.resource_managers[
-                    resource_mgr_type].free_resources(req)
+            resource_mgr = self.resource_manager.resource_managers.get(resource_mgr_type)
+            if resource_mgr is not None:
+                resource_mgr.free_resources(req)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- tensorrt_llm/_torch/pyexecutor/py_executor.py (3 hunks)
- tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (3 hunks)
- tests/integration/test_lists/test-db/l0_h100.yml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/integration/test_lists/test-db/l0_h100.yml
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py
🧰 Additional context used
🧬 Code Graph Analysis (1)
tensorrt_llm/_torch/pyexecutor/py_executor.py (5)
tensorrt_llm/_torch/speculative/interface.py (1)
is_ngram(37-38)tensorrt_llm/llmapi/llm_args.py (3)
spec_dec_mode(289-296)spec_dec_mode(341-346)spec_dec_mode(444-449)tensorrt_llm/_torch/pyexecutor/resource_manager.py (7)
ResourceManagerType(43-48)free_resources(74-75)free_resources(424-425)free_resources(973-977)free_resources(1070-1072)free_resources(1138-1141)free_resources(1226-1227)tensorrt_llm/_torch/speculative/mtp.py (1)
free_resources(70-74)tensorrt_llm/_torch/pyexecutor/seq_slot_manager.py (1)
free_resources(26-27)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/pyexecutor/py_executor.py
1661-1661: Line too long (128 > 120)
(E501)
🔇 Additional comments (2)
tensorrt_llm/_torch/pyexecutor/py_executor.py (2)
899-902: Good defensive programming for n-gram detection.

The implementation properly handles potential attribute errors by checking for the existence of `spec_config` before accessing it and ensuring it's not None before calling methods on it.

925-925: Correct addition of n-gram condition for draft request preparation.

The logic properly extends the condition to include n-gram decoding mode, ensuring draft requests are prepared when needed for speculative decoding scenarios.
PR_Github #12144 [ run ] triggered by Bot
PR_Github #12144 [ run ] completed with state
/bot reuse-pipeline
PR_Github #12153 [ reuse-pipeline ] triggered by Bot
PR_Github #12153 [ reuse-pipeline ] completed with state
[fix] Release slots with spec decode + disagg
Summary by CodeRabbit

New Features
- Added an integration test that verifies batch slot release in disaggregated speculative decoding scenarios, and updated the H100 test list to include it.

Bug Fixes
- Batch slots and speculative decoding resources are now properly released for finished context-only requests, fixing slot exhaustion with speculative decoding in disaggregated serving.