
Conversation

@Funatiq (Collaborator) commented Jun 18, 2025

Description

Please see commit messages for details.

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.
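
For example, combining the flags above (the stage and GPU names reuse the placeholders from this help text):

/bot run
/bot run --disable-fail-fast --stage-list "A10-1"
/bot run --gpu-type "A30, H100_PCIe"
/bot run --extra-stage "H100_PCIe-[Post-Merge]-1"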

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break the top of tree.

@Funatiq (Collaborator, Author) commented Jun 18, 2025

/bot run

@tensorrt-cicd (Collaborator): PR_Github #9357 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #9357 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #6868 completed with status: 'FAILURE'

@Funatiq Funatiq force-pushed the dev/cache_indir branch 2 times, most recently from 15bfa5c to b57c672, on June 18, 2025 09:49
@Funatiq (Collaborator, Author) commented Jun 18, 2025

/bot run

@tensorrt-cicd (Collaborator): PR_Github #9365 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #9365 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #6872 completed with status: 'FAILURE'

@Funatiq (Collaborator, Author) commented Jun 18, 2025

/bot run

@tensorrt-cicd (Collaborator): PR_Github #9390 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #9390 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #6889 completed with status: 'FAILURE'

Funatiq added 3 commits June 19, 2025 14:36
- Refactored `DecoderBuffers` to eliminate cache indirection inputs and outputs, simplifying buffer management.
- Updated function signatures across various components to use `DraftBuffers` and `DecoderState` for cache indirection handling.
- Enhanced `DecoderState` with methods for setting up and retrieving cache indirection buffers.
- Adjusted Python bindings to reflect the removal of cache indirection from the `DecoderBatchInput` and `DecoderBatchOutput` classes.

These changes improve the clarity and maintainability of the decoding process in the batch manager.

Signed-off-by: Robin Kobus <[email protected]>
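
To make the new shape of the API concrete, here is a minimal, self-contained sketch of a `DecoderState` that owns its cache indirection buffers, rather than having them threaded through `DecoderBatchInput`/`DecoderBatchOutput`. Only the idea of a setup method plus getters is taken from this PR; the parameter list, getter names, and the plain-vector tensor stand-in are assumptions for illustration:

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for the runtime tensor type; the real code uses
// tensorrt_llm::runtime::ITensor. Shapes follow [batchSize, beamWidth, maxSeqLen].
using TensorPtr = std::shared_ptr<std::vector<std::int32_t>>;

class DecoderState
{
public:
    // Allocates the input/output cache indirection buffers once, so callers
    // no longer pass them in through the decoder batch input/output objects.
    void setupCacheIndirection(std::int32_t maxBatchSize, std::int32_t maxBeamWidth, std::int32_t maxSeqLen)
    {
        auto const size = static_cast<std::size_t>(maxBatchSize) * maxBeamWidth * maxSeqLen;
        mCacheIndirectionInput = std::make_shared<std::vector<std::int32_t>>(size, 0);
        mCacheIndirectionOutput = std::make_shared<std::vector<std::int32_t>>(size, 0);
    }

    // Getters assumed for illustration; consumers that previously read the
    // cache indirection from DecoderBuffers would query the state instead.
    [[nodiscard]] TensorPtr getCacheIndirectionInput() const { return mCacheIndirectionInput; }
    [[nodiscard]] TensorPtr getCacheIndirectionOutput() const { return mCacheIndirectionOutput; }

private:
    TensorPtr mCacheIndirectionInput;
    TensorPtr mCacheIndirectionOutput;
};
```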
- Updated the `MakeDecodingBatchInputOutput` class to return only the `Input` object, removing the `Output` object from its interface.
- Modified the `GptDecoderBatched` class to adjust function signatures, eliminating the `Output` parameter from `forwardAsync`, `forward`, and `forwardDispatch` methods.
- Removed the `Output` class definition from the `iGptDecoderBatched` interface and updated related Python bindings accordingly.
- Adjusted tests to reflect changes in the decoder input/output handling.

These changes enhance the clarity and efficiency of the decoding process in the batch manager.

Signed-off-by: Robin Kobus <[email protected]>
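
Sketched in isolation, the interface narrowing described above looks roughly like this; `Input`, `forwardAsync`, and `forwardDispatch` are named in the commit message, while the bodies and member details are placeholders:

```cpp
struct Input
{
    // batched logits, active batch slots, etc. (details elided)
};

class GptDecoderBatched
{
public:
    // Before: void forwardAsync(Output& output, Input const& input);
    // After: results are written into the decoder state that this class
    // owns, so the Output parameter disappears from the whole call chain.
    void forwardAsync(Input const& input)
    {
        forwardDispatch(input); // same narrowing applies to forward/forwardDispatch
    }

private:
    void forwardDispatch(Input const& /*input*/)
    {
        // kernel launches elided
    }
};
```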
…quests

- Eliminated the `resetCacheIndirection` method from `TransformerBuffers` and related calls in `RuntimeBuffers` and Python bindings.
- Updated `CreateNewDecoderRequests` to handle cache indirection directly, ensuring proper initialization of cache tensors in the decoder state.
- These changes streamline buffer management and enhance the clarity of the decoding process in the batch manager.

Signed-off-by: Robin Kobus <[email protected]>
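
For background on what this commit initializes: the cache indirection is the beam-search lookup table recording, for each (request, beam, position), which beam's KV-cache entries to read. A brand-new request has no diverged beams yet, so every position reads from beam 0 and initialization amounts to zero-filling that request's slice. A standalone sketch of the idea (the layout and the `initForNewRequest` helper are illustrative, not the actual TensorRT-LLM code):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Cache indirection laid out as [maxBatchSize, beamWidth, maxSeqLen]:
// entry (slot, beam, pos) names the beam whose KV cache holds token pos.
struct CacheIndirection
{
    std::int32_t beamWidth;
    std::int32_t maxSeqLen;
    std::vector<std::int32_t> data; // flattened buffer

    // A new request has no diverged beams, so every position reads from
    // beam 0 -- i.e. the request's slice is zero-filled.
    void initForNewRequest(std::int32_t batchSlot)
    {
        auto const sliceSize = static_cast<std::ptrdiff_t>(beamWidth) * maxSeqLen;
        std::fill_n(data.begin() + batchSlot * sliceSize, sliceSize, 0);
    }
};
```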
@Funatiq (Collaborator, Author) commented Jun 19, 2025

/bot run

@tensorrt-cicd (Collaborator): PR_Github #9514 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #9514 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #6979 completed with status: 'FAILURE'

@Funatiq (Collaborator, Author) commented Jun 19, 2025

/bot run

@tensorrt-cicd (Collaborator): PR_Github #9540 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator): PR_Github #9540 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #6999 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@Funatiq Funatiq requested review from dcampora, stnie and Copilot June 20, 2025 06:49
@Funatiq Funatiq marked this pull request as ready for review June 20, 2025 06:49
@Funatiq Funatiq requested a review from a team as a code owner June 20, 2025 06:49
@Copilot Copilot AI (Contributor) left a comment


Pull Request Overview

This PR refactors the cache indirection management in the decoder state along with associated API and binding changes. Key changes include removing the direct assignment of cache indirection in decoder forward functions, introducing a dedicated setupCacheIndirection method in the decoder state, and updating various bindings and buffer management interfaces accordingly.

  • Refactored methods in gptDecoderBatched to remove explicit cache indirection parameters.
  • Added setupCacheIndirection in decoderState and updated related binding interfaces.
  • Removed obsolete cache indirection functionality from transformerBuffers and adjusted downstream consumers.

Reviewed Changes

Copilot reviewed 20 out of 23 changed files in this pull request and generated 1 comment.

Summary per file:
- cpp/tensorrt_llm/runtime/gptDecoderBatched.cpp: Removed cache indirection assignments and updated forward dispatch signatures.
- cpp/tensorrt_llm/runtime/decoderState.cpp: Added setupCacheIndirection and related getters for cache indirection.
- cpp/tensorrt_llm/pybind/runtime/bindings.cpp & batch_manager/bindings.cpp: Updated bindings to reflect API changes; removed the cache_indirection field from the Python interface.
- cpp/tensorrt_llm/batch_manager/*: Refactored DecoderBuffers, makeDecodingBatchInputOutput, and related files to remove cache indirection dependencies.
- cpp/include/tensorrt_llm/*: Adjusted interface definitions to remove obsolete cache indirection parameters.
Comments suppressed due to low confidence (2)

cpp/tensorrt_llm/runtime/gptDecoderBatched.cpp:161

  • The removal of direct cache indirection assignments in prepareForward relies on the new setupCacheIndirection. Please ensure that all consumers of these tensors are updated to use the new mechanism.
    if (maxBeamWidth > 1)

cpp/tensorrt_llm/batch_manager/makeDecodingBatchInputOutput.cpp:83

  • The removal of cacheIndirection assignment from the decoder batch input may affect downstream consumers expecting this field. Verify that this change is fully aligned with the updated cache management design.
    auto decodingInput = std::make_unique<tr::decoder_batch::Input>(logitsVec, maxActiveDecoderSteps);

@dcampora dcampora (Collaborator) left a comment


LGTM

@dcampora dcampora merged commit e2a8cbc into NVIDIA:main Jun 24, 2025
3 checks passed
@Funatiq Funatiq deleted the dev/cache_indir branch June 24, 2025 16:44
dominicshanshan pushed commits referencing this pull request to dominicshanshan/TensorRT-LLM between Jul 9 and Jul 11, 2025.