Conversation


@Funatiq Funatiq commented Jun 16, 2025

Description

  • Create LlmRequests in the test first.
  • Use createDecoderRequests from createNewDecoderRequests instead of the lower-level newRequest function.

Please see commit messages for details. This should enable simplifying createNewDecoderRequests.

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.
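For example, to run only one test stage on a specific GPU type with fail fast disabled (the stage and GPU names here are just the examples from the help text above, not an exhaustive list):

/bot run --stage-list "A10-1" --gpu-type "A30" --disable-fail-fast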

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.

@Funatiq Funatiq requested review from Copilot and dcampora June 16, 2025 08:36
@Funatiq commented Jun 16, 2025

/bot run


@Copilot Copilot AI left a comment


Pull Request Overview

This PR refactors the decoder test workflow to unify request creation and end-to-end integration. Key changes include:

  • Replacing the legacy prepareRequests function with createLlmRequest(s) to build request objects.
  • Updating the tests to call the new request creation functions and adjusting the related newRequests calls.
  • Adding a check for mSeqSlot in the decoder requests creation and removing a duplicated function signature in the header.

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

Reviewed files:

  • cpp/tests/runtime/gptDecoderBatchedTest.cpp: Refactored test request creation with new LLM request functions and updated newRequests calls.
  • cpp/tensorrt_llm/batch_manager/createNewDecoderRequests.cpp: Added a check to ensure that mSeqSlot is set before proceeding with request initialization.
  • cpp/include/tensorrt_llm/batch_manager/createNewDecoderRequests.h: Removed duplicate createDecoderRequests declaration to reduce redundancy.
Comments suppressed due to low confidence (1)

cpp/tensorrt_llm/batch_manager/createNewDecoderRequests.cpp:663

  • Ensure that this check for mSeqSlot includes a descriptive error message or logging to aid in debugging if the invariant is violated.

    TLLM_CHECK(llmReq->mSeqSlot.has_value());

@tensorrt-cicd

PR_Github #9004 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #9004 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #6574 completed with status: 'FAILURE'

@Funatiq Funatiq force-pushed the dev/refactor_decoder_test branch from 8816bf8 to cfe0d45 Compare June 16, 2025 18:54
@Funatiq commented Jun 16, 2025

/bot run

@tensorrt-cicd

PR_Github #9060 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #9060 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #6623 completed with status: 'FAILURE'

Funatiq added 6 commits June 17, 2025 07:00
- Introduced a new helper function, prepareRequest, to encapsulate the logic for creating individual decoder requests.
- Updated the prepareRequests function to utilize the new prepareRequest function, improving code readability and maintainability.
- This refactor enhances the clarity of request handling within the decoder batch processing.

Signed-off-by: Robin Kobus <[email protected]>
- Removed the old prepareRequests function, consolidating request preparation logic into a single location.

Signed-off-by: Robin Kobus <[email protected]>
- Moved the request preparation logic to a single function to improve code readability and maintainability.
- Moved prepareRequest function into newRequests function.
- Updated all calls to newRequests throughout the test file to align with the new signature, improving code clarity and maintainability.

Signed-off-by: Robin Kobus <[email protected]>
- Instead of a custom method to prepare requests in gptDecoderBatchedTest, create LlmRequests.
- Use the createDecoderRequests method instead of newRequest in gptDecoderBatchedTest.

Signed-off-by: Robin Kobus <[email protected]>
- Introduced createLlmRequests function to streamline the creation of LlmRequest objects.
- Updated newRequests function to accept a vector of LlmRequest pointers, improving clarity and maintainability.
- Refactored test cases to utilize the new createLlmRequests function, ensuring consistent request handling across tests.

Signed-off-by: Robin Kobus <[email protected]>
@Funatiq Funatiq force-pushed the dev/refactor_decoder_test branch from cfe0d45 to 157945f Compare June 17, 2025 05:00
@Funatiq commented Jun 17, 2025

/bot run

@tensorrt-cicd

PR_Github #9132 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #9132 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #6685 completed with status: 'SUCCESS'

@Funatiq Funatiq marked this pull request as ready for review June 17, 2025 09:51

@dcampora dcampora left a comment


LGTM

@dcampora dcampora merged commit dc3861b into NVIDIA:main Jun 17, 2025
3 checks passed
@Funatiq Funatiq deleted the dev/refactor_decoder_test branch June 17, 2025 10:06