Conversation

@lfr-0531 commented May 21, 2025

Description

There are two issues:

  • In the context server, max_new_tokens=1, so _handle_stop_criteria returns True in the MTP sampler and request.py_draft_tokens is never updated. As a result, the draft tokens are incorrect in the first iteration of the generation requests. Fixed in the MTP sampler.
  • When transferring parameters from the context server to the generation server, the scheduled generation requests are updated after the resources have been prepared, including the draft tokens. As a result, no extra space was added in the KV cache manager for those draft tokens before the first generation forward. Fixed in py_executor. (A sketch of both fixes follows below.)
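
A minimal sketch of the two fixes, using simplified stand-ins (the class and function names below are illustrative, not the actual TensorRT-LLM code):

# Illustrative sketch only; names and structure are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Request:
    max_new_tokens: int
    num_generated: int = 0
    py_draft_tokens: List[int] = field(default_factory=list)

def handle_stop_criteria(req: Request) -> bool:
    # True when the request is finished, e.g. on the context server
    # running with max_new_tokens=1.
    return req.num_generated >= req.max_new_tokens

def mtp_sample(req: Request, draft_tokens: List[int]) -> None:
    # Fix 1 (MTP sampler): record the draft tokens *before* the early
    # return on the stop criteria, so a request that finishes on the
    # context server still hands correct draft tokens to the
    # generation server.
    req.py_draft_tokens = draft_tokens
    if handle_stop_criteria(req):
        return
    # ... normal sampling path continues here ...

def prepare_generation_step(scheduled: List[Request], kv_cache) -> None:
    # Fix 2 (py_executor): requests transferred from the context
    # server receive their draft tokens only after resources were
    # prepared, so grow the KV cache by the draft length before the
    # first generation forward (kv_cache.grow is a hypothetical API).
    for req in scheduled:
        if req.py_draft_tokens:
            kv_cache.grow(req, len(req.py_draft_tokens))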

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
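
Example invocations, using only the options documented above:

/bot run
/bot run --disable-fail-fast --gpu-type "H100_PCIe"
/bot run --stage-list "A10-1"
/bot skip --comment "Docs-only change; skipping CI"
/bot reuse-pipeline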

@lfr-0531

/bot run

@tensorrt-cicd

PR_Github #6011 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #6011 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #4392 completed with status: 'FAILURE'

@lfr-0531

/bot run

@tensorrt-cicd

PR_Github #6044 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #6044 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #4416 completed with status: 'SUCCESS'

@Shixiaowei02

Please get the approval from the NVIDIA/trt-llm-torch-devs group. @lfr-0531

@pcastonguay left a comment

We should have a test that verifies this fixes the accuracy issue. We have some accuracy tests for DP; can we add coverage for PD+MTP?

@lfr-0531

> We should have a test that verifies this fixes the accuracy issue. We have some accuracy tests for DP; can we add coverage for PD+MTP?

Hi @pcastonguay, do you mean accuracy tests on some datasets, or just comparing the output texts? All of the disaggregated serving tests I have seen compare the output texts, and we already have a test for PD + MTP + attention DP + overlap scheduler:

"deepseek_v3_lite_fp8_tp1_attention_dp_overlap_one_mtp":

If the former, covering PD cases in our accuracy tests is not easy; it would be better to add that in a new PR.

@pcastonguay

> > We should have a test that verifies this fixes the accuracy issue. We have some accuracy tests for DP; can we add coverage for PD+MTP?
>
> Hi @pcastonguay, do you mean accuracy tests on some datasets, or just comparing the output texts? All of the disaggregated serving tests I have seen compare the output texts, and we already have a test for PD + MTP + attention DP + overlap scheduler:
>
> "deepseek_v3_lite_fp8_tp1_attention_dp_overlap_one_mtp":
>
> If the former, covering PD cases in our accuracy tests is not easy; it would be better to add that in a new PR.

The former. We already have accuracy tests with disagg: https://github.com/NVIDIA/TensorRT-LLM/blob/main/tests/integration/defs/accuracy/test_disaggregated_serving.py. Could we extend them to cover PD + MTP + attention DP + overlap? @Tabrizian for visibility.

@Tabrizian

@lfr-0531 You can specify the pytorch config needed for your setup here:

ctx_server_config = {
    "pytorch_backend_config": {
        "disable_overlap_scheduler": True
    }
}
gen_server_config = {
    "pytorch_backend_config": {
        "disable_overlap_scheduler": disable_overlap_scheduler
    }
}
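
For the PD+MTP case discussed above, the MTP settings could plausibly be added alongside that config. A hypothetical sketch (the speculative_config key names below are assumptions, not confirmed against the actual config schema):

# Hypothetical sketch; the speculative_config keys are assumptions.
ctx_server_config = {
    "pytorch_backend_config": {
        "disable_overlap_scheduler": True
    },
    "speculative_config": {
        "decoding_type": "MTP",
        "num_nextn_predict_layers": 1
    }
}
gen_server_config = {
    "pytorch_backend_config": {
        "disable_overlap_scheduler": disable_overlap_scheduler
    },
    "speculative_config": {
        "decoding_type": "MTP",
        "num_nextn_predict_layers": 1
    }
}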

@lfr-0531 lfr-0531 force-pushed the user/fanrongl/fix_pd_mtp branch from 1e219fc to 37f5b6a Compare May 26, 2025 13:45
@lfr-0531 lfr-0531 requested review from a team as code owners May 26, 2025 13:45
@lfr-0531

Thanks, @pcastonguay and @Tabrizian! I added the PD+MTP accuracy test.

@lfr-0531

/bot run

@tensorrt-cicd

PR_Github #6486 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #6486 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #4746 completed with status: 'FAILURE'

@lfr-0531

/bot run

@tensorrt-cicd

PR_Github #6489 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #6489 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #4749 completed with status: 'FAILURE'

@lfr-0531 lfr-0531 force-pushed the user/fanrongl/fix_pd_mtp branch from d52aa71 to 9b6f5e2 Compare May 28, 2025 08:23
@lfr-0531 lfr-0531 merged commit 380a5d1 into NVIDIA:main Jun 3, 2025
3 checks passed
lfr-0531 added a commit that referenced this pull request Jun 3, 2025
Signed-off-by: Fanrong Li <[email protected]>
darraghdog pushed a commit to darraghdog/TensorRT-LLM that referenced this pull request Jun 3, 2025
@Tabrizian

@lfr-0531 It looks like the new test that was added was never triggered in any of these pipelines. I believe you needed to trigger it manually.

@lfr-0531 commented Jun 6, 2025

> @lfr-0531 It looks like the new test that was added was never triggered in any of these pipelines. I believe you needed to trigger it manually.

Ah, I added them to the post-merge pipeline. Those tests are currently waived in nvbugs/5322354; let's wait for the mass integration, and I'll rerun the tests after that.

omera-nv pushed a commit to omera-nv/TensorRT-LLM that referenced this pull request Jun 7, 2025
@lfr-0531 lfr-0531 deleted the user/fanrongl/fix_pd_mtp branch June 27, 2025 12:43