feat: Add non-streaming support for trtllm serve bench script & fixed prompt and output token length #4971
Conversation
Force-pushed from e608ce7 to ed7a9b3
/bot run
PR_Github #7816 [ run ] triggered by Bot
Force-pushed from ed7a9b3 to 7e01ce2
/bot kill
Force-pushed from 7e01ce2 to ed51cb8
/bot run
PR_Github #7818 [ kill ] triggered by Bot
PR_Github #7819 [ ] completed
PR_Github #7816 [ run ] completed
PR_Github #7818 [ kill ] completed
Force-pushed from ed51cb8 to 20e51f1
/bot run
PR_Github #8078 [ run ] triggered by Bot
PR_Github #8078 [ run ] completed
Force-pushed from 62abe19 to 7e0bd8e
/bot run
PR_Github #8181 [ run ] triggered by Bot
PR_Github #8181 [ run ] completed
Not sure if tokenizing the random dataset is a good WAR. From my previous benchmark results, tokenization was not the CPU bottleneck for the max-throughput scenario (detokenization with streaming may be, though), but I am not sure how TTFT will be affected by omitting tokenization. A recent PR that omitted tokenization for decoding-server performance may be relevant; @kaiyux may know more context.
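A quick way to sanity-check the claim that tokenization is not the CPU bottleneck is to time the tokenizer in isolation. A minimal sketch, assuming a HuggingFace `transformers` tokenizer (`gpt2` here is only a stand-in for the model under test):

```python
# Minimal sketch: measure client-side tokenization throughput in isolation.
# Assumption: HuggingFace `transformers` is installed; `gpt2` is a stand-in tokenizer.
import time

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
prompt = "benchmark prompt " * 128  # synthetic multi-hundred-token prompt

n_iters = 1000
start = time.perf_counter()
for _ in range(n_iters):
    tokenizer(prompt)
elapsed = time.perf_counter() - start
print(f"{n_iters / elapsed:.0f} tokenizations/s ({elapsed / n_iters * 1e3:.2f} ms each)")
```

If the per-prompt cost comes out at microseconds to low milliseconds, omitting client-side tokenization should move TTFT very little at typical request rates.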
Force-pushed from 7e0bd8e to 4694214
/bot run
PR_Github #9103 [ run ] triggered by Bot
Force-pushed from 8b19246 to 0b01369
/bot run
PR_Github #9111 [ run ] triggered by Bot
PR_Github #9103 [ run ] completed
PR_Github #9112 [ run ] triggered by Bot
PR_Github #9111 [ run ] completed
PR_Github #9112 [ run ] completed
Force-pushed from 0b01369 to 3e78f58
/bot run
PR_Github #9128 [ run ] triggered by Bot
LGTM; leaving two comments for potential improvement.
PR_Github #9128 [ run ] completed
Force-pushed from 209f67f to 028dc94
/bot run
PR_Github #9191 [ run ] triggered by Bot
…unctions

- Introduced `no_kv_cache_reuse` parameter in `get_llm_args` and `serve` functions for better cache management.
- Updated `async_request_trt_llm`, `async_request_openai_completions`, and `async_request_openai_chat_completions` to accept a `streaming` flag, allowing for flexible response handling.
- Modified benchmark scripts to incorporate streaming functionality, enhancing performance testing capabilities.

Signed-off-by: Yi Zhang <[email protected]>
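A hypothetical sketch of how the `no_kv_cache_reuse` knob could be threaded through these functions; only the names `get_llm_args`, `serve`, and `no_kv_cache_reuse` come from the commit message above, and the bodies (including the `enable_block_reuse` mapping) are assumptions:

```python
# Hypothetical sketch; `get_llm_args`, `serve`, and `no_kv_cache_reuse` come from
# the commit message, the bodies and the `enable_block_reuse` mapping are assumptions.
def get_llm_args(model: str, no_kv_cache_reuse: bool = False, **kwargs) -> dict:
    return {
        "model": model,
        # Disabling block reuse keeps per-request prefill cost reproducible in benchmarks.
        "kv_cache_config": {"enable_block_reuse": not no_kv_cache_reuse},
        **kwargs,
    }


def serve(model: str, no_kv_cache_reuse: bool = False) -> None:
    llm_args = get_llm_args(model, no_kv_cache_reuse=no_kv_cache_reuse)
    print(llm_args)  # placeholder for building the OpenAI-compatible server from llm_args
```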
Force-pushed from 028dc94 to 94bbd52
/bot run
PR_Github #9293 [ run ] triggered by Bot
PR_Github #9293 [ run ] completed
I think tokenization has a very slight impact on TTFT. The key is that the random token ids are de-tokenized into a meaningless prompt before being sent to the TRT-LLM server; the server then re-tokenizes that meaningless prompt into a new sequence of token ids whose length differs from (and in most cases exceeds) the original random token ids, making the benchmark lose its value (mainly affecting the input length).
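The length drift is easy to reproduce offline. A minimal sketch, assuming a HuggingFace `transformers` tokenizer (`gpt2` as a stand-in):

```python
# Sketch: random token ids rarely survive a decode -> encode round trip intact.
# Assumption: HuggingFace `transformers` is installed; `gpt2` is a stand-in tokenizer.
import random

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
random_ids = [random.randrange(tokenizer.vocab_size) for _ in range(128)]

prompt = tokenizer.decode(random_ids)    # meaningless text the client sends
server_ids = tokenizer.encode(prompt)    # what the server tokenizes it back into
print(len(random_ids), len(server_ids))  # lengths usually differ (often longer)
```

This is why the PR keeps `input_ids` unchanged across the detokenize -> tokenize round trip; otherwise the effective input sequence length no longer matches what the benchmark requested.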
feat: Add non-streaming support for trtllm serve bench script & fixed prompt and output token length

- Updated `async_request_trt_llm`, `async_request_openai_completions`, and `async_request_openai_chat_completions` to accept a `streaming` flag, allowing for flexible response handling.
- Kept `input_ids` unchanged after detokenize -> tokenize.
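A hedged sketch of what the `streaming` flag could look like in one of these helpers. The function name mirrors the PR description, but the body (an `aiohttp` client speaking the OpenAI-style completions protocol) is illustrative, not the PR's actual diff:

```python
# Illustrative sketch; `async_request_openai_completions` and `streaming` mirror
# the PR description, everything else (endpoint, payload fields) is an assumption.
import json

import aiohttp


async def async_request_openai_completions(url: str, payload: dict, streaming: bool) -> str:
    payload = {**payload, "stream": streaming}
    async with aiohttp.ClientSession() as session:
        async with session.post(f"{url}/v1/completions", json=payload) as resp:
            if not streaming:
                # Non-streaming: one response body with the full completion.
                data = await resp.json()
                return data["choices"][0]["text"]
            # Streaming: accumulate server-sent-event chunks as they arrive.
            text = ""
            async for raw in resp.content:
                line = raw.decode().strip()
                if line.startswith("data:") and line != "data: [DONE]":
                    chunk = json.loads(line[len("data:"):])
                    text += chunk["choices"][0].get("text", "")
            return text
```

With `streaming=False` the helper issues a single blocking completion request, which is the non-streaming mode this PR adds to the benchmark script.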
PR title
Please write the PR title following this template:
[JIRA ticket link/nvbug link/github issue link][fix/feat/doc/infra/...] <summary of this PR>
For example, for a PR that adds a new cache-manager feature tracked by Jira ticket TRTLLM-1000, the title would be:
[TRTLLM-1000][feat] Support a new feature about cache manager
Description
Please briefly explain the issue and the solution.
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provides a user-friendly way for developers to interact with the Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]
Launch build/test pipelines. All previously running jobs will be killed.
--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run the L0 pre-merge pipeline.
--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".
kill
Kill all running builds associated with the pull request.
skip
skip --comment COMMENT
Skip testing for the latest commit on the pull request.
--comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
reuse-pipeline
Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
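For example, an invocation combining the documented flags might look like:

```
/bot run --disable-fail-fast --stage-list "A10-1"
```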