[Bugfix] Incorrect another MM data format in vllm bench throughput #26462
Conversation
Signed-off-by: Huy Do <[email protected]>
Code Review
This pull request fixes an issue with handling multimodal data in the run_vllm benchmark function, aligning its logic with run_vllm_async. The change correctly handles cases where multi_modal_data might be None, preventing potential downstream errors. I've included one suggestion to enhance the robustness of the data validation.
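As a rough illustration of the fix the review describes, here is a minimal sketch of None-safe prompt construction, assuming a benchmark request whose `prompt` is either raw text or a dict carrying `prompt_token_ids`, and whose `multi_modal_data` may be `None` for text-only datasets. `TextPrompt` and `TokensPrompt` are vLLM's input types; the helper name and field access are assumptions, not the PR's literal diff.

```python
from vllm.inputs import TextPrompt, TokensPrompt


def build_engine_prompt(request):  # hypothetical helper, for illustration only
    # Attach multi_modal_data only when it is actually present; forwarding an
    # explicit None downstream is what triggered the benchmark failure.
    mm_kwargs = {}
    if request.multi_modal_data is not None:
        mm_kwargs["multi_modal_data"] = request.multi_modal_data

    prompt = request.prompt
    if isinstance(prompt, dict) and "prompt_token_ids" in prompt:
        return TokensPrompt(prompt_token_ids=prompt["prompt_token_ids"],
                            **mm_kwargs)
    text = prompt["prompt"] if isinstance(prompt, dict) else prompt
    return TextPrompt(prompt=text, **mm_kwargs)
```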
Purpose
Same as #26395; I think that PR missed another spot, in the `run_vllm` function, as I still see the failure when running `vllm bench throughput` after the rebase: https://github.com/pytorch/pytorch-integration-testing/actions/runs/18361337410/job/52305453336#step:16:8356

cc @DarkLight1337
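For context, both benchmark paths build engine prompts from the sampled requests, and the earlier fix only reached the async side. The sketch below is illustrative only: the real functions in vLLM's benchmark code take more arguments, and `build_engine_prompt` is the hypothetical helper from the review sketch above.

```python
def run_vllm(requests, llm, sampling_params):
    # Sync path: the spot missed by #26395; it still forwarded
    # multi_modal_data=None until this PR aligned it with the async path.
    prompts = [build_engine_prompt(r) for r in requests]
    return llm.generate(prompts, sampling_params)


async def run_vllm_async(requests, engine, sampling_params):
    # Async path: already built prompts the None-safe way.
    prompts = [build_engine_prompt(r) for r in requests]
    ...
```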
Test Plan
Test Result