
Conversation

@varun-sundar-rabindranath (Contributor) commented Mar 21, 2025

FIX #15269

Repro command:

```bash
VLLM_USE_V1=0 python3 benchmarks/benchmark_throughput.py --model meta-llama/Llama-2-7b-hf --backend vllm --dataset ./ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 200 --max-loras 4 --max-lora-rank 8 --lora-path yard1/llama-2-7b-sql-lora-test --enable-lora --max-num-seqs 50
```

Bug:

```text
...
  File "/home/shadeform/precog/.venv/lib/python3.12/site-packages/vllm/lora/punica_wrapper/punica_gpu.py", line 66, in update_metadata
    self.prompt_mapping_meta.prepare_tensors(self.sampler_indices)
  File "/home/shadeform/precog/.venv/lib/python3.12/site-packages/vllm/lora/ops/triton_ops/lora_kernel_metadata.py", line 76, in prepare_tensors
    self.token_lora_mapping[:num_tokens].copy_(token_lora_mapping,
RuntimeError: The size of tensor a (50) must match the size of tensor b (56) at non-singleton dimension 0
```

PR that introduced the bug: #14685

Affects: the V0 engine. The V1 engine is fine.
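
For intuition, here is a minimal, self-contained sketch (plain PyTorch with illustrative names, not the actual vLLM code) of why the copy fails when the metadata buffer is sized by max_num_seqs:

```python
import torch

max_num_seqs = 50
cudagraph_capture_size = 56  # V0 capture sizes can exceed max_num_seqs

# Metadata buffer sized by max_num_seqs (the post-#14685 behavior).
token_lora_mapping_buffer = torch.empty(max_num_seqs, dtype=torch.long)
# A capture-sized batch arrives with more entries than the buffer holds.
token_lora_mapping = torch.zeros(cudagraph_capture_size, dtype=torch.long)

num_tokens = token_lora_mapping.size(0)
# The slice clamps to 50 elements while the source has 56, so copy_ raises:
# RuntimeError: The size of tensor a (50) must match the size of tensor b (56)
token_lora_mapping_buffer[:num_tokens].copy_(token_lora_mapping)
```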

Fix / Why?:
In V0, during cudagraph capture, the CUDA graph capture size can be greater than the max_num_seqs setting. #14685 assumes that max_num_seqs is always respected; this assumption holds for V1 but not for V0.
The line where the error occurs deals with the LogitsProcessor. Before #14685, `_sampler_indices`, allocated as

```python
self._sampler_indices = torch.empty(max_num_batched_tokens,
```

was used in place of `token_lora_mapping`, so the issue was handled implicitly by allocating a buffer as big as max_num_batched_tokens. We apply the same fix here.
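
A minimal sketch of the idea, again with illustrative names (prepare_tensors here stands in for the method of the same name in lora_kernel_metadata.py), assuming max_num_batched_tokens upper-bounds every cudagraph capture size:

```python
import torch

max_num_seqs = 50
max_num_batched_tokens = 8192  # >= any V0 cudagraph capture size

# Size the metadata buffer by max_num_batched_tokens, mirroring how
# _sampler_indices was allocated before #14685, instead of by max_num_seqs.
token_lora_mapping_buffer = torch.empty(max_num_batched_tokens, dtype=torch.long)

def prepare_tensors(buffer: torch.Tensor, token_lora_mapping: torch.Tensor) -> None:
    # Copy the per-token LoRA ids into the front of the persistent buffer.
    num_tokens = token_lora_mapping.size(0)
    buffer[:num_tokens].copy_(token_lora_mapping)

# A capture-sized batch of 56 (> max_num_seqs) now fits without error.
prepare_tensors(token_lora_mapping_buffer, torch.zeros(56, dtype=torch.long))
```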

Varun Sundar Rabindranath added 3 commits March 21, 2025 16:27
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
@github-actions (bot) commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@varun-sundar-rabindranath (Contributor, Author) commented

Please take a look @jeejeelee. Thanks! 🙌

Signed-off-by: Varun Sundar Rabindranath <[email protected]>
@jeejeelee (Collaborator) left a comment

After the test modifications, overall LGTM

Signed-off-by: Varun Sundar Rabindranath <[email protected]>
@DarkLight1337 enabled auto-merge (squash) March 22, 2025 06:44
@github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) Mar 22, 2025
@vllm-bot merged commit 8a8b30e into vllm-project:main Mar 22, 2025
44 of 50 checks passed
erictang000 pushed a commit to erictang000/vllm that referenced this pull request Mar 25, 2025
… capture sizes (vllm-project#15308)

Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
… capture sizes (vllm-project#15308)

Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Louis Ulmer <[email protected]>
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
… capture sizes (vllm-project#15308)

Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025
… capture sizes (vllm-project#15308)

Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
… capture sizes (vllm-project#15308)

Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Mu Huai <[email protected]>

Labels

ready (ONLY add when PR is ready to merge/full CI is needed)

Development

Successfully merging this pull request may close these issues.

[Bug]: RuntimeError at vllm startup, V0 engine, Llama 3.1, "The size of tensor a (50) must match the size of tensor b (56) at non-singleton dimension 0"

3 participants