
Conversation

@qthequartermasterman (Contributor) commented Sep 26, 2025

Purpose

@njhill pointed out that #24278 introduced a performance regression when prompt embeds are disabled: inputs_embeds tensors are copied to the GPU even though they are either empty or filled with garbage when --enable-prompt-embeds is not set. Guarding those copies with self.enable_prompt_embeds avoids them unless prompt embeds are actually enabled.
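
For illustration, here is a minimal standalone sketch of the guarded-copy pattern (toy code only, not the actual vLLM diff; the DummyRunner class, buffer names, and prepare_inputs method are invented for this example):

    import torch

    class DummyRunner:
        """Toy stand-in for the GPU model runner (not vLLM code)."""

        def __init__(self, enable_prompt_embeds: bool,
                     max_tokens: int = 1024, hidden_size: int = 64):
            self.enable_prompt_embeds = enable_prompt_embeds
            self.device = "cuda" if torch.cuda.is_available() else "cpu"
            # Persistent CPU staging buffer for token IDs.
            self.input_ids_cpu = torch.zeros(max_tokens, dtype=torch.int64)
            # Only meaningful when prompt embeds are enabled; otherwise it
            # holds uninitialized data.
            self.inputs_embeds_cpu = torch.empty(max_tokens, hidden_size)

        def prepare_inputs(self, num_tokens: int):
            input_ids = self.input_ids_cpu[:num_tokens].to(
                self.device, non_blocking=True)
            inputs_embeds = None
            if self.enable_prompt_embeds:
                # Guarded copy: skip the host-to-device transfer entirely
                # when the buffer contents would be garbage anyway.
                inputs_embeds = self.inputs_embeds_cpu[:num_tokens].to(
                    self.device, non_blocking=True)
            return input_ids, inputs_embeds

    runner = DummyRunner(enable_prompt_embeds=False)
    ids, embeds = runner.prepare_inputs(16)  # embeds is None: no copy made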

Test Plan

No new tests are needed. I have some local scripts to send thousands of requests through vLLM with this change. Everything works.

Test Result

Relevant local tests pass. Pending CI.



@gemini-code-assist bot left a comment


Code Review

This pull request introduces a performance optimization by avoiding unnecessary GPU copies of inputs_embeds and is_token_ids when prompt embeddings are not enabled. The changes are implemented correctly by adding if self.enable_prompt_embeds guards. I've also pointed out a related pre-existing issue in the async scheduling path where is_token_ids is not fully updated, which would be good to address for complete correctness, though it might be out of scope for this specific performance-focused PR.

Comment on lines +883 to +884

    if self.enable_prompt_embeds:
        self.is_token_ids.gpu[:num_commmon_tokens] = True


Severity: high

This change correctly handles is_token_ids for the fast path in async scheduling. However, a similar update is missing for the slow path (the scatter_ case for input_ids around line 900). For correctness when prompt embeddings are enabled, is_token_ids for the scattered tokens should also be set to True, as they correspond to sampled token IDs. A complete fix would handle both paths.
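
For reference, a standalone illustration of the suggested slow-path update (the tensor sizes and the token_indices name here are placeholders, not taken from the actual code):

    import torch

    # Stand-ins for self.is_token_ids.gpu and the indices of the positions
    # that just received sampled token IDs via scatter_ in the slow path.
    is_token_ids = torch.zeros(16, dtype=torch.bool)
    token_indices = torch.tensor([3, 7, 11])

    # Mirror of the fast-path update above: positions filled with sampled
    # token IDs should also be flagged as real token IDs.
    is_token_ids[token_indices] = True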

@DarkLight1337 (Member) commented:

We can also disable this code?

        is_token_ids = self.input_batch.is_token_ids.flatten()
        torch.index_select(
            is_token_ids,
            0,
            token_indices_tensor,
            out=self.is_token_ids.cpu[:total_num_scheduled_tokens])
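
One way to read this suggestion is to wrap the quoted block in the same guard, e.g. (a sketch only; whether other code depends on is_token_ids when prompt embeds are off would need checking):

    if self.enable_prompt_embeds:
        is_token_ids = self.input_batch.is_token_ids.flatten()
        torch.index_select(
            is_token_ids,
            0,
            token_indices_tensor,
            out=self.is_token_ids.cpu[:total_num_scheduled_tokens])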

@DarkLight1337 enabled auto-merge (squash) on September 26, 2025 at 05:41
@github-actions bot added the ready label ("ONLY add when PR is ready to merge/full CI is needed") on Sep 26, 2025
@vllm-bot merged commit d48f4d6 into vllm-project:main on Sep 26, 2025
48 of 51 checks passed

Labels

ready, v1
