Conversation

@lgeiger (Contributor) commented Jun 4, 2025

Since #18862 we simply copy the batched tensors to GPU memory without any additional casting.
This allows us to stack the inputs into pinned memory for a faster CPU->GPU copy, and possibly a slightly faster torch.stack as well.

Before: (screenshot, 2025-06-05 at 01:16:38)

After: (screenshot, 2025-06-05 at 01:19:22)
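For readers unfamiliar with the mechanism, here is a minimal, self-contained sketch of the idea (illustrative only, not the vLLM code; stack_pinned is a hypothetical helper name): stacking into a pinned host buffer lets the subsequent host-to-device copy be issued asynchronously.

```python
import torch

def stack_pinned(tensors: list[torch.Tensor]) -> torch.Tensor:
    """Stack same-shaped CPU tensors into one pinned (page-locked) buffer."""
    out = torch.empty(
        (len(tensors), *tensors[0].shape),
        dtype=tensors[0].dtype,
        pin_memory=torch.cuda.is_available(),  # pinning needs a CUDA runtime
    )
    return torch.stack(tensors, out=out)

# Pinned host memory allows the device copy to be issued asynchronously
# and overlap with other work.
batch = stack_pinned([torch.randn(3, 224, 224) for _ in range(8)])
if torch.cuda.is_available():
    batch_gpu = batch.to("cuda", non_blocking=True)
```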

github-actions bot commented Jun 4, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist bot left a comment:

Hello @lgeiger, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello! Gemini or gemini-code-assist here, providing a summary of this pull request. This PR aims to improve the performance of batching multi-modal inputs by leveraging pinned memory. According to the description and the visual comparisons provided, this change is expected to result in faster CPU to GPU data transfer and potentially faster torch.stack operations, leading to overall performance gains in handling multi-modal data.

Highlights

  • Performance Optimization: The core goal of this PR is to optimize the batching process for multi-modal inputs, specifically focusing on improving the speed of data transfer from CPU to GPU.
  • Pinned Memory: The optimization is achieved by introducing the use of pinned memory during the batching of tensors. Pinned (page-locked) memory is host (CPU) memory that cannot be paged out, which lets the GPU's DMA engine read it directly and enables faster, asynchronous host-to-device transfers.
  • Modified Batching Logic: The _try_stack and batch methods within the multi-modal input handling logic have been updated to accept a pin_memory flag. When enabled, tensors are created or stacked into pinned memory.
  • Integration with GPU Runner: The GPUModelRunner now passes its self.pin_memory setting to the multi-modal batching function, ensuring that the optimization is applied during actual model execution and profiling.

Changelog

  • vllm/multimodal/inputs.py
    • Added pin_memory boolean parameter to the static method _try_stack.
    • Modified _try_stack to recursively pass the pin_memory flag.
    • Updated _try_stack to allocate the output with torch.empty and the pin_memory flag, then stack into this pre-allocated tensor using the out argument of torch.stack (see the sketch after this changelog).
    • Added pin_memory boolean parameter to the static method batch.
    • Modified batch to pass the pin_memory flag when calling _try_stack.
  • vllm/v1/worker/gpu_model_runner.py
    • Passed self.pin_memory to MultiModalKwargs.batch within the _execute_mm_encoder method.
    • Passed self.pin_memory to MultiModalKwargs.batch within the profile_run method when creating dummy multi-modal inputs.
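Roughly, the changelog items above translate into something like the following simplified sketch (my own illustration, not the actual code in vllm/multimodal/inputs.py; error handling and non-tensor cases are omitted):

```python
import torch

def _try_stack_sketch(values: list, pin_memory: bool = False):
    """Stack nested multi-modal values, optionally into pinned host memory."""
    if isinstance(values[0], list):
        # Recurse into nested lists, forwarding the pin_memory flag.
        return [_try_stack_sketch(v, pin_memory) for v in values]

    if len({t.shape for t in values}) > 1:
        # Shapes differ, so the tensors cannot be stacked; return them as-is.
        return values

    # Pre-allocate the output (page-locked when pin_memory=True) and let
    # torch.stack write directly into it via the `out` argument.
    out = torch.empty(
        (len(values), *values[0].shape),
        dtype=values[0].dtype,
        pin_memory=pin_memory,
    )
    return torch.stack(values, out=out)
```

Per the changelog, the batch() method simply forwards the flag, and the GPU runner supplies its own setting, e.g. MultiModalKwargs.batch(..., pin_memory=self.pin_memory).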
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


Memory pinned down tight,
CPU to GPU, swift flight.
Data moves so fast.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

mergify bot added the multi-modality (Related to multi-modality, #4194) and v1 labels on Jun 4, 2025
@gemini-code-assist bot left a comment:

Code Review

This pull request introduces the use of pinned memory for batching multi-modal inputs, aiming to accelerate CPU-to-GPU data transfers. The core idea is to pre-allocate pinned memory for tensors before stacking them, which is a sound approach for performance improvement.

The changes are primarily in vllm/multimodal/inputs.py, where _try_stack and batch methods in MultiModalKwargs now accept a pin_memory flag. This flag is then utilized when creating tensors with torch.empty(..., pin_memory=True) to serve as the output for torch.stack. The GPUModelRunner in vllm/v1/worker/gpu_model_runner.py has been updated to pass its self.pin_memory attribute to these batching functions.

Overall, the implementation looks good and directly addresses the stated goal. The performance improvements indicated in the PR description are promising. I have one suggestion for a minor improvement to ensure consistency in the application of pin_memory.

Summary of Findings

  • Consistency of pin_memory application: The pin_memory flag is not applied in the single-tensor fast path within _try_stack; suggest calling tensor.pin_memory() there for consistency.
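A possible shape of that consistency fix, sketched here with a hypothetical maybe_pin helper (not vLLM API), would be to pin the lone tensor explicitly instead of returning it untouched:

```python
import torch

def maybe_pin(tensor: torch.Tensor, pin_memory: bool) -> torch.Tensor:
    # Tensor.pin_memory() copies the tensor into page-locked host memory;
    # skip it when pinning was not requested or the tensor is already pinned.
    if pin_memory and not tensor.is_pinned():
        return tensor.pin_memory()
    return tensor

t = torch.randn(3, 224, 224)
t = maybe_pin(t, pin_memory=torch.cuda.is_available())
```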

Merge Readiness

The pull request is in good shape and addresses an important performance aspect. With the suggested minor improvement for consistency, it should be ready for merging. As I am an AI, I am not authorized to approve pull requests; please ensure this change is reviewed and approved by a human maintainer.

@DarkLight1337 (Member) commented:

The idea looks reasonable. How much speed up have you observed from this?

@lgeiger (Contributor, Author) commented Jun 5, 2025

> The idea looks reasonable. How much speed up have you observed from this?

On an internal dataset with multiple images per prompt I'm seeing around a 0.8-1.7% improvement in throughput on a Gemma 3 4B model. With lmarena-ai/VisionArena-Chat it's very noisy, maybe a 0.3% improvement averaged across 3 runs.

@DarkLight1337 (Member) left a comment:

Thanks, this looks reasonable to me as well

DarkLight1337 enabled auto-merge (squash) on June 6, 2025 08:03
github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Jun 6, 2025
@DarkLight1337 (Member) commented:
PTAL at the failing multimodal test

auto-merge was automatically disabled June 9, 2025 15:44

Head branch was pushed to by a user without write access

@lgeiger (Contributor, Author) commented Jun 9, 2025

> PTAL at the failing multimodal test

Good catch, fixed in 091e858

lgeiger force-pushed the mm-batch-pinned-memory branch from da6d179 to 091e858 on June 9, 2025 15:45
DarkLight1337 merged commit 319cb1e into vllm-project:main on Jun 10, 2025
64 checks passed
@gemini-code-assist (bot) commented:

Warning

You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!

lgeiger deleted the mm-batch-pinned-memory branch on June 10, 2025 08:12
hsubramony added a commit to HabanaAI/vllm-fork that referenced this pull request Jul 31, 2025
hsubramony added a commit to HabanaAI/vllm-fork that referenced this pull request Aug 4, 2025
hsubramony added a commit to HabanaAI/vllm-fork that referenced this pull request Aug 6, 2025

Labels

multi-modality (Related to multi-modality, #4194), ready (ONLY add when PR is ready to merge/full CI is needed), v1

Projects

Status: Done
