[Core] Batch multi modal input using pinned memory #19169
Conversation
Signed-off-by: Lukas Geiger <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited set of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
Hello @lgeiger, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello! Gemini or gemini-code-assist here, providing a summary of this pull request. This PR aims to improve the performance of batching multi-modal inputs by leveraging pinned memory. According to the description and the visual comparisons provided, this change is expected to result in faster CPU to GPU data transfer and potentially faster torch.stack operations, leading to overall performance gains in handling multi-modal data.
Highlights
- Performance Optimization: The core goal of this PR is to optimize the batching process for multi-modal inputs, specifically focusing on improving the speed of data transfer from CPU to GPU.
- Pinned Memory: The optimization is achieved by introducing the use of pinned memory during the batching of tensors. Pinned (page-locked) memory is a special type of host (CPU) memory that is directly accessible by the GPU, allowing for faster asynchronous memory transfers (see the sketch after this list).
- Modified Batching Logic: The `_try_stack` and `batch` methods within the multi-modal input handling logic have been updated to accept a `pin_memory` flag. When enabled, tensors are created or stacked into pinned memory.
- Integration with GPU Runner: The `GPUModelRunner` now passes its `self.pin_memory` setting to the multi-modal batching function, ensuring that the optimization is applied during actual model execution and profiling.
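As a rough illustration of why pinning matters, here is a minimal PyTorch sketch (not code from this PR; it assumes a CUDA device is available): copies from page-locked host memory can be issued asynchronously with `non_blocking=True`, whereas copies from pageable memory cannot.

```python
# Minimal sketch (not vLLM code): pinned vs. pageable host-to-device copies.
# Assumes a CUDA device is available.
import torch

assert torch.cuda.is_available()

pageable = torch.randn(4, 3, 336, 336)             # ordinary, pageable host tensor
pinned = torch.randn(4, 3, 336, 336).pin_memory()  # page-locked host tensor

# From pinned memory, the copy can overlap with other work on the current stream.
gpu_a = pinned.to("cuda", non_blocking=True)

# From pageable memory, the same call degrades to an effectively synchronous copy.
gpu_b = pageable.to("cuda", non_blocking=True)

torch.cuda.synchronize()
```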
Changelog
- vllm/multimodal/inputs.py
  - Added a `pin_memory` boolean parameter to the static method `_try_stack`.
  - Modified `_try_stack` to recursively pass the `pin_memory` flag.
  - Updated `_try_stack` to use `torch.empty` with the `pin_memory` flag and then stack into this pre-allocated tensor using the `out` argument of `torch.stack` (see the sketch after this list).
  - Added a `pin_memory` boolean parameter to the static method `batch`.
  - Modified `batch` to pass the `pin_memory` flag when calling `_try_stack`.
- vllm/v1/worker/gpu_model_runner.py
  - Passed `self.pin_memory` to `MultiModalKwargs.batch` within the `_execute_mm_encoder` method.
  - Passed `self.pin_memory` to `MultiModalKwargs.batch` within the `profile_run` method when creating dummy multi-modal inputs.
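A simplified sketch of that `_try_stack` pattern (illustrative only, not the exact vLLM implementation; pinning requires an available accelerator):

```python
# Simplified sketch of the stack-into-pinned-buffer pattern described above.
# Not the exact vLLM implementation.
import torch

def stack_maybe_pinned(tensors: list[torch.Tensor], pin_memory: bool = False) -> torch.Tensor:
    # Pre-allocate the batched output, optionally in page-locked host memory.
    out = torch.empty((len(tensors), *tensors[0].shape),
                      dtype=tensors[0].dtype,
                      device=tensors[0].device,
                      pin_memory=pin_memory)
    # Stack directly into the pre-allocated buffer instead of allocating a new one.
    return torch.stack(tensors, out=out)

batched = stack_maybe_pinned([torch.randn(3, 336, 336) for _ in range(4)], pin_memory=True)
assert batched.is_pinned()
```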
Memory pinned down tight,
CPU to GPU, swift flight.
Data moves so fast.
Code Review
This pull request introduces the use of pinned memory for batching multi-modal inputs, aiming to accelerate CPU-to-GPU data transfers. The core idea is to pre-allocate pinned memory for tensors before stacking them, which is a sound approach for performance improvement.
The changes are primarily in vllm/multimodal/inputs.py, where _try_stack and batch methods in MultiModalKwargs now accept a pin_memory flag. This flag is then utilized when creating tensors with torch.empty(..., pin_memory=True) to serve as the output for torch.stack. The GPUModelRunner in vllm/v1/worker/gpu_model_runner.py has been updated to pass its self.pin_memory attribute to these batching functions.
Overall, the implementation looks good and directly addresses the stated goal. The performance improvements indicated in the PR description are promising. I have one suggestion for a minor improvement to ensure consistency in the application of pin_memory.
Summary of Findings
- Consistency of `pin_memory` application: The `pin_memory` flag is not explicitly applied in the single-tensor optimization path within `_try_stack`. Suggested using `tensor.pin_memory()` there for consistency (see the sketch after this list).
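For illustration, the suggested single-tensor path could look roughly like this (a hypothetical helper, not the reviewer's exact diff):

```python
# Hypothetical sketch of the single-tensor fast path applying the flag consistently.
import torch

def maybe_pin(tensor: torch.Tensor, pin_memory: bool) -> torch.Tensor:
    # pin_memory() returns a page-locked copy; skip it if the tensor is already pinned.
    if pin_memory and not tensor.is_pinned():
        return tensor.pin_memory()
    return tensor
```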
Merge Readiness
The pull request is in good shape and addresses an important performance aspect. With the suggested minor improvement for consistency, it should be ready for merging. As I am an AI, I am not authorized to approve pull requests; please ensure this change is reviewed and approved by a human maintainer.
The idea looks reasonable. How much speedup have you observed from this?
On some internal workloads with multiple images per prompt I'm seeing around a 0.8-1.7% improvement in throughput on a Gemma 3 4B model.
Thanks, this looks reasonable to me as well
PTAL at the failing multimodal test.
Head branch was pushed to by a user without write access
Good catch, fixed in 091e858.
Signed-off-by: Lukas Geiger <[email protected]>
Signed-off-by: Lukas Geiger <[email protected]>
Signed-off-by: Lukas Geiger <[email protected]>
Signed-off-by: Lukas Geiger <[email protected]>
Since #18862 we simply copy the batched tensors to GPU memory without any additional casting.
This allows us to stack the inputs into pinned memory for a faster CPU->GPU copy and possibly slightly faster `torch.stack` calls.
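For anyone wanting to reproduce the effect in isolation, a rough micro-benchmark along these lines (not part of this PR; the shapes and batch size are made up, and a CUDA device is assumed) times stack-plus-transfer with and without pinning:

```python
# Rough micro-benchmark sketch (not from this PR): time stacking into a pageable
# vs. pinned host buffer followed by a host-to-device copy. Assumes CUDA.
import time
import torch

def time_stack_and_copy(pin_memory: bool, n: int = 32, shape: tuple[int, ...] = (3, 336, 336)) -> float:
    tensors = [torch.randn(shape) for _ in range(n)]
    out = torch.empty((n, *shape), pin_memory=pin_memory)
    torch.cuda.synchronize()
    start = time.perf_counter()
    batched = torch.stack(tensors, out=out)
    batched.to("cuda", non_blocking=True)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) * 1e3  # milliseconds

print(f"pageable: {time_stack_and_copy(False):.2f} ms, pinned: {time_stack_and_copy(True):.2f} ms")
```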
Before:

After:
