[TPU] Add TPU specific var VLLM_TPU_MOST_MODEL_LEN #19919
Conversation
Signed-off-by: Chenyaaang <[email protected]>
Summary of Changes
Hello @Chenyaaang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a key optimization for vLLM's TPU backend by enabling a dual-length model processing strategy. By differentiating between typical and maximum request lengths, the system can achieve higher throughput and more efficient memory utilization, particularly for common workloads. The changes are confined to the TPU-specific components, ensuring targeted improvements without affecting other parts of the system.
Highlights
- **New Environment Variable for TPU Optimization:** Introduced `VLLM_TPU_MOST_MODEL_LEN` as a new optional environment variable. This allows users to specify a typical or 'most common' request length for TPU workloads, which is distinct from the `max_model_len` that handles the longest possible requests. This distinction is crucial for optimizing performance and memory usage.
- **Dynamic Request Batching and Execution:** Implemented a sophisticated request processing mechanism within `tpu_model_runner.py`. Requests are now dynamically batched and routed based on their length: shorter requests (within `most_model_len`) can be processed more efficiently, potentially in larger batches, while longer requests (up to `max_model_len`) are handled separately. This allows for optimal resource allocation and throughput (a rough sketch of this routing follows after this list).
- **Enhanced SMEM OOM Prevention:** Added a new utility function, `PallasAttentionBackend.get_max_num_seqs`, to precisely calculate the maximum number of sequences that can be processed without encountering Shared Memory (SMEM) Out-Of-Memory errors. This calculation is now used to determine optimal batch sizes for both the `most_model_len` and `max_model_len` scenarios, replacing a previous, less granular approach.
- **Optimized Model Pre-compilation:** Updated the model backbone pre-compilation routines (`_precompile_backbone` and `profile_run`) to ensure that the model is compiled for both the `max_model_len` and, if specified, the `most_model_len` configurations. This pre-compilation prevents costly runtime recompilations when switching between different request length profiles, contributing to overall performance gains.
- **Refactored Input Preparation for TPU:** The `_prepare_inputs` method in `tpu_model_runner.py` has been significantly refactored to handle the new dual-length strategy. It now intelligently prepares input tensors (like `block_tables`, `query_start_loc`, and `seq_lens`) based on whether the current batch of requests aligns with the `most_model_len` or `max_model_len` profile, and can split a single scheduler output into multiple execution batches.
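To make the dual-length routing concrete, here is a minimal sketch of the idea. The helper names (`split_requests_by_length`, `chunk`) are hypothetical and do not correspond to this PR's actual functions, which live in `tpu_model_runner.py`:

```python
# Hypothetical sketch of the dual-length routing idea, not the PR's actual API.
from typing import List, Tuple


def split_requests_by_length(
    seq_lens: List[int],
    most_model_len: int,
    num_reqs_most_model_len: int,
    num_reqs_max_model_len: int,
) -> Tuple[List[List[int]], List[List[int]]]:
    """Bucket request indices into batches sized for the two profiles.

    Requests whose length fits within `most_model_len` go into larger
    batches; longer outliers go into smaller `max_model_len` batches.
    """
    short_idxs = [i for i, l in enumerate(seq_lens) if l <= most_model_len]
    long_idxs = [i for i, l in enumerate(seq_lens) if l > most_model_len]

    def chunk(idxs: List[int], batch_size: int) -> List[List[int]]:
        return [idxs[i:i + batch_size] for i in range(0, len(idxs), batch_size)]

    return (chunk(short_idxs, num_reqs_most_model_len),
            chunk(long_idxs, num_reqs_max_model_len))


# Example: most requests are ~2k tokens, one outlier reaches 32k.
short_batches, long_batches = split_requests_by_length(
    seq_lens=[1800, 2000, 1500, 30000],
    most_model_len=2048,
    num_reqs_most_model_len=128,
    num_reqs_max_model_len=16,
)
print(short_batches)  # [[0, 1, 2]]
print(long_batches)   # [[3]]
```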
Code Review
The pull request introduces a new TPU-specific environment variable `VLLM_TPU_MOST_MODEL_LEN` to optimize performance by handling requests with lengths within `most_model_len` differently from outlier requests exceeding it. The changes are localized within `tpu_model_runner.py` and `envs.py`, with no impact on other systems. The code includes modifications to initialize and utilize `num_reqs_most_model_len` and `num_reqs_max_model_len` based on the new environment variable, and adjustments to the input preparation and model execution to accommodate the new logic. I have provided some suggestions for improved documentation and comments.
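As an aside, the kind of bound a function like `get_max_num_seqs` computes can be pictured with a hedged sketch. The SMEM budget and per-entry byte counts below are placeholder assumptions for illustration only, not the values `PallasAttentionBackend` actually uses:

```python
def max_num_seqs_without_smem_oom(
    model_len: int,
    page_size: int = 256,
    smem_budget_bytes: int = 128 * 1024,   # placeholder SMEM budget
    bytes_per_block_table_entry: int = 4,  # placeholder: int32 block-table entries
) -> int:
    """Rough upper bound on batch size before per-sequence block-table
    metadata overflows shared memory (illustrative only)."""
    pages_per_seq = -(-model_len // page_size)  # ceil division
    bytes_per_seq = pages_per_seq * bytes_per_block_table_entry
    return max(1, smem_budget_bytes // bytes_per_seq)


# A 32k max_model_len admits far fewer sequences than a 2k most_model_len.
print(max_num_seqs_without_smem_oom(32 * 1024))  # 256 with these placeholder numbers
print(max_num_seqs_without_smem_oom(2 * 1024))   # 4096 with these placeholder numbers
```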
Signed-off-by: Chenyaaang <[email protected]>
Force-pushed from 9ede871 to 70577df
yaochengji left a comment
Thanks for the contribution! Left a few comments.
Signed-off-by: Chenyaaang <[email protected]>
Force-pushed from 59c394e to 26cf1f6
yaochengji left a comment
LGTM, thanks!
Purpose
Add a TPU-specific environment variable: `VLLM_TPU_MOST_MODEL_LEN`. It is used to pass in the request length that covers most requests, in contrast to the existing `max_model_len`, which covers the longest requests. This helps when `max_model_len` is much larger than `most_model_len` and a large portion of requests fall within `most_model_len` (for example, 1% of requests are 32k, near `max_model_len`, while 99% are 2k, within `most_model_len`), in the following two ways:

1. When `max_model_len` is very large, the attention kernel usually doesn't reach its best performance (because it falls outside the tuned shapes); by using `most_model_len`, we can easily see a perf gain.
2. When `max_model_len` and `max_num_reqs` are both large, it's very likely to cause SMEM OOM, and we would have to increase the page size above the pre-set limit of 256 (which also harms kernel performance). By introducing `most_model_len`, we keep `max_num_seqs` when request lengths are within `most_model_len`, and decrease `max_num_seqs` only for the extra-long outlier requests.

The change is kept within `tpu_model_runner`, with no change to the scheduler, so it doesn't interfere with other systems.
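For illustration, here is a minimal offline-inference sketch of how the variable might be set. The environment variable name comes from this PR; the model name and limits are example values mirroring the scenario above (2k typical requests, 32k outliers), and a TPU backend is assumed:

```python
import os

# Set the knob before vLLM initializes the engine so it is picked up by envs.py.
os.environ["VLLM_TPU_MOST_MODEL_LEN"] = "2048"

from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B",  # example model; the PR's test used llama3-8B
    max_model_len=32 * 1024,             # longest request the engine must accept
    max_num_seqs=128,
)
outputs = llm.generate(["Hello, my name is"],
                       SamplingParams(max_tokens=128))
```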
In `tpu_model_runner.py`, we initialize `num_reqs_most_model_len` and `num_reqs_max_model_len` for the `most_model_len` and `max_model_len` cases respectively, based on the maximum number of sequences that avoids SMEM OOM. The scheduler's output is then routed to either the `max_model_len` or the `most_model_len` path depending on the request lengths. The requests might be executed in more than one batch, and the final results are concatenated together. To avoid recompilation, we pre-compile the backbone with both `max_model_len` and `most_model_len`.
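A rough sketch of the multi-batch execution and result merging idea is shown below. The helper names are hypothetical; the real implementation operates on padded TPU tensors inside the model runner:

```python
# Illustrative only: run each sub-batch through a model callable and stitch
# the per-request outputs back together in the scheduler's original ordering.
from typing import Callable, Dict, List


def run_in_sub_batches(
    batches: List[List[int]],                     # request indices per sub-batch
    run_model: Callable[[List[int]], List[int]],  # returns one token id per request
) -> Dict[int, int]:
    results: Dict[int, int] = {}
    for batch in batches:
        token_ids = run_model(batch)              # one forward pass per sub-batch
        for req_idx, token_id in zip(batch, token_ids):
            results[req_idx] = token_id
    return results


def fake_model(batch: List[int]) -> List[int]:
    # Stand-in for a forward pass: returns a dummy token id per request.
    return [100 + i for i in batch]


# Outputs from the most_model_len batches and the max_model_len batches are
# merged so the caller sees a single result set, in request order.
merged: Dict[int, int] = {}
merged.update(run_in_sub_batches([[0, 1, 2]], fake_model))  # short requests
merged.update(run_in_sub_batches([[3]], fake_model))        # long outlier
print([merged[i] for i in sorted(merged)])  # [100, 101, 102, 103]
```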
Test
The change was tested against the baseline at commit dac8cc4. We started a llama3-8B server with `max-model-len = 32k` and `max-num-reqs = 128`, and sent benchmark requests with `input-len = 1800` and `output-len = 128`. We compared the compilation time and throughput with and without `most-model-len = 2048`. The result shows a ~30% perf gain, and the increase in compilation time is acceptable.