[Core] Performance: Use int instead of list[int] for single output token for GC optimization #26368
Conversation
CC @yeqcharlotte @houseroad @WoosukKwon @njhill for RFC on the proposal and the high-level code changes; note the PR might not be completely ready yet.
Code Review
This pull request introduces a clever optimization to reduce garbage collection overhead by changing the representation of single output tokens from a list to an integer. The use of Union[int, list[int]] is a good way to handle both single and multiple token outputs efficiently. The changes are consistently applied across the codebase. My only feedback is to add unit tests for the new utility functions to ensure the correctness and robustness of this performance-critical change.
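To make the reviewer's suggestion concrete, here is a minimal sketch of what such unit tests could look like. The helper names (`append_output_token`, `output_token_ids_as_list`) are assumptions for illustration and may not match the actual utilities in this PR:

```python
from typing import Union


def append_output_token(
    tokens: Union[int, list[int]], new_token: int
) -> Union[int, list[int]]:
    """Append a token, only allocating a list once two or more tokens exist."""
    if isinstance(tokens, int):
        return [tokens, new_token]
    tokens.append(new_token)
    return tokens


def output_token_ids_as_list(tokens: Union[int, list[int]]) -> list[int]:
    """Normalize the union representation back to a plain list."""
    return [tokens] if isinstance(tokens, int) else tokens


def test_single_token_stays_unwrapped():
    assert output_token_ids_as_list(7) == [7]


def test_append_promotes_int_to_list():
    tokens = append_output_token(7, 8)
    assert tokens == [7, 8]
    assert append_output_token(tokens, 9) == [7, 8, 9]
```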
Resolve #26369

Gentle nudge @houseroad for the review :P
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Jialin Ouyang <[email protected]>
Thanks for the change. hhh it's hard to keep classes around to avoid GC. Wondering what's the actual gain we see on bigger models? We can play with DeepSeek V3, Qwen-235B, etc. and try it for both throughput- and latency-sensitive use cases. Wondering if you folks have a strong opinion on this @heheda12345 @njhill
I am a bit hesitant about this PR. I feel it's over-optimization. To achieve low latency, we probably should invest in:
I am worried that optimizations like the one in this PR can pile up and make the code much harder for people to understand without actually improving the perf very significantly.
Thanks @zhuohan123 for your suggestions; I'm totally aligned on the tradeoff between code complexity and performance, but I'd love to clarify my change in this PR a bit more.
Next step: @zhuohan123, given the new data and clarification I provided, do you feel this is something worth investigating further? If yes, I could explore the np.array suggestion more. Appreciate it.
@yeqcharlotte To be very honest, this change might only be helpful for small models and large-batch-size scenarios, so the benefit could be small for large models like DeepSeek V3 and Qwen 235B :/
I'm OK with the merged PRs for GC optimizations inside KVCacheManager, as I do get some complaints about KVCacheManager overhead. But I believe optimizing GC across the whole project is not very practical.
Purpose
The default setup of GC is (700, 10, 10), which means a generation-0 collection is triggered once net new object allocations (allocations minus deallocations) exceed 700, a generation-1 collection runs after every 10 generation-0 collections, and a generation-2 collection runs after every 10 generation-1 collections.
In large-batch-size scenarios (typical for small models), each batch could be as large as 1024 requests, which means GC0 will be triggered every decode cycle, GC1 every 10 decode cycles, and GC2 every 100 decode cycles, which is very inefficient!
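For reference, these thresholds can be inspected (and tuned) via the standard-library `gc` module; the `set_threshold` call below is shown only as an illustration of the knob, not something this PR changes:

```python
import gc

print(gc.get_threshold())  # (700, 10, 10) by default in CPython
print(gc.get_count())      # current allocation counts per generation

# Raising the gen-0 threshold makes collections run less often
# (a tuning knob for illustration only, not part of this PR):
gc.set_threshold(50_000, 10, 10)
```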
In this PR, we change output tokens from list[int] to Union[int, list[int]] to cut down the GC0 trigger frequency.
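The reason this helps is that a bare int is never tracked by CPython's cyclic collector, while every list is a tracked container object that counts toward the gen-0 threshold; a quick demonstration:

```python
import gc

print(gc.is_tracked(12345))    # False: ints never participate in cyclic GC
print(gc.is_tracked([12345]))  # True: each per-step list adds gen-0 pressure
```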
Test Plan & Test Result
Request Throughput Change
With the change, GC behavior improved significantly.

Essential Elements of an Effective PR Description Checklist
Update supported_models.md and examples for a new model.