[ROCm] Enable chunked prefill/paged attention in MLA on ROCm #14316
Conversation
Signed-off-by: Sage Moore <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a reduced set of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀 …
Thoughts?
Signed-off-by: Sage Moore <[email protected]>
Nice! Thank you - cc @mawong-amd
The changes look good to me. They only apply to HIP and are straightforward.
Signed-off-by: Sage Moore <[email protected]>
LGTM now, thanks!
This PR largely just removes the guards in config.py so that chunked prefill and paged attention are allowed with MLA on ROCm. The LSE computation in the Triton kernel doesn't work in this case, so we always fall back to flash attention here.
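For context on why the LSE matters: chunked prefill computes attention over the KV cache in pieces and then merges the partial outputs, and that merge needs each piece's log-sum-exp to renormalize the softmax. A kernel that cannot produce a correct LSE therefore cannot serve this path, hence the flash-attention fallback. Below is a minimal illustrative PyTorch sketch of the merge, not vLLM's actual kernel; all names and shapes are hypothetical:

```python
import torch

def merge_attn_outputs(
    o1: torch.Tensor,    # [num_tokens, head_dim] attention output over KV chunk 1
    lse1: torch.Tensor,  # [num_tokens] log-sum-exp of scores over KV chunk 1
    o2: torch.Tensor,    # [num_tokens, head_dim] attention output over KV chunk 2
    lse2: torch.Tensor,  # [num_tokens] log-sum-exp of scores over KV chunk 2
) -> torch.Tensor:
    """Combine per-chunk softmax-attention outputs into the exact
    full-attention output, using each chunk's LSE to renormalize."""
    lse = torch.logaddexp(lse1, lse2)          # LSE over both chunks combined
    w1 = torch.exp(lse1 - lse).unsqueeze(-1)   # fraction of softmax mass in chunk 1
    w2 = torch.exp(lse2 - lse).unsqueeze(-1)   # fraction of softmax mass in chunk 2
    return w1 * o1 + w2 * o2
```

If `lse1`/`lse2` are wrong, the weights are wrong and the merged output no longer equals full attention, which is why the Triton path is unusable for chunked prefill here.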
I ran
lm_eval --model vllm --model_args pretrained=deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct,trust_remote_code=True,enable_chunked_prefill=True --tasks gsm8k --num_fewshot 5 --batch_size auto
and got:

CC: @LucasWilkinson
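For anyone who wants to exercise the same configuration without lm_eval, a sketch of the equivalent setup through vLLM's offline Python API (the prompt is just an example; assumes a ROCm build of vLLM and access to the model weights):

```python
from vllm import LLM, SamplingParams

# Same configuration as the lm_eval run above, via the offline API.
llm = LLM(
    model="deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct",
    trust_remote_code=True,
    enable_chunked_prefill=True,  # the path this PR enables for MLA on ROCm
)

outputs = llm.generate(
    ["Question: What is 15% of 240? Answer:"],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```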