Enable CUDA graph support for llama 3.2 vision #14917
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small, essential subset of CI tests runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge. 🚀
Great job! Some early feedback. I will check the code more carefully once the tests pass.

1. Can you remove the enforce_eager in tests/models/encoder_decoder/vision_language/test_mllama.py and check whether the tests still pass?
2. Can you use model_config.enforce_eager to decide whether we should always run the cross attention layers, instead of adding a capture_mode argument? (See the sketch below.)
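To illustrate what (2) is suggesting, here is a minimal sketch; the helper name and arguments are hypothetical, not part of the actual vLLM code:

```python
def should_run_cross_attention(enforce_eager: bool, has_image_inputs: bool) -> bool:
    # Hypothetical helper: with CUDA graphs enabled (enforce_eager=False),
    # cross attention must always run so the captured graph has a fixed
    # structure; in eager mode it can be skipped for text-only batches.
    return not enforce_eager or has_image_inputs

# With graphs enabled, cross attention runs even for a text-only batch:
assert should_run_cross_attention(enforce_eager=False, has_image_inputs=False)
```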
Thanks for the quick review! For (1), I removed enforce_eager from the test and the tests pass. For (2), I tried that, but during text-only inference the model crashed. I think the crash happened because cross attention does not work when there are no image inputs.
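For reference, a minimal sketch of the capture-mode approach discussed here, following the SGLang commit linked in the description; the class, dimensions, and dummy-state handling are illustrative, not the actual mllama code:

```python
import torch
import torch.nn as nn
from typing import Optional

class CaptureAwareCrossAttn(nn.Module):
    """Illustrative sketch: skip cross attention for text-only batches at
    inference time, but run it during CUDA graph capture so the captured
    graph still contains the op."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(
        self,
        x: torch.Tensor,
        encoder_states: Optional[torch.Tensor],
        capture_mode: bool = False,
    ) -> torch.Tensor:
        if encoder_states is None:
            if not capture_mode:
                # Text-only request in eager execution: no image features, skip.
                return x
            # During capture there are no real image features either; feed
            # dummy states so this op is still part of the captured graph.
            encoder_states = torch.zeros_like(x)
        out, _ = self.cross_attn(x, encoder_states, encoder_states)
        return x + out
```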
(1) That's great to know. Thanks!
Force-pushed from 50ac218 to 8876572.
Switching to model_config.enforce_eager is done. I'm not sure about the answer to (3). I know the model works with text-only, image-only, and text-image inputs, but I'm not sure how to tell whether the CUDA graph is actually being used. We may be able to tell with a load test, but I'm not sure if we have any existing tooling for that in vLLM.
If the tests pass with enforce_eager=False, it should be working fine with CUDA graphs, assuming we have test cases covering text-only, image-only, and text-image combinations?
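One rough way to sanity-check whether CUDA graphs are helping is an ad-hoc timing comparison; this is not existing vLLM tooling, and the model name and prompts are just examples (running each configuration in a separate process is safer):

```python
import time
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"] * 32
params = SamplingParams(max_tokens=64)

for enforce_eager in (True, False):
    llm = LLM(model="meta-llama/Llama-3.2-11B-Vision-Instruct",
              enforce_eager=enforce_eager)
    start = time.perf_counter()
    llm.generate(prompts, params)
    # With CUDA graphs enabled (enforce_eager=False) decode should be
    # measurably faster than eager mode on small batches.
    print(f"enforce_eager={enforce_eager}: {time.perf_counter() - start:.1f}s")
    # Free the engine before building the next one so both models are not
    # resident in GPU memory at the same time.
    del llm
```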
@sroy745 Thanks for your advice.
I've tested this PR on my local machine. All tests in test_mllama.py pass except the following two:
FAILED test_mllama.py::test_models_interleaved_images[_Backend.XFORMERS-5-128-bfloat16-meta-llama/Llama-3.2-11B-Vision-Instruct] - AttributeError: 'list' object has no attribute 'shape'
FAILED test_mllama.py::test_models_interleaved_images[_Backend.FLASH_ATTN-5-128-bfloat16-meta-llama/Llama-3.2-11B-Vision-Instruct] - AttributeError: 'list' object has no attribute 'shape'
I think this problem will be fixed by #14883, so I approve this PR. @mritterfigma Thanks for your contribution.
Same as vllm-project#14917, but for ROCm only (vllm-project#15413). Signed-off-by: Gregory Shtrasberg <[email protected]>
Adds support for CUDA graph capture with Llama 3.2 Vision by removing the block on CUDA graph capture for mllama.
Note: Follows a similar approach to the one taken by SGLang, making mllama aware of whether it is in graph capture mode or not (sgl-project/sglang@94cde10)
FIX "Enabling CUDA graph" in #8826 (comment)
Tested: the server started and responded successfully to requests (no enforce-eager). Also verified that it still works with --enforce-eager. Example command:
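The command itself was not captured above; an illustrative equivalent (flags and values are examples, not the ones actually used):

```bash
# Serve with CUDA graphs enabled (the default):
vllm serve meta-llama/Llama-3.2-11B-Vision-Instruct \
    --max-model-len 4096 --max-num-seqs 16

# And to compare against eager mode:
vllm serve meta-llama/Llama-3.2-11B-Vision-Instruct \
    --max-model-len 4096 --max-num-seqs 16 --enforce-eager
```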