Labels: feature request, good first issue, help wanted
🚀 The feature, motivation and pitch
Today, many vision transformers in vLLM use the standard `F.scaled_dot_product_attention` to compute attention scores.
While there has been some effort in `vision.py` to help developers choose a backend, it would be great if vLLM could consolidate mask-free MHA implementations across the different backends, without KV caching, so that developers can easily plug them into new vision models.
We should also investigate integrating FlashAttention 3 (FA3) into a few of the vision models we already have and verify that there is no accuracy regression.
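To make the idea concrete, here is a minimal sketch of what such a consolidated, mask-free MHA helper could look like. This is not vLLM's actual API: the `Backend` enum, the function name, and the assumed `(batch, seq, heads, head_dim)` tensor layout are all hypothetical; only the three backend calls themselves (`F.scaled_dot_product_attention`, `xformers.ops.memory_efficient_attention`, `flash_attn.flash_attn_func`) are real library functions.

```python
# Hypothetical sketch of a backend-consolidated, mask-free MHA helper.
# Not vLLM's real interface; layout/enum names are illustrative only.
from enum import Enum

import torch
import torch.nn.functional as F


class Backend(Enum):
    TORCH_SDPA = "torch_sdpa"
    XFORMERS = "xformers"
    FLASH_ATTN = "flash_attn"


def mha_no_mask(
    q: torch.Tensor,  # assumed shape: (batch, seq, num_heads, head_dim)
    k: torch.Tensor,
    v: torch.Tensor,
    backend: Backend = Backend.TORCH_SDPA,
) -> torch.Tensor:
    """Mask-free multi-head attention with no KV caching."""
    if backend == Backend.TORCH_SDPA:
        # F.scaled_dot_product_attention expects (batch, heads, seq, head_dim),
        # so transpose in and out of that layout.
        out = F.scaled_dot_product_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
        )
        return out.transpose(1, 2)
    if backend == Backend.XFORMERS:
        from xformers import ops as xops

        # memory_efficient_attention already takes (batch, seq, heads, head_dim).
        return xops.memory_efficient_attention(q, k, v)
    if backend == Backend.FLASH_ATTN:
        from flash_attn import flash_attn_func

        # flash_attn_func also takes (batch, seq, heads, head_dim) and
        # requires fp16/bf16 CUDA tensors.
        return flash_attn_func(q, k, v)
    raise ValueError(f"Unsupported backend: {backend}")
```

With something like this in place, a vision model would call one function and the backend choice (including a future FA3 path) would stay in one spot, which also makes it straightforward to A/B the backends against each other when checking for accuracy regressions.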
Alternatives
No response
Additional context
No response
Before submitting a new issue...
- Make sure you have already searched for relevant issues and asked the chatbot at the bottom right corner of the documentation page, which can answer many frequently asked questions.