[Core] Rework dtype resolution #18751
Conversation
This pull request has merge conflicts that must be resolved before it can be merged.
Overall LGTM, thanks for the rework!
@noooop it looks like the test fails for both float32 (https://buildkite.com/vllm/ci/builds/21117/steps?jid=019720de-8721-4243-b969-ed37a0c12143) and float16 (https://buildkite.com/vllm/ci/builds/21121/steps?jid=0197215d-dc0c-4177-b921-7e0120cc9ca6)... any idea why?
Hmm, I tried the test pipeline with this PR and it passes locally:
CI outputs:
It seems the degradation is from the sentence-transformers side again...
torch.float16: 0.6811108652277692. This result is closer to bfloat16, so I think the model was converted to bfloat16 somewhere. I don't know why SentenceTransformer's `model_kwargs={"torch_dtype": dtype}` is not working (it still uses torch.float32), but `torch.set_default_dtype(dtype)` does.
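For reference, a minimal sketch comparing the two mechanisms discussed here; the model name is just a stand-in, and this assumes a sentence-transformers release that accepts `model_kwargs`:

```python
import torch
from sentence_transformers import SentenceTransformer

MODEL = "sentence-transformers/all-MiniLM-L6-v2"  # stand-in model, any embedding model works

# Attempt 1: request the dtype via model_kwargs (reported not to take effect here).
model = SentenceTransformer(MODEL, model_kwargs={"torch_dtype": torch.float16})
print(next(model.parameters()).dtype)  # reportedly still torch.float32

# Attempt 2: change the global default dtype before loading (reported to work).
torch.set_default_dtype(torch.float16)
model = SentenceTransformer(MODEL)
print(next(model.parameters()).dtype)  # torch.float16

torch.set_default_dtype(torch.float32)  # restore the default afterwards
```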
I don't think using bfloat16 is a good idea for embedding models.
I can reproduce this error if `torch.set_default_dtype(torch.bfloat16)` is called somewhere.
Wrapping the `with hf_runner` block below in `set_default_torch_dtype(vllm_dtype)` can fix it; you may need something like the sketch after this comment.
Also note that testing SentenceTransformers and vLLM with the same dtype might miss discovering potential issues.
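To make the suggested fix concrete, here is a standalone stand-in for a `set_default_torch_dtype` context manager (the test suite would use its existing helper of the same name; the `hf_runner` usage is only described in comments because its exact signature is not shown in this thread):

```python
import contextlib
import torch

@contextlib.contextmanager
def set_default_torch_dtype(dtype: torch.dtype):
    """Temporarily override torch's default floating-point dtype."""
    old_dtype = torch.get_default_dtype()
    torch.set_default_dtype(dtype)
    try:
        yield
    finally:
        torch.set_default_dtype(old_dtype)

# Even if earlier code polluted the global default...
torch.set_default_dtype(torch.bfloat16)

# ...modules built inside the context still get the intended dtype:
with set_default_torch_dtype(torch.float32):
    layer = torch.nn.Linear(4, 4)
print(layer.weight.dtype)       # torch.float32
print(torch.get_default_dtype())  # torch.bfloat16 (restored on exit)

torch.set_default_dtype(torch.float32)  # clean up

# In the test, the HF model load would be wrapped the same way, e.g.:
# with set_default_torch_dtype(vllm_dtype):
#     with hf_runner(...) as hf_model:
#         ...
```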
I see, let's default to float16 for embedding models and later change it to float32 in your PR then.
Now there is a serious problem: I don't know why SentenceTransformer's `model_kwargs={"torch_dtype": dtype}` is not working, but `torch.set_default_dtype(dtype)` does.
Key changes:

- Dtype resolution now takes the dtypes supported by the current platform into account (`current_platform.supported_dtypes`). This means that models may be downcast to bfloat16 instead of float16.
- Certain models (`gemma2`, `gemma3`, `plamo2`, `glm4`) are prevented from being downcast to float16 because of reported numerical stability issues.

Related to #17123
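A simplified, hypothetical sketch of the resolution behavior described above; the function name and the exact fallback order are assumptions for illustration, not vLLM's actual code:

```python
import torch

def resolve_auto_dtype(config_dtype: torch.dtype,
                       supported_dtypes: list[torch.dtype]) -> torch.dtype:
    """Illustrative only: pick the checkpoint dtype if the platform supports
    it, otherwise fall back to a supported alternative."""
    if config_dtype in supported_dtypes:
        return config_dtype
    # Float32 checkpoints may be downcast to a 16-bit dtype to save memory;
    # the preference order here is an assumption.
    if config_dtype == torch.float32:
        for dtype in (torch.bfloat16, torch.float16):
            if dtype in supported_dtypes:
                return dtype
    # Otherwise fall back to the first dtype the platform supports.
    return supported_dtypes[0]

# Example: on a platform that prefers bfloat16, a float32 checkpoint
# now resolves to bfloat16 rather than float16.
print(resolve_auto_dtype(torch.float32, [torch.bfloat16, torch.float16]))
```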