[Misc] compressed-tensors code reuse
#7277
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these:
Force-pushed compressed-tensors code reuse from aaa041e to 8960860
/ready
Review comment on vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py (outdated, resolved)
@kylesayrs you're missing Line 20 in 5923532
Current state looks good so far. The biggest piece of feedback is that we are still rewriting the logic associated with parsing the … It will be tricky to fix this (because the vLLM state_dict is not a 1:1 map with the transformers state_dict), so feel free to reach out if you need any pointers.
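To illustrate why the two state_dicts are not a 1:1 map: vLLM fuses some weights that transformers keeps separate (for example the attention q/k/v projections become a single `qkv_proj`, and the MLP gate/up projections become `gate_up_proj`), so per-layer config parsed against transformers-style names has to be expanded. A minimal sketch of that expansion, assuming a hypothetical fusion table and helper (not the PR's actual code):

```python
# Hypothetical sketch: expand a fused vLLM parameter name into the
# transformers-style names it covers. The fusion table below is an
# assumption for illustration; real vLLM keeps similar per-model maps.
FUSED_PARAMS = {
    "qkv_proj": ["q_proj", "k_proj", "v_proj"],
    "gate_up_proj": ["gate_proj", "up_proj"],
}

def transformers_names(vllm_name: str) -> list:
    """Return the transformers parameter names a vLLM name corresponds to."""
    for fused, parts in FUSED_PARAMS.items():
        if fused in vllm_name:
            # One fused vLLM parameter maps back to several unfused names.
            return [vllm_name.replace(fused, p) for p in parts]
    # Unfused parameters map 1:1.
    return [vllm_name]

print(transformers_names("model.layers.0.self_attn.qkv_proj"))
```

Config-parsing code that walks transformers names then needs a pass like this before it can attach quantization schemes to vLLM modules.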
@robertgshaw2-neuralmagic I think updating the …
These test failures seem unrelated to this PR? A few seem to be CUDA errors, and one is complaining about bad LLM metrics measurements.
Sounds good. @kylesayrs I'm just running this by Simon, but we should be good to go.
Force-pushed from 049dc9c to ce29b08
This reverts commit 373538f.
Signed-off-by: Alvant <[email protected]>
Signed-off-by: LeiWang1999 <[email protected]>
The reused classes are CompressionFormat, QuantizationArgs, QuantizationStrategy, and QuantizationType.