Conversation

@elvircrn
Contributor

@elvircrn elvircrn commented Nov 27, 2024

What does this PR do?

Adds support for efficient single-batch inference for SpQR.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@SunMarc @MekkCyber

@elvircrn elvircrn marked this pull request as draft November 27, 2024 15:40
@elvircrn elvircrn changed the title from "Spqr quantizer" to "Efficient Inference Kernel for SpQR" Nov 27, 2024
@SunMarc SunMarc requested a review from MekkCyber November 28, 2024 14:45
@MekkCyber
Contributor

Hey @elvircrn, thanks for adding this quantization method! A very smooth integration 🔥! Just left a few comments

Contributor

No need to add the checks here, validate_environment in the quantizer will take care of that

Contributor Author

Resolved by removing the checks.

Contributor

Not used, to be removed

Contributor Author

Resolved by removing

Contributor

Let's raise a warning here if you need to override the torch_dtype, like in other quantizers

Contributor Author

Since SpQR really only supports float16, I raise an error if it's None or anything other than float16. Is this the desired behavior?

Contributor

Sorry I can't find this model on the hub 🤔

Contributor Author

Ah, didn't expect a review so soon. Thank you!

I've uploaded the model and updated this part of the test.

Contributor

No worries 😊 take your time and ping me when you finish the integration

Contributor

No need to add the requirement here, it's already verified with the decorator require_spqr

Contributor Author

Resolved by removing.

Contributor

We can also add a small section about how to load a model quantized with SpQR using transformers

Contributor Author

Resolved by updating the README.

Contributor

If the quantization method requires a GPU, we need to check if it's available here

Contributor Author

Resolved by adding an is_cuda check.
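For reference, a minimal sketch of that check in the quantizer's validate_environment hook (the exact error wording is an assumption):

```python
import torch

def validate_environment(self, *args, **kwargs):
    # SpQR ships CUDA kernels, so refuse to load without a GPU.
    if not torch.cuda.is_available():
        raise RuntimeError("GPU is required to run the SpQR quantized model.")
```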

Contributor

Is it possible to add tests for multi_gpu setups?

Contributor Author

Resolved by bringing this back from AQLM. Will the CI be able to pick this up? I don't have the means of testing this.
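A hedged sketch of such a test, modeled on the AQLM multi-GPU test it is borrowed from (the class name and expected device set are illustrative):

```python
import unittest

from transformers import AutoModelForCausalLM
from transformers.testing_utils import require_torch_multi_gpu


class SpQRMultiGpuTest(unittest.TestCase):
    @require_torch_multi_gpu
    def test_quantized_model_multi_gpu(self):
        # device_map="auto" should shard the quantized model across both GPUs.
        quantized_model = AutoModelForCausalLM.from_pretrained(
            "elvircrn/Llama-2-7b-SPQR-3Bit-16x16-red_pajama-hf", device_map="auto"
        )
        self.assertTrue(set(quantized_model.hf_device_map.values()) == {0, 1})
```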

Contributor

Yes it does 😄

Member

+1 on that, you can add the test and we will check to make sure it passes with the right expected output

@elvircrn
Contributor Author

elvircrn commented Dec 2, 2024

@MekkCyber I've removed the Draft tag from the PR.

It seems that I am unable to pass the test_raise_if_non_quantized. Could you give me some pointers on this?

@elvircrn elvircrn force-pushed the spqr-quantizer branch 4 times, most recently from 56c016d to e1a91fa, December 5, 2024 17:16
@MekkCyber
Contributor

Could you rebase @elvircrn to update the branch :)

@elvircrn
Contributor Author

elvircrn commented Dec 6, 2024

@MekkCyber Rebase done.

@MekkCyber
Contributor

Hey @elvircrn, thanks for iterating! It looks great, I just left some minor comments

Comment on lines 49 to 56
Contributor

We need to add checks here that the accelerate and spqr_quant packages are available before doing the import; what we don't need to do is raise errors, because the packages are assumed to be installed when executing the function

Contributor Author

Not sure if I understood this, please double-check my newest change.
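For reference, the pattern being asked for is roughly the following (a sketch; is_spqr_available is the availability helper added alongside this integration):

```python
from transformers.utils import is_accelerate_available, is_spqr_available


def replace_with_spqr_linear(model, quantization_config=None, modules_to_not_convert=None):
    # Guard the imports instead of raising: validate_environment in the
    # quantizer has already confirmed these packages are installed.
    if is_accelerate_available():
        from accelerate import init_empty_weights
    if is_spqr_available():
        from spqr_quant import QuantizedLinear
    ...
```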

Comment on lines 53 to 57
Contributor

We are not assuming here because we already validated the environment 😉

Comment on lines 58 to 61
Contributor

Suggested change:

```diff
     elif torch_dtype != torch.float16:
         raise ValueError(
-            "You cannot any type other than torch.float16 for SpQR. Please either leave it None or set it to"
+            "You cannot use any type other than torch.float16 for SpQR. Please either leave it None or set it to"
             "torch.float16 explicitly."
         )
     return torch_dtype
```
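For context, a minimal sketch of the surrounding update_torch_dtype method, consistent with the corrected message (the info-log wording is an assumption):

```python
import torch
from transformers.utils import logging

logger = logging.get_logger(__name__)


def update_torch_dtype(self, torch_dtype):
    # SpQR kernels only support half precision; default to it when unset.
    if torch_dtype is None:
        torch_dtype = torch.float16
        logger.info("Loading the model in `torch.float16` for SpQR inference.")
    elif torch_dtype != torch.float16:
        raise ValueError(
            "You cannot use any type other than torch.float16 for SpQR. Please either leave it None or set it to"
            " torch.float16 explicitly."
        )
    return torch_dtype
```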

Comment on lines +73 to +105
Contributor

Just FMI, are the beta and shapes parameters used solely for loading, or do they also play a role in quantization? In other words, if a model is quantized using specific beta or shapes values, can it only be loaded with those same parameters?

Contributor Author

SpQR quantized weights are the product of tile-based bi-level quantization, where beta corresponds to the tile dimensions (beta1 and beta2 are the tile width and height, respectively). You always specify beta before quantization starts.

Each SpQR weight also comes with a sparse tensor representing the outlier weights. As this tensor is unstructured, one has to keep track of its size during loading.
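Concretely, a hedged sketch of the resulting quantization config (the values and the tensor key are illustrative):

```python
from transformers import SpQRConfig

quantization_config = SpQRConfig(
    bits=3,
    beta1=16,  # tile width
    beta2=16,  # tile height
    # shapes records the per-tensor outlier sizes that cannot be
    # recomputed at load time.
    shapes={"model.layers.0.self_attn.q_proj.col_vals.shape": 272160},
)
```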

Contributor Author

The following is a visualization of the compression format from the original publication (https://arxiv.org/pdf/2306.03078):

[Image: diagram of the SpQR compression format from the paper]

Contributor

Thanks for the detailed explanation

Collaborator

Let's add it to the doc!

Comment on lines 64 to 90
Contributor

In the shapes attribute, do we need to specify the shape for all the tensors?

Contributor Author

The colval size needs to be specified ahead of time for all weights. The rest can be computed from m, n and bits. Can we keep it the way it is now?

Contributor Author

I can do it both ways easily, not sure what is preferred here.

Contributor

I'm just worried about the quantization config: I'm not sure we can verify that shapes contains all the necessary elements in the post_init. Maybe before trying to access the key in replace_with_spqr_linear we should verify that it's there, or raise an error to inform the user that there is something wrong with the config.json they are using. wdyt @SunMarc

Contributor Author

In my estimation, these configurations are directly tied to the quantized weights. If the shapes don't cover the full set of quantized weights, that would be a fatal error in quantization serialization. Perhaps this is something that should be handled on the SpQR side (https://pypi.org/project/spqr-quant/)?

If we were to do it on huggingface/transformers-side:

a) during replace_with_linear, we raise a fatal error (please let me know which one would be appropriate here) if the quantized weights don't have the corresponding key in the shapes config.
b) we do a full check before conducting the replacement in order to conserve memory/device cycles.

Let me know what best suits transformers here.

Contributor Author

I didn't see the tag to @SunMarc - I thought the wdyt referred to me.

Contributor

Yes it does 😄

Comment on lines +113 to +116
Contributor

If possible it's better to use small models for the CI like Qwen 0.5, Bloom 560M, SmolLM 135M...

Contributor Author

Will have to get back to you on this (might require a significant time investment).

Contributor Author

Is this a blocker for the merge?

Contributor

Not really a blocker for now

@elvircrn
Contributor Author

elvircrn commented Dec 9, 2024

@MekkCyber The latest batch of comments are addressed.

@MekkCyber
Contributor

MekkCyber commented Dec 9, 2024

Thanks for the quick iteration @elvircrn 🔥! Can you re-rebase please 😅

Comment on lines +52 to +55
Contributor

Yes, that's exactly what I meant, thank you for fixing it!


@elvircrn
Contributor Author

elvircrn commented Dec 9, 2024

@MekkCyber rebased.

@MekkCyber MekkCyber requested a review from SunMarc December 9, 2024 13:04
@elvircrn
Contributor Author

elvircrn commented Dec 9, 2024

@MekkCyber Resolved the latest batch of comments.

@elvircrn
Contributor Author

elvircrn commented Dec 9, 2024

@SunMarc @MekkCyber
I've pushed a check for keys() in shapes in replace_with_spqr_linear. Let me know if something in this ballpark works.
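Roughly, the check looks like this (a sketch; the helper name and message wording are illustrative):

```python
def ensure_shape_is_known(quantization_config, tensor_name: str) -> None:
    # Fail loudly if the config.json does not match the quantized checkpoint.
    key = f"{tensor_name}.col_vals.shape"
    if key not in quantization_config.shapes.keys():
        raise ValueError(
            f"`{key}` is missing from the quantization config `shapes`; "
            "the config.json does not match the quantized weights."
        )
```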

@SunMarc
Member

SunMarc commented Jan 15, 2025

Pinging @ArthurZucker to have a quick look!

@SunMarc
Member

SunMarc commented Jan 15, 2025

Could you fix the conflicts @elvircrn?

@elvircrn
Contributor Author

elvircrn commented Jan 15, 2025

@SunMarc resolved quickly, as it was just an md conflict in the overview table.

jiqing-feng and others added 2 commits February 7, 2025 14:53
@elvircrn
Contributor Author

elvircrn commented Feb 7, 2025

@SunMarc @ArthurZucker Conflicts resolved.

Collaborator

@ArthurZucker ArthurZucker left a comment


LGTM!

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

quantized_model = AutoModelForCausalLM.from_pretrained(
    "elvircrn/Llama-2-7b-SPQR-3Bit-16x16-red_pajama-hf",
    torch_dtype=torch.float16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("elvircrn/Llama-2-7b-SPQR-3Bit-16x16-red_pajama-hf")
```
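Single-batch inference with the loaded model is then a plain generate call (prompt and generation length here are illustrative):

```python
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(quantized_model.device)
outputs = quantized_model.generate(**inputs, max_new_tokens=32)  # batch size 1
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```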
Collaborator

we can't quantize a model on the fly with quantization_config=... using spqr as the quantize type?

Contributor Author

We can't; this is not supported as of yet. This PR only adds inference support.


@ArthurZucker
Collaborator

Can you just fix this:

RuntimeError: The following files are not present in the table of contents:
- quantization/spqr
Add them to ../transformers/docs/source/en/_toctree.yml.

@elvircrn
Contributor Author

@ArthurZucker Is the PR in order now?

@elvircrn elvircrn force-pushed the spqr-quantizer branch 2 times, most recently from 0340152 to 14f21c1, February 13, 2025 09:42
@elvircrn
Contributor Author

@ArthurZucker Erroneous rebase issue resolved.

@MekkCyber
Contributor

Hi @elvircrn, can you just resolve the conflicts? Once the CI is green we can merge! Again, sorry for the delay 🙏 and thanks for your patience

@elvircrn
Contributor Author

@MekkCyber Done.

@elvircrn
Contributor Author

@MekkCyber The failing tests, based on the logs, seem to be unrelated to my own source code.

@MekkCyber
Contributor

Yes, they seem unrelated @elvircrn, I tried updating the branch, I hope it works

@elvircrn
Contributor Author

elvircrn commented Feb 13, 2025

@MekkCyber I don't think that did it - unrelated tests are still failing.

@elvircrn
Contributor Author

@MekkCyber Seems that everything is green now. :)

@MekkCyber MekkCyber merged commit 845b0a2 into huggingface:main Feb 13, 2025
25 checks passed
@elvircrn
Contributor Author

Thanks everyone!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
