Add Int4PlainInt32Tensor #2845
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2845
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No Failures, 1 Pending as of commit 78f6bb2 with merge base 568c193.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
"int4_xpu_int_zp is referring to the format used by int4 weight-only quantization on XPU with int zero point, which is a groupwise quantization format." | ||
INT4_XPU_INT_ZP = "int4_xpu_int_zp" |
Please don't include "int4" and "xpu" in the name; can you name this in terms of how the quantized data is packed?
The int4 XPU weight is a plain-format tensor according to this doc: it just packs 2 int4 weight elements into a byte and then stores 4 such bytes as an int32. So I changed it to "plain".
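A minimal sketch of that layout (illustrative only, not the actual XPU kernel code; which nibble goes first is an assumption):

```python
import torch

def pack_int4_as_int32(q: torch.Tensor) -> torch.Tensor:
    # q holds uint4 values in [0, 15] as uint8; last dim must be divisible by 8
    assert q.dtype == torch.uint8 and q.shape[-1] % 8 == 0
    low, high = q[..., 0::2], q[..., 1::2]
    packed_bytes = low | (high << 4)       # 2 int4 elements per byte
    return packed_bytes.view(torch.int32)  # 4 bytes viewed as 1 int32

q = torch.randint(0, 16, (2, 16), dtype=torch.uint8)
print(pack_int4_as_int32(q).shape)  # torch.Size([2, 2])
```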
I see, we have "plain" that stores 2*int4 as int8; can you reuse it, or would you need a new one? https://github.com/pytorch/ao/blob/main/torchao/quantization/quantize_/workflows/int4/int4_tensor.py
@liangan1 can you use PLAIN_INT32 for packing_format, and rename things accordingly (tensor subclass, files, etc.)?
Thanks @jerryzh168. I have added PLAIN_INT32 to be used by the XPU int4 path. Per my understanding, the packing format should be a dispatch policy that selects the right tensor subclass, and a tensor subclass should cover a specific quantization recipe, so I suppose I should keep the current tensor name for int4 XPU.
In this PR, we just want to enable int4 on XPU with the int zero-point domain. The current oneDNN backend cannot support float zero points the way the CUDA/CPU backends do, and that feature is WIP. I plan to reuse this packing format in the future and dispatch the tensor based on the zero-point domain information.
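To illustrate that dispatch idea (a hypothetical sketch; the mapping and names below are illustrative, not torchao's actual internals):

```python
# Hypothetical registry: packing_format is the dispatch key that selects
# the tensor subclass; each subclass covers one quantization recipe.
PACKING_FORMAT_TO_SUBCLASS = {
    "plain": "Int4Tensor",                  # 2*int4 packed into int8
    "plain_int32": "Int4PlainInt32Tensor",  # 2*int4 per byte, stored as int32 (XPU)
}

def select_tensor_subclass(packing_format: str) -> str:
    return PACKING_FORMAT_TO_SUBCLASS[packing_format]
```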
You can reuse the packing format and the tensor for float32 zero_point in the future as well, I think, but today we structure tensor subclasses by dtype + packing_format, so Int4PlainInt32 might be better.
Done, changed it to Int4PlainInt32. Please help review again.
return Int4WeightOnlyConfig(
    group_size=group_size,
    packing_format="plain_int32",
    zero_point_domain=ZeroPointDomain.INT,
Nit: we don't need this anymore, I think; we also want to remove ZeroPointDomain in the future.
Removed. But I have a question: how does the user select the int zero-point domain if this param no longer exists?
We'll know how to quantize based on the tensor type, so the user just needs to choose the packing_format.
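For example, roughly (a sketch based on this thread; the exact import paths may differ):

```python
import torch
from torchao.quantization import quantize_, Int4WeightOnlyConfig

model = torch.nn.Sequential(
    torch.nn.Linear(256, 256, bias=False)
).to(device="xpu", dtype=torch.bfloat16)

# packing_format alone selects the Int4PlainInt32Tensor path; no
# ZeroPointDomain argument is needed.
quantize_(model, Int4WeightOnlyConfig(group_size=128, packing_format="plain_int32"))
```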
Please rebase, and also fix the CI error; I think the test needs to be skipped when there is no XPU. Maybe update the summary to make sure the naming is correct as well.
Done. @jerryzh168, please help review again.
@unittest.skipIf(not torch_version_at_least("2.8.0"), "Need pytorch 2.8+")
@unittest.skipIf(not torch.xpu.is_available(), "XPU not available")
class Int4PlainInt32Tensor(TestCase):
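For context, a test body in this class might look roughly like the following (an illustrative sketch, not the PR's actual test code; the tolerance is arbitrary, and the snippet assumes the imports used elsewhere in the file, e.g. quantize_ and Int4WeightOnlyConfig):

```python
def test_linear_matches_bf16_baseline(self):
    # Quantize one linear layer and check its output stays close to the
    # bfloat16 baseline.
    linear = torch.nn.Linear(128, 256, bias=False, dtype=torch.bfloat16, device="xpu")
    x = torch.randn(4, 128, dtype=torch.bfloat16, device="xpu")
    ref = linear(x)
    quantize_(linear, Int4WeightOnlyConfig(group_size=128, packing_format="plain_int32"))
    sim = torch.nn.functional.cosine_similarity(
        ref.flatten().float(), linear(x).flatten().float(), dim=0
    ).item()
    self.assertGreater(sim, 0.95)
```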
We probably need more tests, like serialization etc., but we can add these later.
OK, we are working on enabling the XPU CI in other PRs. Please refer to #2917.
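A serialization round-trip check could later look roughly like this (a sketch; the weights_only flag is an assumption about how the subclass loads):

```python
import io
import torch

def roundtrip(weight: torch.Tensor) -> torch.Tensor:
    # Save and reload a quantized weight to verify the tensor subclass
    # survives serialization.
    buf = io.BytesIO()
    torch.save(weight, buf)
    buf.seek(0)
    return torch.load(buf, weights_only=False)
```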
This PR is used to enable the Int4PlainInt32Tensor. The packing format name is "plain_int32".
Testcase:
python test/quantization/quantize_/workflows/int4/test_int4_plain_int32_tensor.py