
Commit 2b81f76

<Replace this line with a title. Use 1 line only, 67 chars or less>
Summary: Test Plan: Reviewers: Subscribers: Tasks: Tags:
2 parents 9acb991 + 5239ce7 commit 2b81f76


60 files changed: +1468 / -2992 lines

.github/workflows/regression_test_rocm.yml

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ jobs:
       contents: read
     uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
     with:
-      timeout: 120
+      timeout: 150
       no-sudo: ${{ matrix.gpu-arch-type == 'rocm' }}
       runner: ${{ matrix.runs-on }}
       gpu-arch-type: ${{ matrix.gpu-arch-type }}

README.md

Lines changed: 147 additions & 128 deletions
Large diffs are not rendered by default.

benchmarks/float8/training/torchtitan_benchmark.sh

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ fi
 # validate recipe name
 if [ -n "${FLOAT8_RECIPE_WITH_BEST_SETTINGS}" ]; then
     if [ "${FLOAT8_RECIPE_WITH_BEST_SETTINGS}" == "tensorwise" ]; then
-        FLOAT8_ARGS="--model.converters="float8" --float8.enable_fsdp_float8_all_gather --float8.precompute_float8_dynamic_scale_for_fsdp --float8.force_recompute_fp8_weight_in_bwd"
+        FLOAT8_ARGS="--model.converters="float8" --float8.enable_fsdp_float8_all_gather --float8.precompute_float8_dynamic_scale_for_fsdp"
     else
         FLOAT8_ARGS="--model.converters="float8" --float8.recipe_name=${FLOAT8_RECIPE_WITH_BEST_SETTINGS}"
     fi
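
Net effect of this hunk: the tensorwise recipe no longer passes --float8.force_recompute_fp8_weight_in_bwd to torchtitan. The recipe is still selected the same way as before, e.g. (assuming the script's other required environment variables are set) FLOAT8_RECIPE_WITH_BEST_SETTINGS="tensorwise" ./benchmarks/float8/training/torchtitan_benchmark.sh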

docs/requirements.txt

Lines changed: 2 additions & 0 deletions
@@ -4,4 +4,6 @@ sphinx_design
 sphinx_copybutton
 sphinx-tabs
 matplotlib
+myst-parser
+sphinxcontrib-mermaid==1.0.0
 -e git+https://github.com/pytorch/pytorch_sphinx_theme.git#egg=pytorch_sphinx_theme

docs/source/api_ref_sparsity.rst

Lines changed: 0 additions & 1 deletion
@@ -12,7 +12,6 @@ torchao.sparsity
 
     sparsify_
     semi_sparse_weight
-    int8_dynamic_activation_int8_semi_sparse_weight
     apply_fake_sparsity
     WandaSparsifier
     PerChannelNormObserver

docs/source/conf.py

Lines changed: 6 additions & 1 deletion
@@ -50,6 +50,8 @@
     "sphinx_design",
     "sphinx_gallery.gen_gallery",
     "sphinx_copybutton",
+    "myst_parser",
+    "sphinxcontrib.mermaid",
 ]
 
 sphinx_gallery_conf = {
@@ -96,7 +98,10 @@
 # The suffix(es) of source filenames.
 # You can specify multiple suffix as a list of string:
 #
-source_suffix = [".rst"]
+source_suffix = {
+    ".rst": "restructuredtext",
+    ".md": "markdown",
+}
 
 # The master toctree document.
 master_doc = "index"

docs/source/index.rst

Lines changed: 1 addition & 0 deletions
@@ -42,3 +42,4 @@ for an overall introduction to the library and recent highlight and updates.
     subclass_advanced
     static_quantization
     pretraining
+    torchao_vllm_integration

docs/source/quantization.rst

Lines changed: 5 additions & 5 deletions
@@ -12,7 +12,7 @@ First we want to lay out the torchao stack::
     Basic dtypes: uint1-uint7, int1-int8, float3-float8
 
 
-Any quantization algorithm will be using some components from the above stack, for example int4_weight_only quantization uses:
+Any quantization algorithm will be using some components from the above stack, for example int4 weight-only quantization uses:
 (1) weight only quantization flow
 (2) `tinygemm bf16 activation + int4 weight kernel <https://github.com/pytorch/pytorch/blob/136e28f616140fdc9fb78bb0390aeba16791f1e3/aten/src/ATen/native/native_functions.yaml#L4148>`__ and `quant primitive ops <https://github.com/pytorch/ao/blob/main/torchao/quantization/quant_primitives.py>`__
 (3) `AffineQuantizedTensor <https://github.com/pytorch/ao/blob/main/torchao/dtypes/affine_quantized_tensor.py>`__ tensor subclass with `TensorCoreTiledLayout <https://github.com/pytorch/ao/blob/e41ca4ee41f5f1fe16c59e00cffb4dd33d25e56d/torchao/dtypes/affine_quantized_tensor.py#L573>`__
@@ -201,7 +201,7 @@ Case Study: How int4 weight only quantization works in torchao?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 To connect everything together, here is a more detailed walk through for how int4 weight only quantization is implemented in torchao.
 
-Quantization Flow: quantize_(model, int4_weight_only())
+Quantization Flow: quantize_(model, Int4WeightOnlyConfig())
 * What happens: linear.weight = torch.nn.Parameter(to_affine_quantized_intx(linear.weight), requires_grad=False)
 * quantization primitive ops: choose_qparams and quantize_affine are called to quantize the Tensor
 * quantized Tensor will be `AffineQuantizedTensor`, a quantized tensor with derived dtype (e.g. int4 with scale and zero_point)
@@ -212,10 +212,10 @@ During Model Execution: model(input)
 
 During Quantization
 ###################
-First we start with the API call: ``quantize_(model, int4_weight_only())`` what this does is it converts the weights of nn.Linear modules in the model to int4 quantized tensor (``AffineQuantizedTensor`` that is int4 dtype, asymmetric, per group quantized), using the layout for tinygemm kernel: ``tensor_core_tiled`` layout.
+First we start with the API call: ``quantize_(model, Int4WeightOnlyConfig())`` what this does is it converts the weights of nn.Linear modules in the model to int4 quantized tensor (``AffineQuantizedTensor`` that is int4 dtype, asymmetric, per group quantized), using the layout for tinygemm kernel: ``tensor_core_tiled`` layout.
 
-* `quantize_ <https://github.com/pytorch/ao/blob/4865ee61340cc63a1469f437388067b853c9289e/torchao/quantization/quant_api.py#L403>`__: the model level API that quantizes the weight of linear by applying the conversion function from user (second argument)
-* `int4_weight_only <https://github.com/pytorch/ao/blob/242f181fe59e233b458740b06464ad42da8df6af/torchao/quantization/quant_api.py#L522>`__: the function that returns a function that converts weight of linear to int4 weight only quantized weight
+* `quantize_ <https://docs.pytorch.org/ao/main/generated/torchao.quantization.quantize_.html#torchao.quantization.quantize_>`__: the model level API that quantizes the weight of linear by applying the conversion function from user (second argument)
+* `Int4WeightOnlyConfig <https://docs.pytorch.org/ao/main/generated/torchao.quantization.Int4WeightOnlyConfig.html#torchao.quantization.Int4WeightOnlyConfig>`__: the function that returns a function that converts weight of linear to int4 weight only quantized weight
 * Calls quantization primitives ops like choose_qparams_affine and quantize_affine to quantize the Tensor
 * `TensorCoreTiledLayout <https://github.com/pytorch/ao/blob/242f181fe59e233b458740b06464ad42da8df6af/torchao/dtypes/affine_quantized_tensor.py#L573>`__: the tensor core tiled layout type, storing parameters for the packing format
 * `TensorCoreTiledAQTTensorImpl <https://github.com/pytorch/ao/blob/242f181fe59e233b458740b06464ad42da8df6af/torchao/dtypes/affine_quantized_tensor.py#L1376>`__: the tensor core tiled TensorImpl, stores the packed weight for efficient int4 weight only kernel (tinygemm kernel)
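
To make the renamed config API concrete, here is a minimal sketch of the flow the case study describes; the toy model, shapes, and group size are illustrative assumptions, not part of this commit:

    import torch
    from torchao.quantization import quantize_, Int4WeightOnlyConfig

    # Toy model; the tinygemm int4 kernel expects bf16 activations on CUDA.
    model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).to(torch.bfloat16).cuda()

    # Replaces each nn.Linear weight with an int4 AffineQuantizedTensor
    # (asymmetric, per-group quantized, tensor_core_tiled layout).
    quantize_(model, Int4WeightOnlyConfig(group_size=32))

    print(model[0].weight)  # shows the quantized tensor subclass repr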

docs/source/quick_start.rst

Lines changed: 2 additions & 2 deletions
@@ -56,8 +56,8 @@ for efficient mixed dtype matrix multiplication:
 .. code:: py
 
     # torch 2.4+ only
-    from torchao.quantization import int4_weight_only, quantize_
-    quantize_(model, int4_weight_only(group_size=32))
+    from torchao.quantization import Int4WeightOnlyConfig, quantize_
+    quantize_(model, Int4WeightOnlyConfig(group_size=32))
 
 The quantized model is now ready to use! Note that the quantization
 logic is inserted through tensor subclasses, so there is no change
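
The updated snippet drops into a full runnable example along these lines; the toy model, input shape, and CUDA/bf16 setup are assumptions for illustration (the tinygemm int4 kernel needs a CUDA device and bf16 inputs):

    import torch
    from torchao.quantization import Int4WeightOnlyConfig, quantize_

    model = torch.nn.Sequential(torch.nn.Linear(1024, 1024)).to(torch.bfloat16).cuda()
    quantize_(model, Int4WeightOnlyConfig(group_size=32))

    x = torch.randn(4, 1024, dtype=torch.bfloat16, device="cuda")
    with torch.no_grad():
        y = model(x)  # dispatches to the bf16-activation x int4-weight matmul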

docs/source/serialization.rst

Lines changed: 3 additions & 3 deletions
@@ -14,7 +14,7 @@ Here is the serialization and deserialization flow::
     from torchao.utils import get_model_size_in_bytes
     from torchao.quantization.quant_api import (
         quantize_,
-        int4_weight_only,
+        Int4WeightOnlyConfig,
     )
 
     class ToyLinearModel(torch.nn.Module):
@@ -36,7 +36,7 @@ Here is the serialization and deserialization flow::
     print(f"original model size: {get_model_size_in_bytes(m) / 1024 / 1024} MB")
 
     example_inputs = m.example_inputs(dtype=dtype, device="cuda")
-    quantize_(m, int4_weight_only())
+    quantize_(m, Int4WeightOnlyConfig())
     print(f"quantized model size: {get_model_size_in_bytes(m) / 1024 / 1024} MB")
 
     ref = m(*example_inputs)
@@ -70,7 +70,7 @@ quantized model ``state_dict``::
     {"linear1.weight": quantized_weight1, "linear2.weight": quantized_weight2, ...}
 
 
-The size of the quantized model is typically going to be smaller to the original floating point model, but it also depends on the specific techinque and implementation you are using. You can print the model size with ``torchao.utils.get_model_size_in_bytes`` utility function, specifically for the above example using int4_weight_only quantization, we can see the size reduction is around 4x::
+The size of the quantized model is typically going to be smaller to the original floating point model, but it also depends on the specific techinque and implementation you are using. You can print the model size with ``torchao.utils.get_model_size_in_bytes`` utility function, specifically for the above example using Int4WeightOnlyConfig quantization, we can see the size reduction is around 4x::
 
     original model size: 4.0 MB
     quantized model size: 1.0625 MB
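
For reference, the end-to-end save/load round trip this doc describes looks roughly like the sketch below. The two-layer toy model and file name are assumptions; the key point is that quantized weights serialize as ordinary state_dict entries backed by tensor subclasses, so loading uses weights_only=False (subclasses are not plain tensors) and assign=True (to keep the loaded quantized tensors instead of copying into float parameters):

    import torch
    from torchao.quantization import quantize_, Int4WeightOnlyConfig
    from torchao.utils import get_model_size_in_bytes

    def make_model():
        return torch.nn.Sequential(
            torch.nn.Linear(1024, 1024), torch.nn.Linear(1024, 1024)
        ).to(torch.bfloat16).cuda()

    m = make_model()
    quantize_(m, Int4WeightOnlyConfig())
    print(f"quantized model size: {get_model_size_in_bytes(m) / 1024 / 1024} MB")

    # Save: quantized weights go through the regular state_dict.
    torch.save(m.state_dict(), "quantized_model.pt")

    # Load: rebuild the model structure, then assign the quantized
    # tensor subclasses in place of the float weights.
    m_loaded = make_model()
    state_dict = torch.load("quantized_model.pt", weights_only=False)
    m_loaded.load_state_dict(state_dict, assign=True)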
