make quantize_.set_inductor_config None by default for future deprecation
Summary:
We want to migrate this functionality to individual workflows; see #1715 for the migration plan.
This PR is step 1: it lets us distinguish whether the user explicitly
specified this argument. After this PR, we can control the behavior
per workflow, such as setting this functionality to False for future
training workflows.
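
For context, a minimal sketch of the tri-state default pattern this PR introduces (a simplified, hypothetical signature, not the exact torchao code):

```python
import warnings
from typing import Optional

def quantize_(model, config, set_inductor_config: Optional[bool] = None):
    """Sketch: None means the caller did not pass the argument at all."""
    if set_inductor_config is not None:
        # Any explicit value means the user opted in or out; warn about removal.
        warnings.warn(
            "`set_inductor_config` will be removed in a future release; "
            "see https://github.com/pytorch/ao/issues/1715"
        )
    else:
        # For now, default to True so existing behavior is unchanged.
        set_inductor_config = True
    ...
```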
Test Plan: CI
Reviewers:
Subscribers:
Tasks:
Tags:
torchao/quantization/README.md (+3 additions, 0 deletions)
@@ -386,6 +386,9 @@ The benchmarks below were run on a single NVIDIA-A6000 GPU.
 You can try out these APIs with the `quantize_` API as above alongside the constructor `codebook_weight_only`; an example can be found in `torchao/_models/llama/generate.py`.

 ### Automatic Inductor Configuration
+
+:warning: <em>This functionality is being migrated from the top level `quantize_` API to individual workflows, see https://github.com/pytorch/ao/issues/1715 for more details.</em>
+
 The `quantize_` and `autoquant` APIs now automatically apply our recommended inductor configuration settings. You can replicate these settings for your own experiments by calling `torchao.quantization.utils.recommended_inductor_config_setter`. If you wish to disable them instead, pass the keyword argument `set_inductor_config=False` to `quantize_` or `autoquant`. You can also override the settings after they are assigned, as long as you do so before passing any inputs to the torch.compile'd model. This means that previous flows which manually set a variety of inductor configurations are now outdated, though continuing to set those same configurations manually is unlikely to cause any issues.


 ## (To be moved to prototype) A16W4 WeightOnly Quantization with GPTQ
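
For illustration, a usage sketch of the two paths described above (placeholder model; `int4_weight_only` as elsewhere in this README, which assumes a CUDA setup):

```python
import torch
from torchao.quantization import quantize_, int4_weight_only
from torchao.quantization.utils import recommended_inductor_config_setter

m = ...  # your eval-mode model, e.g. a transformer on CUDA

# Option 1 (default): quantize_ applies the recommended inductor settings.
quantize_(m, int4_weight_only(group_size=32))

# Option 2 (alternative): skip the automatic settings, then set or override
# them yourself before the first input reaches the torch.compile'd model.
quantize_(m, int4_weight_only(group_size=32), set_inductor_config=False)
recommended_inductor_config_setter()
m = torch.compile(m)
```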
"""Convert the weight of linear modules in the model with `config`, model is modified inplace
@@ -498,7 +498,7 @@ def quantize_(
498
498
     config (Union[AOBaseConfig, Callable[[torch.nn.Module], torch.nn.Module]]): either (1) a workflow configuration object or (2) a function that applies tensor subclass conversion to the weight of a module and return the module (e.g. convert the weight tensor of linear to affine quantized tensor). Note: (2) will be deleted in a future release.
     filter_fn (Optional[Callable[[torch.nn.Module, str], bool]]): function that takes a nn.Module instance and fully qualified name of the module, returns True if we want to run `config` on
         the weight of the module
-    set_inductor_config (bool, optional): Whether to automatically use recommended inductor config settings (defaults to True)
+    set_inductor_config (bool, optional): Whether to automatically use recommended inductor config settings (defaults to None)
     device (device, optional): Device to move module to before applying `filter_fn`. This can be set to `"cuda"` to speed up quantization. The final model will be on the specified `device`.
         Defaults to None (do not change device).
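
As a usage note, a sketch of how these arguments combine (hypothetical toy model; under this PR, an explicit `set_inductor_config` would additionally emit the deprecation warning):

```python
import torch
from torchao.quantization import quantize_, int4_weight_only

m = torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU())

# Quantize only the Linear modules; move the model to CUDA first to speed
# up quantization. filter_fn receives (module, fully_qualified_name).
quantize_(
    m,
    int4_weight_only(group_size=32),
    filter_fn=lambda mod, fqn: isinstance(mod, torch.nn.Linear),
    device="cuda",
)
```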
@@ -522,6 +522,15 @@ def quantize_(
     quantize_(m, int4_weight_only(group_size=32))

     """
+    if set_inductor_config is not None:
+        warnings.warn(
+            """The `set_inductor_config` argument to `quantize_` will be removed in a future release. This functionality is being migrated to individual workflows. Please see https://github.com/pytorch/ao/issues/1715 for more details."""
+        )
+    else:  # None
+        # for now, default to True to not change existing behavior when the
+        # argument is not specified
+        set_inductor_config = True
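
With this change, passing any explicit value (True or False) triggers the warning; a small sketch of how that surfaces to callers, assuming a CUDA-capable setup as in the README examples:

```python
import warnings
import torch
from torchao.quantization import quantize_, int4_weight_only

m = torch.nn.Sequential(torch.nn.Linear(1024, 1024))

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Explicitly passing the argument (even the old default True) now warns.
    quantize_(m, int4_weight_only(group_size=32), set_inductor_config=True)

# Expect one warning mentioning the argument and pointing at issue #1715.
assert any("set_inductor_config" in str(w.message) for w in caught)
```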