
Commit 8b00eec

Update on "[executorch] Add TorchAO wrapper config to allow filter_fn for quantize_"
Fixing tests for the stack that got reverted: #13264

Changes:
- Support a filter function in the quantize_ function when using torchao quantization, and update the unit tests accordingly.
- Use ComposableQuantizer when there are multiple quantizers and they are torchao quantizers; legacy quantizers are still used directly with prepare_pt2e.
- The source transform modifies the model in place, so deep-copy the model first to avoid modifying the user-provided model.

Differential Revision: [D80206543](https://our.internmc.facebook.com/intern/diff/D80206543/)

[ghstack-poisoned]
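The first and third changes (filter_fn support plus the defensive deep copy) can be illustrated with a minimal sketch against torchao's quantize_ API. The ToyModel class, the Int8WeightOnlyConfig choice, and the encoder/decoder module names are assumptions for illustration only, not code from this PR.

```python
# Hypothetical sketch of the filter_fn behavior this change exposes; the model
# and config below are illustrative, not taken from the ExecuTorch recipe code.
import copy

import torch
from torchao.quantization import Int8WeightOnlyConfig, quantize_


class ToyModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.encoder = torch.nn.Linear(16, 16)
        self.decoder = torch.nn.Linear(16, 16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


model = ToyModel()

# Deep-copy first so the user-provided model is not mutated by the in-place
# source transform (one of the fixes described in this commit).
transformed = copy.deepcopy(model)

# filter_fn receives (module, fully_qualified_name) and returns True for the
# modules that should be quantized; here only the encoder is selected.
quantize_(
    transformed,
    Int8WeightOnlyConfig(),
    filter_fn=lambda module, fqn: isinstance(module, torch.nn.Linear)
    and fqn.startswith("encoder"),
)
```

For the second change, the commit wraps multiple torchao quantizers in a single ComposableQuantizer before handing them to prepare_pt2e, while legacy quantizers continue to be passed to prepare_pt2e directly.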
1 parent 9635b92 commit 8b00eec

File tree

1 file changed: +0 additions, -3 deletions

backends/xnnpack/test/recipes/test_xnnpack_recipes.py

Lines changed: 0 additions & 3 deletions

@@ -154,9 +154,6 @@ def forward(self, x) -> torch.Tensor:
         self._compare_eager_quantized_model_outputs(
             session, example_inputs, 1e-3
         )
-        self._compare_eager_unquantized_model_outputs(
-            session, model, example_inputs, 14
-        )

     def _get_recipe_for_quant_type(self, quant_type: QuantType) -> XNNPackRecipeType:
         # Map QuantType to corresponding recipe name.
