From 9deef7a392ec937ea0cca5f6074138e7db7d1416 Mon Sep 17 00:00:00 2001
From: "Lin, Fanli"
Date: Mon, 18 Nov 2024 03:06:36 -0800
Subject: [PATCH] add XPU

---
 docs/source/en/quantization/quanto.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/quantization/quanto.md b/docs/source/en/quantization/quanto.md
index 18135b2ec2fc..f5bba54a6e6b 100644
--- a/docs/source/en/quantization/quanto.md
+++ b/docs/source/en/quantization/quanto.md
@@ -28,7 +28,7 @@ Try Quanto + transformers with this [notebook](https://colab.research.google.com
 - weights quantization (`float8`,`int8`,`int4`,`int2`)
 - activation quantization (`float8`,`int8`)
 - modality agnostic (e.g CV,LLM)
-- device agnostic (e.g CUDA,MPS,CPU)
+- device agnostic (e.g CUDA,XPU,MPS,CPU)
 - compatibility with `torch.compile`
 - easy to add custom kernel for specific device
 - supports quantization aware training