
Commit 000946f

violetch24 and chensuyue authored
add SDXL model example to INC 3.x (#1887)
* add SDXL model example to INC 3.x
  Signed-off-by: Cheng, Zixuan <[email protected]>
* add evaluation script
  Signed-off-by: violetch24 <[email protected]>
* add test script
  Signed-off-by: violetch24 <[email protected]>
* minor fix
  Signed-off-by: violetch24 <[email protected]>
* Update run_quant.sh
* add iter limit
  Signed-off-by: violetch24 <[email protected]>
* modify test script
  Signed-off-by: violetch24 <[email protected]>
* update json
  Signed-off-by: chensuyue <[email protected]>
* add requirements
  Signed-off-by: violetch24 <[email protected]>
* Update run_benchmark.sh
* Update sdxl_smooth_quant.py
* minor fix
  Signed-off-by: violetch24 <[email protected]>

---------

Signed-off-by: Cheng, Zixuan <[email protected]>
Signed-off-by: violetch24 <[email protected]>
Signed-off-by: chensuyue <[email protected]>
Co-authored-by: violetch24 <[email protected]>
Co-authored-by: chensuyue <[email protected]>
1 parent aa42e5e commit 000946f

File tree

10 files changed: +1184 −0 lines changed


examples/.config/model_params_pytorch_3x.json

Lines changed: 7 additions & 0 deletions
@@ -147,6 +147,13 @@
         "main_script": "run_clm_no_trainer.py",
         "batch_size": 1
     },
+    "sdxl_ipex_sq":{
+        "model_src_dir": "diffusion_model/diffusers/stable_diffusion/smooth_quant",
+        "dataset_location": "",
+        "input_model": "",
+        "main_script": "main.py",
+        "batch_size": 1
+    },
     "resnet18_mixed_precision": {
         "model_src_dir": "cv/mixed_precision",
         "dataset_location": "/tf_dataset/pytorch/ImageNet/raw",
Lines changed: 83 additions & 0 deletions
@@ -0,0 +1,83 @@
Step-by-Step
============

This document provides step-by-step instructions for running the [Stable Diffusion XL model](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) with SmoothQuant to accelerate inference while maintaining the quality of the generated images.

# Prerequisite

## Environment

Python 3.9 or a higher version is recommended.

```shell
pip install -r requirements.txt
```
**Note**: For compatibility, both IPEX and torch require the nightly (2.4) version. Please refer to [installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=main&os=linux%2fwsl2&package=source).
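A quick way to confirm that matching builds are installed (assumes both packages import cleanly):

```python
import torch
import intel_extension_for_pytorch as ipex

# both should report matching 2.4 development builds
print("torch:", torch.__version__)
print("ipex :", ipex.__version__)
```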
# Run

To quantize the model:
```bash
python sdxl_smooth_quant.py --model_name_or_path stabilityai/stable-diffusion-xl-base-1.0 --quantize --alpha 0.44 --output_dir "./saved_results"
```
or
```bash
sh run_quant.sh --alpha=0.44
```
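Under the hood, the script applies SmoothQuant through Intel Neural Compressor's 3.x PyTorch API (`SmoothQuantConfig`, `prepare`, `convert`). The sketch below shows that flow on a toy module; the toy model, shapes, and calibration loop are illustrative assumptions, not the script's actual code:

```python
import torch
from neural_compressor.torch.quantization import SmoothQuantConfig, prepare, convert

# toy stand-in for the SDXL UNet that the real script quantizes
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
example_inputs = torch.randn(8, 64)

# alpha controls how much quantization difficulty is migrated from
# activations to weights; 0.44 matches run_quant.sh above
quant_config = SmoothQuantConfig(alpha=0.44)
prepared = prepare(model, quant_config, example_inputs=example_inputs)

# calibration: run representative inputs so activation ranges are recorded
with torch.no_grad():
    for _ in range(10):
        prepared(torch.randn(8, 64))

quantized = convert(prepared)
```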
To load a quantized model:
```bash
python sdxl_smooth_quant.py --model_name_or_path stabilityai/stable-diffusion-xl-base-1.0 --quantize --load --int8
```
or
```bash
sh run_quant.sh --int8=true
```
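The quantization step writes its artifacts under `--output_dir` (the evaluation section below notes `qconfig.json` and `quantized_model.pt`). Assuming `quantized_model.pt` is a TorchScript archive, the quantized UNet can also be restored directly; a minimal sketch with the default `./saved_results` path:

```python
import torch

# restore the serialized UNet written by sdxl_smooth_quant.py --quantize
quantized_unet = torch.jit.load("./saved_results/quantized_model.pt")
quantized_unet.eval()
```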
# Results

## Image Generated

With the caption `"A brown and white dog runs on some brown grass near a Frisbee that is just sailing above the ground."`, the images generated by the FP32 model (left) and the INT8 model (right) are shown below.

<p float="left">
    <img src="./images/fp32.jpg" width = "300" height = "300" alt="fp32" align=center />
    <img src="./images/int8.jpg" width = "300" height = "300" alt="int8" align=center />
</p>
## CLIP evaluation

We also evaluated CLIP scores for the FP32 and INT8 models on 5,000 samples from the COCO2014 validation dataset. The results are listed below.

| Precision            | FP32  | INT8  |
|----------------------|-------|-------|
| CLIP on COCO2014 val | 32.05 | 31.77 |
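For a quick, self-contained look at what the metric measures, `torchmetrics` provides an equivalent CLIP score. This is only an illustrative sketch with a random stand-in image; the numbers above come from the mlperf tooling described next:

```python
import torch
from torchmetrics.multimodal.clip_score import CLIPScore

metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")

# stand-in for a generated image; replace with a real (3, H, W) uint8 tensor
image = torch.randint(0, 255, (3, 1024, 1024), dtype=torch.uint8)
caption = ("A brown and white dog runs on some brown grass "
           "near a Frisbee that is just sailing above the ground.")

score = metric(image, caption)  # higher = better image-text alignment
print(float(score))
```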
We use the mlperf_sd_inference [repo](https://github.com/ahmadki/mlperf_sd_inference) to evaluate CLIP scores. To support evaluation of the quantized model, we made some modifications to its script (`main.py`). Please use it as follows:
```bash
git clone https://github.com/ahmadki/mlperf_sd_inference.git
cd mlperf_sd_inference
mv ../main.py ./
```
After setting up the environment as instructed in the repo, you can execute the modified `main.py` script to generate images. `--quantized-unet` is the quantized model's saving path and should contain `qconfig.json` and `quantized_model.pt`; change `--iters` to 5000 for the full 5k dataset:
```bash
python main.py \
    --model-id stabilityai/stable-diffusion-xl-base-1.0 \
    --quantized-unet ./saved_results \
    --precision fp32 \
    --guidance 8.0 \
    --steps 20 \
    --iters 200 \
    --latent-path latents.pt \
    --base-output-dir ./output
```
Then you can compute the CLIP score on the images generated by the quantized model, pointing `--image-folder` at the folder holding them:
```bash
# flatten the generated images into ./output
mv ./output/stabilityai--stable-diffusion-xl-base-1.0__euler__20__8.0__fp32/* ./output/
rm -rf ./output/stabilityai--stable-diffusion-xl-base-1.0__euler__20__8.0__fp32/

python clip/clip_score.py \
    --tsv-file captions_5k.tsv \
    --image-folder ./output \
    --device "cpu"
```
Alternatively, you can run all of the steps above with the bash script:
```bash
sh run_benchmark.sh --mode=accuracy --int8=true
```
Three binary files added (134 KB, 136 KB, and 257 KB): not shown.
