
[Bug]: [vllm-openvino]: ValueError: use_cache was set to True but the loaded model only supports use_cache=False.  #6473

@HPUedCSLearner

Description

Your current environment

The output of `python collect_env.py`

(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models$ python collect_env.py 
Collecting environment information...
WARNING 07-16 19:50:52 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
PyTorch version: 2.3.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35

Python version: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      52 bits physical, 57 bits virtual
Byte Order:                         Little Endian
CPU(s):                             128
On-line CPU(s) list:                0-127
Vendor ID:                          GenuineIntel
Model name:                         INTEL(R) XEON(R) GOLD 6530
CPU family:                         6
Model:                              207
Thread(s) per core:                 2
Core(s) per socket:                 32
Socket(s):                          2
Stepping:                           2
Frequency boost:                    enabled
CPU max MHz:                        2101.0000
CPU min MHz:                        800.0000
BogoMIPS:                           4200.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization:                     VT-x
L1d cache:                          3 MiB (64 instances)
L1i cache:                          2 MiB (64 instances)
L2 cache:                           128 MiB (64 instances)
L3 cache:                           320 MiB (2 instances)
NUMA node(s):                       4
NUMA node0 CPU(s):                  0-15,64-79
NUMA node1 CPU(s):                  16-31,80-95
NUMA node2 CPU(s):                  32-47,96-111
NUMA node3 CPU(s):                  48-63,112-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.1
[pip3] torch==2.3.1+cpu
[pip3] transformers==4.42.4
[pip3] triton==3.0.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] torch                     2.3.1+cpu                pypi_0    pypi
[conda] transformers              4.42.4                   pypi_0    pypi
[conda] triton                    3.0.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

🐛 Describe the bug

1. Bug description

Letting vllm-openvino convert the model to OpenVINO IR at runtime works as expected.
However, manually converting the model to OpenVINO IR beforehand and then loading it results in a `use_cache` error.

2. Manually converting the model to OpenVINO IR and then running it produces an error

Convert command

optimum-cli export openvino -m Qwen1.5-4B-Chat --task text-generation --weight-format int4   Qwen1.5-4B-Chat-optimum-int4

OpenVINO IR conversion logs

(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models$ 
optimum-cli export openvino \
    -m Qwen1.5-4B-Chat \
    --task text-generation \
    --weight-format int4   \
    Qwen1.5-4B-Chat-optimum-int4
Framework not specified. Using pt to export the model.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00,  1.33it/s]
The task `text-generation` was manually specified, and past key values will not be reused in the decoding. if needed, please pass `--task text-generation-with-past` to export using the past key values.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Using framework PyTorch: 2.3.1+cpu
Overriding 1 configuration item(s)
        - use_cache -> False
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:1116: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if sequence_length != 1:
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:128: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if seq_len > self.max_seq_len_cached:
['input_ids', 'attention_mask', 'position_ids']
Mixed-Precision assignment ━━━━━━━━━━━━━━╸━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   9% 2
Mixed-Precision assignment ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 280/280 • 0:00:46 • 0:00:00
INFO:nncf:Statistics of the bitwidth distribution:
┍━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┑
│   Num bits (N) │ % all parameters (layers)   │ % ratio-defining parameters (layers)   │
┝━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┥
│              8 │ 36% (77 / 282)              │ 20% (75 / 280)                         │
├────────────────┼─────────────────────────────┼────────────────────────────────────────┤
│              4 │ 64% (205 / 282)             │ 80% (205 / 280)                        │
┕━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┙
Applying Weight Compression ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 282/282 • 0:01:36 • 0:00:00
Replacing `(?!\S)` pattern to `(?:$|[^\S])` in RegexSplit operation
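
Note: the conversion log above shows `use_cache` being overridden to `False` because the export task was specified as plain `text-generation`, and the exporter itself hints that `--task text-generation-with-past` should be passed to keep the KV cache. A re-export along those lines would presumably look like the following sketch (the output directory name is only illustrative):

# re-export, keeping the KV cache ("with-past" task); output dir name is an example
optimum-cli export openvino \
    -m Qwen1.5-4B-Chat \
    --task text-generation-with-past \
    --weight-format int4 \
    Qwen1.5-4B-Chat-optimum-int4-with-past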

Run command

Using the manually converted model: /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4

VLLM_OPENVINO_KVCACHE_SPACE=30 \
LLM_OPENVINO_CPU_KV_CACHE_PRECISION=u8 \
VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
        python3 -m vllm.entrypoints.openai.api_server \
                --model /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4 \
                --port 10003

Resulting `use_cache` error

(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models$
VLLM_OPENVINO_KVCACHE_SPACE=30 \
LLM_OPENVINO_CPU_KV_CACHE_PRECISION=u8 \
VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
        python3 -m vllm.entrypoints.openai.api_server \
                --model /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4 \
                --port 10003
WARNING 07-16 19:48:08 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
INFO 07-16 19:48:11 api_server.py:212] vLLM API server version 0.5.2
INFO 07-16 19:48:11 api_server.py:213] args: Namespace(host=None, port=10003, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, model_loader_extra_config=None, preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 07-16 19:48:11 config.py:1374] Downcasting torch.float32 to torch.float16.
INFO 07-16 19:48:11 llm_engine.py:174] Initializing an LLM engine (v0.5.2) with config: model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4', speculative_config=None, tokenizer='/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=/home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4, use_v2_block_manager=False, enable_prefix_caching=False)
WARNING 07-16 19:48:11 openvino_executor.py:132] Only float32 dtype is supported on OpenVINO, casting from torch.float16.
WARNING 07-16 19:48:11 openvino_executor.py:137] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
INFO 07-16 19:48:11 openvino_executor.py:159] OpenVINO optimal block size is 32, overriding currently set 16
INFO 07-16 19:48:14 selector.py:121] Cannot use _Backend.FLASH_ATTN backend on OpenVINO.
INFO 07-16 19:48:14 selector.py:69] Using OpenVINO Attention backend.
WARNING 07-16 19:48:14 openvino.py:130] OpenVINO IR is available for provided model id /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4. This IR will be used for inference as-is, all possible options that may affect model conversion are ignored.
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/runpy.py", line 196, in _run_module_as_main
[rank0]:     return _run_code(code, main_globals, None,
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/runpy.py", line 86, in _run_code
[rank0]:     exec(code, run_globals)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 282, in <module>
[rank0]:     run_server(args)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 224, in run_server
[rank0]:     if llm_engine is not None else AsyncLLMEngine.from_engine_args(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 444, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 373, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 520, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 249, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 150, in __init__
[rank0]:     super().__init__(model_config, cache_config, parallel_config,
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 46, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/executor/openvino_executor.py", line 28, in _init_executor
[rank0]:     self._init_worker()
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/executor/openvino_executor.py", line 55, in _init_worker
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/worker/openvino_worker.py", line 199, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/worker/openvino_model_runner.py", line 91, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/model_executor/model_loader/openvino.py", line 210, in get_model
[rank0]:     return OpenVINOCasualLM(model_config, device_config, kv_cache_dtype)
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/model_executor/model_loader/openvino.py", line 137, in __init__
[rank0]:     pt_model = OVModelForCausalLM.from_pretrained(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/optimum/modeling_base.py", line 427, in from_pretrained
[rank0]:     return from_pretrained_method(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/optimum/intel/openvino/modeling_decoder.py", line 796, in _from_pretrained
[rank0]:     causal_model = init_cls(
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/optimum/intel/openvino/modeling_decoder.py", line 171, in __init__
[rank0]:     raise_error(self.use_cache, use_cache, "use_cache")
[rank0]:   File "/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/optimum/intel/openvino/modeling_decoder.py", line 159, in raise_error
[rank0]:     raise ValueError(
[rank0]: ValueError: `use_cache` was set to `True` but the loaded model only supports `use_cache=False`. Please load your current model with `use_cache=False` or export the original model once again with `use_cache=True` when calling the `from_pretrained` method. To export your model, simply set `export=True`.
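
The error message above indicates that the manually exported IR was saved without KV-cache support (`use_cache=False`), while vLLM's OpenVINO loader expects an IR exported with `use_cache=True`. Assuming the model is re-exported with the `text-generation-with-past` task as sketched earlier, serving it again should only require pointing `--model` at the new directory. The command below is a sketch with an illustrative path; note that the KV-cache precision variable is written here with the full `VLLM_` prefix, as the original command appears to be missing the leading `V`:

# serve the re-exported "with-past" IR (path is illustrative)
VLLM_OPENVINO_KVCACHE_SPACE=30 \
VLLM_OPENVINO_CPU_KV_CACHE_PRECISION=u8 \
VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
        python3 -m vllm.entrypoints.openai.api_server \
                --model /home/yongshuai_wang/models/Qwen1.5-4B-Chat-optimum-int4-with-past \
                --port 10003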

3. However, running vLLM OpenVINO directly with the original model Qwen1.5-4B-Chat works fine:

Run log

(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models/Qwen1.5-4B-Chat$ 
VLLM_OPENVINO_KVCACHE_SPACE=30 \
LLM_OPENVINO_CPU_KV_CACHE_PRECISION=u8 \
VLLM_OPENVINO_ENABLE_QUANTIZED_WEIGHTS=ON \
        python3 -m vllm.entrypoints.openai.api_server \
                --model /home/yongshuai_wang/models/Qwen1.5-4B-Chat \
                --port 10003
WARNING 07-16 19:33:14 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/vllm/usage/usage_lib.py:19: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
INFO 07-16 19:33:17 api_server.py:212] vLLM API server version 0.5.2
INFO 07-16 19:33:17 api_server.py:213] args: Namespace(host=None, port=10003, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, model_loader_extra_config=None, preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 07-16 19:33:17 llm_engine.py:174] Initializing an LLM engine (v0.5.2) with config: model='/home/yongshuai_wang/models/Qwen1.5-4B-Chat', speculative_config=None, tokenizer='/home/yongshuai_wang/models/Qwen1.5-4B-Chat', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=/home/yongshuai_wang/models/Qwen1.5-4B-Chat, use_v2_block_manager=False, enable_prefix_caching=False)
WARNING 07-16 19:33:17 openvino_executor.py:132] Only float32 dtype is supported on OpenVINO, casting from torch.bfloat16.
WARNING 07-16 19:33:17 openvino_executor.py:137] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
INFO 07-16 19:33:17 openvino_executor.py:159] OpenVINO optimal block size is 32, overriding currently set 16
INFO 07-16 19:33:19 selector.py:121] Cannot use _Backend.FLASH_ATTN backend on OpenVINO.
INFO 07-16 19:33:19 selector.py:69] Using OpenVINO Attention backend.
WARNING 07-16 19:33:20 openvino.py:123] Provided model id /home/yongshuai_wang/models/Qwen1.5-4B-Chat does not contain OpenVINO IR, the model will be converted to IR with default options. If you need to use specific options for model conversion, use optimum-cli export openvino with desired options.
Framework not specified. Using pt to export the model.
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00,  1.56it/s]
Using framework PyTorch: 2.3.1+cpu
Overriding 1 configuration item(s)
	- use_cache -> True
We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:1116: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if sequence_length != 1:
/home/yongshuai_wang/miniconda3/envs/vllm-openvino/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:128: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if seq_len > self.max_seq_len_cached:
['input_ids', 'attention_mask', 'position_ids', 'past_key_values']
INFO:nncf:Statistics of the bitwidth distribution:
┍━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┑
│   Num bits (N) │ % all parameters (layers)   │ % ratio-defining parameters (layers)   │
┝━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┿━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┥
│              8 │ 100% (282 / 282)            │ 100% (282 / 282)                       │
┕━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┙
Applying Weight Compression ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 282/282 • 0:00:26 • 0:00:00
INFO 07-16 19:34:36 openvino_executor.py:72] # CPU blocks: 2457
INFO 07-16 19:34:47 serving_chat.py:94] Using default chat template:
INFO 07-16 19:34:47 serving_chat.py:94] {% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system
INFO 07-16 19:34:47 serving_chat.py:94] You are a helpful assistant.<|im_end|>
INFO 07-16 19:34:47 serving_chat.py:94] ' }}{% endif %}{{'<|im_start|>' + message['role'] + '
INFO 07-16 19:34:47 serving_chat.py:94] ' + message['content'] + '<|im_end|>' + '
INFO 07-16 19:34:47 serving_chat.py:94] '}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
INFO 07-16 19:34:47 serving_chat.py:94] ' }}{% endif %}
WARNING 07-16 19:34:48 serving_embedding.py:141] embedding_mode is False. Embedding API will not work.
INFO 07-16 19:34:48 api_server.py:257] Available routes are:
INFO 07-16 19:34:48 api_server.py:262] Route: /openapi.json, Methods: HEAD, GET
INFO 07-16 19:34:48 api_server.py:262] Route: /docs, Methods: HEAD, GET
INFO 07-16 19:34:48 api_server.py:262] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 07-16 19:34:48 api_server.py:262] Route: /redoc, Methods: HEAD, GET
INFO 07-16 19:34:48 api_server.py:262] Route: /health, Methods: GET
INFO 07-16 19:34:48 api_server.py:262] Route: /tokenize, Methods: POST
INFO 07-16 19:34:48 api_server.py:262] Route: /detokenize, Methods: POST
INFO 07-16 19:34:48 api_server.py:262] Route: /v1/models, Methods: GET
INFO 07-16 19:34:48 api_server.py:262] Route: /version, Methods: GET
INFO 07-16 19:34:48 api_server.py:262] Route: /v1/chat/completions, Methods: POST
INFO 07-16 19:34:48 api_server.py:262] Route: /v1/completions, Methods: POST
INFO 07-16 19:34:48 api_server.py:262] Route: /v1/embeddings, Methods: POST
INFO:     Started server process [42639]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:10003 (Press CTRL+C to quit)
INFO 07-16 19:34:58 metrics.py:295] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 07-16 19:35:08 metrics.py:295] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
