Your current environment
The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.10-060510-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i9-12900K
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
BogoMIPS: 6374.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm arat pln pts hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 640 KiB (16 instances)
L1i cache: 768 KiB (16 instances)
L2 cache: 14 MiB (10 instances)
L3 cache: 30 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.1+cpu
[pip3] transformers==4.47.1
[pip3] transformers-stream-generator==0.0.5
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
Model Input Dumps
python -m vllm.entrypoints.openai.api_server --disable-log-requests --port 8066 --trust-remote-code --enable-chunked-prefill --max-num-batched-tokens 256 --model meta-llama/Meta-Llama-3-8B-Instruct
🐛 Describe the bug
Run command:
python -m vllm.entrypoints.openai.api_server --disable-log-requests --port 8066 --trust-remote-code --enable-chunked-prefill --max-num-batched-tokens 256 --model meta-llama/Meta-Llama-3-8B-Instruct
The server then fails with the following error:
INFO 01-28 12:59:16 __init__.py:183] Automatically detected platform openvino.
13:14:33 INFO 01-28 12:59:17 api_server.py:835] vLLM API server version 0.7.0
13:14:33 INFO 01-28 12:59:17 api_server.py:836] args: Namespace(host=None, port=8066, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='meta-llama/Meta-Llama-3-8B-Instruct', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=256, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=True, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, disable_log_requests=True, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False)
13:14:33 INFO 01-28 12:59:17 api_server.py:203] Started engine process with PID 794605
13:14:33 INFO 01-28 12:59:20 __init__.py:183] Automatically detected platform openvino.
13:14:33 INFO 01-28 12:59:24 config.py:520] This model supports multiple tasks: {'reward', 'generate', 'embed', 'classify', 'score'}. Defaulting to 'generate'.
13:14:33 INFO 01-28 12:59:24 config.py:1483] Chunked prefill is enabled with max_num_batched_tokens=256.
13:14:33 WARNING 01-28 12:59:24 config.py:656] Async output processing is not supported on the current platform type openvino.
13:14:33 WARNING 01-28 12:59:24 openvino.py:79] Only float32 dtype is supported on OpenVINO, casting from torch.bfloat16.
13:14:33 WARNING 01-28 12:59:24 openvino.py:84] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
13:14:33 INFO 01-28 12:59:24 openvino.py:101] KV cache type is overridden to u8 via VLLM_OPENVINO_CPU_KV_CACHE_PRECISION env var.
13:14:33 INFO 01-28 12:59:24 openvino.py:118] OpenVINO CPU optimal block size is 32, overriding currently set 16
13:14:33 INFO 01-28 12:59:27 config.py:520] This model supports multiple tasks: {'embed', 'classify', 'generate', 'reward', 'score'}. Defaulting to 'generate'.
13:14:33 INFO 01-28 12:59:27 config.py:1483] Chunked prefill is enabled with max_num_batched_tokens=256.
13:14:33 WARNING 01-28 12:59:27 config.py:656] Async output processing is not supported on the current platform type openvino.
13:14:33 WARNING 01-28 12:59:27 openvino.py:79] Only float32 dtype is supported on OpenVINO, casting from torch.bfloat16.
13:14:33 WARNING 01-28 12:59:27 openvino.py:84] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
13:14:33 INFO 01-28 12:59:27 openvino.py:101] KV cache type is overridden to u8 via VLLM_OPENVINO_CPU_KV_CACHE_PRECISION env var.
13:14:33 INFO 01-28 12:59:27 openvino.py:118] OpenVINO CPU optimal block size is 32, overriding currently set 16
13:14:33 INFO 01-28 12:59:27 llm_engine.py:232] Initializing an LLM engine (v0.7.0) with config: model='meta-llama/Meta-Llama-3-8B-Instruct', speculative_config=None, tokenizer='meta-llama/Meta-Llama-3-8B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float32, max_seq_len=8192, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=<Type: 'uint8_t'>, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=meta-llama/Meta-Llama-3-8B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=True, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
13:14:33 INFO 01-28 12:59:29 openvino.py:35] Cannot use None backend on OpenVINO.
13:14:33 INFO 01-28 12:59:29 openvino.py:36] Using OpenVINO Attention backend.
13:14:33 WARNING 01-28 12:59:29 _custom_ops.py:19] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
13:14:33 WARNING 01-28 12:59:29 config.py:3353] Current VLLM config is not set.
13:14:33 ERROR 01-28 12:59:29 engine.py:387] 'NoneType' object has no attribute 'dtype'
13:14:33 ERROR 01-28 12:59:29 engine.py:387] Traceback (most recent call last):
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
13:14:33 ERROR 01-28 12:59:29 engine.py:387] engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 121, in from_engine_args
13:14:33 ERROR 01-28 12:59:29 engine.py:387] return cls(ipc_path=ipc_path,
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 73, in __init__
13:14:33 ERROR 01-28 12:59:29 engine.py:387] self.engine = LLMEngine(*args, **kwargs)
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 271, in __init__
13:14:33 ERROR 01-28 12:59:29 engine.py:387] self.model_executor = executor_class(vllm_config=vllm_config, )
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 49, in __init__
13:14:33 ERROR 01-28 12:59:29 engine.py:387] self._init_executor()
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 40, in _init_executor
13:14:33 ERROR 01-28 12:59:29 engine.py:387] self.collective_rpc("load_model")
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 49, in collective_rpc
13:14:33 ERROR 01-28 12:59:29 engine.py:387] answer = run_method(self.driver_worker, method, args, kwargs)
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/utils.py", line 2208, in run_method
13:14:33 ERROR 01-28 12:59:29 engine.py:387] return func(*args, **kwargs)
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/worker/openvino_worker.py", line 253, in load_model
13:14:33 ERROR 01-28 12:59:29 engine.py:387] self.model_runner.load_model()
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/worker/openvino_model_runner.py", line 82, in load_model
13:14:33 ERROR 01-28 12:59:29 engine.py:387] self.model = get_model(model_config=self.model_config,
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/model_executor/model_loader/openvino.py", line 202, in get_model
13:14:33 ERROR 01-28 12:59:29 engine.py:387] return OpenVINOCausalLM(ov_core, model_config, device_config,
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/model_executor/model_loader/openvino.py", line 108, in __init__
13:14:33 ERROR 01-28 12:59:29 engine.py:387] self.logits_processor = LogitsProcessor(
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/model_executor/layers/logits_processor.py", line 48, in __init__
13:14:33 ERROR 01-28 12:59:29 engine.py:387] parallel_config = get_current_vllm_config().parallel_config
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/config.py", line 3355, in get_current_vllm_config
13:14:33 ERROR 01-28 12:59:29 engine.py:387] return VllmConfig()
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "<string>", line 19, in __init__
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/config.py", line 3203, in __post_init__
13:14:33 ERROR 01-28 12:59:29 engine.py:387] current_platform.check_and_update_config(self)
13:14:33 ERROR 01-28 12:59:29 engine.py:387] File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/platforms/openvino.py", line 78, in check_and_update_config
13:14:33 ERROR 01-28 12:59:29 engine.py:387] if model_config.dtype != torch.float32:
13:14:33 ERROR 01-28 12:59:29 engine.py:387] AttributeError: 'NoneType' object has no attribute 'dtype'
13:14:33 Process SpawnProcess-1:
13:14:33 Traceback (most recent call last):
13:14:33 File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
13:14:33 self.run()
13:14:33 File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
13:14:33 self._target(*self._args, **self._kwargs)
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 389, in run_mp_engine
13:14:33 raise e
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 378, in run_mp_engine
13:14:33 engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 121, in from_engine_args
13:14:33 return cls(ipc_path=ipc_path,
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 73, in __init__
13:14:33 self.engine = LLMEngine(*args, **kwargs)
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 271, in __init__
13:14:33 self.model_executor = executor_class(vllm_config=vllm_config, )
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 49, in __init__
13:14:33 self._init_executor()
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 40, in _init_executor
13:14:33 self.collective_rpc("load_model")
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/executor/uniproc_executor.py", line 49, in collective_rpc
13:14:33 answer = run_method(self.driver_worker, method, args, kwargs)
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/utils.py", line 2208, in run_method
13:14:33 return func(*args, **kwargs)
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/worker/openvino_worker.py", line 253, in load_model
13:14:33 self.model_runner.load_model()
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/worker/openvino_model_runner.py", line 82, in load_model
13:14:33 self.model = get_model(model_config=self.model_config,
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/model_executor/model_loader/openvino.py", line 202, in get_model
13:14:33 return OpenVINOCausalLM(ov_core, model_config, device_config,
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/model_executor/model_loader/openvino.py", line 108, in __init__
13:14:33 self.logits_processor = LogitsProcessor(
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/model_executor/layers/logits_processor.py", line 48, in __init__
13:14:33 parallel_config = get_current_vllm_config().parallel_config
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/config.py", line 3355, in get_current_vllm_config
13:14:33 return VllmConfig()
13:14:33 File "<string>", line 19, in __init__
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/config.py", line 3203, in __post_init__
13:14:33 current_platform.check_and_update_config(self)
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/platforms/openvino.py", line 78, in check_and_update_config
13:14:33 if model_config.dtype != torch.float32:
13:14:33 AttributeError: 'NoneType' object has no attribute 'dtype'
13:14:33 Traceback (most recent call last):
13:14:33 File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
13:14:33 return _run_code(code, main_globals, None,
13:14:33 File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
13:14:33 exec(code, run_globals)
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 899, in <module>
13:14:33 uvloop.run(run_server(args))
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run
13:14:33 return loop.run_until_complete(wrapper())
13:14:33 File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper
13:14:33 return await main
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 863, in run_server
13:14:33 async with build_async_engine_client(args) as engine_client:
13:14:33 File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
13:14:33 return await anext(self.gen)
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 133, in build_async_engine_client
13:14:33 async with build_async_engine_client_from_engine_args(
13:14:33 File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
13:14:33 return await anext(self.gen)
13:14:33 File "/opt/home/sys_k8sworker/ci-ov-dlbenchmark/workspace/DL-Benchmark/prod/WW04-2025.0.0-17933-RC2/V_OV_CPU_throughput_ubuntu22_spr-preprod31@2/venv/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 227, in build_async_engine_client_from_engine_args
13:14:33 raise RuntimeError(
13:14:33 RuntimeError: Engine process failed to start. See stack trace for the root cause.
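Reading the traceback, the failure path appears to be: `LogitsProcessor.__init__` calls `get_current_vllm_config()`, but the OpenVINO model loader constructs the model outside any config context (hence the `Current VLLM config is not set.` warning), so `get_current_vllm_config()` falls back to building a bare `VllmConfig()`. That default config has `model_config=None`, and `VllmConfig.__post_init__` calls `current_platform.check_and_update_config(self)`, where `vllm/platforms/openvino.py:78` dereferences `model_config.dtype` without a None check. Below is a minimal sketch of that interaction; the classes are simplified stand-ins whose names mirror the traceback, not vLLM's real implementations:

```python
# Minimal sketch of the failure path in the traceback. ModelConfig,
# OpenVinoPlatform, and VllmConfig are illustrative stand-ins; only the
# call structure mirrors the stack trace, not vLLM's actual code.
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class ModelConfig:
    dtype: torch.dtype = torch.bfloat16


class OpenVinoPlatform:
    @staticmethod
    def check_and_update_config(vllm_config: "VllmConfig") -> None:
        model_config = vllm_config.model_config
        # Corresponds to vllm/platforms/openvino.py:78, which dereferences
        # dtype unconditionally:
        if model_config.dtype != torch.float32:  # AttributeError when None
            model_config.dtype = torch.float32


@dataclass
class VllmConfig:
    model_config: Optional[ModelConfig] = None

    def __post_init__(self) -> None:
        # Corresponds to vllm/config.py:3203: the platform hook runs on every
        # construction, including the bare fallback VllmConfig() returned by
        # get_current_vllm_config() when no config context is active.
        OpenVinoPlatform.check_and_update_config(self)


try:
    VllmConfig()  # what the fallback in get_current_vllm_config() effectively does
except AttributeError as e:
    print(f"AttributeError: {e}")  # 'NoneType' object has no attribute 'dtype'
```

Presumably either a None guard in `check_and_update_config` (e.g. `if model_config is not None and model_config.dtype != torch.float32:`) or constructing the OpenVINO model inside the current-config context would avoid the crash, but I'll leave the preferred fix to the maintainers.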
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.