Your current environment
The output of `python collect_env.py`:

```text
INFO 06-10 16:47:15 [__init__.py:244] Automatically detected platform cuda.
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (x86_64)
GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version : 14.0.0-1ubuntu1.1
CMake version : version 3.22.1
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.7.0+cu128
Is debug build : False
CUDA used to build PyTorch : 12.8
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.9.20 (main, Oct 16 2024, 04:36:33) [Clang 18.1.8 ] (64-bit runtime)
Python platform : Linux-6.5.0-35-generic-x86_64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : True
CUDA runtime version : Could not collect
CUDA_MODULE_LOADING set to : LAZY
GPU models and configuration :
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version : 570.133.20
cuDNN version : Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.8.3.14
[pip3] nvidia-cuda-cupti-cu12==12.8.57
[pip3] nvidia-cuda-nvrtc-cu12==12.8.61
[pip3] nvidia-cuda-runtime-cu12==12.8.57
[pip3] nvidia-cudnn-cu12==9.7.1.26
[pip3] nvidia-cufft-cu12==11.3.3.41
[pip3] nvidia-cufile-cu12==1.13.0.11
[pip3] nvidia-curand-cu12==10.3.9.55
[pip3] nvidia-cusolver-cu12==11.7.2.55
[pip3] nvidia-cusparse-cu12==12.5.7.53
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.8.61
[pip3] nvidia-nvtx-cu12==12.8.55
[pip3] pyzmq==26.4.0
[pip3] torch==2.7.0+cu128
[pip3] torchaudio==2.7.0+cu128
[pip3] torchvision==0.22.0+cu128
[pip3] transformers==4.52.4
[pip3] triton==3.3.0
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.9.1rc2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX NODE NODE NODE SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE PIX NODE NODE SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE NODE PIX NODE SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE NODE NODE PIX SYS SYS SYS SYS 0-31,64-95 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS PIX NODE NODE NODE 32-63,96-127 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS NODE PIX NODE NODE 32-63,96-127 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS NODE NODE PIX NODE 32-63,96-127 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS NODE NODE NODE PIX 32-63,96-127 1 N/A
NIC0 PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE SYS SYS SYS SYS
NIC1 NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE SYS SYS SYS SYS
NIC2 NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE SYS SYS SYS SYS
NIC3 NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE X SYS SYS SYS SYS
NIC4 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE
NIC5 SYS SYS SYS SYS NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE
NIC6 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE
NIC7 SYS SYS SYS SYS NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
NIC6: mlx5_6
NIC7: mlx5_7
==============================
Environment Variables
==============================
CUDA_VISIBLE_DEVICES=6
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
🐛 Describe the bug
Commit 46ecc57#diff-80ee7e2a62f9dcfbb8a312dc4e3948557e97ef187290daebbcae1e28596bda29R463-R466 introduces a call to the `zip` built-in with the `strict` keyword argument. That argument was only added in Python 3.10, so this code is incompatible with Python 3.9, and running under Python 3.9 produces the error from the issue title.
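The incompatibility reproduces in isolation. The snippet below is a small sketch of the behavior difference; the `strict_zip` helper is a hypothetical backport, not code from vLLM:

```python
import sys
from itertools import zip_longest

def strict_zip(*iterables):
    """Hypothetical Python 3.9-compatible stand-in for zip(..., strict=True)."""
    sentinel = object()
    for combo in zip_longest(*iterables, fillvalue=sentinel):
        if any(item is sentinel for item in combo):
            # Same failure mode as strict=True: mismatched lengths are an error.
            raise ValueError("strict_zip() arguments have different lengths")
        yield combo

if sys.version_info >= (3, 10):
    pairs = list(zip([1, 2], ["a", "b"], strict=True))  # OK on 3.10+
else:
    # On 3.9, zip([1, 2], ["a", "b"], strict=True) raises:
    #   TypeError: zip() takes no keyword arguments
    pairs = list(strict_zip([1, 2], ["a", "b"]))

print(pairs)  # [(1, 'a'), (2, 'b')]
```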
As a minimal end-to-end reproduction, an environment was created and the server started with Python 3.9 and the latest RC:

```bash
uv venv -p 3.9 --seed
source .venv/bin/activate
uv pip install -U vllm \
    --torch-backend=auto \
    --extra-index-url https://wheels.vllm.ai/nightly
# vllm==0.9.1rc2
vllm serve facebook/opt-125m --port 30303
```
With the server started, a simple request was made via `curl`:

```bash
curl http://localhost:30303/v1/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer YOUR_API_KEY" \
    -d '{"model": "facebook/opt-125m", "prompt": "Say this is a test", "temperature": 0, "max_tokens": 7}'
```
This crashed the server, with the full output below; the root cause was the titular `TypeError`.

Server output with traceback(s):

```text
INFO 06-10 16:45:35 [logger.py:43] Received request cmpl-53fed68be7a043b4b4d2d9fdea41b078-0: prompt: 'Say this is a test', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.0, top_p=1.0, top_k=0, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=7, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None, extra_args=None), prompt_token_ids: [2, 34673, 42, 16, 10, 1296], prompt_embeds shape: None, lora_request: None, prompt_adapter_request: None.
INFO 06-10 16:45:35 [async_llm.py:271] Added request cmpl-53fed68be7a043b4b4d2d9fdea41b078-0.
ERROR 06-10 16:45:35 [dump_input.py:69] Dumping input data
ERROR 06-10 16:45:35 [dump_input.py:71] V1 LLM engine (v0.9.1rc2) with config: model='facebook/opt-125m', speculative_config=None, tokenizer='facebook/opt-125m', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=facebook/opt-125m, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"/home/domenic/.cache/vllm/torch_compile_cache/1cd9403be3","backend":"","custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":"/home/domenic/.cache/vllm/torch_compile_cache/1cd9403be3/rank_0_0"},
ERROR 06-10 16:45:35 [dump_input.py:79] Dumping scheduler output for model execution:
ERROR 06-10 16:45:35 [dump_input.py:80] SchedulerOutput(scheduled_new_reqs=[], scheduled_cached_reqs=[CachedRequestData(req_id='cmpl-53fed68be7a043b4b4d2d9fdea41b078-0', resumed_from_preemption=false, new_token_ids=[4], new_block_ids=[[]], num_computed_tokens=6)], num_scheduled_tokens={cmpl-53fed68be7a043b4b4d2d9fdea41b078-0: 1}, total_num_scheduled_tokens=1, scheduled_spec_decode_tokens={}, scheduled_encoder_inputs={}, num_common_prefix_blocks=[1], finished_req_ids=[], free_encoder_input_ids=[], structured_output_request_ids={}, grammar_bitmask=null, kv_connector_metadata=null)
ERROR 06-10 16:45:35 [dump_input.py:82] SchedulerStats(num_running_reqs=1, num_waiting_reqs=0, gpu_cache_usage=1.5918117205138138e-05, prefix_cache_stats=PrefixCacheStats(reset=False, requests=0, queries=0, hits=0), spec_decoding_stats=None)
ERROR 06-10 16:45:35 [core.py:517] EngineCore encountered a fatal error.
ERROR 06-10 16:45:35 [core.py:517] Traceback (most recent call last):
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 508, in run_engine_core
ERROR 06-10 16:45:35 [core.py:517] engine_core.run_busy_loop()
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 535, in run_busy_loop
ERROR 06-10 16:45:35 [core.py:517] self._process_engine_step()
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 560, in _process_engine_step
ERROR 06-10 16:45:35 [core.py:517] outputs, model_executed = self.step_fn()
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 231, in step
ERROR 06-10 16:45:35 [core.py:517] model_output = self.execute_model(scheduler_output)
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 217, in execute_model
ERROR 06-10 16:45:35 [core.py:517] raise err
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 211, in execute_model
ERROR 06-10 16:45:35 [core.py:517] return self.model_executor.execute_model(scheduler_output)
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/executor/abstract.py", line 87, in execute_model
ERROR 06-10 16:45:35 [core.py:517] output = self.collective_rpc("execute_model",
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/executor/uniproc_executor.py", line 57, in collective_rpc
ERROR 06-10 16:45:35 [core.py:517] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/utils.py", line 2671, in run_method
ERROR 06-10 16:45:35 [core.py:517] return func(*args, **kwargs)
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 06-10 16:45:35 [core.py:517] return func(*args, **kwargs)
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/worker/gpu_worker.py", line 293, in execute_model
ERROR 06-10 16:45:35 [core.py:517] output = self.model_runner.execute_model(scheduler_output,
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 06-10 16:45:35 [core.py:517] return func(*args, **kwargs)
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1177, in execute_model
ERROR 06-10 16:45:35 [core.py:517] self._update_states(scheduler_output)
ERROR 06-10 16:45:35 [core.py:517] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/worker/gpu_model_runner.py", line 465, in _update_states
ERROR 06-10 16:45:35 [core.py:517] for block_ids, new_block_ids in zip( # type: ignore[call-overload]
ERROR 06-10 16:45:35 [core.py:517] TypeError: zip() takes no keyword arguments
Process EngineCore_0:
Traceback (most recent call last):
File "/home/domenic/.local/share/uv/python/cpython-3.9.20-linux-x86_64-gnu/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/domenic/.local/share/uv/python/cpython-3.9.20-linux-x86_64-gnu/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 519, in run_engine_core
raise e
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 508, in run_engine_core
engine_core.run_busy_loop()
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 535, in run_busy_loop
self._process_engine_step()
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 560, in _process_engine_step
outputs, model_executed = self.step_fn()
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 231, in step
model_output = self.execute_model(scheduler_output)
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 217, in execute_model
raise err
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core.py", line 211, in execute_model
return self.model_executor.execute_model(scheduler_output)
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/executor/abstract.py", line 87, in execute_model
output = self.collective_rpc("execute_model",
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/executor/uniproc_executor.py", line 57, in collective_rpc
answer = run_method(self.driver_worker, method, args, kwargs)
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/utils.py", line 2671, in run_method
return func(*args, **kwargs)
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/worker/gpu_worker.py", line 293, in execute_model
output = self.model_runner.execute_model(scheduler_output,
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1177, in execute_model
self._update_states(scheduler_output)
File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/worker/gpu_model_runner.py", line 465, in _update_states
for block_ids, new_block_ids in zip( # type: ignore[call-overload]
TypeError: zip() takes no keyword arguments
ERROR 06-10 16:45:35 [async_llm.py:420] AsyncLLM output_handler failed.
ERROR 06-10 16:45:35 [async_llm.py:420] Traceback (most recent call last):
ERROR 06-10 16:45:35 [async_llm.py:420] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/async_llm.py", line 379, in output_handler
ERROR 06-10 16:45:35 [async_llm.py:420] outputs = await engine_core.get_output_async()
ERROR 06-10 16:45:35 [async_llm.py:420] File "/home/domenic/code/temp/.venv/lib/python3.9/site-packages/vllm/v1/engine/core_client.py", line 790, in get_output_async
ERROR 06-10 16:45:35 [async_llm.py:420] raise self._format_exception(outputs) from None
ERROR 06-10 16:45:35 [async_llm.py:420] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
INFO 06-10 16:45:35 [async_llm.py:346] Request cmpl-53fed68be7a043b4b4d2d9fdea41b078-0 failed (engine dead).
INFO: 127.0.0.1:39620 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
[rank0]:[W610 16:45:36.943746875 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [3468401]
```
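For what it's worth, a length check before a plain `zip` would preserve the strict semantics on 3.9. The sketch below uses placeholder data and illustrative names (`req_block_ids`, `new_block_ids`); it is only one possible direction for a fix, not the project's actual patch:

```python
# Placeholder lists standing in for the per-request block ids that
# _update_states pairs up; the real values come from the scheduler output.
req_block_ids = [[0, 1], [2]]
new_block_ids = [[3], [4, 5]]

# 3.9-compatible equivalent of zip(..., strict=True): check lengths first,
# then zip without the keyword.
if len(req_block_ids) != len(new_block_ids):
    raise ValueError("block id lists have different lengths")

for block_ids, new_ids in zip(req_block_ids, new_block_ids):
    block_ids.extend(new_ids)

print(req_block_ids)  # [[0, 1, 3], [2, 4, 5]]
```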
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.