Your current environment
The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Rocky Linux release 8.9 (Green Obsidian) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20)
Clang version: 16.0.6 (Red Hat 16.0.6-2.module+el8.9.0+1651+e10a8f6d)
CMake version: version 3.29.5
Libc version: glibc-2.28
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
GPU 2: NVIDIA A100-PCIE-40GB
GPU 3: NVIDIA A100-PCIE-40GB
GPU 4: NVIDIA A100-PCIE-40GB
GPU 5: NVIDIA A100-PCIE-40GB
GPU 6: NVIDIA A100-PCIE-40GB
GPU 7: NVIDIA A100-PCIE-40GB
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6342 CPU @ 2.80GHz
Stepping: 6
CPU MHz: 3500.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 36864K
NUMA node0 CPU(s): 0-23
NUMA node1 CPU(s): 24-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] pytorch-lightning==2.3.0
[pip3] torch==2.3.0
[pip3] torchaudio==2.3.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.18.1
[pip3] transformers==4.42.0.dev0
[pip3] triton==2.3.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] libopenvino-pytorch-frontend 2024.1.0 he02047a_7 conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-lightning 2.3.0 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.3.0 pypi_0 pypi
[conda] torchaudio 2.3.1 py310_cu121 pytorch
[conda] torchmetrics 1.4.0.post0 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.18.1 py310_cu121 pytorch
[conda] transformers 4.42.0.dev0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 SYS SYS SYS SYS SYS SYS 0-23 0 N/A
GPU1 NV12 X SYS SYS SYS SYS SYS SYS 0-23 0 N/A
GPU2 SYS SYS X NV12 SYS SYS SYS SYS 0-23 0 N/A
GPU3 SYS SYS NV12 X SYS SYS SYS SYS 0-23 0 N/A
GPU4 SYS SYS SYS SYS X NV12 SYS SYS 24-47 1 N/A
GPU5 SYS SYS SYS SYS NV12 X SYS SYS 24-47 1 N/A
GPU6 SYS SYS SYS SYS SYS SYS X NV12 24-47 1 N/A
GPU7 SYS SYS SYS SYS SYS SYS NV12 X 24-47 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
🐛 Describe the bug
Description
When `tensor_parallel_size` is set to a value greater than 1, the program hangs and then raises a `RuntimeError` from the bootstrapping phase of newly started processes. This issue does not occur with v0.4.3, but persists in v0.5.0 and v0.5.0.post1.
Steps to Reproduce

1. Install v0.5.0.post1 or v0.5.0 of the library.
2. Run the following Python script with `tensor_parallel_size` set to 2:

```python
import os
import argparse

import ray
from vllm import SamplingParams, LLM
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())

ray.init(address="auto")


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model_path", type=str,
        default=os.path.join(os.environ.get("LLAMA_MODEL_FOLDER")))
    return parser.parse_args()


args = parse_args()
MODEL_PATH = args.model_path

llm = LLM(model=MODEL_PATH, tensor_parallel_size=2)
```

```bash
python load.py --model_path $JOBFS/fine_tuned_models/checkpoint-1857
```
Expected Behavior
The program should start without any process-bootstrapping errors, as it does with v0.4.3.
Observed Behavior
The program raises the following `RuntimeError` when `tensor_parallel_size` is set to 2:
python load.py --model_path $JOBFS/fine_tuned_models/checkpoint-1857
INFO 06-18 21:35:42 config.py:623] Defaulting to use mp for distributed inference
INFO 06-18 21:35:42 llm_engine.py:161] Initializing an LLM engine (v0.5.0) with config: model='/scratch/pbs.5401450.kman.restech.unsw.edu.au/fine_tuned_models/checkpoint-1857', speculative_config=None, tokenizer='/scratch/pbs.5401450.kman.restech.unsw.edu.au/fine_tuned_models/checkpoint-1857', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/scratch/pbs.5401450.kman.restech.unsw.edu.au/fine_tuned_models/checkpoint-1857)
INFO 06-18 21:35:44 config.py:623] Defaulting to use mp for distributed inference
INFO 06-18 21:35:44 llm_engine.py:161] Initializing an LLM engine (v0.5.0) with config: model='/scratch/pbs.5401450.kman.restech.unsw.edu.au/fine_tuned_models/checkpoint-1857', speculative_config=None, tokenizer='/scratch/pbs.5401450.kman.restech.unsw.edu.au/fine_tuned_models/checkpoint-1857', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/scratch/pbs.5401450.kman.restech.unsw.edu.au/fine_tuned_models/checkpoint-1857)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/spawn.py", line 125, in _main
prepare(preparation_data)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/runpy.py", line 289, in run_path
return _run_module_code(code, init_globals, run_name,
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/codes/kgqa/load.py", line 20, in <module>
llm = LLM(model = MODEL_PATH, tensor_parallel_size=2)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 144, in __init__
self.llm_engine = LLMEngine.from_engine_args(
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 360, in from_engine_args
engine = cls(
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 223, in __init__
self.model_executor = executor_class(
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/site-packages/vllm/executor/distributed_gpu_executor.py", line 25, in __init__
super().__init__(*args, **kwargs)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 41, in __init__
self._init_executor()
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/site-packages/vllm/executor/multiproc_gpu_executor.py", line 48, in _init_executor
self.workers = [
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/site-packages/vllm/executor/multiproc_gpu_executor.py", line 49, in <listcomp>
ProcessWorkerWrapper(
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 162, in __init__
self.process.start()
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
return Popen(process_obj)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "/scratch/pbs.5401450.kman.restech.unsw.edu.au/miniforge3/envs/llm/lib/python3.10/multiprocessing/spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
ERROR 06-18 21:35:45 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 3449547 died, exit code: 1
INFO 06-18 21:35:45 multiproc_worker_utils.py:123] Killing local vLLM worker processes
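The `RuntimeError` above is CPython's standard complaint when a process started with the spawn start method re-imports a main module that itself launches new processes at import time. Below is a minimal sketch of the reproduction script restructured behind the `__main__` guard that the error message recommends; this assumes the spawn start method is the trigger and is offered as a possible workaround, not a confirmed fix:

```python
import os
import argparse

import ray
from vllm import LLM
from dotenv import load_dotenv, find_dotenv


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--model_path", type=str,
        default=os.path.join(os.environ.get("LLAMA_MODEL_FOLDER")))
    return parser.parse_args()


def main():
    _ = load_dotenv(find_dotenv())
    ray.init(address="auto")
    args = parse_args()
    # Everything that starts worker processes now runs only in the
    # parent process, so spawned children can re-import this module
    # without recursing into LLM construction.
    llm = LLM(model=args.model_path, tensor_parallel_size=2)


if __name__ == "__main__":
    main()
```

With the guard in place, the worker processes created by the multiproc executor can re-import `load.py` without recursively constructing a second engine.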
Environment
- OS: GNU/Linux
- Python Version: Python 3.10.14
- Library Version: v0.5.0.post1 and v0.5.0
Additional Context
Reverting to v0.4.3 resolves the issue. The log shows v0.5.0 "Defaulting to use mp for distributed inference", so the regression appears related to the newer multiprocessing-based executor, which starts worker processes that re-import the main module.
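If the multiprocessing backend is indeed the cause, two hedged workarounds may be worth trying in addition to the `__main__` guard sketched above: forcing the fork start method (e.g. `VLLM_WORKER_MULTIPROC_METHOD=fork python load.py ...`), or explicitly selecting the Ray backend that earlier versions used. Both the environment variable and the `distributed_executor_backend` argument are assumptions about what this vLLM version exposes; a sketch:

```python
# Hypothetical workaround sketch: explicitly request the Ray backend
# instead of the default "mp" backend reported in the log above.
# Assumes LLM()/EngineArgs accept distributed_executor_backend in v0.5.0.
llm = LLM(model=MODEL_PATH, tensor_parallel_size=2,
          distributed_executor_backend="ray")
```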