Your current environment
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-17)
Clang version: Could not collect
CMake version: version 3.29.6
Libc version: glibc-2.26
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.10.209-198.812.amzn2.x86_64-x86_64-with-glibc2.26
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A10G
GPU 1: NVIDIA A10G
GPU 2: NVIDIA A10G
GPU 3: NVIDIA A10G
GPU 4: NVIDIA A10G
GPU 5: NVIDIA A10G
GPU 6: NVIDIA A10G
GPU 7: NVIDIA A10G
Nvidia driver version: 535.104.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 2990.124
BogoMIPS: 5600.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] transformers==4.41.2
[pip3] triton==2.3.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] torch 2.3.0 pypi_0 pypi
[conda] transformers 4.41.2 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PHB PHB PHB PHB PHB PHB PHB 0-191 0-1 N/A
GPU1 PHB X PHB PHB PHB PHB PHB PHB 0-191 0-1 N/A
GPU2 PHB PHB X PHB PHB PHB PHB PHB 0-191 0-1 N/A
GPU3 PHB PHB PHB X PHB PHB PHB PHB 0-191 0-1 N/A
GPU4 PHB PHB PHB PHB X PHB PHB PHB 0-191 0-1 N/A
GPU5 PHB PHB PHB PHB PHB X PHB PHB 0-191 0-1 N/A
GPU6 PHB PHB PHB PHB PHB PHB X PHB 0-191 0-1 N/A
GPU7 PHB PHB PHB PHB PHB PHB PHB X 0-191 0-1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
🐛 Describe the bug
When following the example https://github.com/vllm-project/vllm/blob/v0.5.0.post1/examples/llava_example.py with multiple GPUs enabled as below:
--- a/examples/llava_example.py
+++ b/examples/llava_example.py
@@ -47,6 +47,7 @@ def run_llava_image_features():
         image_token_id=32000,
         image_input_shape="1,576,1024",
         image_feature_size=576,
+        tensor_parallel_size=8,
     )
and then passing image_features as the --type argument, i.e. running python examples/llava_example.py --type image_features, it reports the errors quoted after the sketch below:
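For context, a minimal sketch of what the modified constructor ends up looking like. The model name and the image_input_type value are assumptions based on the upstream example; the remaining kwargs come from the diff above.

```python
from vllm import LLM

llm = LLM(
    model="llava-hf/llava-1.5-7b-hf",   # assumed from the upstream example
    image_input_type="image_features",  # assumed from the upstream example
    image_token_id=32000,
    image_input_shape="1,576,1024",
    image_feature_size=576,
    tensor_parallel_size=8,             # the single added line
)
```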
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
cache[rtype].remove(name)
KeyError: '/psm_fd8af807'
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
cache[rtype].remove(name)
KeyError: '/psm_fd8af807'
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
cache[rtype].remove(name)
KeyError: '/psm_fd8af807'
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
cache[rtype].remove(name)
KeyError: '/psm_fd8af807'
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
cache[rtype].remove(name)
KeyError: '/psm_fd8af807'
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
cache[rtype].remove(name)
KeyError: '/psm_fd8af807'
Traceback (most recent call last):
File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
cache[rtype].remove(name)
KeyError: '/psm_fd8af807'
(VllmWorkerProcess pid=191327) WARNING 06-26 10:49:32 custom_all_reduce.py:166] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=191328) WARNING 06-26 10:49:32 custom_all_reduce.py:166] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=191323) WARNING 06-26 10:49:32 custom_all_reduce.py:166] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=191325) WARNING 06-26 10:49:32 custom_all_reduce.py:166] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=191322) WARNING 06-26 10:49:32 custom_all_reduce.py:166] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
WARNING 06-26 10:49:32 custom_all_reduce.py:166] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=191326) WARNING 06-26 10:49:32 custom_all_reduce.py:166] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=191324) WARNING 06-26 10:49:32 custom_all_reduce.py:166] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
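As these warnings note, falling back from the custom allreduce is expected on a PCIe-only 8x A10G topology, and the warning itself can be silenced by disabling it explicitly. A sketch of that (model name assumed as above; this only quiets the warning and does not affect the crash further below):

```python
from vllm import LLM

# disable_custom_all_reduce=True, as the warning suggests, acknowledges the
# PCIe-only topology and silences the custom-allreduce warning.
llm = LLM(
    model="llava-hf/llava-1.5-7b-hf",  # assumed, as above
    tensor_parallel_size=8,
    disable_custom_all_reduce=True,
)
```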
(VllmWorkerProcess pid=191325) /home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:140: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
(VllmWorkerProcess pid=191325) warnings.warn(
(VllmWorkerProcess pid=191328) /home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:140: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
(VllmWorkerProcess pid=191328) warnings.warn(
(VllmWorkerProcess pid=191327) /home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:140: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
(VllmWorkerProcess pid=191327) warnings.warn(
/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:140: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
warnings.warn(
(VllmWorkerProcess pid=191326) /home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:140: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
(VllmWorkerProcess pid=191326) warnings.warn(
(VllmWorkerProcess pid=191323) /home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:140: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
(VllmWorkerProcess pid=191323) warnings.warn(
(VllmWorkerProcess pid=191325) INFO 06-26 10:49:33 weight_utils.py:218] Using model weights format ['*.safetensors']
(VllmWorkerProcess pid=191328) INFO 06-26 10:49:33 weight_utils.py:218] Using model weights format ['*.safetensors']
(VllmWorkerProcess pid=191327) INFO 06-26 10:49:33 weight_utils.py:218] Using model weights format ['*.safetensors']
INFO 06-26 10:49:33 weight_utils.py:218] Using model weights format ['*.safetensors']
(VllmWorkerProcess pid=191326) INFO 06-26 10:49:33 weight_utils.py:218] Using model weights format ['*.safetensors']
(VllmWorkerProcess pid=191323) INFO 06-26 10:49:33 weight_utils.py:218] Using model weights format ['*.safetensors']
(VllmWorkerProcess pid=191322) /home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:140: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
(VllmWorkerProcess pid=191322) warnings.warn(
(VllmWorkerProcess pid=191324) /home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:140: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
(VllmWorkerProcess pid=191324) warnings.warn(
(VllmWorkerProcess pid=191322) INFO 06-26 10:49:33 weight_utils.py:218] Using model weights format ['*.safetensors']
(VllmWorkerProcess pid=191324) INFO 06-26 10:49:33 weight_utils.py:218] Using model weights format ['*.safetensors']
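The repeated FutureWarning is independent of the crash; it flags the transformers v4.42 deprecation of the top-level vocab_size attribute on LlavaConfig. A sketch of the forward-compatible access pattern the warning asks for (checkpoint name is illustrative):

```python
from transformers import LlavaConfig

# The nested attribute replaces the deprecated top-level config.vocab_size.
config = LlavaConfig.from_pretrained("llava-hf/llava-1.5-7b-hf")
vocab_size = config.text_config.vocab_size
```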
(VllmWorkerProcess pid=191325) INFO 06-26 10:49:36 model_runner.py:160] Loading model weights took 1.6265 GB
INFO 06-26 10:49:36 model_runner.py:160] Loading model weights took 1.6265 GB
(VllmWorkerProcess pid=191323) INFO 06-26 10:49:36 model_runner.py:160] Loading model weights took 1.6265 GB
(VllmWorkerProcess pid=191322) INFO 06-26 10:49:36 model_runner.py:160] Loading model weights took 1.6265 GB
(VllmWorkerProcess pid=191327) INFO 06-26 10:49:36 model_runner.py:160] Loading model weights took 1.6265 GB
(VllmWorkerProcess pid=191326) INFO 06-26 10:49:36 model_runner.py:160] Loading model weights took 1.6265 GB
(VllmWorkerProcess pid=191324) INFO 06-26 10:49:36 model_runner.py:160] Loading model weights took 1.6265 GB
(VllmWorkerProcess pid=191328) INFO 06-26 10:49:37 model_runner.py:160] Loading model weights took 1.6265 GB
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm), Traceback (most recent call last):
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks: Expected all tensors to be on the same device, but found at least two devices, cuda:5 and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm), Traceback (most recent call last):
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] output = executor(*args, **kwargs)
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] output = executor(*args, **kwargs)
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] return func(*args, **kwargs)
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] return func(*args, **kwargs)
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/vllm/worker/worker.py", line 162, in determine_num_available_blocks
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/vllm/worker/worker.py", line 162, in determine_num_available_blocks
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] self.model_runner.profile_run()
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] self.model_runner.profile_run()
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] return func(*args, **kwargs)
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] return func(*args, **kwargs)
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 844, in profile_run
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 844, in profile_run
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] self.execute_model(seqs, kv_caches)
(VllmWorkerProcess pid=191322) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm), Traceback (most recent call last):
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] self.execute_model(seqs, kv_caches)
(VllmWorkerProcess pid=191328) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm), Traceback (most recent call last):
(VllmWorkerProcess pid=191327) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks: Expected all tensors to be on the same device, but found at least two devices, cuda:6 and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm), Traceback (most recent call last):
(VllmWorkerProcess pid=191323) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=191322) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=191326) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=191328) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=191327) ERROR 06-26 10:49:38 multiproc_worker_utils.py:226] File "/home/ec2-user/SageMaker/envs/long-llava-next/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
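The truncated tracebacks all bottom out in PyTorch's standard cross-device check for addmm, and the per-rank messages (cuda:1 and cuda:0, cuda:2 and cuda:0, ...) suggest each worker's model shard sits on its own cuda:N while some tensor in the profiling run is created on cuda:0. A self-contained illustration of that error class (requires 2+ GPUs; this reproduces the error message only, not vLLM's code path):

```python
import torch
import torch.nn as nn

# A linear layer on cuda:1 fed an input that lives on cuda:0 trips the same
# device check seen in the worker logs (wrapper_CUDA_addmm).
layer = nn.Linear(16, 16).to("cuda:1")
x = torch.randn(4, 16, device="cuda:0")
try:
    layer(x)
except RuntimeError as e:
    # "Expected all tensors to be on the same device, but found at least two
    # devices, cuda:1 and cuda:0! ..."
    print(e)
```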