Closed as not planned
Labels: installation (Installation problems), stale (Over 90 days of inactivity)
Description
Your current environment
INFO 03-24 20:48:52 [__init__.py:239] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.34
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.6.0
/usr/lib64/libcudnn_adv.so.9.6.0
/usr/lib64/libcudnn_cnn.so.9.6.0
/usr/lib64/libcudnn_engines_precompiled.so.9.6.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib64/libcudnn_graph.so.9.6.0
/usr/lib64/libcudnn_heuristic.so.9.6.0
/usr/lib64/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 368
On-line CPU(s) list: 0-367
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 368
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 23 MiB (368 instances)
L1i cache: 23 MiB (368 instances)
L2 cache: 184 MiB (368 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-367
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pynvml==12.0.0
[pip3] pyzmq==26.2.1
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.48.3
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-ml-py 12.570.86 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pynvml 12.0.0 pypi_0 pypi
[conda] pyzmq 26.2.1 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] transformers 4.48.3 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.3.dev28+g97cfa65d
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 0-367 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 0-367 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 0-367 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 0-367 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 0-367 0 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 0-367 0 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 0-367 0 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X 0-367 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
CUDA_CACHE_PATH=/data/users/huydo/.nv/ComputeCache
LD_LIBRARY_PATH=/usr/local/cuda-12.6/lib64/:
CUDA_HOME=/usr/local/cuda
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
How you are installing vllm
pip install --editable .

After #14570, building vLLM from source started to fail for me on H100. Here is the build error:
-- CUDA target architectures: 9.0
-- CUDA supported target architectures: 9.0
-- FetchContent base directory: /home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/.deps
-- Enabling cumem allocator extension.
-- CMake Version: 3.31.4
-- CUTLASS 3.8.0
-- Found CUDAToolkit: /usr/local/cuda-12.6/targets/x86_64-linux/include (found version "12.6.85")
-- CUDART: /usr/local/cuda-12.6/lib64/libcudart.so
-- CUDA Driver: /usr/local/cuda-12.6/lib64/stubs/libcuda.so
-- NVRTC: /usr/local/cuda-12.6/lib64/libnvrtc.so
-- Default Install Location: install
-- CUDA Compilation Architectures: 70;72;75;80;86;87;89;90;90a
-- Enable caching of reference results in conv unit tests
-- Enable rigorous conv problem sizes in conv unit tests
-- Using the following NVCC flags:
--expt-relaxed-constexpr
-DCUTLASS_TEST_LEVEL=0
-DCUTLASS_TEST_ENABLE_CACHED_RESULTS=1
-DCUTLASS_CONV_UNIT_TEST_RIGOROUS_SIZE_ENABLED=1
-DCUTLASS_DEBUG_TRACE_LEVEL=0
-Xcompiler=-Wconversion
-Xcompiler=-fno-strict-aliasing
-lineinfo
-- Configuring cublas ...
-- cuBLAS Disabled.
-- Configuring cuBLAS ... done.
-- Building Marlin kernels for archs: 9.0
-- Not building AllSpark kernels as no compatible archs found in CUDA target architectures
-- Building scaled_mm_c3x_sm90 for archs: 9.0a
-- Not building scaled_mm_c3x_100 as no compatible archs found in CUDA target architectures
-- Building scaled_mm_c2x for archs: 9.0
-- Building sparse_scaled_mm_c3x for archs: 9.0a
-- Not building NVFP4 as no compatible archs were found.
-- Machete generation script hash: dec2c6596ac38e4b4ac06b8d7ca5054f
-- Last run machete generate script hash: dec2c6596ac38e4b4ac06b8d7ca5054f
-- Machete generation script has not changed, skipping generation.
-- Building Machete kernels for archs: 9.0a
-- Enabling C extension.
-- Building Marlin MOE kernels for archs: 9.0
-- Enabling moe extension.
CMake Warning (dev) at /home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/cmake/data/share/cmake-3.31/Modules/FetchContent.cmake:1564 (cmake_parse_arguments):
The BUILD_COMMAND keyword was followed by an empty string or no value at
all. Policy CMP0174 is not set, so cmake_parse_arguments() will unset the
ARG_BUILD_COMMAND variable rather than setting it to an empty string.
Call Stack (most recent call first):
/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/cmake/data/share/cmake-3.31/Modules/FetchContent.cmake:2145:EVAL:2 (__FetchContent_doPopulation)
/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/cmake/data/share/cmake-3.31/Modules/FetchContent.cmake:2145 (cmake_language)
/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/cmake/data/share/cmake-3.31/Modules/FetchContent.cmake:2384 (__FetchContent_Populate)
cmake/external_projects/flashmla.cmake:30 (FetchContent_MakeAvailable)
CMakeLists.txt:637 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) at /home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/cmake/data/share/cmake-3.31/Modules/FetchContent.cmake:1564 (cmake_parse_arguments):
The CONFIGURE_COMMAND keyword was followed by an empty string or no value
at all. Policy CMP0174 is not set, so cmake_parse_arguments() will unset
the ARG_CONFIGURE_COMMAND variable rather than setting it to an empty
string.
Call Stack (most recent call first):
/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/cmake/data/share/cmake-3.31/Modules/FetchContent.cmake:2145:EVAL:2 (__FetchContent_doPopulation)
/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/cmake/data/share/cmake-3.31/Modules/FetchContent.cmake:2145 (cmake_language)
/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/cmake/data/share/cmake-3.31/Modules/FetchContent.cmake:2384 (__FetchContent_Populate)
cmake/external_projects/flashmla.cmake:30 (FetchContent_MakeAvailable)
CMakeLists.txt:637 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- FlashMLA is available at /home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/.deps/flashmla-src
-- Build type: RelWithDebInfo
-- Target device: cuda
-- Found Python: /home/huydo/miniconda3/envs/py3.12/bin/python (found version "3.12.9") found components: Interpreter Development.Module Development.SABIModule
CMake Warning at .deps/vllm-flash-attn-src/CMakeLists.txt:75 (message):
Pytorch version 2.4.0 expected for CUDA build, saw 2.6.0 instead.
-- CUDA target architectures: 9.0
-- CUDA supported target architectures: 9.0
-- FA2_ARCHS: 9.0
-- FA3_ARCHS: 9.0a
-- vllm-flash-attn is available at /home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/.deps/vllm-flash-attn-src
-- Configuring done (10.4s)
-- Generating done (0.1s)
-- Build files have been written to: /home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/build/temp.linux-x86_64-cpython-312
[144/237] Building CUDA object vllm-flash-attn/CMakeFiles/_vllm_fa3_C.dir/hopper/instantiations/flash_fwd_hdimall_e4m3_paged_split_softcap_sm90.cu.o
FAILED: vllm-flash-attn/CMakeFiles/_vllm_fa3_C.dir/hopper/instantiations/flash_fwd_hdimall_e4m3_paged_split_softcap_sm90.cu.o
sccache /usr/local/cuda-12.6/bin/nvcc -forward-unknown-to-host-compiler -DFLASHATTENTION_DISABLE_BACKWARD -DFLASHATTENTION_DISABLE_DROPOUT -DFLASHATTENTION_DISABLE_PYBIND -DFLASHATTENTION_DISABLE_UNEVEN_K -DFLASHATTENTION_VARLEN_ONLY -DPy_LIMITED_API=3 -DTORCH_EXTENSION_NAME=_vllm_fa3_C -DUSE_C10D_GLOO -DUSE_C10D_NCCL -DUSE_DISTRIBUTED -DUSE_RPC -DUSE_TENSORPIPE -D_vllm_fa3_C_EXPORTS -I/home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/.deps/vllm-flash-attn-src/csrc -I/home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/.deps/vllm-flash-attn-src/hopper -I/home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/.deps/vllm-flash-attn-src/csrc/common -I/home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/.deps/vllm-flash-attn-src/csrc/cutlass/include -isystem /home/huydo/miniconda3/envs/py3.12/include/python3.12 -isystem /home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/torch/include -isystem /home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/torch/include/torch/csrc/api/include -isystem /usr/local/cuda-12.6/include -DONNX_NAMESPACE=onnx_c2 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=bad_friend_decl --expt-relaxed-constexpr --expt-extended-lambda -O3 -g -DNDEBUG -std=c++17 -Xcompiler=-fPIC --expt-relaxed-constexpr -DENABLE_FP8 --threads=1 --expt-extended-lambda --use_fast_math -DCUTLASS_ENABLE_DIRECT_CUDA_DRIVER_CALL=1 -D_GLIBCXX_USE_CXX11_ABI=0 -gencode arch=compute_90a,code=sm_90a -MD -MT vllm-flash-attn/CMakeFiles/_vllm_fa3_C.dir/hopper/instantiations/flash_fwd_hdimall_e4m3_paged_split_softcap_sm90.cu.o -MF vllm-flash-attn/CMakeFiles/_vllm_fa3_C.dir/hopper/instantiations/flash_fwd_hdimall_e4m3_paged_split_softcap_sm90.cu.o.d -x cu -c /home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/.deps/vllm-flash-attn-src/hopper/instantiations/flash_fwd_hdimall_e4m3_paged_split_softcap_sm90.cu -o vllm-flash-attn/CMakeFiles/_vllm_fa3_C.dir/hopper/instantiations/flash_fwd_hdimall_e4m3_paged_split_softcap_sm90.cu.o
ptxas terminated (signal: 11)
sccache: Compiler killed by signal 11
[235/237] Building CUDA object vllm-flash-attn/CMakeFiles/_vllm_fa3_C.dir/hopper/instantiations/flash_fwd_hdimall_e4m3_split_softcap_sm90.cu.o
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/setup.py", line 676, in <module>
setup(
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/__init__.py", line 117, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 186, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/core.py", line 202, in run_commands
dist.run_commands()
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 983, in run_commands
self.run_command(cmd)
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/dist.py", line 999, in run_command
super().run_command(command)
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 1002, in run_command
cmd_obj.run()
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/command/bdist_wheel.py", line 379, in run
self.run_command("build")
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 339, in run_command
self.distribution.run_command(command)
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/dist.py", line 999, in run_command
super().run_command(command)
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 1002, in run_command
cmd_obj.run()
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/command/build.py", line 136, in run
self.run_command(cmd_name)
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/cmd.py", line 339, in run_command
self.distribution.run_command(command)
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/dist.py", line 999, in run_command
super().run_command(command)
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/dist.py", line 1002, in run_command
cmd_obj.run()
File "/home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/setup.py", line 267, in run
super().run()
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/command/build_ext.py", line 99, in run
_build_ext.run(self)
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/site-packages/setuptools/_distutils/command/build_ext.py", line 365, in run
self.build_extensions()
File "/home/huydo/github/pytorch-integration-testing/vllm-benchmarks/vllm/setup.py", line 238, in build_extensions
subprocess.check_call(["cmake", *build_args], cwd=self.build_temp)
File "/home/huydo/miniconda3/envs/py3.12/lib/python3.12/subprocess.py", line 415, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '-j=368', '--target=_moe_C', '--target=_vllm_fa2_C', '--target=_vllm_fa3_C', '--target=cumem_allocator', '--target=_C']' returned non-zero exit status 1.
Reverting #14570 makes the build work again. Please advise.
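For anyone hitting the same crash, two things that may help narrow it down. This is my speculation, not a confirmed diagnosis: ptxas dying with signal 11 while ninja runs with -j=368 could be host resource exhaustion rather than a genuinely bad translation unit. A quick check is to rebuild just the failing object serially (the target path is copied verbatim from the log above):

# Assumption: the crash is load-dependent; if so, a serial rebuild of
# the same object from the existing build tree should succeed.
ninja -C build/temp.linux-x86_64-cpython-312 -j 1 \
  vllm-flash-attn/CMakeFiles/_vllm_fa3_C.dir/hopper/instantiations/flash_fwd_hdimall_e4m3_paged_split_softcap_sm90.cu.o

If that builds, capping build parallelism may be a usable workaround until the regression is fixed. vLLM's setup.py reads the MAX_JOBS and NVCC_THREADS environment variables; the values below are guesses to try, not tested recommendations:

export MAX_JOBS=32     # cap parallel compile jobs (this box otherwise uses 368)
export NVCC_THREADS=2  # threads per nvcc invocation
pip install --editable .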
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.