Your current environment
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 41%
CPU max MHz: 5881.0000
CPU min MHz: 545.0000
BogoMIPS: 8982.91
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-ml-py==12.570.86
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.50.0.dev0
[pip3] triton==3.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PHB 0-31 0 N/A
GPU1 PHB X 0-31 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
How would you like to use vllm
I'm trying to use vLLM to host gemma-3-12b-it, but it fails with the error below.
vllm serve "google/gemma-3-12b-it" --tensor-parallel-size 2
(venv) developer1@g4090-4:~/vllm$ vllm serve "google/gemma-3-12b-it" --tensor-parallel-size 2
INFO 03-12 16:53:17 __init__.py:207] Automatically detected platform cuda.
INFO 03-12 16:53:17 api_server.py:912] vLLM API server version 0.7.3
INFO 03-12 16:53:17 api_server.py:913] args: Namespace(subparser='serve', model_tag='google/gemma-3-12b-it', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, enable_reasoning=False, reasoning_parser=None, tool_call_parser=None, tool_parser_plugin='', model='google/gemma-3-12b-it', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, dispatch_function=<function ServeSubcommand.cmd at 0x79515ef2c360>)
INFO 03-12 16:53:17 api_server.py:209] Started engine process with PID 3469310
INFO 03-12 16:53:17 config.py:2444] Downcasting torch.float32 to torch.float16.
INFO 03-12 16:53:19 __init__.py:207] Automatically detected platform cuda.
INFO 03-12 16:53:19 config.py:2444] Downcasting torch.float32 to torch.float16.
INFO 03-12 16:53:20 config.py:549] This model supports multiple tasks: {'classify', 'reward', 'score', 'embed', 'generate'}. Defaulting to 'generate'.
INFO 03-12 16:53:20 config.py:1382] Defaulting to use mp for distributed inference
WARNING 03-12 16:53:20 arg_utils.py:1197] The model has a long context length (1048576). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.
INFO 03-12 16:53:23 config.py:549] This model supports multiple tasks: {'generate', 'score', 'embed', 'classify', 'reward'}. Defaulting to 'generate'.
INFO 03-12 16:53:24 config.py:1382] Defaulting to use mp for distributed inference
WARNING 03-12 16:53:24 arg_utils.py:1197] The model has a long context length (1048576). This may cause OOM errors during the initial memory profiling phase, or result in low performance due to small KV cache space. Consider setting --max-model-len to a smaller value.
INFO 03-12 16:53:24 llm_engine.py:234] Initializing a V0 LLM engine (v0.7.3) with config: model='google/gemma-3-12b-it', speculative_config=None, tokenizer='google/gemma-3-12b-it', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=1048576, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=google/gemma-3-12b-it, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
WARNING 03-12 16:53:25 multiproc_worker_utils.py:300] Reducing Torch parallelism from 16 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
INFO 03-12 16:53:25 custom_cache_manager.py:19] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager
INFO 03-12 16:53:26 cuda.py:229] Using Flash Attention backend.
INFO 03-12 16:53:27 __init__.py:207] Automatically detected platform cuda.
(VllmWorkerProcess pid=3469513) INFO 03-12 16:53:27 multiproc_worker_utils.py:229] Worker ready; awaiting tasks
(VllmWorkerProcess pid=3469513) INFO 03-12 16:53:28 cuda.py:229] Using Flash Attention backend.
INFO 03-12 16:53:28 utils.py:916] Found nccl from library libnccl.so.2
INFO 03-12 16:53:28 pynccl.py:69] vLLM is using nccl==2.21.5
(VllmWorkerProcess pid=3469513) INFO 03-12 16:53:28 utils.py:916] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=3469513) INFO 03-12 16:53:28 pynccl.py:69] vLLM is using nccl==2.21.5
INFO 03-12 16:53:28 custom_all_reduce_utils.py:244] reading GPU P2P access cache from /home/developer1/.cache/vllm/gpu_p2p_access_cache_for_0,1.json
(VllmWorkerProcess pid=3469513) INFO 03-12 16:53:28 custom_all_reduce_utils.py:244] reading GPU P2P access cache from /home/developer1/.cache/vllm/gpu_p2p_access_cache_for_0,1.json
WARNING 03-12 16:53:28 custom_all_reduce.py:145] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=3469513) WARNING 03-12 16:53:28 custom_all_reduce.py:145] Custom allreduce is disabled because your platform lacks GPU P2P capability or P2P test failed. To silence this warning, specify disable_custom_all_reduce=True explicitly.
INFO 03-12 16:53:28 shm_broadcast.py:258] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_efe1e1cb'), local_subscribe_port=37439, remote_subscribe_port=None)
INFO 03-12 16:53:28 model_runner.py:1110] Starting to load model google/gemma-3-12b-it...
(VllmWorkerProcess pid=3469513) INFO 03-12 16:53:28 model_runner.py:1110] Starting to load model google/gemma-3-12b-it...
WARNING 03-12 16:53:28 utils.py:78] Gemma3ForConditionalGeneration has no vLLM implementation, falling back to Transformers implementation. Some features may not be supported and performance may not be optimal.
INFO 03-12 16:53:28 transformers.py:129] Using Transformers backend.
(VllmWorkerProcess pid=3469513) WARNING 03-12 16:53:28 utils.py:78] Gemma3ForConditionalGeneration has no vLLM implementation, falling back to Transformers implementation. Some features may not be supported and performance may not be optimal.
(VllmWorkerProcess pid=3469513) INFO 03-12 16:53:28 transformers.py:129] Using Transformers backend.
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] Exception in worker VllmWorkerProcess while processing method load_model.
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] Traceback (most recent call last):
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] File "/home/developer1/vllm/venv/lib/python3.12/site-packages/vllm/executor/multiproc_worker_utils.py", line 236, in _run_worker_process
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] output = run_method(worker, method, args, kwargs)
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] File "/home/developer1/vllm/venv/lib/python3.12/site-packages/vllm/utils.py", line 2196, in run_method
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] return func(*args, **kwargs)
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] File "/home/developer1/vllm/venv/lib/python3.12/site-packages/vllm/worker/worker.py", line 183, in load_model
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] self.model_runner.load_model()
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] File "/home/developer1/vllm/venv/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1112, in load_model
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] self.model = get_model(vllm_config=self.vllm_config)
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] File "/home/developer1/vllm/venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/__init__.py", line 14, in get_model
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] return loader.load_model(vllm_config=vllm_config)
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] File "/home/developer1/vllm/venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/loader.py", line 406, in load_model
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] model = _initialize_model(vllm_config=vllm_config)
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] File "/home/developer1/vllm/venv/lib/python3.12/site-packages/vllm/model_executor/model_loader/loader.py", line 125, in _initialize_model
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] return model_class(vllm_config=vllm_config, prefix=prefix)
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] File "/home/developer1/vllm/venv/lib/python3.12/site-packages/vllm/model_executor/models/transformers.py", line 135, in __init__
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] self.vocab_size = config.vocab_size
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] File "/home/developer1/vllm/venv/lib/python3.12/site-packages/transformers/configuration_utils.py", line 214, in __getattribute__
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] return super().__getattribute__(key)
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(VllmWorkerProcess pid=3469513) ERROR 03-12 16:53:28 multiproc_worker_utils.py:242] AttributeError: 'Gemma3Config' object has no attribute 'vocab_size'
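For what it's worth, the attribute the Transformers fallback backend is reading appears to live on the nested text config rather than on the top-level Gemma3Config, which would explain the AttributeError above. A minimal sketch of that check (an assumption on my part, not part of the log; it presumes the transformers dev build listed above and access to the gated google/gemma-3-12b-it repo):

# Hypothetical diagnostic, run separately from vllm serve.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("google/gemma-3-12b-it")
print(type(cfg).__name__)            # Gemma3Config (multimodal wrapper config)
print(hasattr(cfg, "vocab_size"))    # False -> consistent with the AttributeError above
print(cfg.text_config.vocab_size)    # the vocab size only exists on the nested text config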
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.