Conversation

Contributor

@huachenheli huachenheli commented Oct 29, 2025

Purpose

Fix an AttributeError in Qwen3OmniMoeThinkerForConditionalGeneration: during profile_run, _process_video_input accesses self.vllm_config, which was never assigned in __init__. Traceback:

(EngineCore_DP0 pid=3095821) Process EngineCore_DP0:
(EngineCore_DP0 pid=3095821) Traceback (most recent call last):
(EngineCore_DP0 pid=3095821)   File "/usr/lib64/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=3095821)     self.run()
(EngineCore_DP0 pid=3095821)   File "/usr/lib64/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=3095821)     self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/engine/core.py", line 783, in run_engine_core
(EngineCore_DP0 pid=3095821)     raise e
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/engine/core.py", line 770, in run_engine_core
(EngineCore_DP0 pid=3095821)     engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=3095821)                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/engine/core.py", line 538, in __init__
(EngineCore_DP0 pid=3095821)     super().__init__(
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/engine/core.py", line 109, in __init__
(EngineCore_DP0 pid=3095821)     num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
(EngineCore_DP0 pid=3095821)                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/engine/core.py", line 218, in _initialize_kv_caches
(EngineCore_DP0 pid=3095821)     available_gpu_memory = self.model_executor.determine_available_memory()
(EngineCore_DP0 pid=3095821)                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/executor/abstract.py", line 123, in determine_available_memory
(EngineCore_DP0 pid=3095821)     return self.collective_rpc("determine_available_memory")
(EngineCore_DP0 pid=3095821)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/executor/uniproc_executor.py", line 73, in collective_rpc
(EngineCore_DP0 pid=3095821)     return [run_method(self.driver_worker, method, args, kwargs)]
(EngineCore_DP0 pid=3095821)             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/serial_utils.py", line 459, in run_method
(EngineCore_DP0 pid=3095821)     return func(*args, **kwargs)
(EngineCore_DP0 pid=3095821)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "/home/huachenheli/uv_env/vllm/lib64/python3.12/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
(EngineCore_DP0 pid=3095821)     return func(*args, **kwargs)
(EngineCore_DP0 pid=3095821)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/worker/gpu_worker.py", line 284, in determine_available_memory
(EngineCore_DP0 pid=3095821)     self.model_runner.profile_run()
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/v1/worker/gpu_model_runner.py", line 3713, in profile_run
(EngineCore_DP0 pid=3095821)     dummy_encoder_outputs = self.model.get_multimodal_embeddings(
(EngineCore_DP0 pid=3095821)                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/model_executor/models/qwen3_omni_moe_thinker.py", line 1274, in get_multimodal_embeddings
(EngineCore_DP0 pid=3095821)     video_embeddings = self._process_video_input(multimodal_input)
(EngineCore_DP0 pid=3095821)                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "//home/huachenheli/github/vllm/vllm/model_executor/models/qwen2_5_omni_thinker.py", line 784, in _process_video_input
(EngineCore_DP0 pid=3095821)     with set_forward_context(None, self.vllm_config):
(EngineCore_DP0 pid=3095821)                                    ^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3095821)   File "/home/huachenheli/uv_env/vllm/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1964, in __getattr__
(EngineCore_DP0 pid=3095821)     raise AttributeError(
(EngineCore_DP0 pid=3095821) AttributeError: 'Qwen3OmniMoeThinkerForConditionalGeneration' object has no attribute 'vllm_config'

Test Plan

Test Result



@mergify mergify bot added the qwen Related to Qwen models label Oct 29, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request correctly fixes a critical AttributeError in Qwen3OmniMoeThinkerForConditionalGeneration. The traceback in the description clearly indicated that self.vllm_config was being accessed before it was assigned. Adding self.vllm_config = vllm_config in the __init__ method is the direct and correct solution to this problem. The change is minimal, well-placed, and effectively resolves the crash.
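The failure mode and the one-line fix can be reproduced in isolation. Below is a minimal, self-contained sketch; BuggyThinker, FixedThinker, and the stub VllmConfig are hypothetical stand-ins, not the actual vLLM classes. The real change adds self.vllm_config = vllm_config to Qwen3OmniMoeThinkerForConditionalGeneration.__init__:

```python
# Hypothetical stand-ins illustrating the bug and the fix; not vLLM code.

class VllmConfig:
    """Stub standing in for vllm.config.VllmConfig."""


class BuggyThinker:
    """Mimics the pre-fix model: vllm_config is received but never stored."""

    def __init__(self, vllm_config: VllmConfig) -> None:
        pass  # self.vllm_config is never assigned

    def _process_video_input(self):
        # Equivalent to: with set_forward_context(None, self.vllm_config): ...
        return self.vllm_config  # raises AttributeError


class FixedThinker:
    """Mimics the post-fix model: the config is stored in __init__."""

    def __init__(self, vllm_config: VllmConfig) -> None:
        self.vllm_config = vllm_config  # the one-line fix from this PR

    def _process_video_input(self):
        return self.vllm_config


config = VllmConfig()
try:
    BuggyThinker(config)._process_video_input()
except AttributeError as exc:
    print(f"before fix: {exc}")
print(f"after fix, config retrieved: {FixedThinker(config)._process_video_input() is config}")
```

In the real model the error surfaces through torch.nn.Module.__getattr__, which raises AttributeError for any name that is neither a stored attribute, a parameter, nor a submodule, which is why the traceback ends inside module.py.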

@ywang96
Member

ywang96 commented Oct 29, 2025

@huachenheli I just added this same fix inside #27705 actually

@huachenheli
Contributor Author

@huachenheli I just added this same fix inside #27705 actually

Ah okay. I'll close this one then.

