
Conversation

@jikunshang (Collaborator) commented on Sep 17, 2025

Purpose

#23693 introduced torch.cuda.Stream in GPUModelRunner, which breaks XPU behavior. This PR fixes the issue by forwarding the affected torch.cuda APIs to torch.xpu.
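
A minimal sketch of the forwarding idea (illustrative only; the attribute list and the platform check here are assumptions, not necessarily the exact patch in this PR):

```python
import torch

# Illustrative forwarding: make CUDA-centric call sites (e.g. torch.cuda.Stream
# in GPUModelRunner) resolve to the XPU equivalents when running on XPU.
# The set of attributes patched here is an assumption for the sketch.
if hasattr(torch, "xpu") and torch.xpu.is_available():
    torch.cuda.Stream = torch.xpu.Stream
    torch.cuda.Event = torch.xpu.Event
    torch.cuda.stream = torch.xpu.stream
    torch.cuda.current_stream = torch.xpu.current_stream
    torch.cuda.synchronize = torch.xpu.synchronize
```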

Test Plan

CI

Test Result



@gemini-code-assist (bot) left a comment


Code Review

This pull request aims to fix an issue for XPU devices by patching torch.cuda APIs to forward to torch.xpu. While the intention is correct, the implementation has a critical flaw: it performs monkey-patching on the global torch.cuda module without properly restoring the original attributes. This can lead to global state corruption and unpredictable behavior in other parts of the code. My review includes a suggestion to implement the patching in a safe, contained manner using a try...finally block that saves and restores the original attributes.
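
A hedged sketch of that suggestion (helper name and attribute list are assumptions, not the merged code): scope the monkey-patch with a context manager so the original torch.cuda attributes are always restored, even if the wrapped code raises.

```python
import contextlib

import torch


@contextlib.contextmanager
def patch_cuda_to_xpu(names=("Stream", "Event", "stream", "current_stream", "synchronize")):
    """Temporarily forward selected torch.cuda attributes to torch.xpu,
    restoring the originals on exit even if the body raises."""
    saved = {name: getattr(torch.cuda, name) for name in names}
    try:
        for name in names:
            setattr(torch.cuda, name, getattr(torch.xpu, name))
        yield
    finally:
        for name, original in saved.items():
            setattr(torch.cuda, name, original)
```

With this pattern the patch would wrap only the XPU-specific code path (e.g. `with patch_cuda_to_xpu(): ...`), so the rest of the process never observes a modified torch.cuda module.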

@bigPYJ1151 enabled auto-merge (squash) on September 17, 2025 01:35
@github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Sep 17, 2025
@bigPYJ1151 merged commit dd39baf into vllm-project:main on Sep 17, 2025
40 checks passed
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025

Labels

ready, v1


2 participants