
Conversation

@wangxiyuan wangxiyuan commented Jul 14, 2025

This PR fixes the broken CI. It requires vllm-project/vllm#20900 to be merged first.


codecov bot commented Jul 14, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 54.48%. Comparing base (c30ddb8) to head (063ffbb).
⚠️ Report is 647 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff             @@
##             main    #1773       +/-   ##
===========================================
+ Coverage   27.39%   54.48%   +27.09%     
===========================================
  Files          56       82       +26     
  Lines        6191    10063     +3872     
===========================================
+ Hits         1696     5483     +3787     
- Misses       4495     4580       +85     
Flag Coverage Δ
unittests 54.48% <100.00%> (+27.09%) ⬆️

Flags with carried forward coverage won't be shown.


from typing_extensions import ParamSpec
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.entrypoints.openai.cli_args import make_arg_parser
from vllm.model_executor.model_loader import get_model_loader
@wangxiyuan (Collaborator, Author) commented:

Importing vllm.model_executor.model_loader here leads to a torch infer_schema error. Since prompt_embedding only works in V0, let's remove the related test as a quick fix.
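
For reference, a minimal sketch of the deferred-import pattern that avoids triggering torch's schema inference at module import time. This is not the fix adopted here (the PR simply drops the V0-only test); the helper name and its load_config argument are illustrative, only get_model_loader itself comes from vLLM.

def get_loader(load_config):
    # Deferred import: importing vllm.model_executor.model_loader at module
    # scope is what triggers the torch infer_schema error, so pull it in lazily.
    from vllm.model_executor.model_loader import get_model_loader
    return get_model_loader(load_config)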

Signed-off-by: wangxiyuan <[email protected]>
scheduler.requests[req.request_id] = req
scheduler.running.append(req)
if not vllm_version_is("0.9.2"):
req.status = RequestStatus.RUNNING
@wangxiyuan (Collaborator, Author) commented:

This change is adapted from vllm-project/vllm#20739
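
As a self-contained sketch of this version-gated pattern (assuming RequestStatus comes from vllm.v1.request and vllm_version_is from vllm_ascend.utils; the helper itself is illustrative):

from vllm.v1.request import RequestStatus

from vllm_ascend.utils import vllm_version_is


def register_running_request(scheduler, req):
    # Attach the request to the scheduler's bookkeeping structures.
    scheduler.requests[req.request_id] = req
    scheduler.running.append(req)
    # After vllm-project/vllm#20739 the scheduler no longer marks the request
    # as RUNNING by itself, so on versions newer than 0.9.2 the test sets the
    # status explicitly.
    if not vllm_version_is("0.9.2"):
        req.status = RequestStatus.RUNNING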

from vllm_ascend.utils import adapt_patch # noqa E402

adapt_patch(True)
adapt_patch(False)
@wangxiyuan (Collaborator, Author) commented:

Ensure the torch patches are applied.
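
As a usage sketch, this is how the call would sit at the top of a test entry point (e.g. a conftest.py; the placement and the meaning of the boolean flag are assumptions, only adapt_patch itself comes from vllm_ascend):

# conftest.py (assumed location): apply vllm-ascend's patches before any test
# module imports code that depends on the patched torch behaviour.
from vllm_ascend.utils import adapt_patch  # noqa: E402

adapt_patch(True)   # presumably the global/platform-level patches
adapt_patch(False)  # presumably the remaining runtime patches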

@Yikun Yikun left a comment


Merge to main to recover CI

@Yikun Yikun merged commit 494b0f4 into vllm-project:main Jul 14, 2025
22 checks passed
@wangxiyuan wangxiyuan deleted the fix_ci_714 branch July 15, 2025 12:02
chopper0126 pushed a commit to chopper0126/vllm-ascend that referenced this pull request Oct 16, 2025
This PR fixes the broken CI. It requires
vllm-project/vllm#20900 to be merged first.

- vLLM version: v0.9.2
- vLLM main:
vllm-project/vllm@e8cc53a

Signed-off-by: wangxiyuan <[email protected]>
Angazenn pushed a commit to Angazenn/vllm-ascend that referenced this pull request Oct 21, 2025
This PR fixes the broken CI. It requires
vllm-project/vllm#20900 to be merged first.

- vLLM version: v0.9.2
- vLLM main:
vllm-project/vllm@e8cc53a

Signed-off-by: wangxiyuan <[email protected]>
