
Conversation

@leo-pony leo-pony commented Jul 1, 2025

What this PR does / why we need it?

Bump the torch version to 2.7.1 and clean up the infer-schema patch 857f489 (#837). This patch also depends on #1974.

Does this PR introduce any user-facing change?

No

How was this patch tested?

CI passed

torch-npu 2.7.1rc1 install guide:
https://gitee.com/ascend/pytorch/tree/v2.7.1/
Install dependencies:

```
pip3 install pyyaml
pip3 install setuptools
```

Install torch-npu:

Closes: #1866
Closes: #1390
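The install steps above can be sketched as a single shell sequence. This is a minimal sketch, not from the PR itself: it assumes the torch-npu 2.7.1rc1 pin and the Huawei Cloud mirror URL shown later in this PR's requirements.txt diff; the exact wheel for your environment may differ, so check the linked install guide first.

```shell
# Install build dependencies first (as listed in the PR description)
pip3 install pyyaml
pip3 install setuptools

# Install torch-npu, pinned to match this PR's requirements.txt.
# The --pre flag is needed because 2.7.1rc1 is a pre-release;
# the extra index URL is the Ascend PyPI mirror from requirements.txt.
pip3 install --pre \
    --extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi \
    torch-npu==2.7.1rc1
```

Note that torch-npu also requires a matching CANN toolkit (>= 8.1.RC1 per the README change in this PR) to actually run on Ascend NPUs.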

@leo-pony leo-pony marked this pull request as draft July 1, 2025 09:39

github-actions bot commented Jul 1, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@leo-pony leo-pony closed this Jul 1, 2025
@leo-pony leo-pony reopened this Jul 1, 2025
@github-actions github-actions bot added the documentation (Improvements or additions to documentation) label and removed the merge-conflicts label Jul 1, 2025
@leo-pony leo-pony force-pushed the torch_2_7_adapt branch 2 times, most recently from 04ddefc to 7bab351 Compare July 3, 2025 08:34
@leo-pony leo-pony force-pushed the torch_2_7_adapt branch 2 times, most recently from 4f45d58 to b67e468 Compare July 4, 2025 09:45

codecov bot commented Jul 4, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 76.31%. Comparing base (ad366bf) to head (6937b7d).
⚠️ Report is 612 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1562      +/-   ##
==========================================
- Coverage   76.41%   76.31%   -0.11%     
==========================================
  Files         113      111       -2     
  Lines       12553    12485      -68     
==========================================
- Hits         9593     9528      -65     
+ Misses       2960     2957       -3     
Flag        Coverage           Δ
unittests   76.31% <100.00%>   -0.11% ⬇️



github-actions bot commented Jul 6, 2025

This pull request has conflicts, please resolve those before we can evaluate the pull request.

@wangxiyuan wangxiyuan force-pushed the torch_2_7_adapt branch 2 times, most recently from e113a90 to f10b808 Compare July 9, 2025 03:38
@leo-pony leo-pony marked this pull request as ready for review July 10, 2025 06:11
@leo-pony leo-pony marked this pull request as draft July 10, 2025 06:13
@Yikun Yikun added the accuracy-test (enable all accuracy test for PR) and ready-for-test (start test by label for PR) labels Jul 10, 2025
@leo-pony leo-pony marked this pull request as ready for review July 24, 2025 06:46
@Yikun Yikun left a comment

Let's wait for the first POC version of torch-npu 2.7.1.

# limitations under the License.
# This file is a part of the vllm-ascend project.
#


revert unrelated change to avoid cherrypick conflict

size,
loop_cnt,
aiv_num);


revert unrelated change to avoid cherrypick conflict

int64_t aiv_num = 0;
TORCH_CHECK(aclGetDeviceCapability(device_id, ACL_DEVICE_INFO_VECTOR_CORE_NUM, &aiv_num) == ACL_SUCCESS);
uint32_t loop_cnt = (size + aiv_num - 1) / aiv_num;


revert unrelated change to avoid cherrypick conflict

README.md Outdated
* Python >= 3.9, < 3.12
* CANN >= 8.1.RC1
* PyTorch >= 2.5.1, torch-npu >= 2.5.1.post1.dev20250619
* PyTorch >= 2.7.1, torch-npu >= 2.7.1rc1

Suggested change
* PyTorch >= 2.7.1, torch-npu >= 2.7.1rc1
* PyTorch >= 2.7.1, torch-npu >= 2.7.1.dev20250724

README.zh.md Outdated
* Python >= 3.9, < 3.12
* CANN >= 8.1.RC1
* PyTorch >= 2.5.1, torch-npu >= 2.5.1.post1.dev20250619
* PyTorch >= 2.5.1, torch-npu >= 2.7.1rc1

Suggested change
* PyTorch >= 2.5.1, torch-npu >= 2.7.1rc1
* PyTorch >= 2.5.1, torch-npu >= 2.7.1.dev20250724

pyproject.toml Outdated
"torch-npu==2.5.1.post1.dev20250619",
"torch>=2.5.1",
"torchvision<0.21.0",
"torch-npu==2.7.1rc1",

Suggested change
"torch-npu==2.7.1rc1",
"torch-npu==2.7.1.dev20250724",

requirements.txt Outdated
--pre
--extra-index-url https://mirrors.huaweicloud.com/ascend/repos/pypi
torch-npu==2.5.1.post1.dev20250619
torch-npu==2.7.1rc1

Suggested change
torch-npu==2.7.1rc1
torch-npu==2.7.1.dev20250724
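After switching the pins suggested above, one quick way to sanity-check what actually got installed is to query the package metadata. This is a generic check, not something from the PR; it only assumes `torch` and `torch-npu` are installed in the current environment.

```shell
# Print the installed torch version via Python
python3 -c "import torch; print(torch.__version__)"

# Show torch-npu's installed version from pip metadata
pip3 show torch-npu | grep -i '^Version'
```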

@@ -1,4 +1,3 @@
#

revert unrelated change


assert is_hccl_available()

# TODO(Yizhou): The reason we need to set options while vllm does not

@yiz-liu FYI, we can finally clean this up; thanks for your note.


Yikun and others added 4 commits August 4, 2025 13:41
@wangxiyuan wangxiyuan merged commit 807f089 into vllm-project:main Aug 5, 2025
25 checks passed
zzhx1 pushed a commit to lidenghui1110/vllm-ascend that referenced this pull request Aug 11, 2025
### What this PR does / why we need it?
Bump the torch version to 2.7.1 and clean up the infer-schema patch
vllm-project@857f489
(vllm-project#837). This patch also depends on vllm-project#1974.

### Does this PR introduce any user-facing change?
No

#### How was this patch tested?
CI passed

torch-npu 2.7.1rc1 install guide:
https://gitee.com/ascend/pytorch/tree/v2.7.1/
Install dependencies:
```
pip3 install pyyaml
pip3 install setuptools
```
Install torch-npu:

Closes: vllm-project#1866
Closes: vllm-project#1390


- vLLM version: v0.10.0
- vLLM main:
vllm-project/vllm@9af654c

---------

Signed-off-by: Yikun Jiang <[email protected]>
Signed-off-by: leo-pony <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
zzhx1 pushed a commit to lidenghui1110/vllm-ascend that referenced this pull request Aug 11, 2025
chopper0126 pushed a commit to chopper0126/vllm-ascend that referenced this pull request Sep 26, 2025
@leo-pony leo-pony deleted the torch_2_7_adapt branch October 14, 2025 02:53
Angazenn pushed a commit to Angazenn/vllm-ascend that referenced this pull request Oct 21, 2025

Labels

accuracy-test (enable all accuracy test for PR), ci/build, documentation (Improvements or additions to documentation), module:core, module:quantization, module:tests, ready-for-test (start test by label for PR)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: Failed to benchmark on main
[Feature]: Request vllm-ascend to support torch_npu>=2.6

4 participants