
Conversation

@namgyu-youn (Contributor) commented Aug 1, 2025

Summary:
Per https://docs.pytorch.org/docs/stable/generated/torch.norm.html, torch.norm is deprecated. To prevent foreseeable issues, this PR updates the vector norm calls. The following is the warning message from the docs:

> torch.norm is deprecated and may be removed in a future PyTorch release. Its documentation and behavior may be incorrect, and it is no longer actively maintained.

Test plan: CI
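A minimal sketch of the replacement this PR performs (my own illustration, assuming the default 2-norm; the actual callsites live in the torchao source):

```python
import torch

x = torch.randn(5)

# Deprecated spelling:
old = torch.norm(x)

# Preferred spelling going forward:
new = torch.linalg.vector_norm(x)

# Both compute the Euclidean (2-)norm of the vector by default.
assert torch.allclose(old, new)
```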

pytorch-bot (bot) commented Aug 1, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2660

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit ae39cc4 with merge base 66384a9 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Aug 1, 2025
@jerryzh168 (Contributor)

Thanks, according to the linked doc, it looks like there are three functions:

> Use torch.linalg.vector_norm() when computing vector norms and torch.linalg.matrix_norm() when computing matrix norms. For a function with a similar behavior as this one see torch.linalg.norm().

and the closest one is torch.linalg.norm(), which probably works for both vectors and matrices?

@namgyu-youn (Contributor, Author)

> Thanks, according to the linked doc, it looks like there are three functions:
>
> > Use torch.linalg.vector_norm() when computing vector norms and torch.linalg.matrix_norm() when computing matrix norms. For a function with a similar behavior as this one see torch.linalg.norm().
>
> and the closest one is torch.linalg.norm(), which probably works for both vectors and matrices?

@jerryzh168 In my experience, torch.linalg.vector_norm() is more readable because developers can see directly that it computes a vector norm. How about using torch.linalg.vector_norm()?

@namgyu-youn namgyu-youn changed the title Replace torch.norm with torch.linalg.vector_norm for PyTorch future update Replace torch.norm with torch.linalg.vector_norm Aug 11, 2025
@jerryzh168 (Contributor)

> Thanks, according to the linked doc, it looks like there are three functions:
> Use torch.linalg.vector_norm() when computing vector norms and torch.linalg.matrix_norm() when computing matrix norms. For a function with a similar behavior as this one see torch.linalg.norm().
> and the closest one is torch.linalg.norm(), which probably works for both vectors and matrices?
>
> @jerryzh168 In my experience, torch.linalg.vector_norm() is more readable because developers can see directly that it computes a vector norm. How about using torch.linalg.vector_norm()?

Does it work for matrices, or just vectors?

@namgyu-youn (Contributor, Author)

> @jerryzh168 In my experience, torch.linalg.vector_norm() is more readable because developers can see directly that it computes a vector norm. How about using torch.linalg.vector_norm()?
>
> Does it work for matrices, or just vectors?

@jerryzh168 torch.linalg.vector_norm only computes the vector norm, as we discussed. We could use torch.linalg.norm for both (matrix & vector) uses, but in my experience, torch.linalg.vector_norm helped clarify intent in NVIDIA/TensorRT-Model-Optimizer#206 and facebookresearch/optimizers#182.

linalg.norm might obscure developers' understanding of which norm is intended, so I prefer linalg.vector_norm and linalg.matrix_norm. But I'm happy to use linalg.norm; let me know which you prefer.
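To illustrate the trade-off being discussed (my own sketch, not code from the PR): for a 2-D tensor, the three functions happen to agree at their defaults, so the choice is mainly about how clearly the name states intent.

```python
import torch

A = torch.randn(3, 4)

v = torch.linalg.vector_norm(A)  # flattens A, 2-norm over all elements
m = torch.linalg.matrix_norm(A)  # Frobenius norm (default ord='fro')
n = torch.linalg.norm(A)         # Frobenius norm for 2-D input

# At their defaults these coincide, because the Frobenius norm of A
# equals the 2-norm of A.flatten().
assert torch.allclose(v, m)
assert torch.allclose(m, n)
```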

@jerryzh168 (Contributor)

> @jerryzh168 torch.linalg.vector_norm only computes the vector norm, as we discussed. We could use torch.linalg.norm for both (matrix & vector) uses, but in my experience, torch.linalg.vector_norm helped clarify intent in NVIDIA/TensorRT-Model-Optimizer#206 and facebookresearch/optimizers#182.
>
> linalg.norm might obscure developers' understanding of which norm is intended, so I prefer linalg.vector_norm and linalg.matrix_norm. But I'm happy to use linalg.norm; let me know which you prefer.

I don't quite get it; if it only works for the vector norm, what happens for matrix inputs? Or do we expect all callsites to only have vector inputs?

@jerryzh168 (Contributor)

Oh sorry, after reading the doc more closely, I think I was mistaken about what "vector" describes; it refers to the output rather than the input.

@namgyu-youn namgyu-youn marked this pull request as draft August 22, 2025 15:24
@namgyu-youn namgyu-youn marked this pull request as ready for review August 24, 2025 16:48
@namgyu-youn (Contributor, Author) commented Aug 24, 2025

@jerryzh168 it seems the CI failure (AttributeError: module 'torch' has no attribute 'int1') is caused by a PyTorch version mismatch. ant-research/MagicQuill#97 suggested upgrading PyTorch, which indicates the failure is unrelated to this change, but I'm not certain of the true cause.

We might face some troubleshooting here; feel free to close this PR if you think the issue is too entangled. This PR addresses functions that will be removed in the foreseeable future, but I'm fine with closing it if it breaks CI.

@namgyu-youn namgyu-youn requested a review from jerryzh168 August 24, 2025 16:57
@jerryzh168 (Contributor)

@namgyu-youn I think you can use

@unittest.skipIf(not torch_version_at_least("2.6.0"), "Need pytorch 2.6+")

to skip the tests that need torch.int1 (added in PyTorch 2.6, I think).
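A self-contained sketch of the suggested skip; torch_version_at_least here is a hypothetical stand-in for torchao's helper of the same name (assumed behavior, not the exact implementation):

```python
import unittest
import torch

def torch_version_at_least(min_version: str) -> bool:
    # Hypothetical stand-in for torchao's helper: compare release
    # components numerically, ignoring any local/dev suffix.
    def parse(v: str):
        return tuple(int(p) for p in v.split("+")[0].split(".") if p.isdigit())
    return parse(torch.__version__) >= parse(min_version)

class TestInt1(unittest.TestCase):
    @unittest.skipIf(not torch_version_at_least("2.6.0"), "Need pytorch 2.6+")
    def test_int1_exists(self):
        # torch.int1 was added in PyTorch 2.6, per the discussion above.
        self.assertTrue(hasattr(torch, "int1"))
```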

@namgyu-youn (Contributor, Author)

> @namgyu-youn I think you can use
>
> @unittest.skipIf(not torch_version_at_least("2.6.0"), "Need pytorch 2.6+")
>
> to skip the tests that need torch.int1 (added in PyTorch 2.6, I think).

Sorry, I didn't make that clear. What I wanted to ask was: should I add a skip to pass the (int1 support) CI? I wondered why this CI broke for this PR, but I haven't found the true reason (maybe it's related to other PRs).

@jerryzh168 (Contributor) commented Aug 29, 2025

@namgyu-youn Maybe rebase? Generally we should already skip the int1 tests for lower torch versions.

@namgyu-youn (Contributor, Author) commented Aug 29, 2025

@jerryzh168 Rebase was the fix; the cause was an old CI run. Since the 2.5.0 check was dropped in #2720, we don't need to investigate further; the CI failure seems unrelated.

@jerryzh168 jerryzh168 added the topic: not user facing Use this tag if you don't want this PR to show up in release notes label Sep 14, 2025
@jerryzh168 jerryzh168 merged commit e3d9720 into pytorch:main Sep 14, 2025
18 of 19 checks passed
@namgyu-youn namgyu-youn deleted the torch_norm branch September 14, 2025 04:58