[refactor] Update Ln-norm logic for upcoming PyTorch update #206
Conversation
torch.norm is deprecated and may be removed in future PyTorch releases
Thanks @namgyu-youn for contributing the fix. Can you also apply similar changes to the other usages of torch.norm? Please also share testing details - have you run the unit and GPU tests?
@kevalmorabia97: Sure, and it seems there is no other Ln-norm logic elsewhere in the codebase.
The result of pytest is the following. Also, the local test code is the following:

import torch
# Create test tensors
torch.manual_seed(42)
X = torch.randn(10, 10)
X_hat, A_reg = torch.randn(10, 10), torch.randn(5, 5)
# Old Version (torch.norm)
print("=== torch.norm ===")
relative_error = torch.dist(X, X_hat, p=torch.inf) / torch.norm(X, p=torch.inf)
norm_result = torch.norm(A_reg, p=torch.inf)
print("Relative error:", relative_error.item())
print("A_reg infinity norm:", norm_result.item())
print()
# New Version (torch.linalg.vector_norm)
print("=== torch.linalg.vector_norm ===")
relative_error = torch.dist(X, X_hat, p=torch.inf) / torch.linalg.vector_norm(X, ord=torch.inf)
norm_result = torch.linalg.vector_norm(A_reg, ord=torch.inf)
print("Relative error:", relative_error.item())
print("A_reg infinity norm:", norm_result.item()) And the result is: === torch.norm ===
Relative error: 1.512144684791565
A_reg infinity norm: 1.9775909185409546
=== torch.linalg.vector_norm ===
Relative error: 1.512144684791565
A_reg infinity norm: 1.9775909185409546
Got it, thank you for the guidance.
Thanks for the changes. Let me take this into our internal repo, where we run more extensive CI tests that are yet to be migrated to GitHub.
assert isinstance(out, nn.Linear)
hp_hidden_dim.register_importance(lambda: out._parameters["weight"].detach().norm(dim=0))
hp_hidden_dim.register_importance(
    lambda: torch.linalg.norm(out._parameters["weight"].detach(), dim=0)
)
Should we use vector_norm here as well?
This case computes the vector norm because `dim` is an int. Please check the comment above; let me know if there is anything wrong in the internal CI.
Let's use `vector_norm` explicitly to avoid any ambiguity and for consistency with the rest of the changes.
`torch.linalg.norm` supports various calculations based on the `dim` parameter:

- If `dim` is an int, the vector norm will be computed.
- If `dim` is a 2-tuple, the matrix norm will be computed.
- If `dim=None` and `ord=None`, A will be flattened to 1D and the 2-norm of the resulting vector will be computed.
- If `dim=None` and `ord!=None`, A must be 1D or 2D.

Therefore, the vector norm is not computed when `dim` is a tuple. (Nit: `torch.linalg.vector_norm` is more explicit for vector norms.)
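For reference, a minimal sketch of these `dim` cases; the tensor and shapes below are illustrative, not taken from the PR:

```python
import torch

A = torch.randn(4, 5)

# dim is an int -> vector norm along that dimension
v = torch.linalg.norm(A, dim=0)                  # shape (5,)
v_explicit = torch.linalg.vector_norm(A, dim=0)  # same values, explicit intent
assert torch.allclose(v, v_explicit)

# dim is a 2-tuple -> matrix norm (Frobenius by default)
m = torch.linalg.norm(A, dim=(0, 1))
m_explicit = torch.linalg.matrix_norm(A, dim=(0, 1))
assert torch.allclose(m, m_explicit)

# dim=None and ord=None -> A is flattened and the 2-norm is computed
flat = torch.linalg.norm(A)
assert torch.allclose(flat, torch.linalg.vector_norm(A.flatten()))
```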
There is one more `torch.norm` change missing at TensorRT-Model-Optimizer/modelopt/torch/nas/plugins/megatron.py (lines 616 to 619 in bb630db).
Can you please address that as well?
- `torch.norm` is deprecated in favor of `torch.linalg.norm` and `torch.linalg.vector_norm`.
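As a hedged illustration of choosing between the two replacements (the tensor below is made up for illustration, not from the PR):

```python
import torch

A = torch.randn(3, 3)

# Vector-style usage of torch.norm maps to torch.linalg.vector_norm
assert torch.allclose(
    torch.norm(A, p=torch.inf),
    torch.linalg.vector_norm(A, ord=torch.inf),
)

# Matrix-style (Frobenius) usage maps to torch.linalg.matrix_norm
assert torch.allclose(
    torch.norm(A, p="fro"),
    torch.linalg.matrix_norm(A, ord="fro"),
)
```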
@kevalmorabia97: Thanks for your guidance. Could you take a look at this PR?
Also, could you comment about #146 - Comment? I am planning to review the relevant source code, but before starting, your (and other maintainers') advice would be valuable background for setting the direction. I hope the ticket is still open.
Co-authored-by: namgyu-youn <[email protected]>
Your PR commit is merged. Thank you for contributing!
commit 7a27f2a
Author: Keval Morabia <[email protected]>
Date: Mon Jul 14 23:19:04 2025 +0530

    Update Ln-norm logic for upcoming PyTorch update (NVIDIA#206)

    Co-authored-by: namgyu-youn <[email protected]>

commit 8e3bfb5
Author: Keval Morabia <[email protected]>
Date: Mon Jul 14 23:17:58 2025 +0530

    Fix NF4 scale padding (NVIDIA#183)

    Co-authored-by: ishan-modi <[email protected]>

commit cafa7f6
Author: Keval Morabia <[email protected]>
Date: Mon Jul 14 23:08:26 2025 +0530

    Update for 0.33.0 release

commit 33a45be
Author: omrialmog <[email protected]>
Date: Fri Jun 27 08:31:31 2025 -0700

    Update README.md news NVFP4 Blog (NVIDIA#223)

commit 5b4dc03
Author: Keval Morabia <[email protected]>
Date: Wed Jun 18 04:46:36 2025 +0530

    Add Github CI action to build and publish docs (NVIDIA#219)

commit de20a6a
Author: Keval Morabia <[email protected]>
Date: Tue Jun 17 03:27:57 2025 +0530

    Enable cpu unit tests in Github CI (NVIDIA#210)

commit 11b3eb6
Author: Keval Morabia <[email protected]>
Date: Wed Jun 11 16:14:16 2025 -0700

    Add tox.ini and fix code_quality workflow

commit 6fd7a64
Author: Keval Morabia <[email protected]>
Date: Wed Jun 11 15:33:45 2025 -0700

    Add SECURITY.md file

commit d6e32e9
Author: Keval Morabia <[email protected]>
Date: Tue Jun 10 15:56:35 2025 -0700

    Add code quality checks for pull requests

commit 5a2bf34
Author: Keval Morabia <[email protected]>
Date: Thu Jun 5 13:50:50 2025 -0700

    Fix installation files
What does this PR do?
Type of change: Refactor
Overview: Based on the PyTorch docs, torch.norm() will be removed in a future version. Therefore, this update replaces torch.norm() with torch.linalg.vector_norm().
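A minimal sketch of the replacement pattern; the tensor and the p/ord values below are illustrative, not taken from the diff:

```python
import torch

w = torch.randn(8, 4)

# Before: deprecated API
old = torch.norm(w, p=2, dim=0)

# After: explicit vector-norm API with the same semantics
new = torch.linalg.vector_norm(w, ord=2, dim=0)

assert torch.allclose(old, new)
```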
Usage
Testing
pytest and the local test script shared in the comment above
Before your PR is "Ready for review"
Additional Information