[TorchToLinalg] Use linalg.transpose instead of generic in permuteTensor
#3872
Conversation
This PR changes the lowering to use `linalg.transpose` instead of `linalg.generic` in `torch_to_linalg::permuteTensor`.
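For illustration, the rewritten lowering might look roughly like the sketch below. This is not the verbatim patch; `input`, `permutation`, `outputDims`, and `elementType` are assumed stand-ins for values that `permuteTensor` already has in scope.

```cpp
#include "mlir/Dialect/Linalg/IR/Linalg.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"

using namespace mlir;

// Sketch only: build the permuted result with linalg.transpose, which carries
// the permutation as an attribute, instead of a linalg.generic that spells out
// permuted indexing maps and a trivial copy region.
static Value permuteViaTranspose(OpBuilder &b, Location loc, Value input,
                                 ArrayRef<int64_t> permutation,
                                 ArrayRef<OpFoldResult> outputDims,
                                 Type elementType) {
  // Destination tensor with the already-permuted shape.
  Value init = b.create<tensor::EmptyOp>(loc, outputDims, elementType);
  // linalg.transpose with tensor semantics yields a single result.
  return b.create<linalg::TransposeOp>(loc, input, init, permutation)
      ->getResult(0);
}
```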
ubfx left a comment:
I think this makes sense, but there seems to be a problem with 0D tensors in the e2e tests?
I'll take a look at it.
vivekkhandelwal1 left a comment:
You need to add a check for the zero-ranked input; everything else looks fine.
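For illustration, the guard could be as small as the sketch below: a 0-d tensor has no dimensions to permute, so the input can be forwarded unchanged. The names follow the sketch above and are assumptions, not the exact patch.

```cpp
// Sketch only: linalg.transpose has nothing to do for a rank-0 input,
// so bail out early and forward the input value unchanged.
if (cast<RankedTensorType>(input.getType()).getRank() == 0) {
  result = input;
  return success();
}
```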
I wonder if we should add a folder for …
Yeah, that's a nice thing to do. You may do it in the same PR or in another one as you wish.
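For concreteness, such a folder might look roughly like the sketch below. The target op (`AtenPermuteOp`), the identity-permutation condition, and the matcher used are all assumptions; the thread does not spell out these details.

```cpp
// Hypothetical sketch of the folder discussed above: fold a permute whose
// dims list is a statically known identity permutation to its input.
OpFoldResult AtenPermuteOp::fold(FoldAdaptor adaptor) {
  SmallVector<int64_t> dims;
  // Require a constant dims list; give up otherwise.
  if (!matchPattern(getDims(), m_TorchListOfConstantInts(dims)))
    return nullptr;
  // Only the identity permutation [0, 1, ..., rank-1] is a no-op.
  for (auto [idx, dim] : llvm::enumerate(dims))
    if (dim != static_cast<int64_t>(idx))
      return nullptr;
  // Types must already match for the fold to be valid.
  if (getSelf().getType() != getType())
    return nullptr;
  return getSelf();
}
```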
vivekkhandelwal1 left a comment:
LGTM
Thanks, I think it's better to open another PR for the folder.