[TOSA] Fix float to integer cast for torch.ops.aten.to lowering #3946
Conversation
Hi @sahas3, for the Torch float to integer cast behavior, did you observe it to be the same for all Torch casts, or just for `torch.ops.aten.to`?
sjarus left a comment:
If Torch mandates round towards zero, can that not be synthetically expressed here, @sahas3?
Looks like we had a misunderstanding earlier, @sahas3. Approving.
* Previously, in Torch to TOSA, there were three ways to create a tosa.cast op:
  - `rewriter.create<tosa::CastOp>()`
  - `tosa::promoteType()`
  - `tosa::tosaCastTensorToType()`
* This commit combines the three APIs above into
  `tosa::tosaCastTensorToType()` with the following features:
  - Check whether the source and destination element types are the same
    before casting. If they are the same, skip the cast.
  - Custom float to integer cast behavior, added from this PR:
    llvm#3946
    TLDR: PyTorch's and TOSA's float to integer casting behaviors differ
    (round toward zero vs round to nearest, respectively), which requires
    a custom cast here (see the sketch after this commit message).
  - Future `TODO`: add a --strict mode that includes
    `checkValidityOfCast()` to ensure that casting pairs follow the TOSA
    specification.
* Update LIT tests.
Signed-off-by: Justin Ngo <[email protected]>
Change-Id: I2aef3c79d8f2d98b93e671d5b815b8eab33e697e
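For context, here is a minimal Python sketch, not the lowering code itself, of how a round-toward-zero cast can be composed from floor, ceil, and a select, which is the shape of the custom cast described above; the helper name and the sample values are hypothetical.

```python
import torch

def trunc_toward_zero(x: torch.Tensor) -> torch.Tensor:
    # floor() for non-negative inputs, ceil() for negative inputs; the
    # intermediate value is already integral, so the final integer cast
    # no longer depends on the backend's rounding mode.
    truncated = torch.where(x >= 0, torch.floor(x), torch.ceil(x))
    return truncated.to(torch.int32)

x = torch.tensor([2.7, -2.7, 1.2, -1.2])
print(trunc_toward_zero(x))  # tensor([ 2, -2,  1, -1], dtype=torch.int32)
print(x.to(torch.int32))     # same result: matches PyTorch's cast semantics
```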
The behavior of the float -> integer cast in PyTorch (though I haven't found the actual code implementing the cast) appears to be, based on the results produced in PyTorch, round towards zero (the same as `arith.fptosi`/`arith.fptoui`). Currently we only emit `tosa.cast` for this operation, but as per the spec https://www.mlplatform.org/tosa/tosa_spec.html#_cast the rounding performed for float -> integer is round to nearest integer (not round towards zero). Hence, the current TOSA lowering for `torch.ops.aten.to` produces incorrect answers.
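A minimal illustration of that difference (sample values chosen for this write-up, not taken from the PR): PyTorch's `Tensor.to`, which dispatches through `torch.ops.aten.to`, truncates toward zero, whereas rounding to nearest, as a bare `tosa.cast` performs, yields different integers for the same inputs.

```python
import torch

x = torch.tensor([2.7, -2.7, 0.9, -0.9])

# PyTorch truncates toward zero when casting float -> integer.
print(x.to(torch.int32))               # tensor([ 2, -2,  0,  0], dtype=torch.int32)

# Round-to-nearest, as a plain tosa.cast performs per the spec, gives
# different values for the same inputs.
print(torch.round(x).to(torch.int32))  # tensor([ 3, -3,  1, -1], dtype=torch.int32)
```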