Enable ONNX Test for FasterRcnn #1555
Conversation
fmassa left a comment
LGTM, thanks a lot!
Tests are segfaulting, maybe because they require too much memory on the TravisCI machines?
One possibility would be to reduce the input sizes, or to move ONNX tests to CircleCI.
Thoughts?
thanks @fmassa, I am trying with smaller image sizes now.
@unittest.skip("Disable test until Resize opset 11 is implemented in ONNX Runtime")
@unittest.skipIf(torch.__version__ < "1.4.", "Disable test if torch version is less than 1.4")
def test_faster_rcnn(self):
    images, test_images = self.get_test_images()
I think we might also need to change the min_size and max_size for this test.
So something like
model.transform.min_size = 300
model.transform.max_size = 300
Codecov Report
@@ Coverage Diff @@
## master #1555 +/- ##
==========================================
+ Coverage 65.48% 65.92% +0.44%
==========================================
Files 90 90
Lines 7080 7073 -7
Branches 1077 1076 -1
==========================================
+ Hits 4636 4663 +27
+ Misses 2135 2102 -33
+ Partials 309 308 -1
fmassa left a comment
Awesome, thanks a lot Lara!
Hi, I've been trying to convert a re-trained model based on this one and still haven't gotten a model I've been able to use, but I came across this thread yesterday. With the fixed input size, and being limited to 300x300, does that mean inference will be limited to images of that size? That won't do much good for detection :(
@Ed-Roodzant the input size is set to 300x300 in the tests to make them run faster on the CI. You can export the model with an input of a bigger size, and inference would use that size.
Faster R-CNN should now be exportable to ONNX.
The PyTorch version should include commit ebc216a0765d85f345f9a5cd1dfd2ec360de3a52 (any nightly version after Nov 5th).
Opset 11 is the minimum ONNX version supported.
Only a batch size of 1 with fixed image size is supported.
The test test_faster_rcnn() in test_onnx.py has an example of exporting the model:
1 - Create an input of a valid size to export the model.
2 - Run the model with that input, then export it by calling torch.onnx.export() with the model and input.
3 - Optionally, test/run the exported model with ONNX Runtime, as in ort_validate().