
Conversation

@lara-hdr (Contributor) commented Nov 5, 2019

Faster R-CNN should now be exportable to ONNX.
The PyTorch version should include commit ebc216a0765d85f345f9a5cd1dfd2ec360de3a52 (any nightly build after Nov 5th).
Opset 11 is the minimum ONNX opset version supported.
Only a batch size of 1 with a fixed image size is supported.

The test test_faster_rcnn() in test_onnx.py has an example of exporting the model (a sketch of the flow follows this list):
1. Create an input of a valid size to export the model.
2. Run the model with the input, then export it by calling torch.onnx.export() with the model and input.
3. Optionally, test/run the exported model with ONNX Runtime, as in ort_validate().
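A minimal sketch of that flow, assuming a pretrained fasterrcnn_resnet50_fpn and an illustrative output path (the actual test in test_onnx.py is set up slightly differently):

import torch
import torchvision
import onnxruntime

# Assumption: the standard torchvision detection model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# 1. An input of a valid size: batch size 1, fixed image size.
image = torch.rand(3, 300, 400)

# 2. Run the model once with the input, then export it;
#    opset 11 is the minimum supported opset.
model([image])
torch.onnx.export(model, ([image],), "faster_rcnn.onnx", opset_version=11)

# 3. Optionally run the exported model with ONNX Runtime. Note this needs
#    Resize opset 11 support in ONNX Runtime (see the skipped test below).
sess = onnxruntime.InferenceSession("faster_rcnn.onnx")
outputs = sess.run(None, {sess.get_inputs()[0].name: image.numpy()})

Since the exported graph bakes in the fixed input size, the image fed at inference time has to match the shape used at export.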

@fmassa (Member) left a comment:

LGTM, thanks a lot!

Tests are segfaulting, maybe because they require too much memory on the TravisCI machines?
One possibility would be to reduce the input sizes, or to move ONNX tests to CircleCI.

Thoughts?

@lara-hdr (Contributor, Author) commented Nov 5, 2019

Thanks @fmassa, I am trying with smaller image sizes now.

@unittest.skip("Disable test until Resize opset 11 is implemented in ONNX Runtime")
@unittest.skipIf(torch.__version__ < "1.4.", "Disable test if torch version is less than 1.4")
def test_faster_rcnn(self):
    images, test_images = self.get_test_images()
A review comment from a Member on this snippet:

I think we might also need to change the min_size and max_size for this test.
So something like

model.transform.min_size = 300
model.transform.max_size = 300

@codecov-io commented Nov 5, 2019

Codecov Report

Merging #1555 into master will increase coverage by 0.44%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master    #1555      +/-   ##
==========================================
+ Coverage   65.48%   65.92%   +0.44%     
==========================================
  Files          90       90              
  Lines        7080     7073       -7     
  Branches     1077     1076       -1     
==========================================
+ Hits         4636     4663      +27     
+ Misses       2135     2102      -33     
+ Partials      309      308       -1
Impacted Files Coverage Δ
torchvision/models/detection/transform.py 81.59% <100%> (+14.67%) ⬆️
torchvision/ops/_register_onnx_ops.py 100% <100%> (ø) ⬆️
torchvision/models/detection/rpn.py 82.19% <100%> (-0.25%) ⬇️
torchvision/models/detection/faster_rcnn.py 80.48% <0%> (+6.09%) ⬆️
torchvision/models/_utils.py 88.37% <0%> (+20.93%) ⬆️

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8909ff4...067c2c9.

@fmassa (Member) left a comment:

Awesome, thanks a lot Lara!

@fmassa merged commit be6f398 into pytorch:master on Nov 6, 2019
@Ed-Roodzant

Hi, I've been trying to convert a retrained model based on this one, and I still haven't gotten a usable result, but I came across this thread yesterday.

With the fixed input size, and being limited to 300x300, does that mean inference is limited to images of that size? That wouldn't do much good for detection :(

@lara-hdr (Contributor, Author)

@Ed-Roodzant the input size is set to 300x300 in the tests to make them run faster on the CI. You can export the model with an input of a bigger size, and inference would use that size.
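For illustration, a minimal sketch of exporting at a larger fixed size (the 800x1333 shape and the file name here are arbitrary choices for this example, not values from this thread):

import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# Still batch size 1, but a larger fixed image size; inference with the
# exported model then uses inputs of this size.
large_image = torch.rand(3, 800, 1333)
model([large_image])
torch.onnx.export(model, ([large_image],), "faster_rcnn_800x1333.onnx", opset_version=11)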
