
Conversation

@vinx13 (Member) commented Jul 25, 2019

  • CUDA schedule for pool grad
  • Relay pool_grad tests (I missed this file in previous PR)

Please review @junrushao1994 @MarisaKirisame @merrymercy @masahi

@junrushao (Member) left a comment
LGTM :-)

    out = op
else:
    out = outs[0].op.output(0)
s[op].set_scope('local')
Review comment (Member):
I think this can be set automatically if it is compute_at a threadIdx

@vinx13 vinx13 force-pushed the feature/pool_grad_cuda branch 2 times, most recently from ff29150 to e5e4040 Compare July 26, 2019 02:55
@vinx13 vinx13 merged commit f1ede9a into apache:master Jul 26, 2019
@MarisaKirisame (Contributor) commented:

@vinx13 can you move the pool_grad test from the current mode to numerical checking? In the current mode, we essentially copy and paste the grad function into the test function and compare the two results. If our original concept is wrong, what we test against is likely to also be wrong. I think we should slowly replace basically all gradient tests with numerical-checking ones, and the more complex the grad is, the more crucial this is for us.
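Numerical checking in the sense suggested here means comparing the analytic gradient against central finite differences of the forward function, rather than against a copy of the same gradient formula. A minimal NumPy sketch of the technique; the toy function `f(x) = sum(x**2)`, the input shape, and the tolerances are illustrative assumptions, not code from TVM's test suite:

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar-valued f at x (modifies a copy)."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        plus = f(x)
        x[idx] = orig - eps
        minus = f(x)
        x[idx] = orig  # restore the perturbed entry
        grad[idx] = (plus - minus) / (2 * eps)
        it.iternext()
    return grad

def check_grad(f, analytic_grad, x, rtol=1e-4, atol=1e-6):
    """Compare an analytic gradient against the finite-difference one."""
    num = numerical_grad(f, x.copy())
    ana = analytic_grad(x)
    np.testing.assert_allclose(ana, num, rtol=rtol, atol=atol)

# Example: f(x) = sum(x**2) has analytic gradient 2*x.
x = np.random.randn(3, 4)
check_grad(lambda v: np.sum(v ** 2), lambda v: 2 * v, x)
```

The benefit is exactly the one raised above: the reference comes from the forward function alone, so a conceptual error shared by the gradient implementation and a copy-pasted test cannot cancel out.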

wweic pushed a commit to wweic/tvm that referenced this pull request Aug 9, 2019
* [TOPI][CUDA] Schedule for pool_grad

* Relay test

* Fix fused op

* doc

* Remove set scope local
wweic pushed a commit to neo-ai/tvm that referenced this pull request Sep 6, 2019

4 participants