[Relay][AutoTVM] Relay op strategy #4644
Conversation
When is conv2d gonna be here? Need it for training.

x86 conv2d is already in the PR. Working on CUDA and more targets now.
I've added the strategy for all ops. Since this PR is huge, we can start reviewing it now. Could you help review it?
comaniac left a comment:
Left some comments related to the AutoTVM changes.
    self._counter += 1
-   self.update(target, workload, cfg)
+   self.update(target, wkl, cfg)
+   cfg.workload = wkl
Where is cfg.workload initialized? I didn't find a definition in ConfigSpace. Also, what's the purpose of having a workload field in a config space?
This is specific to ApplyGraphBest, and the reason is complicated. Because ApplyGraphBest relies on the order of queries, we cannot use relay.backend.compile_engine.select_implementation to collect the AutoTVM workload, as it may query more than once per op. Therefore, as a temporary workaround, we sneak the workload into the returned cfg. We can remove this logic once ApplyGraphBest no longer relies on query order.
@kevinthesun
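To make the workaround above concrete, here is a minimal pure-Python sketch (not the actual AutoTVM implementation; class names, fields, and the record format are simplified stand-ins): the dispatch context answers queries purely by replay order, and attaches the true workload to the config it hands back so the caller can tell which workload the config belongs to.

```python
class ConfigSpace:
    """Hypothetical stand-in for autotvm's ConfigSpace."""

    def __init__(self):
        # Temporary workaround field: ApplyGraphBest.query attaches the
        # workload here so callers can recover which workload a cfg
        # matches. Remove once ApplyGraphBest no longer relies on
        # query order.
        self.workload = None


class ApplyGraphBest:
    """Sketch of a dispatch context that replays graph-tuned configs
    in the order operators are queried."""

    def __init__(self, records):
        # records: list of (workload, cfg) pairs in graph-tuning order
        self._records = records
        self._counter = 0

    def query(self, target, workload):
        # The incoming `workload` argument is ignored: correctness
        # depends purely on query order, which is why the compile
        # engine cannot safely re-query (it may query more than once
        # per op while selecting an implementation).
        wkl, cfg = self._records[self._counter]
        self._counter += 1
        cfg.workload = wkl  # sneak the true workload into the returned cfg
        return cfg


# usage sketch with made-up workload names
ctx = ApplyGraphBest([("conv2d_wkl_0", ConfigSpace()),
                      ("conv2d_wkl_1", ConfigSpace())])
first = ctx.query("llvm", None)
second = ctx.query("llvm", None)
print(first.workload, second.workload)  # conv2d_wkl_0 conv2d_wkl_1
```

Defining `workload = None` up front in `ConfigSpace.__init__` (rather than setting it only inside `query`) also addresses the review question about where the field is initialized, and gives a natural place for the removal comment.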
I see. Could we define self.workload in ConfigSpace and add your comment on it, so that we remember to remove it in the future?
    if temp.name == "pad_temp":
        data = temp.op.input_tensors[0]
        # TODO(@Laurawly): Do we need to schedule pad op here?
Yeah we need to schedule temp.
cc @merrymercy @vinx13 @ZihengJiang @jwfromm please help review if you have time
@merrymercy Now the tophub version number has been updated, and the VTA TSIM test time is also back to normal after the fix. Could you review the PR again?
There are a couple of TODOs related to PR #4644:
RFC & discussion: https://discuss.tvm.ai/t/relay-op-strategy/5247/18
Update AutoTVM log format: https://discuss.tvm.ai/t/autotvm-log-format/5630
Todo list
Thanks @kevinthesun @comaniac for the help with this PR!