
Conversation

@icemelon (Member) commented Jan 7, 2020

RFC & discussion: https://discuss.tvm.ai/t/relay-op-strategy/5247/18
Update AutoTVM log format: https://discuss.tvm.ai/t/autotvm-log-format/5630

Todo list

  • Add op strategy for conv2d & conv2d related ops (e.g., conv2d_nchwc, conv2d_winograd)
  • Add & update op strategy for conv2d_transpose, dense, bitpack, bitserial_conv2d, bitserial_dense
  • Clean up autotvm (after updating all old autotvm templates)
  • Rewrite TOPI test
  • Update AutoTVM related tutorials

Thanks @kevinthesun and @comaniac for the help with this PR!

@MarisaKirisame (Contributor)

When will conv2d be ready? I need it for training.

@icemelon (Member, Author)

x86 conv2d is already in the PR. I'm working on CUDA and more targets now.

@icemelon icemelon force-pushed the op-dispatch branch 2 times, most recently from a90ee23 to 02cd271 Compare January 27, 2020 19:49
@icemelon icemelon marked this pull request as ready for review January 29, 2020 20:20
@icemelon (Member, Author) commented Jan 29, 2020

I've added the strategy for all ops. Since this PR is huge, let's start reviewing it now. Could you help review it?
@tqchen @kevinthesun @comaniac @masahi @MarisaKirisame @hlu1 @yzhliu @zhiics @ZihengJiang @merrymercy @vinx13 @FrozenGene @jroesch @jwfromm

@icemelon icemelon changed the title [Draft] Relay op strategy [WIP] Relay op strategy Jan 29, 2020
@comaniac (Contributor) left a comment


Left some comments related to the AutoTVM changes.

```python
self._counter += 1
self.update(target, workload, cfg)
self.update(target, wkl, cfg)
cfg.workload = wkl
```
Contributor


Where is cfg.workload initialized? I didn't find a definition in ConfigSpace. Also, what's the purpose of having a workload field in a config space?

@icemelon (Member, Author) commented Jan 30, 2020


This is specific to ApplyGraphBest, and the reason is complicated. Because ApplyGraphBest relies on the query order, we cannot use relay.backend.compile_engine.select_implement to collect the AutoTVM workload, as it may query more than once. So, as a temporary workaround, we sneak the workload into the returned cfg. We can remove this logic once ApplyGraphBest no longer relies on the query order.
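The workaround described above can be sketched roughly like this (a minimal illustration, not TVM's actual code; `SimpleConfig` and `ApplyGraphBestSketch` are hypothetical names):

```python
# Minimal sketch (hypothetical names, not TVM's real classes) of the
# workaround: the dispatch context attaches the matched workload to the
# config it returns, so callers can recover it even though matching is
# driven by the query order rather than by the workload argument.

class SimpleConfig:
    def __init__(self, index):
        self.index = index
        self.workload = None  # stashed by the dispatch context (temporary hack)

class ApplyGraphBestSketch:
    """Replays per-node configs in graph order, so the query order matters."""

    def __init__(self, records):
        # records: list of (workload, config) pairs in the graph's node order
        self._records = records
        self._counter = 0

    def query(self, target, workload):
        # The workload argument is ignored for matching; the call order
        # decides which record is used. Sneak the matched workload into
        # the returned cfg so the caller can still identify it.
        wkl, cfg = self._records[self._counter % len(self._records)]
        self._counter += 1
        cfg.workload = wkl
        return cfg

ctx = ApplyGraphBestSketch([("conv2d_wkl", SimpleConfig(0)),
                            ("dense_wkl", SimpleConfig(1))])
first = ctx.query("llvm", workload=None)
print(first.workload)  # conv2d_wkl: the first record in graph order
```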
@kevinthesun (Contributor)


I see. Could we define self.workload in ConfigSpace and add your comment on it, so that we will remember to remove it in the future?
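For illustration, the suggestion amounts to something like the following (a sketch; `ConfigSpaceSketch` is a hypothetical stand-in for TVM's ConfigSpace):

```python
# Sketch of the suggestion: declare the field up front in ConfigSpace's
# constructor with a comment, so the temporary hack is documented in one
# place and easy to find later. ConfigSpaceSketch is a hypothetical
# stand-in, not TVM's actual class.

class ConfigSpaceSketch:
    def __init__(self):
        self._entity_map = {}
        # Workaround for ApplyGraphBest: since it relies on the query
        # order, the dispatch context stashes the matched workload here.
        # TODO: remove once ApplyGraphBest no longer depends on query order.
        self.workload = None

space = ConfigSpaceSketch()
print(space.workload)  # None until a dispatch context fills it in
```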


```python
if temp.name == "pad_temp":
    data = temp.op.input_tensors[0]
    # TODO(@Laurawly): Do we need to schedule pad op here?
```
Contributor


Yeah we need to schedule temp.

@tqchen (Member) commented Jan 31, 2020

cc @merrymercy @vinx13 @ZihengJiang @jwfromm please help review if you have time

@icemelon icemelon force-pushed the op-dispatch branch 2 times, most recently from 897eb64 to b233005 Compare February 5, 2020 23:06
@icemelon (Member, Author)

@merrymercy The tophub version number has now been updated, and the VTA TSIM test time is back to normal after the fix. Could you review the PR again?

@apivovarov (Contributor)

There are a couple of TODOs related to PR #4644, e.g. "TODO enforce 4-way padding in topi/nn/conv2d after #4644 merged".

