
[RELAY][OP] Relay Operator Sprint #1799

@tqchen

Now that the Relay RFC is being merged and the type inference interface is stabilizing, we should sprint to add new operators to Relay to bring it to parity with NNVM.

#1798 shows an example of how to do this for the conv2d operator.

General Steps for Porting

  • Implement the TypeRelation function, when necessary (see the sketch after this list)
    • Shapes are represented by IndexExpr (symbolic integers)
      • When possible, support symbolic shape inference
      • You can, however, extract the concrete integer from a symbolic shape if you must; doing so restricts the inference to concrete shapes
    • Use reporter->Assign to set the inferred result
    • Use reporter->AssertEQ to assert symbolic integer equivalence
      • It returns false if there is an unsatisfiable constraint
  • Use tvm::Attrs to replace dmlc::Parameter
  • Create the Python wrappers by calling directly into positional functions, so that the operator signature is explicit in Python
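To make these steps concrete, here is a minimal sketch in the style of the conv2d port in #1798. The op, shapes, and relation logic (a dense-style matmul) are illustrative, not the exact code from that PR; the reporter calls and registration macros follow the interfaces described above.

```c++
#include <tvm/relay/op.h>

namespace tvm {
namespace relay {

// Illustrative type relation for a dense-style op:
//   data: (batch, in_units), weight: (units, in_units) -> out: (batch, units)
// Shapes are Array<IndexExpr>, so the same logic works on symbolic shapes.
bool DenseRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
              const TypeReporter& reporter) {
  // types holds the argument types followed by the output type.
  CHECK_EQ(types.size(), 3);
  const auto* data = types[0].as<TensorTypeNode>();
  const auto* weight = types[1].as<TensorTypeNode>();
  if (data == nullptr || weight == nullptr) return false;
  // Assert symbolic equality of the reduction axes; AssertEQ returns
  // false when the constraint cannot be satisfied.
  if (!reporter->AssertEQ(data->shape[1], weight->shape[1])) return false;
  // Assign the inferred output type through the reporter.
  Array<IndexExpr> oshape = {data->shape[0], weight->shape[0]};
  reporter->Assign(types[2], TensorTypeNode::make(oshape, data->dtype));
  return true;
}

// Hook the relation into the op registry; set_support_level matches
// the level groupings in the list below.
RELAY_REGISTER_OP("nn.dense")
.describe("Dense-style example registration.")
.set_num_inputs(2)
.add_argument("data", "Tensor", "The input data.")
.add_argument("weight", "Tensor", "The weight matrix.")
.set_support_level(1)
.add_type_rel("Dense", DenseRel);

}  // namespace relay
}  // namespace tvm
```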

General Principles

  • NumPy consistency: always be consistent with NumPy
    • All binary operators broadcast
    • This means we will use add, subtract instead of broadcast_add, broadcast_sub, ...
    • The elemwise_add variants will not be supported for now, since we can just use the broadcast versions
  • Be consistent with NNVM when possible
  • Fields in Attrs
    • Use concrete types when possible (int, string, bool)
    • If you need None, you can use IndexExpr, which gives you that (see the sketch after this list)
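As a rough illustration of these field guidelines (a hypothetical attrs node, not one from the codebase): concrete fields get concrete C++ types with defaults, while a field that may be None on the Python side is declared as IndexExpr with a null default.

```c++
#include <tvm/attrs.h>
#include <tvm/relay/base.h>

namespace tvm {
namespace relay {

// Hypothetical attrs node following the guidelines above.
struct ExampleDenseAttrs : public tvm::AttrsNode<ExampleDenseAttrs> {
  IndexExpr units;     // optional: an undefined IndexExpr stands in for None
  std::string layout;  // concrete types for fields that always have a value
  bool use_bias;

  TVM_DECLARE_ATTRS(ExampleDenseAttrs, "relay.attrs.ExampleDenseAttrs") {
    TVM_ATTR_FIELD(units).set_default(NullValue<IndexExpr>())
        .describe("Number of output units; the null value models None.");
    TVM_ATTR_FIELD(layout).set_default("NC")
        .describe("Layout of the input data.");
    TVM_ATTR_FIELD(use_bias).set_default(true)
        .describe("Whether to add a bias term.");
  }
};

}  // namespace relay
}  // namespace tvm
```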

List of Operators to be covered

Generally, we need to cover everything we have so far: https://docs.tvm.ai/nnvm_top.html
Please use this issue to coordinate what you will be working on. As we expect things to move quickly, try to do "fine-grained locking": only claim things you are working on right now, and aim to get them in within a few days.

The List

Level 1: Common Basic Ops

Enough to get an MLP

  • nn.dense
  • nn.relu
  • tanh
  • sigmoid
  • exp
  • log
  • sqrt
  • add
  • subtract
  • multiply
  • divide
  • mod
  • nn.batch_flatten
  • concatenate
  • nn.softmax
  • nn.log_softmax
  • nn.batch_norm
  • nn.dropout
  • expand_dims

Level 2: Convolutions

Enough to get a convnet

  • nn.conv2d
  • nn.conv2d_transpose
  • nn.max_pool2d
  • nn.avg_pool2d
  • nn.global_max_pool2d
  • nn.global_avg_pool2d
  • nn.pad
  • nn.lrn

Level 3: Additional Math And Transform Operators

  • reshape
  • copy
  • negative
  • floor
  • ceil
  • round
  • trunc
  • clip
  • abs
  • leaky_relu
  • split
  • squeeze
  • take
  • full
  • zeros
  • ones
  • transpose
  • zeros_like
  • ones_like

Level 4: All broadcast and reduction functions that are not in previous levels

  • pow
  • less
  • greater
  • less_equal
  • greater_equal
  • right_shift
  • left_shift
  • maximum
  • minimum
  • sum
  • max
  • prod
  • argmax
  • argmin
  • strided_slice
  • broadcast_to
  • where

Level 5: Vision Operators

  • image.resize
  • vision.multibox_prior
  • vision.nms

Level 10: Backend Operators

Operators necessary as intermediate stages of optimization, or for gradients; this set can be in flux.
