
Conversation


@JR4er JR4er commented May 5, 2023

import tvm
import numpy as np
from tvm import relay


def should_pass():
    # Valid ScatterND shapes: updates (1, 1, 1, 10) matches the indices
    # batch dims (1, 1, 1) followed by the trailing data dims (10).
    data = relay.const(np.zeros([1, 1, 10], 'float32'))
    indices = relay.const(np.ones([2, 1, 1, 1], 'int64'))
    updates = relay.const(np.random.rand(1, 1, 1, 10).astype('float32'))

    scatter_nd = relay.scatter_nd(data, indices, updates)

    mod = tvm.IRModule.from_expr(scatter_nd)
    mod = relay.transform.InferType()(mod)


def should_fail():
    # Invalid ScatterND shapes: updates (1, 1, 5) does not match the
    # required (1, 1, 10), so type inference should reject this call.
    data = relay.const(np.zeros([1, 1, 10], 'float32'))
    indices = relay.const(np.ones([2, 1, 1], 'int64'))
    updates = relay.const(np.random.rand(1, 1, 5).astype('float32'))

    scatter_nd = relay.scatter_nd(data, indices, updates)

    mod = tvm.IRModule.from_expr(scatter_nd)
    mod = relay.transform.InferType()(mod)

The should_pass case, whose shapes are valid, incorrectly reports an error:

Check failed: (0 <= i && i < p->size_) is false: IndexError: indexing 3 on an array of size 3

The should_fail case incorrectly passes type inference; however, FoldConstant catches the error:

AssertionError: Dimension of updates[2] (5) must equal dimension of out_shape[2] (10).

It looks like #7927 changed the implementation of relay.scatter_nd to ONNX-style semantics, but the corresponding switch from data to updates in the shape check of the type relation is missing.
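For reference, here is a minimal sketch of the shape constraint this PR enforces, written in plain Python rather than the actual C++ type relation; the helper name check_scatter_nd_shapes and its assertions are illustrative only:

def check_scatter_nd_shapes(data_shape, indices_shape, updates_shape):
    # indices has shape (M, Y_0, ..., Y_{K-1}); the output has data's shape.
    m = indices_shape[0]
    k = len(indices_shape) - 1
    out_shape = tuple(data_shape)
    # Leading updates dims must match the indices batch dims Y_0..Y_{K-1}.
    assert tuple(updates_shape[:k]) == tuple(indices_shape[1:])
    # Trailing updates dims must match the trailing output dims
    # (updates.shape[K:] == out_shape[M:], the check this PR fixes).
    assert tuple(updates_shape[k:]) == out_shape[m:], (
        "updates.shape[%d:] must equal out_shape[%d:]" % (k, m))

# Shapes from the reproducer above:
check_scatter_nd_shapes((1, 1, 10), (2, 1, 1, 1), (1, 1, 1, 10))  # should_pass: OK
check_scatter_nd_shapes((1, 1, 10), (2, 1, 1), (1, 1, 5))         # should_fail: AssertionError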


tvm-bot commented May 5, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

ScatterND requires updates.shape[K:] == output.shape[M:],
not data.shape[K:] == output.shape[M:]
@JR4er changed the title from "[bugfix][Relay] fix scatter_nd type relation" to "[BugFix][Relay] fix scatter_nd type relation" on May 6, 2023
add testcase for scatter_nd with m != k
@masahi merged commit 571eff9 into apache:main on May 6, 2023
