
Conversation

@comaniac (Contributor)

This PR makes auto_scheduler print the function name when no schedule is found, since the function name is usually more informative than the compute DAG at first glance.

Example:

-----------------------------------
fused_nn.contrib_conv2d_winograd_without_weight_transform_add
Cannot find tuned schedules for target=cuda -keys=cuda,gpu -max_num_threads=1024 -thread_warp_size=32, workload_key=["3ea73fb9b0364374730d09e068821f95", 1, 56, 56, 64, 6, 6, 64, 64, 1, 56, 56, 64, 1, 56, 56, 64]. A fallback TOPI schedule is used, which may bring great performance regression or even compilation failure. Compute DAG info:
placeholder = PLACEHOLDER [1, 56, 56, 64]
data_pad(i0, i1, i2, i3) = tir.if_then_else(((((i1 >= 1) && (i1 < 57)) && (i2 >= 1)) && (i2 < 57)), placeholder[i0, (i1 - 1), (i2 - 1), i3], 0f)
input_tile(eps, nu, p, ci) = data_pad[floordiv(p, 196), ((floormod(floordiv(p, 14), 14)*4) + eps), ((floormod(p, 14)*4) + nu), ci]
B(i, j) = select(((floormod(i, 6) == 5) && (floormod(j, 6) == 5)), 1f, select(((floormod(i, 6) == 5) && (floormod(j, 6) == 4)),  ..(OMITTED)..  (floormod(j, 6) == 1)), 0f, select(((floormod(i, 6) == 0) && (floormod(j, 6) == 0)), 1f, 0f))))))))))))))))))))))))))))))))))))
data_pack(eps, nu, p, ci) += ((input_tile[r_a, r_b, p, ci]*B[r_a, eps])*B[r_b, nu])
placeholder = PLACEHOLDER [6, 6, 64, 64]
bgemm(eps, nu, p, co) += (data_pack[eps, nu, p, ci]*placeholder[eps, nu, co, ci])
A(i, j) = select(((floormod(i, 6) == 5) && (floormod(j, 4) == 3)), 1f, select(((floormod(i, 6) == 5) && (floormod(j, 4) == 2)),  ..(OMITTED)..  6) == 0) && (floormod(j, 4) == 1)), 0f, select(((floormod(i, 6) == 0) && (floormod(j, 4) == 0)), 1f, 0f))))))))))))))))))))))))
inverse(vh, vw, p, co) += ((bgemm[r_a, r_b, p, co]*A[r_a, vh])*A[r_b, vw])
conv2d_winograd(n, h, w, co) = inverse[floormod(h, 4), floormod(w, 4), ((((n*14)*14) + (floordiv(h, 4)*14)) + floordiv(w, 4)), co]
placeholder = PLACEHOLDER [1, 56, 56, 64]
T_add(ax0, ax1, ax2, ax3) = (conv2d_winograd[ax0, ax1, ax2, ax3] + placeholder[ax0, ax1, ax2, ax3])
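
For reference, here is a minimal sketch of how this message can be surfaced locally. This is not part of the PR itself; it assumes a CUDA-enabled TVM build and uses a hypothetical empty record file so that no tuned schedule matches, which triggers the fallback warning shown above during compilation:

```python
import tvm
from tvm import relay, auto_scheduler

# Small NHWC conv2d purely for illustration; shapes and names are hypothetical.
data = relay.var("data", shape=(1, 56, 56, 64), dtype="float32")
weight = relay.var("weight", shape=(3, 3, 64, 64), dtype="float32")
conv = relay.nn.conv2d(data, weight, padding=(1, 1),
                       data_layout="NHWC", kernel_layout="HWIO")
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

# An empty tuning log: no records means no schedule will match the workload.
log_file = "empty_tuning_log.json"
open(log_file, "w").close()

with auto_scheduler.ApplyHistoryBest(log_file):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target="cuda")
```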

cc @merrymercy

@merrymercy merged commit aa494cf into apache:main Mar 19, 2021
@comaniac deleted the ansor_add_func_name branch March 19, 2021 22:37
AndrewZhaoLuo pushed a commit to AndrewZhaoLuo/tvm that referenced this pull request Mar 22, 2021
* main:
  [AutoScheduler] Add function name in message (apache#7703)
  [Vulkan] Workaround for zero size allocation (apache#7691)
  Change behavior of onnx importer to throw when user provides an input no in the graph. (apache#7699)
  Free TensorRT engine and context (apache#7702)
  [TFLite] Cast operator adapted for MLIR-based convertor (apache#7639)
  [CPP_RPC] allow user supplied work dir (apache#7670)
  Default value for graph_runtime Init lookup_linked_param_func (apache#7676)
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request May 6, 2021
* [AutoScheduler] Add function name in message

* fix
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request May 11, 2021
* [AutoScheduler] Add function name in message

* fix