
Conversation

@ZihengJiang (Contributor) commented Jun 8, 2017


/*! \brief whether two arrays have the same content */
template<typename T>
bool IsSame(const Array<T>& a, const Array<T>& b) {
Member:

Rename IsSame to SameContent.

n->reduce_axis = n->body.as<ir::Reduce>()->axis;
if (n->body[0]->is_type<ir::Reduce>()) {
// batch reduction should have the same axis
n->reduce_axis = n->body[0].as<ir::Reduce>()->axis;
Member:

Add a verification check here.

Member:

Raise an error if that is not true.
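A minimal sketch of such a check, assuming every body of a batched compute is expected to be a Reduce over the same axis (illustrative only, not the PR's actual code):

// Verify that all bodies of a batched reduction share the same reduce axis.
const ir::Reduce* first = n->body[0].as<ir::Reduce>();
for (size_t i = 1; i < n->body.size(); ++i) {
  const ir::Reduce* r = n->body[i].as<ir::Reduce>();
  CHECK(r != nullptr) << "each body of a batched compute must be a Reduce";
  CHECK(r->axis.same_as(first->axis))
      << "batch reduction should have the same axis";
}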

CHECK_EQ(self.operator->(), this);
Expr new_body = op::ReplaceTensor(this->body, rmap);
if (!new_body.same_as(this->body)) {
Array<Expr> new_body = ReplaceTensor(this->body, rmap);
Member:

Try to do it simply in a for loop; check the way we do it in the IR mutator.

Contributor Author (ZihengJiang):

Maybe we can add an Array<NodeRef> UpdateArray(Array<NodeRef> arr, std::function<NodeRef(NodeRef)> fupdate) function?


if (IsCrossThreadReduction(this, stage)) {
LOG(INFO) << stage;
// specially handle cross thread reduction.
Member:

Remove this debug logging line.

namespace ir {

template<typename T>
inline Array<T> UpdateArray(Array<T> arr, std::function<T(T)> fupdate) {
Member:

Simply use F fupdate; use a template argument for fupdate.
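A minimal sketch of this suggestion, assuming the same copy-on-write behavior as the snippet below (a sketch, not the merged code):

// Take the functor as a template parameter F so a lambda keeps its concrete type.
template<typename T, typename F>
inline Array<T> UpdateArray(Array<T> arr, F fupdate) {
  std::vector<T> new_arr(arr.size());
  bool changed = false;
  for (size_t i = 0; i < arr.size(); ++i) {
    T old_elem = arr[i];
    T new_elem = fupdate(old_elem);
    if (!new_elem.same_as(old_elem)) changed = true;
    new_arr[i] = new_elem;
  }
  if (!changed) {
    return arr;  // nothing changed, reuse the original array
  } else {
    return Array<T>(new_arr);
  }
}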

} else {
return Array<Expr>(new_arr);
}
std::function<Expr(Expr)> fupdate = [m] (Expr e) { return m->Mutate(e); };
Member:

Use auto fupdate. std::function and a lambda are different types: the lambda is more specialized and can trigger inlining.
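For illustration, a call site along these lines (assuming the template-parameter version of UpdateArray above, with arr standing in for the array being rewritten) lets the compiler deduce the closure type and inline the call instead of going through std::function's type erasure:

// Pass the lambda directly; F is deduced as the closure's exact type.
auto fupdate = [m](Expr e) { return m->Mutate(e); };
return UpdateArray(arr, fupdate);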

@ZihengJiang changed the title from "Support for batch ComputeOp" to "Support for Tuple Inputs of Reducer and ComputeOp" on Jun 10, 2017
if (!is_one(reduce->condition)) {
*provide = IfThenElse::make(reduce->condition, *provide);
for (size_t i = 0; i < size; ++i) {
provides->at(i) = IfThenElse::make(reduce->condition, provides->at(i));
Member:
can we have one common condition for all the bodies?
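A sketch of one way to do that, assuming all bodies share a single Reduce and therefore a single condition (names follow the snippet above; illustrative only):

// Chain all the stores into one block, then guard them with one condition.
Stmt body = provides->at(0);
for (size_t i = 1; i < size; ++i) {
  body = Block::make(body, provides->at(i));
}
if (!is_one(reduce->condition)) {
  body = IfThenElse::make(reduce->condition, body);
}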

size_t size = self->body.size();
CHECK_GT(size, 0);
std::vector<const Reduce*> reduces(size);
for (size_t i = 0; i < size; ++i) {
Member:

If we assume a common reduce, the vector is not necessary?

Contributor Author (ZihengJiang):

reduces is used for the type.

/*!
 * \brief update array with a unary function
 * \param arr array
 * \param fupdate a unary function
Member:

Add \tparam to document the template argument.
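For example, the doc block could document the template parameters along these lines (wording is illustrative):

/*!
 * \brief Update an array with a unary function.
 * \param arr The input array.
 * \param fupdate The unary update function.
 * \tparam T The element type of the array.
 * \tparam F The type of the update function.
 * \return The updated array, or the original array if no element changed.
 */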


std::unordered_set<const Variable*> reduce_set;
for (size_t i = 2; i < call->args.size(); ++i) {
for (size_t i = 2+2*size; i < call->args.size(); ++i) {
Member:

Put spaces between the operators: 2 + 2 * size.

}
} else if (op.as<ComputeOpNode>()) {
std::unordered_map<const Node*, TensorDimKey> vmap;
std::unordered_map<const Node*, std::vector<TensorDimKey>> vmap;
Member:

For compatibility with older compilers (the nested closing angle brackets need a space: std::vector<TensorDimKey> >).

reduce_stage->leaf_iter_vars = reduce_stage->all_iter_vars;
reduce_stage->relations = Array<IterVarRelation>();
return factor_tensor;
return factor_tensors[0];
Member:
We need to return array of Exprs?

include/tvm/ir.h Outdated
Array<IterVar> rdom,
Expr condition = const_true());
Expr condition = const_true(),
int value_index = 0);
Member:
remove the default value, to be safe

Member:
never mind, forget this comment

where = convert(True)
if size == 1:
return _make.Reduce(combiner, expr, axis, where, 0)
return [_make.Reduce(combiner, expr, axis, where, i)
Member:
change to tuple

Returns
-------
tfactor : Tensor
tfactor : Tensor or Array<Tensor>
Member:
list of Tensor

CHECK(axis[i].defined());
}
n->type = source.type();
n->type = source[value_index].type();
Member:

If the argument is passed by value, use std::move to save a call to the copy constructor.
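For instance, if source is taken by value here, the node can take ownership of it instead of copying (the field assignment below is only for illustration):

n->source = std::move(source);  // moving avoids an extra copy of the array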

return outputs;
}

bool CheckReduce(const ir::Reduce* a, const ir::Reduce* b) {
Member:

Rename CheckReduce to ReduceEqual.

Var res_handle("reduce_temp", Handle());
Array<Expr> freduce_args;
freduce_args.push_back(reduce->source);
freduce_args.push_back(make_const(UInt(32), size));
Member:

Let us update the comment in the intrinsic definition to clarify the new convention.

body = AttrStmt::make(
res_handle, attr::storage_scope, StringImm::make("local"), body);
Stmt body = Block::make(reduce_body, assign_body);
for (int idx = size - 1; idx >= 0; --idx) {
Member:

Keep the reverse iteration style consistent:

for (i = size; i != 0; --i)

This avoids the case where the reverse iteration index is unsigned.
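A sketch of the two styles for comparison (illustrative): with an unsigned index, idx >= 0 is always true, so the first form never terminates; counting down to zero and indexing with i - 1 is safe:

// Unsafe with size_t: idx >= 0 always holds, so this loops forever.
// for (size_t idx = size - 1; idx >= 0; --idx) { ... }

// Safe style suggested above: stop when the counter reaches zero.
for (size_t i = size; i != 0; --i) {
  size_t idx = i - 1;
  // ... use idx ...
}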

Type type,
Var shared_buf,
const std::vector<Type>& types,
Array<Var> shared_bufs,
Member:
if we don't copy the value, pass by const ref
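A sketch of the signature change being asked for (the function name is hypothetical; only the parameter passing is the point):

// Before: the containers are copied on every call.
// Stmt MakeAllreduceBody(std::vector<Type> types, Array<Var> shared_bufs);

// After: read-only containers are passed by const reference, so no copies are made.
Stmt MakeAllreduceBody(const std::vector<Type>& types,
                       const Array<Var>& shared_bufs);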

"""
return _api_internal._ScheduleRFactor(self, tensor, axis)
factored = _api_internal._ScheduleRFactor(self, tensor, axis)
if len(factored) == 1:
Member:
return factored[0] if len(factored) == 1 else factored

@ZihengJiang merged commit f467f66 into apache:master on Jun 11, 2017
@ZihengJiang deleted the dev branch on June 12, 2017 04:36
@ZihengJiang changed the title from "Support for Tuple Inputs of Reducer and ComputeOp" to "[LANG] Support for Tuple Inputs of Reducer and ComputeOp" on Jun 16, 2017
tqchen pushed a commit to tqchen/tvm that referenced this pull request May 26, 2018
* fix for composed symbol

* fix

* clean up

* fix exception type
tqchen pushed a commit that referenced this pull request May 29, 2018

tqchen pushed a commit to tqchen/tvm that referenced this pull request Jul 6, 2018

sergei-mironov pushed a commit to sergei-mironov/tvm that referenced this pull request Aug 8, 2018
vinx13 pushed a commit to vinx13/tvm that referenced this pull request Mar 9, 2022
jinhongyii pushed a commit to jinhongyii/tvm that referenced this pull request Apr 10, 2023
…e#14523) (apache#175)

This PR enhances CanProve to handle symbolic bounds. Such analysis is
essential to eliminate predicates in dynamic shape workloads.

We also update the int set analysis single-point check to avoid recursion and
improve the overall analysis speed.

Added CanProveSinglePoint to serve the previous stronger checks.

The new CanProve comes with an additional strength argument that can only be
used in a top-level setting with stronger analysis.

Added a comment for future implementation efficiency.

Test cases are added to cover the cases.

Co-authored-by: Tianqi Chen <[email protected]>
tqchen added a commit to tqchen/tvm that referenced this pull request Apr 11, 2023
LeiWang1999 pushed a commit to LeiWang1999/tvm that referenced this pull request Nov 8, 2024
…ache#175)

Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.1.7.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](actions/download-artifact@v3...v4.1.7)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
MasterJH5574 pushed a commit to MasterJH5574/tvm that referenced this pull request Aug 17, 2025
junrushao added a commit to junrushao/tvm that referenced this pull request Oct 20, 2025
Upstream : https://github.com/apache/tvm-ffi.git
Branch   : main
New HEAD : e10d1ed7c2e3b69847348effd4ed9d310a300af8
Subject  : doc: Migrate `.pyi` docstrings into Cython (apache#175)
Author   : Junru Shao <[email protected]>
Date     : 2025-10-20T07:14:34-07:00
Delta    : 1 commit(s) since 0729193f475c
Compare  : apache/tvm-ffi@0729193...e10d1ed

This commit updates the tvm-ffi submodule to the latest upstream HEAD.
junrushao added a commit to junrushao/tvm that referenced this pull request Oct 31, 2025

2 participants