Conversation

@cowanmeg

TOPI CPU bitserial dense operators for low bit-widths. These are very similar to the bitserial conv2d operators.

Please review @vinx13 @eqy @ajtulloch
Thanks!

elif pack_dtype == 'uint64':
    binary_op_multiplier = 64

cfg.add_flop(batch * out_dim * in_dim * binary_op_multiplier)
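The snippet above counts one packed operation as `binary_op_multiplier` binary ops, since each packed word of width W carries W binary values. A minimal standalone sketch of that accounting (hypothetical helper, not code from the PR):

```python
# Sketch of the FLOP accounting in the diff above: each packed word of
# width W holds W binary values, so one packed op counts as W binary ops.

def binary_flops(batch, out_dim, in_dim, pack_dtype):
    """Estimate the binary-op count for a bitserial dense workload."""
    multipliers = {'uint8': 8, 'uint16': 16, 'uint32': 32, 'uint64': 64}
    binary_op_multiplier = multipliers[pack_dtype]
    return batch * out_dim * in_dim * binary_op_multiplier

print(binary_flops(1, 1024, 1024, 'uint32'))  # 1024 * 1024 * 32 = 33554432
```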
Contributor

Does this match how we usually do FLOP calculation (e.g., including ops for padding, etc.)?

elif pack_dtype == 'uint32':
    binary_op_multiplier = 32
elif pack_dtype == 'uint64':
    binary_op_multiplier = 64
Contributor

nit: can we pattern match on the dtype str?
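One possible way to act on this nit: derive the multiplier from the dtype string rather than an `elif` chain. This is a hypothetical sketch of the suggestion, not the code that was merged:

```python
# Derive the bit width directly from a dtype string such as 'uint32',
# replacing the explicit elif chain. Hypothetical helper for illustration.

def pack_dtype_bits(pack_dtype):
    """Extract the bit width from a dtype string like 'uint8' or 'uint64'."""
    assert pack_dtype.startswith('uint'), "expected an unsigned-int dtype"
    return int(pack_dtype[len('uint'):])

assert pack_dtype_bits('uint32') == 32
assert pack_dtype_bits('uint64') == 64
```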

elif pack_dtype == 'uint32':
    binary_op_multiplier = 32
elif pack_dtype == 'uint64':
    binary_op_multiplier = 64
Contributor

"

index of axis to place bit axis in resulting packed data"""
ishape = data.shape
n = len(ishape)
if pack_type == 'uint8':
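The docstring fragment above is from the bitpacking helper, which packs bits of the input along a chosen axis into unsigned-integer words. A simplified pure-Python illustration of the idea (not TOPI's implementation, and with hypothetical parameter names):

```python
# Illustrative 1-D bitpacking sketch: for each bit-plane b, pack the b-th
# bit of pack_width consecutive elements into a single unsigned word.

def bitpack_1d(data, bits=1, pack_width=8):
    """Pack `bits` bit-planes of `data` into words of `pack_width` bits.

    Returns a list of bit-planes; each plane is a list of packed words.
    Assumes len(data) is a multiple of pack_width.
    """
    planes = []
    for b in range(bits):
        plane = []
        for i in range(0, len(data), pack_width):
            word = 0
            for j in range(pack_width):
                # Shift the word left and append the b-th bit of the
                # next element (MSB-first within each packed word).
                word = (word << 1) | ((data[i + j] >> b) & 1)
            plane.append(word)
        planes.append(plane)
    return planes

# Eight one-bit values pack into a single byte:
print(bitpack_1d([1, 0, 1, 1, 0, 0, 1, 0]))  # [[178]] i.e. 0b10110010
```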

@tqchen
Member

tqchen commented Apr 25, 2019

@cowanmeg please act on comments by @eqy then we can merge

@tqchen tqchen merged commit b405f68 into apache:master Apr 27, 2019
@tqchen
Member

tqchen commented Apr 27, 2019

Thanks @vinx13 @eqy @cowanmeg, this is now merged.

wweic pushed a commit to wweic/tvm that referenced this pull request May 13, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request May 13, 2019
@cowanmeg cowanmeg deleted the topi-bitserial-dense branch May 20, 2019 00:40