Conversation

@fPecc (Contributor) commented Sep 15, 2022

For description see this forum topic.

Tagged reviewers: @areusch (don't know who else can review microTVM-related code)

@leandron (Contributor) commented Sep 15, 2022

cc @Mousius @ashutosh-arm @grant-arm @gromero

@leandron (Contributor) commented Sep 15, 2022

@fPecc from what I understood from reading your Discuss post, this is a non-breaking change, right?

Apart from that, I think the problem is quite restricted in scope and could be summarised/discussed in the PR itself, rather than in a separate location, but that's just a minor suggestion.

@areusch (Contributor) left a comment
thanks @fPecc for this suggestion! I'm generally ok with this, my main ask would be whether or not we should consider other properties and how that relates to information we might export in metadata.json. Overall, I'm not too worried if we make breaking naming changes here in the future as TVM is still <1.0, but it would just add an item to our tech debt is all. I think being able to reference this info easily is useful though, so I'm not opposed to merging.

cc @alanmacd @mehrdadh @mkatanbaf @guberti @gromero

@fPecc (Contributor, Author) commented Sep 16, 2022

Thanks for the feedback @areusch. Indeed, it shouldn't break anything, but there are still some tests that are not passing, so I am trying to fix that locally before continuing and committing again.

@fPecc fPecc requested a review from Mousius September 26, 2022 08:24
@areusch (Contributor) commented Oct 17, 2022

@Mousius could you take another look here?

@fPecc do you mind looking at the test failures? I think there are a few that need to get updated.

@areusch added the needs-triage label (PRs or issues that need to be investigated by maintainers to find the right assignees) Oct 19, 2022
A review comment was left on this diff hunk:

```python
# function (in bytes)
input_dict = {}
for input_param in main_func_metadata.relay_primfuncs[target].params:
    if hasattr(input_param, "checked_type"):
```
@Mousius (Member) commented:
I don't see a test case for when this is not set? How do we reproduce it?

@fPecc (Contributor, Author) replied:
Hi @Mousius. I reviewed the code and noticed that the parameters of the primfuncs in the main_func_metadata always seem to have checked_type set, so I removed the check here.

@areusch removed the needs-triage label Oct 19, 2022
@fPecc (Contributor, Author) commented Dec 6, 2022

Hi @Mousius,

I did the rebase, but I noticed there are still tests failing in CPU MINIMAL. They seem to be related to the CRT, but I can't find a direct relation with this change.

@guberti (Member) commented Dec 6, 2022

@fPecc Are you sure you did a rebase, not a merge? I suspect git rebase -i main will fix the CRT tests.

@Mousius Mousius merged commit 5a58c58 into apache:main Dec 7, 2022
@Mousius (Member) commented Dec 7, 2022

Thanks for working through this one @fPecc! This is an awesome improvement and it has now landed 👍

Mousius added a commit to Mousius/tvm that referenced this pull request Jan 5, 2023
Following from apache#12789, this adds support for determining the output tensor name from the input model within the MLF metadata json.

Co-authored-by: Ashutosh Parkhi <[email protected]>
Mousius added a commit to Mousius/tvm that referenced this pull request Jan 5, 2023
Following from apache#12789, this adds support for determining the output tensor name from the input model within the MLF metadata json.

Co-authored-by: Ashutosh Parkhi <[email protected]>
Mousius pushed a commit that referenced this pull request Jan 9, 2023
Follow up: #12789

- Added a separate test to detect output size from MLF codegen
mehrdadh pushed a commit that referenced this pull request Jan 9, 2023
Following from #12789, this adds support for determining the output tensor name from the input model within the MLF metadata json.

Co-authored-by: Ashutosh Parkhi <[email protected]>
fzi-peccia added a commit to fzi-peccia/tvm that referenced this pull request Mar 27, 2023
The generated MLF header files for each module contain the struct definitions used as inputs and outputs when calling the generated function. If we want to call tvmgen_default_run, we need to allocate space (statically or dynamically) for the input and output tensors. This change generates macros that define the size of each input and output in bytes, allowing us to reference these new macros to statically or dynamically allocate buffers to store the inputs and outputs of the tvmgen_default_run function.


Co-authored-by: Federico Peccia <[email protected]>
Co-authored-by: Christopher Sidebottom <[email protected]>
fzi-peccia pushed a commit to fzi-peccia/tvm that referenced this pull request Mar 27, 2023
…13655)

Follow up: apache#12789

- Added a separate test to detect output size from MLF codegen
fzi-peccia pushed a commit to fzi-peccia/tvm that referenced this pull request Mar 27, 2023
Following from apache#12789, this adds support for determining the output tensor name from the input model within the MLF metadata json.

Co-authored-by: Ashutosh Parkhi <[email protected]>
mikeseven pushed a commit to mikeseven/tvm that referenced this pull request Sep 27, 2023
The generated MLF header files for each module contain the struct definitions used as inputs and outputs when calling the generated function. If we want to call tvmgen_default_run, we need to allocate space (statically or dynamically) for the input and output tensors. This change generates macros that define the size of each input and output in bytes, allowing us to reference these new macros to statically or dynamically allocate buffers to store the inputs and outputs of the tvmgen_default_run function.


Co-authored-by: Federico Peccia <[email protected]>
Co-authored-by: Christopher Sidebottom <[email protected]>