Added macro generation in MLF export #12789
Conversation
cc @Mousius @ashutosh-arm @grant-arm @gromero
@fPecc from what I understood from reading your Discuss post, this is a non-breaking change, right? Apart from that, I think the problem is quite restricted and it could be summarised and discussed in the PR, rather than in a separate location, but that's just a minor suggestion.
areusch
left a comment
thanks @fPecc for this suggestion! I'm generally ok with this, my main ask would be whether or not we should consider other properties and how that relates to information we might export in metadata.json. Overall, I'm not too worried if we make breaking naming changes here in the future as TVM is still <1.0, but it would just add an item to our tech debt is all. I think being able to reference this info easily is useful though, so I'm not opposed to merging.
Thanks for the feedback @areusch. Indeed, it shouldn't break, but there are still some tests that are not passing, so I am trying to fix that locally before continuing and committing again.
Co-authored-by: Christopher Sidebottom <[email protected]>
# function (in bytes)
input_dict = {}
for input_param in main_func_metadata.relay_primfuncs[target].params:
    if hasattr(input_param, "checked_type"):
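Where `checked_type` is available, a tensor's size in bytes follows from its shape and dtype. A minimal stdlib-only sketch (the helper name, dtype table, and example shapes are hypothetical illustrations, not code from the PR):

```python
from math import prod

# Hypothetical per-element byte widths for a few common dtypes.
DTYPE_BYTES = {"int8": 1, "float16": 2, "float32": 4}

def tensor_size_bytes(shape, dtype):
    # total element count times the byte width of one element
    return prod(shape) * DTYPE_BYTES[dtype]

print(tensor_size_bytes((1, 28, 28, 1), "int8"))  # 784
print(tensor_size_bytes((1, 10), "float32"))      # 40
```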
I don't see a test case for when this is not set? How do we reproduce it?
Hi @Mousius . I reviewed the code and noticed that the parameters of the primfuncs in the main_func_metadata always seem to have the checked_type set, so I removed the check here.
Co-authored-by: Christopher Sidebottom <[email protected]>
Hi @Mousius, I did the rebase, but I noticed there are still tests failing in CPU MINIMAL. They seem to be related to the CRT, but I can't find a direct relation with this change.
@fPecc Are you sure you did a rebase, not a merge? I suspect
Thanks for working through this one @fPecc! This is an awesome improvement and it has now landed 👍
Following from apache#12789, this adds support for determining the output tensor name from the input model within the MLF metadata json. Co-authored-by: Ashutosh Parkhi <[email protected]>
Follow up: #12789 - Added a separate test to detect output size from MLF codegen
The generated MLF header files for each module contain the struct definitions to use as inputs and outputs when calling the generated function. If we want to call tvmgen_default_run, we need to allocate space (statically or dynamically) for the input and output tensors. This change generates macros that define the size of each input and output in bytes, allowing us to reference these new macros to statically or dynamically allocate buffers for the inputs and outputs of the tvmgen_default_run function. Co-authored-by: Federico Peccia <[email protected]> Co-authored-by: Christopher Sidebottom <[email protected]>
For description see this forum topic.
Tagged reviewers: @areusch (don't know who else can review microTVM-related code)