Summary:
Pull Request resolved: #2887
The final touches to get ET-VK convolution on par with ATen-VK's convolution.
## Idea
In our shaders, the accumulator is initialized with the bias texel, so the bias is added to our sum:
```
${VEC4_T[DTYPE]} sum = texelFetch(bias_in, ivec2(pos.z, 0), 0);
```
To keep our shaders as-is, we implement the no-bias case by allocating a buffer of zeros; the shader then simply adds zero to the sum.
## Issue
If `Bias=False`, the dummy buffer of zeros is not serialized with the graph, so the bias ValueRef is deserialized in the runtime as `TypeTag::NONE`, not `TypeTag::TENSORREF`.
## Solution
If `TypeTag::NONE` is given, (1) create the `vTensor` using the `out_channels` value from the weights, (2) allocate a StagingBuffer of that size, and (3) `memset` its data to zero. Skipping (3) would leave the buffer holding uninitialized memory and result in undefined behavior.
ghstack-source-id: 221926167
exported-using-ghexport
bypass-github-export-checks
Reviewed By: SS-JIA
Differential Revision: D55814589
fbshipit-source-id: ce7b82c31bb11540ed2d98ab14131841fcee93e4