Share buffer between CUDA and wgpu #7988
-
I have a CUDA codebase that I want to explore porting to WebGPU, mostly for portability reasons and to avoid vendor lock-in. It would be very difficult to port the entire codebase at once, especially because we currently rely on CUDA's nvcomp library for gzip decompression. It would be nice if I could leave the beginning stages of our pipeline in CUDA, export a raw pointer to the results in GPU memory, and then run WebGPU shaders on that data for the following stages. Is anything like this possible with wgpu? I believe this is an example of basically what I'm going for, but with CUDA+Vulkan: https://www.gpultra.com/blog/vulkan-cuda-memory-interoperability/
Replies: 1 comment 1 reply
-
This doesn't include synchronization: you might also want to export the wgpu device's fence and import it into CUDA, unless you can otherwise make sure that the two devices are never executing at the same time. You must also be very careful with how you drop these imported buffers; ownership depends on the particular handle type you export, and getting it wrong can lead to use-after-frees. You can also do this with DX12 (though the conversion to a hal buffer is slightly different).
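A rough outline of the semaphore handoff described above. The Vulkan and CUDA names (`vkGetSemaphoreFdKHR`, `cudaImportExternalSemaphore`, etc.) are the standard external-semaphore entry points; how you reach the underlying Vulkan semaphore from wgpu is version-dependent, so treat this as a sketch rather than a tested recipe:

```
Pseudocode: CUDA <-> Vulkan/wgpu synchronization via an exported timeline semaphore

1. Create a Vulkan timeline semaphore with VkExportSemaphoreCreateInfo chained in
   (handle type OPAQUE_FD on Linux, OPAQUE_WIN32 on Windows).
2. Export it to an OS handle: vkGetSemaphoreFdKHR -> fd.
3. Import it into CUDA: cudaImportExternalSemaphore with handle type
   cudaExternalSemaphoreHandleTypeTimelineSemaphoreFd.
4. CUDA side: launch your kernels, then
   cudaSignalExternalSemaphoresAsync(signal value = N) on the same stream.
5. wgpu/Vulkan side: wait for the semaphore to reach N before submitting the
   compute pass that reads the shared buffer, and signal N+1 when that work
   completes so CUDA can safely wait before reusing the memory.
```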
wgpu supports importing a Vulkan/DX12 buffer (it must come from the same underlying device) using the `from_hal` functions when running natively. Because this API is more internal, and expected to be used less, there are breaking changes more often, though. I wrote a similar thing for oidn - https://github.com/Vecvec/oidn-wgpu-interop/blob/master/src/vulkan.rs - but the general idea is: get the wgpu instance's raw Vulkan instance (via `as_hal`) and the Vulkan adapter from that, …
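For the CUDA half of the original question, one plausible shape of the whole path, as a sketch. The Vulkan and CUDA calls below are the standard external-memory entry points; `buffer_from_raw` and `create_buffer_from_hal` are the wgpu-hal/wgpu functions I would expect to use on the wgpu side, but they move between wgpu versions, so verify against the version you are on:

```
Pseudocode: share one allocation between CUDA and wgpu (Vulkan backend, Linux)

1. On the Vulkan side (wgpu's underlying device, reached via as_hal), allocate
   exportable memory: vkCreateBuffer with VkExternalMemoryBufferCreateInfo,
   vkAllocateMemory with VkExportMemoryAllocateInfo
   (VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT), then vkBindBufferMemory.
2. Export the memory to an OS handle: vkGetMemoryFdKHR -> fd.
3. Import into CUDA: cudaImportExternalMemory with handle type
   cudaExternalMemoryHandleTypeOpaqueFd, then
   cudaExternalMemoryGetMappedBuffer -> device pointer usable by kernels/nvcomp.
4. Wrap the VkBuffer for wgpu: wgpu_hal::vulkan::Device::buffer_from_raw, then
   wgpu::Device::create_buffer_from_hal, and bind the resulting wgpu::Buffer in
   your compute pipelines as usual.
5. Keep the VkDeviceMemory alive until both CUDA and wgpu are finished with it;
   wgpu does not own the memory behind a raw imported buffer.
```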