Add LLamaSharp.Backend.Vulkan #2 #514
Conversation
@martindevans
That's fine, I'll add the new binaries the next time I update LLamaSharp to a new llama.cpp version. Usually about once a month.
OPENBLAS_VERSION: 0.3.23
OPENCL_VERSION: 2023.04.17
CLBLAST_VERSION: 1.6.0
VULKAN_VERSION: 1.3.261.1
Are all of the defines required? It looks like only VULKAN_VERSION is used?
Well, no... I copied that from "compile-clblast:". It just needs VULKAN_VERSION, as in the sketch below.
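For illustration, a minimal sketch of the trimmed job environment (the job layout here is an assumption; only the `VULKAN_VERSION` pin is actually consumed by the Vulkan build steps):

```yaml
# Sketch: compile-vulkan keeps only the define it actually uses.
# The OpenBLAS/OpenCL/CLBlast pins copied from compile-clblast can be dropped.
compile-vulkan:
  env:
    VULKAN_VERSION: 1.3.261.1
```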
Your compile.yml is inverted compared to ggerganov/llama.cpp's build.yml, which builds many of the backends together per OS (see the matrix sketch below).
I think this way will be a lot more verbose when adding all of the backends, e.g. Kompute, OpenBLAS.
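For comparison, a rough sketch of the per-OS matrix style that ggerganov/llama.cpp's build.yml uses; the job name, runner label, and CMake defines below are illustrative assumptions, not copied from either workflow:

```yaml
# Hypothetical llama.cpp-style job: one Windows job builds several
# backends, parameterized by a matrix of CMake defines.
windows-latest-cmake:
  runs-on: windows-latest
  strategy:
    matrix:
      include:
        - build: openblas
          defines: '-DLLAMA_BLAS=ON'
        - build: clblast
          defines: '-DLLAMA_CLBLAST=ON'
        - build: vulkan
          defines: '-DLLAMA_VULKAN=ON'
```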
Apart from that one comment this all looks good. Thanks again for putting in the work!
Note: use this branch as the base of the next binary-update branch.
* Update compile.yml: use 11 instead of 14 (Jimver/[email protected])
* Update compile.yml: use 11 instead of 14 (Jimver/[email protected])
* Update compile.yml: add vulkan llama.cpp backend
* Update compile.yml
* Update compile.yml
* Update compile.yml
* Update llama.cpp runtimes
* Remove unnecessary dlls for vulkan nuspec
* Update main.yml
* More backends (#1)
  * Update compile.yml: use 11 instead of 14 (Jimver/[email protected])
  * Update compile.yml: add vulkan llama.cpp backend
  * Update compile.yml
  * Update compile.yml
  * Update compile.yml
  * Update llama.cpp runtimes
  * Remove unnecessary dlls for vulkan nuspec
  * Update main.yml
  * Update Native Load to handle Vulkan
* Add WithVulkan to Native Load (#2)
  * Update compile.yml: use 11 instead of 14 (Jimver/[email protected])
  * Update compile.yml: add vulkan llama.cpp backend
  * Update compile.yml
  * Update compile.yml
  * Update compile.yml
  * Update llama.cpp runtimes
  * Remove unnecessary dlls for vulkan nuspec
  * Update main.yml
  * Update Native Load to handle Vulkan
  * Add vulkan backend to runtime targets; have Examples use Vulkan backend
* Default WithVulkan to false to pass CI tests on GitHub Runners
* Remove .WithVulkan() from Examples
* Revert runtimes
* Revert to Jimver/[email protected]
* Add detection of Vulkan
* Set Vulkan to enabled by default
* Test CI with deps and WithVulkan true by default
* Update main.yml
Add vulkan backend
Sorry, I didn't know committing to my fork would affect this pull request.
This PR adds the llama.cpp Vulkan backend.
.WithVulkan defaults to false because I wasn't sure how to test for the presence of Vulkan, so it would fail the CI tests.
One idea, without requiring the Vulkan SDK, is to call `vulkaninfo --summary` and parse the result, as in the sketch below.
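For reference, a rough sketch of that idea; this is not part of the PR, `HasVulkan` is a hypothetical helper, and treating a zero exit code plus a "GPU" line in the summary output as success is an assumption:

```csharp
using System;
using System.Diagnostics;

internal static class VulkanProbe
{
    // Hypothetical detection helper: shells out to "vulkaninfo --summary"
    // and treats a successful run that mentions a GPU as "Vulkan present".
    // If vulkaninfo is not on PATH (no Vulkan runtime), this returns false.
    public static bool HasVulkan()
    {
        try
        {
            var psi = new ProcessStartInfo("vulkaninfo", "--summary")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false,
                CreateNoWindow = true,
            };
            using var process = Process.Start(psi);
            if (process is null) return false;

            string summary = process.StandardOutput.ReadToEnd();
            process.WaitForExit();

            // Assumption: the device summary lists entries like "GPU0" when
            // at least one Vulkan-capable device is present.
            return process.ExitCode == 0 && summary.Contains("GPU");
        }
        catch
        {
            // vulkaninfo missing or failed to start: assume no Vulkan.
            return false;
        }
    }
}
```

A downside of this approach is the dependency on an external tool being installed; enumerating devices through the Vulkan loader directly would avoid that, at the cost of more interop code.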
I had trouble with Jimver/[email protected] not working, so I changed it to match llama.cpp's Jimver/[email protected].
I have only tested LLamaSharp.Backend.Vulkan under Windows with a Radeon VII.