
Conversation


@zdl010 zdl010 commented Jun 29, 2024

Upgrade llama.cpp to b3265, support Gemma 2, and remove the beam search parameter (https://github.com/ggml-org/llama.cpp/pull/7985).

kherud (Owner) commented Jun 30, 2024

Hey @zdl010 thanks for the pull request! I'll update the C++ code to b3265 and merge after that.

@ardinursyamsu

@kherud I see there are a lot of updates to server.cpp in llama.cpp. Will it take long to adjust server.hpp accordingly?


kherud commented Aug 5, 2024

Hey @ardinursyamsu, yeah, it's a challenge to keep up with the rapid development of llama.cpp. Sometimes there are bugs where it isn't obvious whether they come from llama.cpp or from the Java binding. I'll have another try at updating to the latest version later today, though.


kherud commented Aug 5, 2024

Ok, there was a change in llama.cpp to no longer statically link the ggml library (see ggml-org/llama.cpp#8166), which caused the previous Windows builds here to fail. I'm not sure why it still worked on Linux/macOS (probably because of rpath). I'll look for a solution tomorrow and release a new version then.
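For context, here is a minimal sketch of why the platforms behave differently. On Linux/macOS the shared library can carry an embedded rpath, so the loader finds libggml next to libllama automatically; Windows has no rpath and resolves dependent DLLs through the DLL search path instead. A common workaround in JNI bindings is to load the dependency explicitly before the library that links against it. The class name and the loading-order workaround below are assumptions for illustration, not the binding's actual loader code:

```java
// Illustrative sketch (assumption: not the actual java-llama.cpp loader).
// Shows the platform-specific file names the JVM loader expects, and the
// explicit load-order workaround for dynamically linked dependencies.
public class NativeLibNames {
    public static void main(String[] args) {
        // System.mapLibraryName maps a logical name to the platform file name:
        // Linux -> "libggml.so", macOS -> "libggml.dylib", Windows -> "ggml.dll".
        System.out.println(System.mapLibraryName("ggml"));
        System.out.println(System.mapLibraryName("llama"));

        // Workaround sketch (commented out; requires the libraries to be
        // present on java.library.path):
        // System.loadLibrary("ggml");   // resolve the dependency first
        // System.loadLibrary("llama");  // its dynamic link to ggml is now satisfied
    }
}
```

Loading the dependency first sidesteps the missing-rpath problem on Windows, because by the time the dependent library is loaded, its ggml symbols are already resolvable in the process.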

@kherud kherud merged commit 4c04cbc into kherud:master Aug 7, 2024