feat: update the llama backends. #78
Conversation
It looks like something has gone wrong merging the changes in.
Yes, and I've fixed them. It's quite strange that I could run the tests successfully on my local Linux device. Besides, when I list the objects of
I've been investigating this, but I can't see any problems with the setup. My best guess is that a dependency of libllama.so is missing, although I'm not sure which one. Perhaps try adding

Edit: I've added this now.
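One quick way to test the "missing dependency" theory is to try loading the shared library directly and inspect the loader error. A minimal sketch (the `try_load` helper is my own name, and `libllama.so` is the library discussed above; the exact error text varies by platform):

```python
import ctypes
from typing import Optional

def try_load(path: str) -> Optional[str]:
    """Attempt to dlopen a shared library; return the loader error if it fails."""
    try:
        ctypes.CDLL(path)
        return None  # loaded successfully: all transitive dependencies resolved
    except OSError as exc:
        return str(exc)  # the message typically names the missing dependency

# A nonexistent library fails with a descriptive loader error:
print(try_load("libdoesnotexist.so"))
```

If `try_load("libllama.so")` returns an error mentioning another library, that second library is the missing dependency rather than libllama.so itself.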
@AsakusaRinne I think I may have discovered the problem, but I'm not certain. I ran

i.e. the binary is trying to dynamically link against a library that isn't available on the target system. From looking at the readme (here) I assume you have set the corresponding build option. I suggest building without it.
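For anyone reproducing this diagnosis, the usual way to list a binary's dynamic-link dependencies differs per platform; a hedged sketch (the library names below are illustrative, not the actual output from this PR):

```shell
# Linux: list the shared libraries a binary needs and whether they resolve.
# Missing dependencies show up as "=> not found".
#   ldd libllama.so
#
# macOS: the equivalent tool is otool.
#   otool -L libllama.dylib
#
# Also useful when debugging platform-specific builds: confirm the host
# architecture (e.g. x86_64 vs arm64).
uname -m
```

Running this against the built libllama binary should show exactly which dependency fails to resolve.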
Yes, it's better to remove openblas; I'll re-compile the binaries. Thank you for the catch!
As shown in the CI log, the macOS runner in CI uses an Intel CPU instead of an M1/M2 CPU. Maybe we should remove the CI for macOS?
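A CI job can detect this situation at runtime and skip ARM-only steps instead of failing outright. A minimal sketch of such a check (variable names are mine; the Intel-vs-ARM distinction is the one discussed above):

```python
import platform

# Identify the host OS and CPU architecture of the runner.
system = platform.system()            # e.g. "Darwin" on macOS, "Linux" on Linux
machine = platform.machine().lower()  # e.g. "x86_64" (Intel) or "arm64" (M1/M2)

# True only on Apple Silicon Macs; an Intel macOS runner reports "x86_64".
is_arm_mac = system == "Darwin" and machine in ("arm64", "aarch64")
print(system, machine, is_arm_mac)
```

A test suite could use `is_arm_mac` to conditionally skip tests that require ARM-built native binaries, rather than dropping macOS CI entirely.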
That's unfortunate, but I don't see any other option. Let's hope GitHub adds ARM-based runners soon!