Conversation

@mudler (Owner) commented Nov 26, 2024

Description

This PR drops the bert-cpp backend entirely: llama.cpp can now be used as the embeddings backend, with higher accuracy and significantly better performance.

This PR also silently moves the model galleries and the examples that reference the bert-embeddings model to a llama3.2-based version.
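
For context, here is a minimal sketch of what an embeddings request against LocalAI's OpenAI-compatible API could look like after this change. The base URL, port, and model name are assumptions for illustration only; substitute the model name from your own gallery or configuration.

```python
# Hedged sketch: query the embeddings endpoint of a local LocalAI instance.
# "bert-embeddings" is an illustrative model name only; after this PR a gallery
# entry like this is expected to be served by llama.cpp instead of bert.cpp.
import requests

resp = requests.post(
    "http://localhost:8080/v1/embeddings",  # default local address is an assumption
    json={
        "model": "bert-embeddings",
        "input": "LocalAI now uses llama.cpp for embeddings.",
    },
    timeout=60,
)
resp.raise_for_status()
vector = resp.json()["data"][0]["embedding"]
print(f"embedding dimension: {len(vector)}")
```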

Notes for Reviewers

Signed commits

  • Yes, I signed my commits.

netlify bot commented Nov 26, 2024

Deploy Preview for localai ready!

🔨 Latest commit 8fd5ffe
🔍 Latest deploy log https://app.netlify.com/sites/localai/deploys/674726064d277600087c21ee
😎 Deploy Preview https://deploy-preview-4272--localai.netlify.app

use llama.cpp 3.2 as a drop-in replacement for bert.cpp

Signed-off-by: Ettore Di Giacinto <[email protected]>
dave-gray101 previously approved these changes Nov 26, 2024
Signed-off-by: Ettore Di Giacinto <[email protected]>
@mudler merged commit 3c3050f into master Nov 27, 2024
31 checks passed
@mudler deleted the drop/bert.cpp branch November 27, 2024 15:34