
@truecharts-admin
Collaborator

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| docker.io/localai/localai | minor | `v2.17.1-aio-cpu` -> `v2.19.1-aio-cpu` |
| docker.io/localai/localai | minor | `v2.17.1-aio-gpu-nvidia-cuda-11` -> `v2.19.1-aio-gpu-nvidia-cuda-11` |
| docker.io/localai/localai | minor | `v2.17.1-aio-gpu-nvidia-cuda-12` -> `v2.19.1-aio-gpu-nvidia-cuda-12` |
| docker.io/localai/localai | minor | `v2.17.1-cublas-cuda11-ffmpeg-core` -> `v2.19.1-cublas-cuda11-ffmpeg-core` |
| docker.io/localai/localai | minor | `v2.17.1-cublas-cuda11-core` -> `v2.19.1-cublas-cuda11-core` |
| docker.io/localai/localai | minor | `v2.17.1-cublas-cuda12-ffmpeg-core` -> `v2.19.1-cublas-cuda12-ffmpeg-core` |
| docker.io/localai/localai | minor | `v2.17.1-cublas-cuda12-core` -> `v2.19.1-cublas-cuda12-core` |
| docker.io/localai/localai | minor | `v2.17.1-ffmpeg-core` -> `v2.19.1-ffmpeg-core` |
| docker.io/localai/localai | minor | `v2.17.1` -> `v2.19.1` |

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

mudler/LocalAI (docker.io/localai/localai)

v2.19.1

Compare Source


LocalAI 2.19.1 is out! 📣
TL;DR: summary spotlight
  • 🖧 Federated Instances via P2P: LocalAI now supports federated instances with P2P, offering both load-balanced and non-load-balanced options.
  • 🎛️ P2P Dashboard: A new dashboard to guide and assist in setting up P2P instances with auto-discovery using shared tokens.
  • 🔊 TTS Integration: Text-to-Speech (TTS) is now included in the binary releases.
  • 🛠️ Enhanced Installer: The installer script now supports setting up federated instances.
  • 📥 Model Pulling: Models can now be pulled directly via URL.
  • 🖼️ WebUI Enhancements: Visual improvements and cleanups to the WebUI and model lists.
  • 🧠 llama-cpp Backend: The llama-cpp (grpc) backend now supports embeddings (see https://localai.io/features/embeddings/#llamacpp-embeddings).
  • ⚙️ Tool Support: Small enhancements to tools with disabled grammars.
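To illustrate the new llama-cpp embeddings support mentioned above: once a LocalAI instance is running, its OpenAI-compatible endpoint can be queried roughly as follows. This is a sketch under stated assumptions: the host/port (`localhost:8080`) and the model name `my-embedding-model` are placeholders, and you need an embeddings-capable model installed.

```shell
# Hypothetical request against a locally running LocalAI instance.
# Adjust the host, port, and model name to your setup.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-embedding-model",
    "input": "A long time ago in a galaxy far, far away"
  }'
```

The response follows the OpenAI embeddings schema, with the vector under `data[0].embedding`.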
🖧 LocalAI Federation and AI swarms

LocalAI makes distributed AI workloads simpler and more accessible. No complex setups or Docker/Kubernetes configuration required: LocalAI lets you create your own AI cluster with minimal friction. By auto-discovering peers and sharing work or model weights across your existing devices, LocalAI can scale both horizontally and vertically.

How does it work?

Starting LocalAI with --p2p generates a shared token for connecting multiple instances; that's all you need to create an AI cluster, with no intricate network setup. Simply navigate to the "Swarm" section in the WebUI and follow the on-screen instructions.
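The flow above can be sketched with the CLI. This is a hedged example, not an authoritative reference: the flag and the `TOKEN` environment variable follow the upstream LocalAI documentation, and the token value is a placeholder.

```shell
# Start a LocalAI instance with P2P enabled; a shared token is generated
# at startup (also visible in the WebUI "Swarm" tab).
local-ai run --p2p

# On another machine, export that token (placeholder value below) so the
# new instance can auto-discover and join the cluster.
export TOKEN="<shared-token-from-the-first-instance>"
local-ai run --p2p
```

The token is the only piece of shared state; no manual network configuration is exchanged between the nodes.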

For fully shared instances, start LocalAI with --p2p --federated and follow the Swarm section's guidance. This feature is still experimental and should be considered a tech preview.
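A minimal sketch of the federated mode, under the assumption (from the upstream documentation) that a `federated` subcommand acts as a load balancer in front of discovered instances; treat the exact subcommand and flags as assumptions to verify against your installed version.

```shell
# Start a LocalAI instance that participates in a federation
# (experimental / tech preview).
local-ai run --p2p --federated

# Optionally, run a federated load balancer that forwards requests to
# the instances discovered via the same shared token (placeholder value).
export TOKEN="<shared-token>"
local-ai federated
```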

Federated LocalAI

Launch multiple LocalAI instances and cluster them together to share requests across the cluster. The "Swarm" tab in the WebUI provides one-liner instructions on connecting various LocalAI instances using a shared token. Instances will auto-discover each other, even across different networks.


Check out a demonstration video: Watch now

LocalAI P2P Workers

Distribute weights across nodes by starting multiple LocalAI workers, currently available only on the llama.cpp backend, with plans to expand to other backends soon.
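As a hedged sketch of the worker setup described above: the upstream documentation describes a `worker p2p-llama-cpp-rpc` subcommand for joining a swarm as a llama.cpp rpc worker. The subcommand name and the `TOKEN` variable should be checked against your installed version.

```shell
# On the main node: start LocalAI with P2P enabled and note the
# shared token it prints (also shown in the WebUI "Swarm" tab).
local-ai run --p2p

# On each worker node: join the swarm as a llama.cpp rpc worker,
# using the shared token (placeholder value) for auto-discovery.
export TOKEN="<shared-token>"
local-ai worker p2p-llama-cpp-rpc
```

Model weights are then split across the workers, so a model larger than any single node's memory can still be served.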


Check out a demonstration video: Watch now

What's Changed
Bug fixes 🐛
🖧 P2P area
Exciting New Features 🎉
🧠 Models
📖 Documentation and examples
👒 Dependencies
Other Changes
New Contributors

Full Changelog: mudler/LocalAI@v2.18.1...v2.19.0

v2.19.0

Compare Source

(The v2.19.0 release notes are identical to the v2.19.1 notes above.)

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Enabled.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@truecharts-admin added the `automerge` label ("Categorises a PR or issue that references a new App.") on Jul 24, 2024
@truecharts-admin truecharts-admin requested a review from a user July 24, 2024 13:43
@truecharts-admin truecharts-admin requested a review from a team as a code owner July 24, 2024 13:43
@truecharts-admin truecharts-admin enabled auto-merge (squash) July 24, 2024 13:43
@github-actions

📝 Linting results:

✔️ Linting [charts/stable/local-ai]: Passed - Took 0 seconds
Total Charts Linted: 1
Total Charts Passed: 1
Total Charts Failed: 0

✅ Linting: Passed - Took 0 seconds

@truecharts-admin truecharts-admin merged commit ab5d38a into master Jul 24, 2024
@truecharts-admin truecharts-admin deleted the renovate/docker.io-localai-localai-2.x branch July 24, 2024 13:48