v2.28.0
LocalAI v2.28.0: New Look & The Rebirth of LocalAGI!
*Our fresh new look!*
Big news, everyone! Not only does LocalAI have a brand new logo, but we're also celebrating the full rebirth of LocalAGI, our powerful agent framework, now completely rewritten and ready to revolutionize your local AI workflows!
Rewinding the Clock: The Journey of LocalAI & LocalAGI
Two years ago, LocalAI emerged as a pioneer in the local AI inferencing space, offering an OpenAI-compatible API layer long before it became common. Around the same time, LocalAGI was born as an experiment in AI agent frameworks; you can even find the original announcement here! Originally built in Python, it inspired many with its local-first approach.
See LocalAGI (Original Python Version) in Action!
Searching the internet (interactive mode):
`search.mp4`
Planning a road trip (batch mode):
`planner.mp4`
That early experiment has now evolved significantly!
Introducing LocalAGI v2: The Agent Framework Reborn in Go!
We're thrilled to announce that LocalAGI has been rebuilt from the ground up in Golang! It's now a modern, robust AI Agent Orchestration Platform designed to work seamlessly with LocalAI. Huge thanks to the community, especially @richiejp, for jumping in and helping create a fantastic new WebUI!
LocalAGI leverages all the features that make LocalAI great for agentic tasks. During the refactor, we even spun out the memory layer into its own component: LocalRecall, a standalone REST API for persistent agent memory.
What Makes LocalAGI v2 Shine?
- OpenAI Responses API Compatible: Integrates perfectly with LocalAI, acting as a drop-in replacement for cloud APIs while keeping your interactions local and secure.
- Next-Gen AI Agent Orchestration: Easily configure, deploy, and manage teams of intelligent AI agents through an intuitive no-code web interface.
- Privacy-First by Design: Everything runs locally. Your data never leaves your hardware.
- Instant Integrations: Comes with built-in connectors for Slack, Telegram, Discord, GitHub Issues, IRC, and more.
- Extensible and Multimodal: Supports multiple models (text, vision) and custom actions, perfectly complementing your LocalAI setup.
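Because the agent platform exposes an OpenAI-compatible surface, switching an existing client is mostly a matter of pointing it at your local endpoint. A minimal sketch, assuming an instance listening on `localhost:8080` and a model name pulled from the LocalAI gallery (both the port and the model name are placeholders for your own setup, not fixed defaults):

```python
import json
import urllib.request

# Hypothetical local endpoint; adjust host/port to match your deployment.
BASE_URL = "http://localhost:8080/v1"

# An OpenAI-style chat completion request body: since the API is
# OpenAI-compatible, an existing client only needs the base URL swapped.
payload = {
    "model": "gemma-3-4b-it-qat",  # any model installed via the gallery
    "messages": [
        {"role": "user", "content": "Plan a three-stop road trip."},
    ],
}

# Build the request with the standard library only.
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment with a running local instance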
✨ Check out the new LocalAGI WebUI!
What's New Specifically in LocalAI v2.28.0?
Beyond the rebranding and the major LocalAGI news, this LocalAI release also brings its own set of improvements:
- SYCL Support: Added SYCL support for `stablediffusion.cpp`.
- WebUI Enhancements: Continued improvements to the user interface.
- Diffusers Updated: The core diffusers library has been updated.
- Lumina Model Support: Now supports the Lumina model family for generating stunning images!
- Bug Fixes: Resolved issues related to setting `LOCALAI_SINGLE_ACTIVE_BACKEND` to `true`.
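For context on that last fix: the flag tells LocalAI to keep at most one inference backend loaded at a time, unloading the previous model before activating the next, which helps on memory-constrained GPUs. A hedged sketch of how you might set it (the CLI invocation and image tag are illustrative, not prescriptive):

```shell
# Keep at most one backend active; the previous model is unloaded
# before a request for a different model is served.
export LOCALAI_SINGLE_ACTIVE_BACKEND=true
local-ai run

# Or pass it to a containerized instance:
docker run -e LOCALAI_SINGLE_ACTIVE_BACKEND=true -p 8080:8080 localai/localai:v2.28.0
```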
The Complete Local Stack for Privacy-First AI
With LocalAGI rejoining LocalAI alongside LocalRecall, our ecosystem provides a complete, open-source stack for private, secure, and intelligent AI operations:
| Project | Description |
| --- | --- |
| **LocalAI** | The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required. |
| **LocalAGI** | A powerful local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI. |
| **LocalRecall** | A RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Designed to work alongside LocalAI and LocalAGI. |
Join the Movement! ❤️
A massive THANK YOU to our incredible community! LocalAI has passed 31,800 stars, and LocalAGI has already rocketed past 450 stars!
As a reminder, LocalAI is real FOSS (Free and Open Source Software) and its sibling projects are community-driven and not backed by VCs or a company. We rely on contributors donating their spare time. If you love open-source, privacy-first AI, please consider starring the repos, contributing code, reporting bugs, or spreading the word!
Check out the reborn LocalAGI v2 today: https://github.com/mudler/LocalAGI
Let's continue building the future of AI together!
Full changelog
What's Changed
Bug fixes
Exciting New Features
Models
- chore(model gallery): add all-hands_openhands-lm-32b-v0.1 by @mudler in #5111
- chore(model gallery): add burtenshaw_gemmacoder3-12b by @mudler in #5112
- chore(model gallery): add all-hands_openhands-lm-7b-v0.1 by @mudler in #5113
- chore(model gallery): add all-hands_openhands-lm-1.5b-v0.1 by @mudler in #5114
- chore(model gallery): add gemma-3-12b-it-qat by @mudler in #5117
- chore(model gallery): add gemma-3-4b-it-qat by @mudler in #5118
- chore(model gallery): add tesslate_synthia-s1-27b by @mudler in #5119
- chore(model gallery): add katanemo_arch-function-chat-7b by @mudler in #5120
- chore(model gallery): add katanemo_arch-function-chat-1.5b by @mudler in #5121
- chore(model gallery): add katanemo_arch-function-chat-3b by @mudler in #5122
- chore(model gallery): add gemma-3-27b-it-qat by @mudler in #5124
- chore(model gallery): add open-thoughts_openthinker2-32b by @mudler in #5128
- chore(model gallery): add open-thoughts_openthinker2-7b by @mudler in #5129
- chore(model gallery): add arliai_qwq-32b-arliai-rpr-v by @mudler in #5137
- chore(model gallery): add watt-ai_watt-tool-70b by @mudler in #5138
- chore(model gallery): add eurydice-24b-v2-i1 by @mudler in #5139
- chore(model gallery): add mensa-beta-14b-instruct-i1 by @mudler in #5140
- chore(model gallery): add meta-llama_llama-4-scout-17b-16e-instruct by @mudler in #5141
- fix(gemma): improve prompt for tool calls by @mudler in #5142
- chore(model gallery): add cogito-v1-preview-qwen-14b by @mudler in #5145
- chore(model gallery): add deepcogito_cogito-v1-preview-llama-8b by @mudler in #5147
- chore(model gallery): add deepcogito_cogito-v1-preview-llama-3b by @mudler in #5148
- chore(model gallery): add deepcogito_cogito-v1-preview-qwen-32b by @mudler in #5149
- chore(model gallery): add deepcogito_cogito-v1-preview-llama-70b by @mudler in #5150
- chore(model gallery): add agentica-org_deepcoder-14b-preview by @mudler in #5151
- chore(model gallery): add trappu_magnum-picaro-0.7-v2-12b by @mudler in #5153
- chore(model gallery): add soob3123_amoral-cogito-v1-preview-qwen-14b by @mudler in #5154
- chore(model gallery): add agentica-org_deepcoder-1.5b-preview by @mudler in #5156
- chore(model gallery): add zyphra_zr1-1.5b by @mudler in #5157
- chore(model gallery): add tesslate_gradience-t1-3b-preview by @mudler in #5160
- chore(model gallery): add lightthinker-qwen by @mudler in #5165
- chore(model gallery): add mag-picaro-72b by @mudler in #5166
- chore(model gallery): add hamanasu-adventure-4b-i1 by @mudler in #5167
- chore(model gallery): add hamanasu-magnum-4b-i1 by @mudler in #5168
- chore(model gallery): add daichi-12b by @mudler in #5169
- chore(model gallery): add skywork_skywork-or1-7b-preview by @mudler in #5173
- chore(model gallery): add skywork_skywork-or1-math-7b by @mudler in #5174
- chore(model gallery): add skywork_skywork-or1-32b-preview by @mudler in #5175
- chore(model gallery): add nvidia_llama-3.1-8b-ultralong-1m-instruct by @mudler in #5176
- chore(model gallery): add nvidia_llama-3.1-8b-ultralong-4m-instruct by @mudler in #5177
- chore(model gallery): add m1-32b by @mudler in #5182
Documentation and examples
- Update README.md by @qwerty108109 in #5172
- Rebrand: the LocalAI stack family by @mudler in #5159
Dependencies
- chore: ⬆️ Update ggml-org/llama.cpp to `c80a7759dab10657b9b6c3e87eef988a133b9b6a` by @localai-bot in #5105
- chore: ⬆️ Update ggml-org/llama.cpp to `f423981ac806bf031d83784bcb47d2721bc70f97` by @localai-bot in #5108
- chore(deps): bump llama.cpp to `f01bd02376f919b05ee635f438311be8dfc91d7c` by @mudler in #5110
- fix(sycl): kernel not found error by forcing -fsycl by @richiejp in #5115
- chore: ⬆️ Update ggml-org/llama.cpp to `c262beddf29f3f3be5bbbf167b56029a19876956` by @localai-bot in #5116
- chore: ⬆️ Update ggml-org/llama.cpp to `3e1d29348b5d77269f6931500dd1c1a729d429c8` by @localai-bot in #5123
- chore: ⬆️ Update ggml-org/llama.cpp to `6bf28f0111ff9f21b3c1b1eace20c590281e7ba6` by @mudler in #5127
- chore: ⬆️ Update ggml-org/llama.cpp to `916c83bfe7f8b08ada609c3b8e583cf5301e594b` by @localai-bot in #5130
- chore(deps): bump securego/gosec from 2.22.0 to 2.22.3 by @dependabot in #5134
- chore(deps): bump llama.cpp to `4ccea213bc629c4eef7b520f7f6c59ce9bbdaca0` by @mudler in #5143
- chore: ⬆️ Update ggml-org/llama.cpp to `b32efad2bc42460637c3a364c9554ea8217b3d7f` by @localai-bot in #5146
- chore: ⬆️ Update ggml-org/llama.cpp to `d3bd7193ba66c15963fd1c59448f22019a8caf6e` by @localai-bot in #5152
- chore: ⬆️ Update ggml-org/llama.cpp to `64eda5deb9859e87a020e56bab5d2f9ca956f1de` by @localai-bot in #5155
- chore: ⬆️ Update ggml-org/llama.cpp to `bc091a4dc585af25c438c8473285a8cfec5c7695` by @localai-bot in #5158
- chore: ⬆️ Update ggml-org/llama.cpp to `71e90e8813f90097701e62f7fce137d96ddf41e2` by @localai-bot in #5171
- chore: ⬆️ Update ggml-org/llama.cpp to `d6d2c2ab8c8865784ba9fef37f2b2de3f2134d33` by @localai-bot in #5178
- fix(stablediffusion): Pass ROCM LD CGO flags through to recursive make by @richiejp in #5179
Other Changes
- docs: β¬οΈ update docs version mudler/LocalAI by @localai-bot in #5104
- chore: drop remoteLibraryURL from kong vars by @mudler in #5103
- fix: race during stop of active backends by @mudler in #5106
- fix(webui): improve model display, do not block view by @mudler in #5133
- feat(diffusers): add support for Lumina2Text2ImgPipeline by @mudler in #4806
- feat(stablediffusion): Enable SYCL by @richiejp in #5144
- fix(stablediffusion): Avoid overwriting SYCL specific flags from outer make call by @richiejp in #5181
New Contributors
- @qwerty108109 made their first contribution in #5172
Full Changelog: v2.27.0...v2.28.0