
Complete Privacy-First AI Interface
Full Ollama integration with document chat (RAG), OpenAI/Anthropic/Groq plugins, streaming responses, multimodal support, vector embeddings, semantic search, and advanced model management. Zero telemetry and local-first processing, with enterprise-grade features.
Your data stays on your hardware by default, with optional external AI connections when you need them. Complete API coverage, tool calling, structured outputs, and document processing are built in.
Complete AI Integration & Privacy Built-In
Privacy by Default & Zero Telemetry
Complete offline operation with local processing. No tracking, no data collection, no telemetry. Your AI conversations and documents stay on your hardware unless you explicitly connect external services. GDPR-compliant by design.
Complete Ollama Integration & API Coverage
Full API coverage: chat completion, text generation, streaming responses, multimodal support (vision/image), embeddings, model management, tool calling, structured outputs with JSON schema validation. Real-time model pulling, memory monitoring, and custom model creation.
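As a minimal sketch of the streaming chat path, the example below calls Ollama's public `/api/chat` endpoint, which returns newline-delimited JSON chunks. The default port, the `llama3` model name, and the Node-style console output are assumptions for illustration, not Libre WebUI's actual backend code.

```typescript
// Stream a chat completion from a local Ollama server (Node 18+, global fetch).
// Assumes Ollama is listening on its default port and "llama3" is already pulled.
async function streamChat(prompt: string): Promise<void> {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3',
      messages: [{ role: 'user', content: prompt }],
      stream: true, // Ollama streams newline-delimited JSON objects
    }),
  });
  if (!res.ok || !res.body) {
    throw new Error(`Ollama request failed: ${res.status}`);
  }

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffered = '';

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });

    // Each complete line is one JSON object carrying a partial message.
    const lines = buffered.split('\n');
    buffered = lines.pop() ?? '';
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      process.stdout.write(chunk.message?.content ?? '');
    }
  }
  process.stdout.write('\n');
}

streamChat('Explain retrieval-augmented generation in one paragraph.').catch(console.error);
```

The same request with `stream: false` returns a single JSON object, which is the shape typically used for tool calling and schema-constrained structured outputs.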
Document Chat & RAG (Semantic Search)
Upload PDF, DOCX, TXT, Markdown files and chat with your documents using advanced vector embeddings and semantic search. Smart chunking with overlap for better context and AI-powered content matching. Enterprise-grade document processing with metadata extraction.
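The pipeline described above can be pictured with a short sketch: split a document into overlapping chunks, embed each chunk with a local Ollama embedding model, and rank chunks against a query by cosine similarity. The chunk size, overlap, and the `nomic-embed-text` model name are illustrative assumptions rather than Libre WebUI's exact internals.

```typescript
// Split text into fixed-size chunks that overlap, so context isn't cut mid-thought.
function chunkText(text: string, size = 500, overlap = 100): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

// Get a vector embedding from Ollama's /api/embeddings endpoint.
async function embed(text: string): Promise<number[]> {
  const res = await fetch('http://localhost:11434/api/embeddings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }),
  });
  const { embedding } = await res.json();
  return embedding;
}

// Cosine similarity between two vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank chunks by semantic similarity to a query.
// (A real implementation would embed and index chunks once, not per query.)
async function semanticSearch(query: string, chunks: string[], topK = 3) {
  const queryVec = await embed(query);
  const scored = await Promise.all(
    chunks.map(async chunk => ({ chunk, score: cosine(queryVec, await embed(chunk)) }))
  );
  return scored.sort((a, b) => b.score - a.score).slice(0, topK);
}
```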
Flexible Plugin System & External AI
Connect OpenAI (GPT-4, GPT-3.5), Anthropic (Claude 3 Opus/Sonnet/Haiku), Groq (high-speed Llama 3, Mixtral), or any OpenAI-compatible API with automatic fallback to local Ollama. Secure API key management and rate limiting built-in.
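One way to picture the fallback behavior is a try/catch around an OpenAI-compatible request that drops back to local Ollama when the external call fails. The endpoint URLs follow the public OpenAI and Ollama chat APIs; the model names, environment variable, and error handling are simplified assumptions, not the actual plugin code.

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Try an external OpenAI-compatible provider first; fall back to local Ollama.
async function chatWithFallback(messages: ChatMessage[]): Promise<string> {
  try {
    const res = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({ model: 'gpt-4', messages }),
    });
    if (!res.ok) throw new Error(`External API error: ${res.status}`);
    const data = await res.json();
    return data.choices[0].message.content;
  } catch {
    // Local fallback: Ollama's non-streaming chat endpoint.
    const res = await fetch('http://localhost:11434/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: 'llama3', messages, stream: false }),
    });
    const data = await res.json();
    return data.message.content;
  }
}
```

In practice the plugin layer also handles API key storage and rate limiting, which are omitted from this sketch.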
Advanced Model Management & Performance
Pull, delete, create, copy, and push models with real-time progress tracking. Detailed model specs, memory usage monitoring, running model status, and custom model creation from existing ones. GPU acceleration support and performance optimization.
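As a rough sketch of how progress tracking works, Ollama's `/api/pull` endpoint streams JSON status objects that include `total` and `completed` byte counts during downloads, which a client can turn into a percentage. The default host and the `llama3` model name are assumptions for illustration.

```typescript
// Pull a model through Ollama's /api/pull endpoint and report download progress.
async function pullModel(model: string): Promise<void> {
  const res = await fetch('http://localhost:11434/api/pull', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model }),
  });
  if (!res.body) throw new Error('No response body');

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffered = '';

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split('\n');
    buffered = lines.pop() ?? '';

    for (const line of lines) {
      if (!line.trim()) continue;
      const status = JSON.parse(line);
      // Download steps carry total/completed byte counts for a progress bar.
      if (status.total && status.completed) {
        const pct = ((status.completed / status.total) * 100).toFixed(1);
        console.log(`${status.status}: ${pct}%`);
      } else {
        console.log(status.status);
      }
    }
  }
}

pullModel('llama3').catch(console.error);
```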
Developer Experience & Enterprise Features
TypeScript throughout, WebSocket streaming, code splitting, lazy loading, VS Code-inspired keyboard shortcuts (⌘B, ⌘D, ⌘, and ?), responsive design, dark/light mode, accessibility features, screen reader support, and comprehensive API documentation with OpenAPI specs.
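A small sketch of how Cmd/Ctrl-style shortcuts can be wired up in the browser; the action names below are hypothetical placeholders, not Libre WebUI's actual command identifiers.

```typescript
// Map keys to actions, triggered with ⌘ on macOS or Ctrl elsewhere.
const shortcuts: Record<string, () => void> = {
  b: () => console.log('toggle sidebar'),    // ⌘B / Ctrl+B
  d: () => console.log('toggle dark mode'),  // ⌘D / Ctrl+D
  ',': () => console.log('open settings'),   // ⌘, / Ctrl+,
};

document.addEventListener('keydown', (event: KeyboardEvent) => {
  if (!(event.metaKey || event.ctrlKey)) return;
  const handler = shortcuts[event.key.toLowerCase()];
  if (handler) {
    event.preventDefault(); // keep the browser from hijacking the shortcut
    handler();
  }
});
```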
Complete Setup Guide - From Zero to AI in Minutes
Building AI with Integrity
Libre WebUI was created to provide a clean, reliable interface for AI interactions. We focus on user control, privacy by default, and practical functionality.
Our Principles
- Open-source development with transparent governance
- Privacy by default, external connections by choice
- Inclusive community welcoming all contributors
- Practical functionality over flashy features
Our Approach
Libre WebUI welcomes contributors from all backgrounds. We believe good software comes from diverse perspectives and collaborative development.
What We Value
- Open collaboration from contributors of all backgrounds
- Building tools that work well for real use cases
- User control over how and where your AI interactions happen
- Apache 2.0 licensed with transparent development
Our Focus
Clean, functional interfaces. We prioritize usability and reliability over complex features.
Privacy by default. Local processing with optional external connections when you need them.
Community collaboration. Development that welcomes diverse perspectives and contributions.
*Good software serves its users, not the other way around.*
Get Started with Privacy-First AI
Join users who value privacy and control in their AI interactions. Self-hosted, open source, and flexible.