[Screenshot: Libre WebUI interface — zero telemetry & privacy, document chat & RAG, complete Ollama API, plugin system & external AI]

Complete Privacy-First AI Interface

Full Ollama integration with document chat (RAG), OpenAI/Anthropic/Groq plugins, streaming responses, multimodal support, vector embeddings, semantic search, and advanced model management. Zero telemetry and local-first processing, with enterprise-grade features.
Your data stays on your hardware by default, with optional external AI connections when you need them. Complete API coverage, tool calling, structured outputs, and document processing are built in.

Complete AI Integration & Privacy Built-In

πŸ”’

Privacy by Default & Zero Telemetry

Complete offline operation with local processing. No tracking, no data collection, no telemetry. Your AI conversations and documents stay on your hardware unless you explicitly connect external services. GDPR-compliant by design.

πŸ€–

Complete Ollama Integration & API Coverage

Full API coverage: chat completion, text generation, streaming responses, multimodal support (vision/image), embeddings, model management, tool calling, structured outputs with JSON schema validation. Real-time model pulling, memory monitoring, and custom model creation.
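As a minimal sketch of what streaming chat completion looks like against Ollama's `/api/chat` endpoint: Ollama streams newline-delimited JSON fragments while the model generates. The helper names here (`parseChunks`, `streamChat`) are illustrative, not Libre WebUI's actual code.

```typescript
// A chunk of Ollama's NDJSON chat stream: a message fragment plus a done flag.
interface ChatChunk {
  message?: { role: string; content: string };
  done: boolean;
}

// Parse a buffer of complete NDJSON lines into chunks (pure, easy to test).
function parseChunks(ndjson: string): ChatChunk[] {
  return ndjson
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as ChatChunk);
}

// Stream a chat completion from a local Ollama instance and collect the reply.
async function streamChat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2",
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let pending = ""; // carries a partial line across network reads
  let answer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    pending += decoder.decode(value, { stream: true });
    const lines = pending.split("\n");
    pending = lines.pop() ?? ""; // the last element may be incomplete
    for (const chunk of parseChunks(lines.join("\n"))) {
      answer += chunk.message?.content ?? "";
    }
  }
  return answer;
}
```

Buffering partial lines across reads matters because a network read can split a JSON object mid-line.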

πŸ“„

Document Chat & RAG (Semantic Search)

Upload PDF, DOCX, TXT, Markdown files and chat with your documents using advanced vector embeddings and semantic search. Smart chunking with overlap for better context and AI-powered content matching. Enterprise-grade document processing with metadata extraction.
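The "smart chunking with overlap" idea can be sketched as a pure function — a simplified illustration, assuming character-based windows; the real implementation and its parameters may differ:

```typescript
// Split a document into fixed-size windows that share `overlap` characters
// with their neighbor, so sentences cut at a boundary still appear intact
// in at least one chunk before embedding.
function chunkText(text: string, size = 500, overlap = 100): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than chunk size");
  const chunks: string[] = [];
  const step = size - overlap; // how far each window advances
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // final window reached the end
  }
  return chunks;
}
```

Each chunk is then embedded independently; the shared tail means a query can match content that straddles a chunk boundary.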

πŸ”Œ

Flexible Plugin System & External AI

Connect OpenAI (GPT-4, GPT-3.5), Anthropic (Claude 3 Opus/Sonnet/Haiku), Groq (high-speed Llama 3, Mixtral), or any OpenAI-compatible API with automatic fallback to local Ollama. Secure API key management and rate limiting built-in.
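"Automatic fallback to local Ollama" can be illustrated as provider selection: use the first active plugin with a configured API key, otherwise stay local. This is a hypothetical sketch, not Libre WebUI's actual plugin code.

```typescript
// A configured AI provider, local or external.
interface Provider {
  name: string;
  baseUrl: string;
  apiKey?: string;
  active: boolean;
}

// Local Ollama needs no API key and is always available as the fallback.
const LOCAL_OLLAMA: Provider = {
  name: "ollama",
  baseUrl: "http://localhost:11434",
  active: true,
};

// Pick the first active external provider with a key; otherwise fall back.
function selectProvider(plugins: Provider[]): Provider {
  return plugins.find((p) => p.active && !!p.apiKey) ?? LOCAL_OLLAMA;
}
```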

πŸš€

Advanced Model Management & Performance

Pull, delete, create, copy, and push models with real-time progress tracking. Detailed model specs, memory usage monitoring, running model status, and custom model creation from existing ones. GPU acceleration support and performance optimization.
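For the real-time progress tracking, Ollama's pull stream reports status objects with optional byte counts; a small helper (hypothetical, for illustration) turns one into a percentage for a progress bar:

```typescript
// One status object from Ollama's streaming /api/pull response.
interface PullStatus {
  status: string;
  total?: number;     // total bytes for the current layer, when known
  completed?: number; // bytes downloaded so far
}

// Convert a status object into a whole-number percentage, or null when the
// stage reports no byte counts (e.g. "verifying sha256 digest").
function pullProgress(s: PullStatus): number | null {
  if (s.total === undefined || s.completed === undefined || s.total === 0) {
    return null;
  }
  return Math.round((s.completed / s.total) * 100);
}
```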

⚑

Developer Experience & Enterprise Features

TypeScript throughout, WebSocket streaming, code splitting, lazy loading, VS Code-inspired keyboard shortcuts (⌘B, ⌘D, ⌘,, ?), responsive design, dark/light mode, accessibility features, screen reader support, and comprehensive API documentation with OpenAPI specs.

Complete Setup Guide - From Zero to AI in Minutes

```bash
# One-command setup with automatic dependency installation
git clone https://github.com/libre-webui/libre-webui.git
cd libre-webui

# Auto-install all dependencies and start development servers
./start.sh

# Open your browser and navigate to:
# Frontend: http://localhost:5173
# Backend:  http://localhost:3001
# Ollama:   http://localhost:11434

# Optional: Add external AI service API keys
# Copy backend/.env.example to backend/.env
# Add your OpenAI, Anthropic, or Groq API keys
```
1. Prerequisites & Dependencies

```bash
# Install Node.js (v18+ recommended)
# Install Ollama for local AI models
curl https://ollama.ai/install.sh | sh

# Start Ollama service
ollama serve

# Optional: Pull your first model
ollama pull llama3.2
```
2. Clone & Install (npm workspaces)

```bash
git clone https://github.com/libre-webui/libre-webui.git
cd libre-webui

# Install all dependencies (root, frontend, backend)
npm install
```
3. Development Environment & Ports

```bash
# Start both frontend and backend in development mode
npm run dev

# Alternative: Network-accessible development (mobile testing)
npm run dev:host

# Available development ports:
# Frontend (dev):  http://localhost:5173
# Frontend (host): http://localhost:8080 (with dev:host)
# Backend (dev):   http://localhost:3001
# Ollama:          http://localhost:11434

# Clean reinstall if needed
./clean-install.sh
```
4. Optional: External AI Services & Plugin Setup

```bash
# Copy environment template
cp backend/.env.example backend/.env

# Add your API keys to backend/.env:
OPENAI_API_KEY=your_openai_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
GROQ_API_KEY=your_groq_key_here

# Install plugins via API (after starting servers)
curl -X POST http://localhost:3001/api/plugins/install \
  -H "Content-Type: application/json" \
  -d @plugins/openai.json

# Activate plugin
curl -X POST http://localhost:3001/api/plugins/activate/openai

# Restart backend to load new environment
npm run dev
```

Building AI with Integrity

Libre WebUI was created to provide a clean, reliable interface for AI interactions. We focus on user control, privacy by default, and practical functionality.

🀝 Our Principles

  • Open-source development with transparent governance
  • Privacy by default, external connections by choice
  • Inclusive community welcoming all contributors
  • Practical functionality over flashy features

Our Approach

Libre WebUI welcomes contributors from all backgrounds. We believe good software comes from diverse perspectives and collaborative development.

🌟 What We Value

🀝
Inclusive Development

Open collaboration from contributors of all backgrounds

βš–οΈ
Practical Design

Building tools that work well for real use cases

πŸ›‘οΈ
User Control

You decide how and where your AI interactions happen

πŸ”“
Open Source

Apache 2.0 licensed with transparent development

πŸ›‘οΈ Our Focus

Clean, functional interfaces. We prioritize usability and reliability over complex features.

Privacy by default. Local processing with optional external connections when you need them.

Community collaboration. Development that welcomes diverse perspectives and contributions.

*Good software serves its users, not the other way around.*

Get Started with Privacy-First AI

Join users who value privacy and control in their AI interactions. Self-hosted, open source, and flexible.