Experience the power of artificial intelligence, completely private and running locally on your hardware. No data collection. No runtime cloud dependency or call-home behavior. Just pure, unrestricted intelligence.
Christopher isn't a website you visit; it's a powerful engine you host. Click any card to learn more.
At the core runs Ollama, serving lightweight yet powerful Large Language Models.
Christopher lives inside a Docker container, isolating the AI environment from your host OS.
Christopher is designed to be opened directly at https://<host-ip>:3001 from any device on your LAN.
A lightweight, responsive web interface with instant chat updates, thread management, and mobile-friendly layout.
Fastest path: Download Docker Desktop or Engine, run one command on your host, then open the printed secure LAN URL. Visual loaders are recommended.
Linux (Ubuntu/Debian) install command: curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh
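Piping a downloaded script into a root shell is convenient but worth sanity-checking first. A minimal sketch of the general verify-before-run pattern (the checksum below is the SHA-256 of an empty file, used purely as a demonstration value; substitute the checksum published for the artifact you actually download, and note that sha256sum assumes Linux coreutils):

```shell
# Generic verify-before-run pattern (sketch). The expected value here is the
# SHA-256 of an empty file, standing in for a real published checksum.
expected="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
: > /tmp/demo-download                      # stand-in for a downloaded file
actual="$(sha256sum /tmp/demo-download | awk '{print $1}')"
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH: refusing to run" >&2
fi
```

On macOS, replace sha256sum with shasum -a 256.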
Navigate to the christopher-ai directory in your terminal and run the following commands:
Use the visual loader for a cleaner user experience while keeping logs available for developers.
chmod +x ./setup-ui.sh
./setup-ui.sh
Use the visual loader. If interface detection misses your adapter, pass the host IP explicitly.
chmod +x ./setup-ui.sh
./setup-ui.sh
# if needed: ./setup-ui.sh <host-ip>
Run the PowerShell visual loader. It keeps setup details in .setup-ui.log.
.\setup-ui.ps1
Your conversations never leave your local network. No telemetry, no tracking. No data harvesting, no model training.
Runs locally on your hardware for instant responses. No cloud round-trips.
*Completion speeds may vary based on hardware specifications.
Create unlimited user profiles. Keep work and personal chats separate.
*Never share highly personal information with AI bots unless you are an advanced user or fully trust the deployment.
Set up Christopher locally on your host machine
Christopher is a local AI chatbot. Get started in a few simple steps.
Christopher runs locally on your host machine inside Docker.
If Docker is missing, install it first (for example, sudo apt install docker.io on Ubuntu). Use the matching setup command for your operating system. The script detects the host IP, starts the stack, creates certificates, and pulls llama3.2:1b if it is missing.
Linux/macOS:
bash ./setup.sh
Linux/macOS visual loader:
chmod +x ./setup-ui.sh
./setup-ui.sh
If auto-detect misses your network interface on macOS, pass the host IP manually:
./setup-ui.sh <host-ip>
Windows:
.\setup.ps1
Windows visual loader:
.\setup-ui.ps1
Watch progress: docker compose -p christopher logs -f ollama
Confirm model availability: docker compose -p christopher exec -T ollama ollama list
Ensure llama3.2:1b appears before your first chat.
Powered by Ollama & Open Source Models
At the heart of Christopher lies Ollama, a revolutionary tool that brings large language models (LLMs) to your local machine.
When you run the setup scripts (setup.sh, setup.ps1, or visual loader variants), Christopher auto-pulls the default model (llama3.2:1b) if it is missing. These models are quantized to 4-bit so they fit into limited RAM (as little as 4-8 GB).
Despite their smaller size, these models are still powerful enough for general conversation, coding help, and creative tasks. The quantization process allows them to run efficiently on a laptop with no GPU and as little as 4-8 GB of RAM, without sacrificing too much accuracy.
The default model (llama3.2:1b) has a published knowledge cutoff of December 2023. For time-sensitive topics (news, regulations, security advisories, medical or legal guidance), always verify against current authoritative sources.
Quantization reduces the precision of the model's weights from 16-bit floating point to 4-bit integers. This shrinks the model size by 75% with minimal loss in accuracy.
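The 75% figure follows directly from the byte widths. A quick back-of-envelope sketch (illustrative only; real quantized model files add metadata and may keep some layers at higher precision, so on-disk sizes differ somewhat):

```shell
# Size arithmetic for 4-bit quantization of a 1-billion-parameter model:
# 16-bit = 2 bytes per weight, 4-bit = 0.5 bytes per weight.
params=1000000000
fp16_mb=$(( params * 2 / 1000000 ))        # 2 bytes per weight
q4_mb=$(( params / 2 / 1000000 ))          # 0.5 bytes per weight
reduction=$(( (fp16_mb - q4_mb) * 100 / fp16_mb ))
echo "fp16: ${fp16_mb} MB  4-bit: ${q4_mb} MB  reduction: ${reduction}%"
```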
Isolation, Stability, and Ease of Use
Christopher lives inside a Docker container 🐳. This isolates the AI environment from your host OS, ensuring stability.
Your Private Intranet AI
Christopher is a Local Area Network (LAN) service. It runs on your own hardware and is meant to be opened directly from the Host or other devices on the same network.
After setup, the default way to connect is the secure LAN URL printed by the installer, such as https://<host-ip>:3001. No DNS records or host file edits are required for the basic deployment.
Because the "brain" (Ollama) and the "server" (Next.js) are running on your specific host device, that device must be powered on for you to chat. If the host sleeps, loses power, or drops off the network, Christopher becomes unavailable to every connected client.
Port 3001 is intentionally exposed to keep the browser experience simple and consistent. That design keeps the project approachable for people who just want to install, open one address, and start chatting without any extra network plumbing.
Built with Next.js & React
The face of Christopher is a modern, responsive web application built using Next.js. It is optimized for quick load times, clean interaction, and a simple chat-first workflow.
If you want to contribute to the user experience, we especially welcome ideas that make the interface feel more native, faster, or easier to use on touch devices.
These Terms and Conditions ("Terms") govern your use of the Christopher AI software ("the Software"), a self-hosted, local-area-network (LAN) artificial intelligence chatbot powered by Ollama and Next.js. By downloading, installing, or using the Software, you agree to be bound by these Terms.
Subject to your compliance with these Terms, the Project Creator grants you a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under the Project Creator's copyright to:
Base License: This Software is licensed under the GNU Affero General Public License v3.0 (AGPL v3). This license ensures that any modifications or derivative works must also be released under the same license, promoting full transparency and community sharing.
View the full AGPL v3 license text
All data lives in the browser's localStorage or in Docker volumes on the Host Device. The Software is designed to run locally on your infrastructure. You are solely responsible for:
Christopher AI does not call home or require continuous cloud services during normal runtime. All processing occurs on your Host Device. Setup-time downloads (for example model weights and package dependencies) may still occur from third-party repositories (e.g., HuggingFace, Ollama) during installation or updates. You are responsible for the security of these downloads.
Chat history is stored exclusively in the browser's localStorage of the client device. If you clear your browser cache, switch browsers, use Incognito/Private mode, or lose the device, all chat history will be permanently lost. The Software does not provide a centralized database backup mechanism by default.
The AI models used by Christopher AI are probabilistic engines. They are not experts in law, medicine, finance, or safety-critical fields. Do not rely on AI Output for medical diagnoses, legal advice, financial decisions, or safety-critical operations.
When Christopher is offered as a service, the Project Creator may apply baseline safety guardrails so operation stays within legal and ethical boundaries. These guardrails are a compliance measure, not a claim of perfect moderation.
Perceived "censoring" can also come from the model itself. Many response limits are inherited from the underlying model's training data, alignment process, and provider-level policies, and do not always originate from Christopher's application layer.
THE PROJECT CREATOR SHALL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, OR PUNITIVE DAMAGES, INCLUDING BUT NOT LIMITED TO: LOSS OF DATA, ERRORS IN AI OUTPUT LEADING TO FINANCIAL LOSS, SECURITY BREACHES RESULTING FROM MISCONFIGURED DOCKER CONTAINERS, OR UNAUTHORIZED ACCESS TO YOUR LOCAL NETWORK.
THE SOFTWARE IS PROVIDED "AS IS," WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED.
You agree to indemnify, defend, and hold harmless the Project Creator from any claims, liabilities, damages, losses, or expenses (including legal fees) arising out of your use of the Software, your violation of these Terms, or your deployment of the Software in a manner that violates applicable laws.
These Terms shall remain in effect until terminated. If you breach any provision of these Terms, your license to use the Software terminates automatically. Upon termination, you must cease all use of the Software and destroy all copies of the source code in your possession.
These Terms shall be governed by and construed in accordance with the laws of Great Britain, without regard to its conflict of law provisions.
Last Updated: April 4, 2026
Built with ❤️ for privacy enthusiasts and self-hosters.
"Messages in this chat are encrypted at rest in your browser storage when the profile is password-protected. Use the HTTPS LAN URL for encrypted transport in transit (HTTP fallback is not encrypted). I run locally on your machine."
Christopher is a self-hosted, multi-user cyberpunk AI chatbot powered by Ollama and Next.js. It brings the power of Large Language Models (LLMs) to your local network without sending a single byte of data to the cloud. Designed for privacy enthusiasts, developers, and home-labbers who demand sovereignty over their digital conversations.
Christopher is optimized to run on consumer hardware, but performance scales with your specs.
Before running Christopher, you must have Docker installed on your system. Choose the guide below for your operating system:
curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh
Add your user to the docker group:
sudo usermod -aG docker $USER (then log out and back in).
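Group changes only apply to new login sessions, which is easy to forget. A small sketch for checking whether the change has taken effect (the helper function is illustrative, not part of the project):

```shell
# Check whether the current session's group list includes "docker".
in_docker_group() {
  case " $1 " in
    *" docker "*) return 0 ;;
    *)            return 1 ;;
  esac
}

if in_docker_group "$(id -nG)"; then
  echo "docker group active -- docker works without sudo"
else
  echo "docker group not active yet -- log out and back in"
fi
```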
Open a terminal in the project folder and run the matching setup command for your operating system:
Linux/macOS: bash ./setup.sh
Linux/macOS visual loader: chmod +x ./setup-ui.sh then ./setup-ui.sh (use ./setup-ui.sh <host-ip> if interface detection misses your adapter)
Windows: .\setup.ps1
Windows visual loader: .\setup-ui.ps1
The first run may take several minutes while the AI model (~1.2 GB) downloads.
Watch progress: docker compose -p christopher logs -f ollama
Confirm model availability: docker compose -p christopher exec -T ollama ollama list
Ensure llama3.2:1b appears before first chat.
This project uses a simple reverse proxy so the app works immediately over your LAN IP.
The secure URL is https://<host-ip>:3001. The HTTP fallback http://<host-ip>:3002 is optional and not encrypted in transit. Open the secure LAN IP URL in your browser.
Example: https://<host-ip>:3001
Login: Create/select a profile with a required passphrase.
Example: Neo / CorrectHorseBatteryStaple
Profiles: You can create multiple profiles (e.g., "Work", "Personal", "Dev") to keep chat histories completely separate.
Use :3001 as the standard LAN access path. :3002 is a fallback only (for compatibility/troubleshooting) and is not encrypted in transit. Password-protected chat data is stored encrypted (.enc.json).

To stop the AI and free up resources:
docker compose down
(Your downloaded models and chat history are saved in Docker volumes and will persist when you start it again.)
To update Christopher:
docker compose pull
docker compose up -d --build
docker compose down -v
⚠️ Warning: Removes volumes (deletes all chats & models)
| Issue | Solution |
|---|---|
| "Connection Lost" | Ensure the christopher-ollama container is running (docker compose ps). Restart with docker compose restart. |
| Cannot Access via localhost | Use your machine's LAN IP address (e.g., 192.168.1.x) instead. Docker networking often requires binding to 0.0.0.0. |
| Very Slow Responses | Your computer might be using CPU instead of GPU. Ensure Docker has access to your NVIDIA GPU (requires NVIDIA Container Toolkit on Linux). |
| Port Conflict | If port 3001 or 3002 is already in use, remap 3001:443 and 3002:80 in docker-compose.yml for the caddy service. |
| Certificate Warning | Expected on first HTTPS load. Trust/import certs/server.crt on each client device, or proceed through the browser warning temporarily. |
| Model Not Loading | Run docker compose -p christopher exec -T ollama ollama pull llama3.2:1b, then retry. Also check disk space (5GB+ free). |
| Sidebar Missing on Mobile | Tap the ☰ icon in the top left to toggle the sidebar. |
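For the Port Conflict row, the remap happens on the host side of each mapping. A sketch of the relevant docker-compose.yml fragment (the caddy service name is taken from the table above; adjust if your compose file names the proxy service differently):

```yaml
# Fragment of docker-compose.yml (sketch) -- change only the left-hand (host)
# port numbers if 3001/3002 are already taken on your machine.
services:
  caddy:
    ports:
      - "4443:443"   # was "3001:443"; the app becomes https://<host-ip>:4443
      - "8080:80"    # was "3002:80"; HTTP fallback at http://<host-ip>:8080
```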