Your Local AI Companion

Experience the power of artificial intelligence, completely private and running locally on your hardware. No data collection. No runtime cloud dependency or call-home behavior. Just pure, unrestricted intelligence.

Service Transparency: when Christopher is provided as a service, baseline guardrails may be applied for legal and ethical compliance. Some perceived "censoring" can also come from the underlying model's training and alignment, not only the app layer.

Under The Hood

Christopher isn't a website you visit; it's a powerful engine you host. Click any card to learn more.

🧠

The Neural Engine

At the core runs Ollama, serving lightweight yet powerful Large Language Models.

Learn more →

📦

Containerized Deployment

Christopher lives inside a Docker container, isolating the AI environment from your host OS.

Learn more →

📡

Local Network Access

Christopher is designed to be opened directly at https://<host-ip>:3001 from any device on your LAN.

Learn more →

💻

Responsive Frontend

A lightweight, responsive web interface with instant chat updates, thread management, and mobile-friendly layout.

Learn more →

Quick Start

Fastest path: install Docker Desktop or Engine, run one setup command on your host, then open the secure LAN URL the script prints. The visual loaders are recommended for most users.

Download Docker Desktop

Linux (Ubuntu/Debian) install command: curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh
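After installing, a quick sanity check (a hypothetical snippet, not part of the project's scripts) confirms the docker CLI is on your PATH before you run setup:

```shell
# Post-install sanity check: is the docker CLI available?
if command -v docker >/dev/null 2>&1; then
  DOCKER_STATUS="installed: $(docker --version)"
else
  DOCKER_STATUS="missing"
fi
echo "Docker status: $DOCKER_STATUS"
```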

Navigate to the christopher-ai directory in your terminal and run the following commands:

🐧

Linux (Recommended)

Use the visual loader for a cleaner user experience while keeping logs available for developers.

chmod +x ./setup-ui.sh
./setup-ui.sh
🍎

macOS (Recommended)

Use the visual loader. If interface detection misses your adapter, pass the host IP explicitly.

chmod +x ./setup-ui.sh
./setup-ui.sh
# if needed:
./setup-ui.sh <host-ip>
🪟

Windows (Recommended)

Run the PowerShell visual loader. It keeps setup details in .setup-ui.log.

.\setup-ui.ps1

Why Choose Christopher?

🔒

100% Private

Your conversations never leave your local network. No telemetry, no tracking. No data harvesting, no model training.

⚡

Lightning Fast

Runs locally on your hardware for instant responses. No cloud round-trips.

*Completion speeds may vary based on hardware specifications.

👥

Multi-User

Create unlimited user profiles. Keep work and personal chats separate.

*Never share highly personal information with AI bots unless you are an advanced user or fully trust the deployment.

100% Offline

0 ⛏️Data Mined

0 🍪Cookies Crunched

0 Herculean Tech Giants

24/7 Available

Get Started

Set up Christopher locally on your host machine

Christopher is a local AI chatbot. Get started in a few simple steps.

1. Install Docker

Christopher runs locally on your host machine inside Docker.

  • Windows: install Docker Desktop for Windows.
  • macOS: install Docker Desktop for Mac.
  • Linux: install Docker Engine plus Compose, such as sudo apt install docker.io on Ubuntu.

2. Run the setup script from the project root

Use the matching command for your operating system. The script detects the host IP, starts the stack, creates certificates, and pulls llama3.2:1b if it is missing.

Linux/macOS:

bash ./setup.sh

Linux/macOS visual loader:

chmod +x ./setup-ui.sh
./setup-ui.sh

If auto-detect misses your network interface on macOS, pass the host IP manually:

./setup-ui.sh <host-ip>

Windows:

.\setup.ps1

Windows visual loader:

.\setup-ui.ps1

3. Wait for initialization

  • The first run may take a few minutes while the model downloads.
  • Check progress with docker compose -p christopher logs -f ollama.
  • Confirm model availability with docker compose -p christopher exec -T ollama ollama list.
  • Look for llama3.2:1b before your first chat.
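The waiting in step 3 can be scripted. Below is a sketch of the polling pattern with a stand-in fake_list function so it runs anywhere; in a real run you would pass a wrapper around docker compose -p christopher exec -T ollama ollama list instead:

```shell
# Sketch: poll a lister command until the default model shows up, with a retry cap.
wait_for_model() {
  lister=$1; max_tries=$2; tries=0
  while ! "$lister" | grep -q "llama3.2:1b"; do
    tries=$((tries + 1))
    [ "$tries" -ge "$max_tries" ] && return 1
    sleep 1
  done
  return 0
}

# Demo with a stand-in lister whose output already contains the model:
fake_list() { printf 'NAME            ID    SIZE\nllama3.2:1b     abcd  1.2 GB\n'; }
wait_for_model fake_list 5 && echo "model ready"
```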

4. Open Christopher in your browser

  • Open the secure LAN URL printed by setup: https://<host-ip>:3001
  • If needed for compatibility or troubleshooting, use the fallback HTTP URL: http://<host-ip>:3002
  • No DNS setup is required for the default install.
  • No hosts-file edits are required.
  • Browsers will warn on first use until the self-signed certificate is trusted.
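To see what the self-signed certificate mechanics look like, here is an illustrative example that generates and inspects a throwaway cert with openssl. The path /tmp/demo.crt and the IP 192.168.1.50 are placeholders, not the files the setup script actually writes:

```shell
# Generate a throwaway self-signed cert for a LAN IP, then read its subject back.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj "/CN=192.168.1.50"
CERT_SUBJECT=$(openssl x509 -in /tmp/demo.crt -noout -subject)
echo "$CERT_SUBJECT"
```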

5. Best practice

  • Keep the host machine powered on so other devices on the LAN can reach Christopher.
  • Use clear, specific prompts for better responses.
  • Response time depends on your hardware and available RAM.

The Neural Engine

Powered by Ollama & Open Source Models

At the heart of Christopher lies Ollama, a revolutionary tool that brings large language models (LLMs) to your local machine.

Running on Low-End Hardware

By default, Christopher is configured to run on low-end hardware, making it accessible to a wider audience.

When you run the setup scripts (setup.sh, setup.ps1, or visual loader variants), Christopher auto-pulls the default model (llama3.2:1b) if it is missing. These models are *quantized to 4-bit to fit into limited RAM (as low as 4-8 GB).

Despite their smaller size, these models are still powerful enough for general conversation, coding help, and creative tasks. The quantization process allows them to run efficiently on a laptop with no GPU and as little as 4-8 GB of RAM, without sacrificing too much accuracy.

Model Recency Notice: The default model (llama3.2:1b) has a published knowledge cutoff date of December 2023. For time-sensitive topics (news, regulations, security advisories, medical or legal guidance), always verify against current authoritative sources.

*What is Quantization?

Quantization reduces the precision of the model's weights from 16-bit floating point to 4-bit integers. This shrinks the model size by 75% with minimal loss in accuracy.
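The 75% figure follows from simple arithmetic, sketched here with illustrative numbers (real model files also contain embeddings, metadata, and some higher-precision layers):

```shell
# Back-of-envelope weight storage for a 1B-parameter model.
PARAMS=1000000000
FP16_MB=$((PARAMS * 2 / 1000000))   # 16-bit floats: 2 bytes per weight
Q4_MB=$((PARAMS / 2 / 1000000))     # 4-bit ints: 0.5 bytes per weight
echo "fp16: ${FP16_MB} MB -> q4: ${Q4_MB} MB ($(( (FP16_MB - Q4_MB) * 100 / FP16_MB ))% smaller)"
```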

Visit Ollama Official Site ↗

Containerized Deployment

Isolation, Stability, and Ease of Use

Christopher lives inside a Docker container 🐳. This isolates the AI environment from your host OS, ensuring stability.

Why Docker🐳?

  • 🛅Dependency Isolation: Christopher brings its own Python, Node.js, and system libraries.
  • 🗝️Security: Docker provides an additional layer of security by isolating the AI environment from the host system.
  • 🤓Ease of Use: 🐳 Docker simplifies the deployment and management of Christopher, making it accessible to users without deep technical knowledge.
  • 🆕Reproducibility: The source distribution of Christopher from this site or GitHub runs consistently on any machine with Docker installed.
  • 🗑️Clean Uninstall: Want to remove Christopher? Just remove the container.
Advanced Features:

  • ⚙️Advanced User Control: Docker lets you adjust GPU, CPU, and RAM allocation as needed. Tune these settings to match your hardware capabilities.
  • 🚀Portability: Move the container to another machine without complex setup.
  • ↗️Ports: Docker allows you to map ports for external access.
    *Note: Exposing ports can introduce security risks. Always ensure proper firewall settings and access controls are in place.
  • ♾️Connectivity: Docker enables seamless integration with other services and tools. The creator of Christopher is passionate about open source🔓; the Docker folder and files are yours to modify.♥️
    *Note: Docker is a powerful tool, but it may have a learning curve for those new to containerization. The creator of Christopher is available to help with any questions or issues you may encounter.
~ Always keep yourself, your information, and your data safe. ~
Visit Docker Official Site 🐳↗
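The ⚙️Advanced User Control point above can be expressed as a compose override. This is a sketch with example values, not the project's shipped configuration; the GPU reservation additionally requires the NVIDIA Container Toolkit on Linux:

```yaml
# docker-compose.override.yml sketch: resource tuning for the ollama service.
# Values are examples; adjust to your hardware.
services:
  ollama:
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: 8g
        reservations:
          devices:
            - driver: nvidia      # requires NVIDIA Container Toolkit on Linux
              count: 1
              capabilities: [gpu]
```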

Local Network Architecture

Your Private Intranet AI

Christopher is a Local Area Network (LAN) service. It runs on your own hardware and is meant to be opened directly from the Host or other devices on the same network.

After setup, the default way to connect is the secure LAN URL printed by the installer, such as https://<host-ip>:3001. No DNS records or host file edits are required for the basic deployment.

Critical Requirement: Host Must Be On

Because the "brain" (Ollama) and the "server" (Next.js) are running on your specific host device, that device must be powered on for you to chat. If the host sleeps, loses power, or drops off the network, Christopher becomes unavailable to every connected client.

Simple LAN Access

  • Open Christopher from phones, tablets, laptops, or another desktop on the same Wi-Fi.
  • Use the IP address shown by the setup script; it is the one source of truth for the default install.
  • Port 3001 is intentionally exposed to keep the browser experience simple and consistent.

That design keeps the project approachable for people who just want to install, open one address, and start chatting without any extra network plumbing.
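If you ever need the host IP without rerunning setup, one Linux approach looks like this (the setup script automates this step; tooling differs on macOS and Windows):

```shell
# Print the first LAN address reported by the host, with a loopback fallback.
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
HOST_IP=${HOST_IP:-127.0.0.1}   # fallback so the URL still prints
echo "Open: https://${HOST_IP}:3001"
```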

Future flexibility: If you later want a custom hostname or HTTPS certificate trust workflow, that can be layered on top as an advanced deployment path.

The Frontend Interface

Built with Next.js & React

The face of Christopher is a modern, responsive web application built using Next.js. It is optimized for quick load times, clean interaction, and a simple chat-first workflow.

Key Technologies

  • React: Updates the chat window instantly without reloading the page and keeps the interface responsive while messages stream.
  • Next.js: Handles the browser experience, routes, and local API integration so the app feels like a single coherent product.
  • Tailwind CSS: Used for the cyberpunk styling, spacing, and responsive layout behavior across screens.

Frontend Experience

  • Profile-based chat history keeps conversations separated and easy to return to.
  • Thread controls let you split work, personal, and experimental chats cleanly.
  • The layout is tuned for desktop first, but still behaves well on smaller screens and tablets.
  • Simple forms and buttons keep the onboarding flow approachable for non-technical users.

If you want to contribute to the user experience, we especially welcome ideas that make the interface feel more native, faster, or easier to use on touch devices.

1. PREAMBLE

These Terms and Conditions ("Terms") govern your use of the Christopher AI software ("the Software"), a self-hosted, local-area-network (LAN) artificial intelligence chatbot powered by Ollama and Next.js. By downloading, installing, or using the Software, you agree to be bound by these Terms.

IF YOU DO NOT AGREE TO THESE TERMS, DO NOT USE THE SOFTWARE.

2. LICENSE GRANT (OPEN SOURCE)

License: AGPL v3

Subject to your compliance with these Terms, the Project Creator grants you a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under the Project Creator's copyright to:

  1. Use: Run the Software for personal, educational, or commercial purposes.
  2. Modify: Alter the source code to suit your needs.
  3. Distribute: Copy and distribute the Software or modified versions.
  4. Sublicense: Grant sublicenses to others.

Base License: This Software is licensed under the GNU Affero General Public License v3.0 (AGPL v3). This license ensures that any modifications or derivative works must also be released under the same license, promoting full transparency and community sharing.

View the full AGPL v3 license text

3. DEFINITIONS

  • "Host Device": The physical machine (server, laptop, desktop) where the Docker container is running.
  • "User": Any individual accessing the Christopher AI interface via a web browser on the local network.
  • "AI Output": Any text, code, image, or data generated by the underlying Large Language Model (LLM) (e.g., Llama 3) running within the container.
  • "Local Storage": Data stored in the browser's localStorage or Docker volumes on the Host Device.
  • "Project Creator": The original author(s) and contributors of the Christopher AI source code.

4. NATURE OF THE SERVICE & LOCAL DEPLOYMENT

4.1 Self-Hosting Responsibility

The Software is designed to run locally on your infrastructure. You are solely responsible for:

  • Providing the hardware (CPU, RAM, GPU) required to run the models.
  • Maintaining the security of your Host Device and network.
  • Ensuring the Host Device is powered on and connected to the network for the service to function.
  • Managing Docker container updates, security patches, and network configurations (ports, firewalls).

4.2 No Ongoing Runtime Cloud Dependency

Christopher AI does not call home or require continuous cloud services during normal runtime. All processing occurs on your Host Device. Setup-time downloads (for example model weights and package dependencies) may still occur from third-party repositories (e.g., HuggingFace, Ollama) during installation or updates. You are responsible for the security of these downloads.

5. DATA PRIVACY & IMPERMANENCE

⚠️ CRITICAL WARNING: VOLATILE DATA STORAGE

Chat history is stored exclusively in the browser's localStorage of the client device. If you clear your browser cache, switch browsers, use Incognito/Private mode, or lose the device, all chat history will be permanently lost. The Software does not provide a centralized database backup mechanism by default.

  • No Data Collection: The Project Creator does not collect, track, or store any user data, chat logs, or usage statistics. The Software contains no telemetry.
  • Network Security: Since the Software runs on your local network, you are responsible for securing your network against unauthorized access. The Project Creator accepts no liability for data exposure resulting from compromised local networks.

6. ARTIFICIAL INTELLIGENCE OUTPUT DISCLAIMER

6.1 No Professional Advice

The AI models used by Christopher AI are probabilistic engines. They are not experts in law, medicine, finance, or safety-critical fields. Do not rely on AI Output for medical diagnoses, legal advice, financial decisions, or safety-critical operations.

6.2 Guardrails, Law, and Model Behavior

When Christopher is offered as a service, the Project Creator may apply baseline safety guardrails so operation stays within legal and ethical boundaries. These guardrails are a compliance measure, not a claim of perfect moderation.

Perceived "censoring" can also come from the model itself. Many response limits are inherited from the underlying model's training data, alignment process, and provider-level policies, and do not always originate from Christopher's application layer.

  • Hallucinations: AI models may generate factually incorrect, misleading, or nonsensical information ("hallucinations"). You acknowledge that the Software may produce errors and agree to verify all critical information independently.
  • Content Moderation: Christopher may include baseline safety instructions at the application layer, but output boundaries are still heavily influenced by the selected model. Depending on model configuration and training, the AI may still generate offensive, biased, or inappropriate content. You remain responsible for configuring deployment behavior to align with local laws and your ethical standards.

7. INTELLECTUAL PROPERTY & MODELS

  • Software Code: The source code of Christopher AI is open source under AGPL v3.
  • Third-Party Models: The underlying AI models (e.g., Llama 3, Mistral) used by the Software are subject to their own separate licenses (e.g., Meta's Llama Community License). You must comply with the terms of the specific model you choose to run.
  • Trademark: "Christopher AI" is used descriptively. No trademark claims are made. You may fork and rename the project, but should not use the name to confuse users about the origin of derivative works.

8. LIMITATION OF LIABILITY

TO THE MAXIMUM EXTENT PERMITTED BY LAW:

THE PROJECT CREATOR SHALL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, OR PUNITIVE DAMAGES, INCLUDING BUT NOT LIMITED TO: LOSS OF DATA, ERRORS IN AI OUTPUT LEADING TO FINANCIAL LOSS, SECURITY BREACHES RESULTING FROM MISCONFIGURED DOCKER CONTAINERS, OR UNAUTHORIZED ACCESS TO YOUR LOCAL NETWORK.

THE SOFTWARE IS PROVIDED "AS IS," WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED.

9. INDEMNIFICATION

You agree to indemnify, defend, and hold harmless the Project Creator from any claims, liabilities, damages, losses, or expenses (including legal fees) arising out of your use of the Software, your violation of these Terms, or your deployment of the Software in a manner that violates applicable laws.

10. TERMINATION

These Terms shall remain in effect until terminated. If you breach any provision of these Terms, your license to use the Software terminates automatically. Upon termination, you must cease all use of the Software and destroy all copies of the source code in your possession.

11. GOVERNING LAW

These Terms shall be governed by and construed in accordance with the laws of Great Britain, without regard to its conflict of law provisions.

Last Updated: April 4, 2026

Built with ❤️ for privacy enthusiasts and self-hosters.

🌑 CHRISTOPHER AI - Neural Interface

"Messages in this chat are encrypted at rest in your browser storage when the profile is password-protected. Use the HTTPS LAN URL for encrypted transport in transit (HTTP fallback is not encrypted). I run locally on your machine."

Christopher is a self-hosted, multi-user cyberpunk AI chatbot powered by Ollama and Next.js. It brings the power of Large Language Models (LLMs) to your local network without sending a single byte of data to the cloud. Designed for privacy enthusiasts, developers, and home-labbers who demand sovereignty over their digital conversations.

🖥️ System Requirements

Christopher is optimized to run on consumer hardware, but performance scales with your specs.

Minimum Specs (Basic Chat)

  • CPU: 4 Cores (Intel i5 / AMD Ryzen 5 or equivalent)
  • RAM: 8 GB (System will use swap if lower, but may be slow)
  • Storage: 10 GB Free Space (for Docker images + Model files)
  • OS: Windows 10/11, macOS 12+, or Linux (Ubuntu/Debian)
  • Software: Docker Desktop (Windows/Mac) or Docker Engine + Compose (Linux)

Recommended Specs (Fast & Smooth)

  • CPU: 6+ Cores
  • RAM: 16 GB or more
  • GPU: NVIDIA RTX 3060 (6GB VRAM) or better (Optional: Enables GPU acceleration in Docker)
  • Storage: SSD/NVMe (Significantly faster model loading times)
⚠️ Note: This project runs the AI model locally on your machine. Performance depends directly on your hardware. No data is sent to the cloud.

🚀 Installation

Prerequisites: Installing Docker

Before running Christopher, you must have Docker installed on your system. Choose the guide below for your operating system:

  • Windows: Download and install Docker Desktop for Windows. Ensure "WSL 2 backend" is enabled during installation for best performance.
  • macOS: Download and install Docker Desktop for Mac. Select the appropriate version for your chip (Intel or Apple Silicon).
  • Linux (Ubuntu/Debian): Open a terminal and run the official installation script:
    curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh
    Add your user to the docker group: sudo usermod -aG docker $USER (then log out and back in).

1. Clone & Start

Open a terminal in the project folder and run the matching setup command for your operating system:

  • Linux/macOS: bash ./setup.sh
  • Linux/macOS visual loader: chmod +x ./setup-ui.sh then ./setup-ui.sh
  • macOS manual fallback: ./setup-ui.sh <host-ip> if interface detection misses your adapter
  • Windows: .\setup.ps1
  • Windows visual loader: .\setup-ui.ps1

2. Wait for Initialization

The first run may take several minutes while the AI model (~1.2 GB) downloads.

Watch progress: docker compose -p christopher logs -f ollama

Confirm model availability: docker compose -p christopher exec -T ollama ollama list

Ensure llama3.2:1b appears before first chat.

3. Launch the App

This project uses a simple reverse proxy so the app works immediately over your LAN IP.

  1. Run the matching setup command for your OS from the project root.
  2. No DNS setup is required for the default install.
  3. Open the secure LAN URL printed by the setup script, such as https://<host-ip>:3001.
Notes:
  • The default install uses HTTPS on your LAN IP for encrypted transport in transit.
  • HTTP fallback on http://<host-ip>:3002 is optional and not encrypted in transit.
  • No DNS setup is required for this secure default path.
  • The setup script generates a self-signed certificate for the host LAN IP.
  • The setup script auto-pulls the default model if it is missing.
  • Browsers may warn until each client device trusts that certificate.

4. Launch Interface

Open the secure LAN IP URL in your browser.

Example: https://<host-ip>:3001

5. Cross-Device Use

  • Works across Windows, macOS, Linux, Android, and iOS clients on the same LAN.
  • Only the host machine needs Docker; client devices only need a browser.
  • Each client may need to trust the generated certificate on first use.

🔐 First Time Setup

User Accounts

Login: Create/select a profile with a required passphrase.
Example: Neo / CorrectHorseBatteryStaple

Note: There is no central database. Profile metadata is stored in browser localStorage.

Profiles: You can create multiple profiles (e.g., "Work", "Personal", "Dev") to keep chat histories completely separate.

Performance Expectations

  • Cold Start: The very first message may take 10-20 seconds to generate while the model loads into RAM.
  • Streaming: Subsequent messages will stream instantly (tokens per second depends on CPU/GPU).

⚠️ Data Privacy & Impermanence

  • Encrypted at Rest: Chat history is encrypted in browser storage and protected by your profile password.
  • Secure-by-Default Transport: Use HTTPS on :3001 as the standard LAN access path.
  • Fallback-Only Compatibility Path: HTTP on :3002 is fallback only for compatibility/troubleshooting and is not encrypted in transit.
  • Volatility: If you clear browser storage, switch devices, or use Incognito mode, history can be lost.
  • No Recovery: If a profile password is forgotten, encrypted chat history cannot be recovered.
  • Backup: Exports from password-protected profiles are encrypted (.enc.json).
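To illustrate what password-protected, encrypted-at-rest exports mean in practice, here is a roundtrip with openssl. This is an analogy only; the app's actual .enc.json format and cipher choices may differ:

```shell
# Encrypt a toy chat export with a passphrase, decrypt it, and verify the roundtrip.
printf '{"thread":"demo","messages":[]}' > /tmp/chat.json
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:CorrectHorseBatteryStaple \
  -in /tmp/chat.json -out /tmp/chat.enc.json
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:CorrectHorseBatteryStaple \
  -in /tmp/chat.enc.json -out /tmp/chat.dec.json
cmp -s /tmp/chat.json /tmp/chat.dec.json && echo "roundtrip OK"
```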

✨ Features

  • 🔒 100% Offline: No API keys, no cloud subscriptions, no telemetry.
  • 👥 Multi-User: Different usernames create completely separate chat histories.
  • 🛡️ Security-Forward Profiles: Password-protected profiles keep local chat storage encrypted at rest.
  • 📶 Transport Awareness Badge: Runtime header indicator shows secure HTTPS mode vs HTTP fallback mode.
  • 🧵 Multi-Thread: Click the + in the sidebar to start new topics (e.g., "Coding", "Creative Writing", "Recipes").
  • ✏️ Rename Threads: Double-click any thread name in the sidebar to rename it.
  • 🎨 Cyberpunk UI: Fully responsive, dark-mode, neon-themed interface optimized for low-light environments.
  • 📱 Mobile Ready: Access from any device on your local network (phone, tablet, laptop) using the host IP.

🛑 Maintenance

Stop the Service

To stop the AI and free up resources:

docker compose -p christopher down

(Your downloaded models persist in Docker volumes and will still be there when you start it again. Chat history lives in each browser's localStorage, not in the containers, so it is unaffected either way.)

Update Christopher

docker compose -p christopher pull
docker compose -p christopher up -d --build

Remove Completely

docker compose -p christopher down -v

⚠️ Warning: Removes volumes (deletes all chats & models)

🆘 Troubleshooting

  • "Connection Lost": Ensure the christopher-ollama container is running (docker compose -p christopher ps). Restart with docker compose -p christopher restart.
  • Cannot access via localhost: Use your machine's LAN IP address (e.g., 192.168.1.x) instead. Docker networking often requires binding to 0.0.0.0.
  • Very slow responses: Your computer might be using the CPU instead of the GPU. Ensure Docker has access to your NVIDIA GPU (requires the NVIDIA Container Toolkit on Linux).
  • Port conflict: If port 3001 or 3002 is already in use, remap 3001:443 and 3002:80 in docker-compose.yml for the caddy service.
  • Certificate warning: Expected on first HTTPS load. Trust/import certs/server.crt on each client device, or proceed through the browser warning temporarily.
  • Model not loading: Run docker compose -p christopher exec -T ollama ollama pull llama3.2:1b, then retry. Also check disk space (5 GB+ free).
  • Sidebar missing on mobile: Tap the icon in the top left to toggle the sidebar.
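The port-conflict fix can live in a docker-compose.override.yml so the shipped file stays untouched. A sketch, assuming the caddy service name from the Architecture section:

```yaml
# docker-compose.override.yml sketch: move Christopher off 3001/3002 if they clash.
# Internal container ports stay 443/80; only the host side changes.
services:
  caddy:
    ports:
      - "8443:443"   # HTTPS, replaces 3001:443
      - "8080:80"    # HTTP fallback, replaces 3002:80
```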

🏗️ Architecture

  • Frontend: Next.js 14, React, Tailwind CSS
  • AI Engine: Ollama (serving quantized Llama 3.2 1B/3B models)
  • Deployment: Docker Compose + Caddy (LAN HTTPS reverse proxy)
  • Storage: Browser localStorage (Client-side) + Docker Volumes (Server-side models)
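The Caddy piece of this stack can be pictured with a minimal Caddyfile sketch. The upstream name app, port 3000, and cert paths here are assumptions for illustration, not the shipped config:

```
# Caddyfile sketch: terminate TLS with the generated self-signed cert
# and proxy both listeners to the Next.js app container.
:443 {
    tls /etc/caddy/certs/server.crt /etc/caddy/certs/server.key
    reverse_proxy app:3000
}
:80 {
    reverse_proxy app:3000
}
```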

Built with ❤️ for privacy enthusiasts and self-hosters.