Docker Lab

πŸ“¦ The Point

This section turns the Compose files I use into something more useful than a pile of YAML.

I grouped the stacks by job instead of dumping one page per file. That makes the section easier to browse while still showing the parts I actually care about: persistence, shared networks, reverse proxies, background workers, GPU access, and the eternal question of which environment variables matter.

Recipes with opinions

These pages are not trying to be the one correct way to run each service.

They are the way I have been testing, using, and thinking about them in my own lab, with the trade-offs left in on purpose.


🐳 Why Docker

I do not run everything in Docker. My homelab is a mix of Docker, VMs, LXCs... and I like it that way.

Docker is the tool I reach for when I want something easy to package, easy to move, and easy to rebuild without littering the host with random dependencies. It gives me a clean middle ground between "install it directly and hope I remember what I changed" and "spin up a whole machine for one service."

If I want stronger isolation, a different OS, or a more appliance-like setup, I will use a VM or an LXC instead. If I just want to run an app cleanly and keep the setup reproducible, Docker usually wins.

βš–οΈ Host vs VM vs LXC vs Docker

Running directly on the host is sometimes the lightest option, but it gets messy fast. VMs are great when I want a stronger boundary. LXCs are a nice middle option when I want something system-like without the full weight of a VM.

Docker is where I land most often for app-style services. The app stays contained, the data lives in volumes or bind mounts, and the host stays much less cursed.

☸️ Why Not Kubernetes

Kubernetes is powerful, but for my personal lab it is usually more ceremony than I want. Most of the time I do not need a control plane and a pile of manifests just to run a wiki, a database, or a media service.

Docker Compose hits a sweet spot for me. It is easier to read, easier to debug, and easier to explain. If I outgrow it later, fine. But I would rather spend that complexity budget on services I actually care about than on orchestration for its own sake.
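To make the "easier to read" claim concrete, here is a minimal sketch of what a whole Compose stack can look like. The service name, image, and port are placeholder assumptions, not one of the actual stacks from this section:

```yaml
# Hypothetical minimal stack: one service, one volume, no control plane.
services:
  app:
    image: nginx:alpine        # stand-in image; swap in the real service
    restart: unless-stopped
    ports:
      - "8080:80"              # host:container
    volumes:
      - app_data:/usr/share/nginx/html

volumes:
  app_data:                    # named volume survives container recreation
```

One file, one `docker compose up -d`, and the equivalent Kubernetes setup would need a Deployment, a Service, and a PersistentVolumeClaim at minimum.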

πŸ” Trade-Offs

Docker is not magic, and I do not treat it like a security blanket. Containers are not the same thing as full virtualization, and some workloads still deserve stronger isolation, such as databases or my AI agents.

For my homelab, though, it is a really nice balance: cleaner than installing everything straight on the host, usually lighter than a full VM for app workloads, and much easier than going full Kubernetes. It is relatively lightweight, but images, volumes, logs, and helper containers still add up.

| Option | Setup effort | Isolation | Resource use | Portability | Day-to-day upkeep | Good for |
|---|---|---|---|---|---|---|
| Host install | Low at first | Low | Lowest | Low | Can get messy fast | Tiny native tools, one-off services |
| Docker | Low to medium | Medium | Low to medium | High | Pretty friendly | Most app-style services |
| LXC | Medium | Medium to high | Low to medium | Medium | Fairly manageable | Lightweight system-style services |
| VM | Medium to high | High | High | Medium | Heavier, but predictable | Different OSes, stronger boundaries |
| Kubernetes | High | Medium to high | Medium to high | High | Highest | Bigger clusters, orchestration-heavy setups |

This is not meant to be universal truth. It is just the trade-off map I keep in my head when deciding where a new service should live.


🧭 How To Read These Pages

Each tool page follows the same basic pattern: a short intro, a Config section, the Compose file itself, an Example env, and a Vars table explaining the knobs.

Once you have read one, the rest should feel familiar.
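The "Example env" and "Vars" pairing works through Compose's standard `${VAR}` interpolation: values in a sibling `.env` file get substituted into the Compose file at parse time. A sketch with made-up variable names (the real knobs vary per tool page):

```yaml
# compose.yaml fragment — values come from a sibling .env file
services:
  app:
    image: nginx:${NGINX_TAG:-alpine}   # ':-' supplies a default if the var is unset
    ports:
      - "${APP_PORT:-8080}:80"
```

With a `.env` containing `NGINX_TAG=1.27-alpine` and `APP_PORT=8081`, the same file runs unchanged on a different host with different values, which is exactly why the Vars tables exist.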

These are recipes, not universal best practices

Most of these files are meant as practical starting points for a home lab or self-hosted stack.

They are intentionally readable and reusable. A few would need extra hardening before I would treat them as production-ready on the open internet.

πŸ—‚οΈ Pages In This Section

Data Stores

PostgreSQL, Redis, and CouchDB. The foundation pieces that many other stacks depend on.

Apps & Collaboration

Nextcloud, Gitea, Wiki.js, and Authentik. These are the "real app" building blocks.

AI & Automation

Ollama, Open WebUI, ComfyUI, and n8n. Local model serving, chat frontends, GPU workloads, and automation glue.

Media & Archives

ArchiveBox, Kavita, Komga, JDownloader 2, and qBittorrent. Useful when the workload is libraries, downloads, or long-term saving.

Edge & Operations

AdGuard Home, Cloudflared, Nginx Proxy Manager, Traefik, and Watchtower. The glue that makes services reachable and maintainable.

πŸ” Concepts

A few patterns show up again and again in these files:

| Pattern | Why it shows up |
|---|---|
| Named volumes | Keep data alive when containers are recreated |
| Environment variables | Make the same file reusable across hosts |
| External networks | Let stacks talk to a shared proxy or backend service |
| Health checks | Help tell "started" from "actually ready" |
| Separate worker or cron containers | Keep background jobs away from the main app process |
| Bind mounts | Give containers direct access to real files on disk |

That is where most of the learning value lives, so these pages focus more on those patterns than on "here is an image and a port."
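Most of those patterns can coexist in one small file. The sketch below is hypothetical (service names, paths, and the `proxy` network name are assumptions, not taken from any specific stack in this section), but each annotated line maps to one row of the table above:

```yaml
# Hypothetical stack showing the recurring patterns side by side.
services:
  app:
    image: nginx:alpine                 # stand-in for the real app image
    networks: [proxy]                   # external network: a shared reverse proxy can reach it
    volumes:
      - app_data:/data                  # named volume: survives container recreation
      - ./config:/etc/app:ro            # bind mount: real files on disk, read-only here
    healthcheck:                        # "actually ready", not just "started"
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:80/"]
      interval: 30s
      timeout: 5s
      retries: 3

  worker:
    image: nginx:alpine                 # stand-in; same image with a different command is common
    command: ["sleep", "infinity"]      # placeholder for a queue or cron worker process
    depends_on:
      app:
        condition: service_healthy      # gate on the healthcheck, not just container start

volumes:
  app_data:

networks:
  proxy:
    external: true                      # created once, shared by multiple stacks
```

The `worker` service is the "separate worker or cron container" pattern: same codebase, different process, so a stuck background job cannot take down the web-facing container.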
