
Edge & Operations

🌐 The Glue Layer

I keep these tools close because they are the ones that make the rest of the lab actually usable.

They handle the less flashy jobs like routing, tunneling, filtering, certificates, and updates, which is usually where a pile of apps starts feeling more like a real platform.

🛡️ AdGuard Home

Network Filter

AdGuard Home is one of my favorite "small effort, instantly noticeable" tools. It makes the network feel cleaner right away, and it is also a nice excuse to document less glamorous details, like DNS listening on both TCP and UDP.

Config

This recipe defines a front network, separate volumes for config and working data, DNS on port 53 over both protocols, and a web UI on a dedicated management port.

Compose

name: adguardhome

volumes:
  adguard_conf:
    name: ${ADGUARDHOME_VOLUME_CONFIG:-adguard_config}
  adguard_work:
    name: ${ADGUARDHOME_VOLUME_WORK:-adguard_work}

networks:
  front:
    name: ${NETWORK_FRONT:-front}
    external: true

services:
  adguardhome:
    container_name: adguardhome
    image: adguard/adguardhome:${ADGUARDHOME_VERSION:-latest}
    networks:
      - front
    ports:
      - 53:53/tcp
      - 53:53/udp
      - ${ADGUARDHOME_PORT:-3000}:3000/tcp
    volumes:
      - adguard_conf:/opt/adguardhome/conf
      - adguard_work:/opt/adguardhome/work
    cpus: 1
    mem_limit: 500mb
    mem_reservation: 20mb
    restart: always

Example env

NETWORK_FRONT=front
ADGUARDHOME_VERSION=latest
ADGUARDHOME_PORT=3000
ADGUARDHOME_VOLUME_CONFIG=adguard_config
ADGUARDHOME_VOLUME_WORK=adguard_work

Vars

| Variable | Purpose | Why it matters |
| --- | --- | --- |
| ADGUARDHOME_VERSION | Image tag | Version control |
| ADGUARDHOME_PORT | Web UI port | Admin access |
| ADGUARDHOME_VOLUME_CONFIG | Config volume | Keeps settings |
| ADGUARDHOME_VOLUME_WORK | Working-data volume | Stores runtime data |
| NETWORK_FRONT | Frontend network | Lets it sit with other edge services |
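If other containers should resolve through the filter, Compose's `dns:` option can point them at it. This is only a sketch: the service name and the address are placeholders for whatever actually runs AdGuard Home in your setup.

```yaml
# Sketch: make another compose service resolve names through AdGuard Home.
# "someapp" and the address are assumptions; use the host running AdGuard.
services:
  someapp:
    image: alpine:${ALPINE_VERSION:-latest}
    dns:
      - 192.168.1.10   # LAN address of the AdGuard Home host (example)
```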

☁️ Cloudflared Tunnel

Private Tunnel

Cloudflared is what I use when I want something reachable from outside without turning my router into a public invitation. It is simple, practical, and usually much nicer than poking holes in the network.

Config

This is a tiny, single-purpose recipe: run the tunnel client, pass the token, and keep resource usage modest.

Compose

name: cloudflared

services:
  cloudflared:
    container_name: cloudflared
    image: cloudflare/cloudflared:${CLOUDFLARED_VERSION:-latest}
    command: tunnel --no-autoupdate run
    environment:
      TUNNEL_TOKEN: ${CLOUDFLARED_TOKEN:?CLOUDFLARED_TOKEN is required}
    cpus: 1
    mem_limit: 200mb
    restart: always

Example env

CLOUDFLARED_VERSION=latest
CLOUDFLARED_TOKEN=change-me

Vars

| Variable | Purpose | Why it matters |
| --- | --- | --- |
| CLOUDFLARED_VERSION | Image tag | Version control |
| CLOUDFLARED_TOKEN | Tunnel token | Authenticates the tunnel |
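The recipe above leaves cloudflared on the default Compose network. If the tunnel's ingress rules should reach other edge services by container name, one hedged variation is to attach it to the same front network the rest of this page uses:

```yaml
# Sketch: join cloudflared to the shared front network so tunnel ingress
# rules can target services such as "nginxpm" by container name.
services:
  cloudflared:
    networks:
      - front

networks:
  front:
    name: ${NETWORK_FRONT:-front}
    external: true
```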

🚪 Nginx Proxy Manager

GUI Proxy

Nginx Proxy Manager is the one I use when I want routing and certificates without disappearing into config files for half the afternoon. It is friendly, quick, and good for getting services in front of people fast.

Config

This recipe exposes the admin UI on 81, HTTP on 80, HTTPS on 443, and stores app state and Let's Encrypt material in separate named volumes.

Compose

name: nginxpm

volumes:
  nginxpm:
    name: ${NGINXPM_VOLUME:-nginxpm}
  letsencrypt:
    name: ${LETSENCRYPT_VOLUME:-letsencrypt}

networks:
  front:
    name: ${NETWORK_FRONT:-front}

services:
  nginxpm:
    container_name: nginxpm
    image: jc21/nginx-proxy-manager:${NGINXPM_VERSION:-latest}
    networks:
      - front
    ports:
      - ${NGINXPM_PORT:-81}:81
      - 80:80
      - 443:443
    volumes:
      - nginxpm:/data
      - letsencrypt:/etc/letsencrypt
    mem_reservation: 96mb
    restart: always

Example env

NETWORK_FRONT=front
LETSENCRYPT_VOLUME=letsencrypt
NGINXPM_VERSION=latest
NGINXPM_PORT=81
NGINXPM_VOLUME=nginxpm

Vars

| Variable | Purpose | Why it matters |
| --- | --- | --- |
| NGINXPM_VERSION | Image tag | Version control |
| NGINXPM_PORT | Admin UI port | Usually 81 |
| NGINXPM_VOLUME | App volume | Stores proxy manager state |
| LETSENCRYPT_VOLUME | Certificate volume | Stores issued certs |
| NETWORK_FRONT | Frontend network | Lets proxied services join the same network |
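A quick way to see the front network in action: run a throwaway upstream on the same network, then add a proxy host in the UI that points at its container name. Everything below is illustrative only; neither the image nor the names are required by the recipe.

```yaml
# Sketch: a test upstream NPM can proxy by container name ("whoami") once
# both sit on the front network; whoami answers HTTP on port 80.
services:
  whoami:
    container_name: whoami
    image: traefik/whoami:${WHOAMI_VERSION:-latest}
    networks:
      - front

networks:
  front:
    name: ${NETWORK_FRONT:-front}
    external: true
```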

🚦 Traefik Proxy

Label-Driven Edge

Traefik is what I use when I want the proxy layer to feel more like infrastructure and less like manual clicking. I like it because it nudges me toward labels, automation, and cleaner repeatable setups.

Config

This is the most operations-heavy recipe in the folder: overlay network, host-mode ports, read-only Docker socket, ACME storage, deploy labels, and Cloudflare DNS challenge credentials through Docker secrets. It is built for Swarm-style deployment, which is part of what makes it interesting.

Compose

networks:
  proxy:
    name: ${NETWORK_PROXY:-proxy}
    driver: overlay
    attachable: true

volumes:
  letsencrypt:
    name: ${LETSENCRYPT_VOLUME:-letsencrypt}

services:
  traefik:
    image: traefik:${TRAEFIK_VERSION:-latest}
    networks:
      - proxy
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - letsencrypt:/letsencrypt
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.traefikdashboard-http.rule=Host(`traefik.localhost`)"
        - "traefik.http.routers.traefikdashboard-http.entrypoints=web"
        - "traefik.http.routers.traefikdashboard-http.service=api@internal"
        - "traefik.http.routers.traefikdashboard-https.rule=Host(`traefik.localhost`)"
        - "traefik.http.routers.traefikdashboard-https.entrypoints=websecure"
        - "traefik.http.routers.traefikdashboard-https.tls=true"
        - "traefik.http.routers.traefikdashboard-https.tls.certresolver=cloudflare"
        - "traefik.http.routers.traefikdashboard-https.service=api@internal"
        - "traefik.http.services.traefikdashboard.loadbalancer.server.port=8080"
    environment:
      TRAEFIK_ENTRYPOINTS_WEB_ADDRESS: ":80"
      TRAEFIK_ENTRYPOINTS_WEBSECURE_ADDRESS: ":443"
      TRAEFIK_PROVIDERS_SWARM_ENDPOINT: "unix:///var/run/docker.sock"
      TRAEFIK_PROVIDERS_SWARM_EXPOSEDBYDEFAULT: "false"
      TRAEFIK_PROVIDERS_SWARM_NETWORK: "${NETWORK_PROXY:-proxy}"
      TRAEFIK_LOG_LEVEL: "INFO"
      TRAEFIK_ACCESSLOG: "true"
      TRAEFIK_API_DASHBOARD: "true"
      TRAEFIK_API_INSECURE: "true"
      CF_DNS_API_TOKEN_FILE: "/run/secrets/cloudflare_api_token"
      TRAEFIK_CERTIFICATESRESOLVERS_CLOUDFLARE_ACME_DNSCHALLENGE: "true"
      TRAEFIK_CERTIFICATESRESOLVERS_CLOUDFLARE_ACME_DNSCHALLENGE_RESOLVERS: "1.1.1.1:53,8.8.8.8:53"
      TRAEFIK_CERTIFICATESRESOLVERS_CLOUDFLARE_ACME_DNSCHALLENGE_PROVIDER: "cloudflare"
      TRAEFIK_CERTIFICATESRESOLVERS_CLOUDFLARE_ACME_STORAGE: "/letsencrypt/traefik.json"
      TRAEFIK_CERTIFICATESRESOLVERS_CLOUDFLARE_ACME_EMAIL: "${CLOUDFLARE_EMAIL}"
    secrets:
      - cloudflare_api_token

secrets:
  cloudflare_api_token:
    name: ${SECRET_CLOUDFLARE_API_TOKEN}
    external: true

Example env

NETWORK_PROXY=proxy
LETSENCRYPT_VOLUME=letsencrypt
CLOUDFLARE_EMAIL="user@example.com"
SECRET_CLOUDFLARE_API_TOKEN=cloudflare-api-token
TRAEFIK_VERSION=latest

Vars

| Variable | Purpose | Why it matters |
| --- | --- | --- |
| NETWORK_PROXY | Overlay network | Shared routing network |
| LETSENCRYPT_VOLUME | ACME storage volume | Keeps certificate state |
| TRAEFIK_VERSION | Image tag | Version control |
| CLOUDFLARE_EMAIL | ACME account email | Used for certificate registration |
| SECRET_CLOUDFLARE_API_TOKEN | Docker secret name | Lets Traefik complete DNS challenges |
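Since the recipe sets `exposedByDefault` to false, each service has to opt in with labels. The sketch below shows roughly what a Swarm service would carry to be routed through this Traefik; the hostname and service name are placeholders, and it assumes the external secret already exists (e.g. created beforehand with `docker secret create`).

```yaml
# Sketch: label-driven opt-in for a Swarm service behind this Traefik.
# "whoami" and the hostname are placeholders; whoami serves on port 80.
services:
  whoami:
    image: traefik/whoami:latest
    networks:
      - proxy
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
        - "traefik.http.routers.whoami.entrypoints=websecure"
        - "traefik.http.routers.whoami.tls.certresolver=cloudflare"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"

networks:
  proxy:
    name: ${NETWORK_PROXY:-proxy}
    external: true
```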

🔄 Watchtower

Auto Updates

Watchtower is convenient enough that I keep coming back to it, even though it always comes with the same honest trade-off: easier updates now, less control later. That tension is part of why it belongs in the lab.

I am using the nickfedor image here because the old containrrr Watchtower image is deprecated.

Config

The container mounts the Docker socket so it can inspect and restart containers, then uses environment variables to define update windows, rolling restarts, cleanup behavior, and email notifications.

Compose

name: watchtower

services:
  watchtower:
    container_name: watchtower
    image: nickfedor/watchtower:${WATCHTOWER_VERSION:-latest}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      TZ: ${TZ:-Etc/UTC}
      # Quotes inside an interpolation default are kept as literal
      # characters, so the fallback cron must be unquoted here.
      WATCHTOWER_SCHEDULE: ${WATCHTOWER_SCHEDULE:-0 0 5 * * 6}
      WATCHTOWER_ROLLING_RESTART: ${WATCHTOWER_ROLLING_RESTART:-true}
      WATCHTOWER_TIMEOUT: ${WATCHTOWER_TIMEOUT:-30s}
      WATCHTOWER_CLEANUP: ${WATCHTOWER_CLEANUP:-true}
      WATCHTOWER_NO_STARTUP_MESSAGE: ${WATCHTOWER_NO_STARTUP_MESSAGE:-false}
      WATCHTOWER_NOTIFICATIONS_HOSTNAME: ${HOSTNAME:-new server}
      WATCHTOWER_NOTIFICATIONS: email
      WATCHTOWER_NOTIFICATION_EMAIL_TO: ${EMAIL_TO}
      WATCHTOWER_NOTIFICATION_EMAIL_FROM: ${EMAIL_FROM}
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER: ${EMAIL_HOST}
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PORT: ${EMAIL_PORT}
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_USER: ${EMAIL_USER}
      WATCHTOWER_NOTIFICATION_EMAIL_SERVER_PASSWORD: ${EMAIL_PASS}
    restart: always

Example env

TZ="Asia/Tokyo"
HOSTNAME=lab-server
WATCHTOWER_VERSION=latest
WATCHTOWER_SCHEDULE="0 0 5 * * 6"
WATCHTOWER_ROLLING_RESTART=true
WATCHTOWER_TIMEOUT=30s
WATCHTOWER_CLEANUP=true
WATCHTOWER_NO_STARTUP_MESSAGE=false
EMAIL_TO="user@example.com"
EMAIL_FROM="user@example.com"
EMAIL_HOST=smtp.gmail.com
EMAIL_PORT=587
EMAIL_USER="user@example.com"
EMAIL_PASS=app-password

Vars

| Variable | Purpose | Why it matters |
| --- | --- | --- |
| WATCHTOWER_VERSION | Image tag | Version control |
| WATCHTOWER_SCHEDULE | Six-field cron expression (seconds first) | Controls when updates happen |
| WATCHTOWER_ROLLING_RESTART | One-by-one restarts | Reduces blast radius a bit |
| WATCHTOWER_TIMEOUT | Stop timeout | Helps containers shut down cleanly |
| WATCHTOWER_CLEANUP | Remove old images | Saves disk space |
| WATCHTOWER_NO_STARTUP_MESSAGE | Startup notification behavior | Cuts noise if wanted |
| HOSTNAME | Notification label | Makes alerts easier to identify |
| EMAIL_TO / EMAIL_FROM / EMAIL_HOST / EMAIL_PORT / EMAIL_USER / EMAIL_PASS | Email settings | Lets updates report back |
| TZ | Timezone | Keeps schedules aligned locally |
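Part of managing the "less control later" trade-off is keeping fragile containers out of the update loop. Watchtower has long supported an opt-out label for that; I assume the nickfedor fork keeps it, and the service below is purely hypothetical.

```yaml
# Sketch: exclude a container from automatic updates with Watchtower's
# enable label ("database" is a hypothetical service you update by hand).
services:
  database:
    image: postgres:${POSTGRES_VERSION:-latest}
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
```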
