AI & Automation

🤖 The Fun Heavy Stuff

I keep these tools around because this is where the lab gets a bit more interesting and a bit more demanding.

Once GPUs, model storage, local inference, and creative workflows show up, the setup stops being "just run a web app" and starts needing a little more thought.


🦙 Ollama


Local LLM

Ollama is the piece I use when I want local models to feel like infrastructure instead of a one-off experiment. It gives me a stable endpoint for the rest of the stack without sending prompts off to somebody else's API.

Config

This recipe exposes the default API port, mounts model storage at /root/.ollama, and requests NVIDIA GPU access through Compose deploy resources (which needs the NVIDIA Container Toolkit installed on the host). The persistent model path is the important bit: without it, every rebuild turns into another large download.

Compose

```yaml
name: ollama

services:
  ollama:
    image: ollama/ollama:${OLLAMA_VERSION:-latest}
    container_name: ollama
    ports:
      - ${OLLAMA_PORT:-11434}:11434
    volumes:
      - ${OLLAMA_MODEL_PATH}:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped
```

Example env

```env
OLLAMA_VERSION=latest
OLLAMA_PORT=11434
OLLAMA_MODEL_PATH="/srv/ollama"
```

Vars

| Variable | Purpose | Why it matters |
| --- | --- | --- |
| OLLAMA_VERSION | Image tag | Helpful when runtime changes affect behavior |
| OLLAMA_PORT | API port | Other apps use this to connect |
| OLLAMA_MODEL_PATH | Model storage path | Keeps pulled models between restarts |
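Every recipe on this page leans on Compose's `${VAR:-default}` interpolation, so it is worth seeing what that expansion actually does. A quick shell sketch of the same semantics (the curl target at the end assumes the stack runs on localhost with the default port):

```shell
# Compose-style defaulting: take the env value if it is set and non-empty,
# otherwise fall back to the literal after ':-'.
unset OLLAMA_PORT
port="${OLLAMA_PORT:-11434}"
echo "$port"   # 11434 -- the fallback

OLLAMA_PORT=11500
port="${OLLAMA_PORT:-11434}"
echo "$port"   # 11500 -- the explicit value wins

# Once the container is up, list pulled models to confirm the API answers:
# curl http://localhost:11434/api/tags
```

Note that OLLAMA_MODEL_PATH deliberately has no fallback: leaving it unset fails the deploy instead of silently binding an empty path.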

💬 Open WebUI


Chat Frontend

Open WebUI is what makes the local model stack feel usable by actual humans. I like it because it turns a raw inference endpoint into something approachable, and it gets even better once auth, Postgres, and Redis are all wired in properly.

Config

This file uses PostgreSQL for durable app data, Redis for shared runtime state, a persistent data mount for local app files, and OIDC settings for Authentik. It reads much more like a real platform service than a throwaway demo container.

Compose

```yaml
name: openwebui

services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:${OPENWEBUI_VERSION:-main-slim}
    container_name: openwebui
    ports:
      - ${OPENWEBUI_PORT:-3000}:8080
    environment:
      DATABASE_URL: "postgresql://${OPENWEBUI_PG_USER}:${OPENWEBUI_PG_PASS}@${POSTGRESQL_HOST}:${POSTGRESQL_PORT:-5432}/${OPENWEBUI_PG_DB}"
      REDIS_URL: "redis://:${REDIS_PASS}@${REDIS_HOST}:${REDIS_PORT:-6379}/${REDIS_DBID}"
      ENABLE_DB_MIGRATIONS: "true"
      DATABASE_ENABLE_SESSION_SHARING: "false"
      OAUTH_CLIENT_ID: ${OPENWEBUI_OAUTH_CLIENT_ID}
      OAUTH_CLIENT_SECRET: ${OPENWEBUI_OAUTH_CLIENT_SECRET}
      OAUTH_PROVIDER_NAME: ${OPENWEBUI_OAUTH_PROVIDER_NAME}
      OPENID_PROVIDER_URL: ${OPENWEBUI_OPENID_PROVIDER_URL}
      OPENID_REDIRECT_URI: ${OPENWEBUI_OPENID_REDIRECT_URI}
      WEBUI_URL: ${OPENWEBUI_WEBUI_URL}
      ENABLE_OAUTH_SIGNUP: "true"
      ENABLE_LOGIN_FORM: "false"
      OAUTH_MERGE_ACCOUNTS_BY_EMAIL: "true"
      AUDIO_STT_ENGINE: web
      RAG_EMBEDDING_ENGINE: ollama
      ENABLE_PERSISTENT_CONFIG: "true"
    volumes:
      - ${OPENWEBUI_DATA_PATH}:/app/backend/data
    restart: unless-stopped
```

Example env

```env
OPENWEBUI_VERSION=main-slim
OPENWEBUI_PORT=3000
OPENWEBUI_DATA_PATH="/srv/openwebui"
POSTGRESQL_HOST=postgresql
POSTGRESQL_PORT=5432
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DBID=0
REDIS_PASS=change-me
OPENWEBUI_PG_DB=openwebui
OPENWEBUI_PG_USER=openwebui
OPENWEBUI_PG_PASS=change-me
OPENWEBUI_WEBUI_URL="https://openwebui.example.com"
OPENWEBUI_OAUTH_CLIENT_ID=openwebui
OPENWEBUI_OAUTH_CLIENT_SECRET=change-me
OPENWEBUI_OAUTH_PROVIDER_NAME=authentik
OPENWEBUI_OPENID_PROVIDER_URL="https://auth.example.com/application/o/openwebui/.well-known/openid-configuration"
OPENWEBUI_OPENID_REDIRECT_URI="https://openwebui.example.com/oauth/oidc/callback"
```

Vars

| Variable | Purpose | Why it matters |
| --- | --- | --- |
| OPENWEBUI_VERSION | Image tag | Pinning reduces surprise UI changes |
| OPENWEBUI_PORT | Host port | Local web access |
| OPENWEBUI_DATA_PATH | Persistent data path | Keeps settings and local state |
| OPENWEBUI_PG_DB / OPENWEBUI_PG_USER / OPENWEBUI_PG_PASS | Database credentials | Used to build DATABASE_URL |
| POSTGRESQL_HOST / POSTGRESQL_PORT | Postgres endpoint | External database connection |
| REDIS_HOST / REDIS_PORT / REDIS_DBID / REDIS_PASS | Redis endpoint | Used to build REDIS_URL |
| OPENWEBUI_WEBUI_URL | Public app URL | Important for generated links |
| OPENWEBUI_OAUTH_CLIENT_ID / OPENWEBUI_OAUTH_CLIENT_SECRET | OAuth client settings | Needed for SSO |
| OPENWEBUI_OAUTH_PROVIDER_NAME | Provider label | Small usability detail |
| OPENWEBUI_OPENID_PROVIDER_URL | Discovery URL | Lets the app find the identity provider |
| OPENWEBUI_OPENID_REDIRECT_URI | OAuth callback URL | Must match provider config |
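Because DATABASE_URL and REDIS_URL are each assembled from five or six variables, a typo in any one of them only surfaces as a failed startup. A sketch that reproduces the interpolation outside Compose, using the example env values, so you can eyeball the result before deploying:

```shell
# Values copied from the example env above.
OPENWEBUI_PG_USER=openwebui
OPENWEBUI_PG_PASS=change-me
POSTGRESQL_HOST=postgresql
POSTGRESQL_PORT=5432
OPENWEBUI_PG_DB=openwebui
REDIS_PASS=change-me
REDIS_HOST=redis
REDIS_DBID=0

# The same expansion Compose performs inside the environment: block.
# REDIS_PORT is left unset here, so the :-6379 fallback kicks in.
DATABASE_URL="postgresql://${OPENWEBUI_PG_USER}:${OPENWEBUI_PG_PASS}@${POSTGRESQL_HOST}:${POSTGRESQL_PORT:-5432}/${OPENWEBUI_PG_DB}"
REDIS_URL="redis://:${REDIS_PASS}@${REDIS_HOST}:${REDIS_PORT:-6379}/${REDIS_DBID}"

echo "$DATABASE_URL"
# postgresql://openwebui:change-me@postgresql:5432/openwebui
echo "$REDIS_URL"
# redis://:change-me@redis:6379/0
```

One caveat: if a password contains characters that are special in URLs (@, :, /), it typically needs percent-encoding here, because neither Compose nor the connection-string parser will encode it for you.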

🎨 ComfyUI


Image Workflows

ComfyUI is the one I reach for when a chat box stops being enough. I like it for image experiments where I want to see the whole pipeline, tweak nodes directly, and keep the workflow a bit more hands-on.

For this setup, I use YanWenKun's Docker image because it is easy to get running and easy to live with once it is up.

Config

The file exposes the default UI port, mounts a persistent storage directory at /root, enables NVIDIA GPU access, and passes startup flags with CLI_ARGS. The --listen 0.0.0.0 flag is what makes the interface reachable outside the container.

Compose

```yaml
name: comfyui

services:
  comfyui:
    image: yanwk/comfyui-boot:${COMFYUI_VERSION:-cu128-slim}
    container_name: comfyui
    ports:
      - ${COMFYUI_PORT:-8188}:8188
    volumes:
      - ${COMFYUI_STORAGE_PATH}:/root
    environment:
      CLI_ARGS: '--disable-xformers --listen 0.0.0.0'
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: unless-stopped
```

Example env

```env
COMFYUI_VERSION=cu128-slim
COMFYUI_PORT=8188
COMFYUI_STORAGE_PATH="/srv/comfyui"
```

Vars

| Variable | Purpose | Why it matters |
| --- | --- | --- |
| COMFYUI_VERSION | Image tag | GPU builds can differ a lot |
| COMFYUI_PORT | Host port | Web UI access |
| COMFYUI_STORAGE_PATH | Persistent storage path | Models, outputs, and config need a home |
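CLI_ARGS is just a string handed to ComfyUI's launcher, so extra flags can be appended without touching the image. A small sketch (--port is a standard ComfyUI launcher option; the exec line assumes the container name from the compose file):

```shell
# Start from the flags in the compose file...
CLI_ARGS='--disable-xformers --listen 0.0.0.0'
# ...and append another launcher flag, e.g. pinning the internal port.
CLI_ARGS="${CLI_ARGS} --port 8188"
echo "$CLI_ARGS"
# --disable-xformers --listen 0.0.0.0 --port 8188

# After startup, confirm the container actually sees the GPU:
# docker exec comfyui nvidia-smi
```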

🔄 n8n


Workflow Glue

n8n is the sort of tool that becomes more useful the more services you have. What I like about this recipe is that it treats automation like a real app with durable storage, proper encryption, and a stable public URL instead of a disposable demo.

Config

This recipe uses external PostgreSQL instead of SQLite, sets a dedicated encryption key for stored credentials, defines a stable public URL for webhooks, and pins the timezone so scheduled workflows behave predictably.

Compose

```yaml
name: n8n

services:
  n8n:
    image: docker.n8n.io/n8nio/n8n:${N8N_VERSION:-stable}
    container_name: n8n
    ports:
      - ${N8N_PORT:-5678}:5678
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=${POSTGRESQL_HOST}
      - DB_POSTGRESDB_PORT=${POSTGRESQL_PORT:-5432}
      - DB_POSTGRESDB_DATABASE=${N8N_PG_DB}
      - DB_POSTGRESDB_USER=${N8N_PG_USER}
      - DB_POSTGRESDB_PASSWORD=${N8N_PG_PASS}
      - GENERIC_TIMEZONE=${TZ:-Etc/UTC}
      - TZ=${TZ:-Etc/UTC}
      - N8N_HOST=${N8N_URL}
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_URL}/
      - NODE_ENV=production
```

Example env

```env
TZ="Asia/Tokyo"
N8N_VERSION=stable
N8N_PORT=5678
N8N_ENCRYPTION_KEY=change-me
N8N_URL=n8n.example.com
POSTGRESQL_HOST=postgresql
POSTGRESQL_PORT=5432
N8N_PG_DB=n8n
N8N_PG_USER=n8n
N8N_PG_PASS=change-me
```

Vars

| Variable | Purpose | Why it matters |
| --- | --- | --- |
| N8N_VERSION | Image tag | Pin if you want calmer upgrades |
| N8N_PORT | Host port | Default web UI port |
| N8N_ENCRYPTION_KEY | Secret key | Protects stored credentials |
| N8N_URL | Public hostname | Needed for links and webhooks |
| POSTGRESQL_HOST / POSTGRESQL_PORT | Database endpoint | External persistent storage |
| N8N_PG_DB / N8N_PG_USER / N8N_PG_PASS | Database credentials | Required for startup |
| TZ | Timezone | Important for schedules |
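Two of these values deserve extra care: N8N_ENCRYPTION_KEY cannot change after first boot without making stored credentials unreadable, and WEBHOOK_URL must match what the outside world can actually reach. A sketch of both, with the key generation line as a suggestion rather than a requirement:

```shell
# Generate a strong random key once and keep it somewhere safe
# (uncomment to run; any long random string works):
# openssl rand -hex 24

# The compose file derives the webhook base from N8N_URL like this:
N8N_URL=n8n.example.com
WEBHOOK_URL="https://${N8N_URL}/"
echo "$WEBHOOK_URL"
# https://n8n.example.com/
```

If the public hostname ever changes, update N8N_URL and WEBHOOK_URL together, or existing webhook registrations will point at the old address.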
