Docker for Self-Hosters: The 20% You Actually Need
Docker packages an application with everything it needs to run — the code, the runtime, the libraries, the config — into a single unit that works the same way on every machine. That's it. That's what Docker does. The enterprise ecosystem around Docker — Kubernetes, Swarm, multi-stage builds, CI/CD orchestration — is enormous and almost entirely irrelevant if you're a self-hoster running services on a VPS. You need about 20% of Docker's feature set, and that 20% handles 95% of what you'll ever do. This article covers that 20% and ignores the rest without apology.
What The Docs Say
Docker's official documentation is comprehensive in a way that actively hinders beginners. It covers everything from containerization theory to enterprise deployment strategies, and the getting-started guide assumes you want to build custom images — which, as a self-hoster, you almost never do. The docs define five core concepts: images, containers, volumes, networks, and Docker Compose. They explain each one thoroughly, with examples that build toward a development workflow most self-hosters will never use.
The Compose documentation — now integrated as docker compose (no hyphen) rather than the old docker-compose binary — describes a YAML file format for defining multi-container applications. This is actually the single most important piece of Docker documentation for self-hosters, but it's presented as one feature among many rather than the center of gravity it actually is.
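If you're unsure which variant a machine has, a quick check (assuming Docker is already installed) sorts it out:

```shell
# Compose v2 ships as a plugin of the docker CLI, invoked without a hyphen:
docker compose version

# The legacy standalone binary used a hyphen; on newer installs this
# command may simply not exist:
docker-compose version
```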
What Actually Happens
Here are the five concepts you need, explained in the order you'll actually encounter them.
Images are pre-built packages that contain an application ready to run. You don't build images — you pull them. Someone else (usually the software maintainer or LinuxServer.io) has already done the work of packaging PostgreSQL, Redis, Nginx, Coolify, Nextcloud, or whatever service you want into a Docker image. You reference it by name and tag — postgres:16 or lscr.io/linuxserver/nginx:latest — and Docker downloads it. Think of an image as a recipe that's already been cooked and vacuum-sealed. You're not cooking. You're reheating.
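In practice, "pulling" is one command. A sketch using the postgres:16 tag mentioned above (any image name and tag works the same way):

```shell
# Download the image without starting anything:
docker pull postgres:16

# See what's cached locally, with tags and sizes:
docker image ls
```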
Containers are running instances of images. When you start a PostgreSQL image, the running process is a container. You can run multiple containers from the same image — two PostgreSQL instances, each with different data, both from the same postgres:16 image. Containers are ephemeral by default. Stop and remove a container, and everything inside it that wasn't stored in a volume disappears. This is the feature, not the bug — it means you can destroy and recreate containers freely without accumulating cruft.
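The two-instances point can be seen directly. A sketch — the container names and password here are arbitrary placeholders (note that the official postgres image requires POSTGRES_PASSWORD to be set before it will start):

```shell
# Two independent containers from the same image:
docker run -d --name db-one -e POSTGRES_PASSWORD=pass postgres:16
docker run -d --name db-two -e POSTGRES_PASSWORD=pass postgres:16

# Destroying one discards everything it wrote outside a volume;
# the other is untouched:
docker stop db-one && docker rm db-one
```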
Volumes are where your data actually lives, and this is the concept people get wrong most often. A container is temporary. A volume is permanent. When PostgreSQL writes your database files, those writes need to go somewhere that survives container restarts, updates, and removal. That somewhere is a volume. There are two types: named volumes (Docker manages the storage location) and bind mounts (you specify a directory on your host). Named volumes are simpler and Docker handles them cleanly. Bind mounts give you direct filesystem access to the data, which matters for things like config files you want to edit manually. The rule of thumb — use named volumes for databases and application data, bind mounts for configuration files you need to touch.
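Both volume types fit in one Compose fragment. The service name, image, and paths below are illustrative, not taken from any real image's documentation:

```yaml
services:
  myservice:
    image: example/myservice:latest
    volumes:
      - appdata:/data            # named volume: Docker manages where it lives
      - ./config:/etc/myservice  # bind mount: a host directory you can edit

volumes:
  appdata:
```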
Networks let containers talk to each other. When you run PostgreSQL and a web application in separate containers, they need a way to communicate. Docker networks provide this — containers on the same network can reach each other by container name. Your web app connects to postgres:5432 instead of localhost:5432, and Docker's internal DNS resolves the name. Docker Compose creates a default network for every project, so in practice you rarely configure networks manually. They just work.
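You can watch that name resolution happen from inside a running stack. A sketch against the app/db example below (assumes the app image contains a shell and the getent utility, which some minimal images omit):

```shell
# Ask Docker's internal DNS for the db service, from inside the app container:
docker compose exec app getent hosts db

# List networks; Compose creates one named after the project directory:
docker network ls
```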
Docker Compose is the file that ties it all together, and it's the tool you'll interact with daily. A docker-compose.yml file defines every container in your stack — what images to use, what ports to expose, what volumes to mount, what environment variables to set, and how services relate to each other. Here's what a real Compose file looks like for a web application with a database:
```yaml
services:
  app:
    image: your-app:latest
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/myapp
    depends_on:
      - db

  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp

volumes:
  pgdata:
```
That's a complete, deployable stack definition in roughly 20 lines. The app service runs your application, exposes port 3000, and connects to a PostgreSQL database. The db service runs PostgreSQL 16 and stores its data in a named volume called pgdata. The depends_on directive ensures the database container starts before the application, though "started" is not "ready": depends_on doesn't wait for PostgreSQL to accept connections, so your application should retry its initial connection. Environment variables configure both services. This file is your entire infrastructure — version-controlled, reproducible, and readable by anyone who's seen YAML before.
The commands you'll actually use fit on one hand. docker compose up -d starts everything defined in your Compose file in detached mode (background). docker compose down stops everything. docker compose logs -f follows the log output. docker compose pull downloads newer versions of your images. docker exec -it container_name bash drops you into a running container's shell for debugging. That's five commands. You will use these five commands for months before you need a sixth.
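Strung together, those commands cover the routine update pass. Assuming you're in the directory holding the Compose file, and app is one of your service names:

```shell
docker compose pull          # fetch newer versions of every image in the file
docker compose up -d         # recreate only containers whose image changed
docker compose logs -f app   # watch it come back up; Ctrl-C stops following
```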
When To Use This
Docker is non-negotiable for self-hosting in 2026. Every self-hosted application worth running distributes a Docker image — Coolify, Nextcloud, Immich, Jellyfin, Gitea, Uptime Kuma, the entire LinuxServer.io catalog. The alternative is installing applications directly on your host OS, managing conflicting dependencies, and dealing with upgrade paths that assume you're running the exact Linux distribution and version the maintainer tested on. Docker eliminates all of this. Each application runs in its own isolated environment with its own dependencies, and updating is docker compose pull && docker compose up -d.
Learn Docker Compose first and images second. Everything else — Dockerfiles, custom builds, multi-stage builds, build arguments — is for developers who are building and distributing applications, not for self-hosters who are running them. You can go years without writing a Dockerfile. You cannot go a day without using Compose.
The investment required is genuinely small. If you can read YAML and you understand what a port number is, you can run Docker Compose. The learning curve from "never used Docker" to "running three services on a VPS" is an afternoon — not a weekend, not a course, an afternoon. The gap between that afternoon and "comfortable managing 10 services" is maybe another week of occasional troubleshooting.
When To Skip This
You can't, really. Docker is the substrate. If you're self-hosting, you're using Docker. The question isn't whether to learn Docker — it's how much of Docker to learn. And the answer, for self-hosters, is: Compose files, images, volumes, and five commands. Skip everything else until you have a specific reason not to.
What you can skip specifically: writing Dockerfiles (use existing images), Docker Swarm (single-server setups don't need orchestration), Kubernetes (not for you — not yet, probably not ever for personal infrastructure), multi-stage builds (an optimization concern for image publishers, not image users), Docker Desktop on your VPS (it's a GUI tool for local development, not servers), and any tutorial that starts by having you build a custom image from scratch. You're not building. You're deploying. The distinction matters.
The one nuance worth mentioning — if you're running Coolify, it manages Docker for you behind its dashboard. You'll still benefit from understanding what's happening underneath, because when something breaks, the debugging path goes through Docker logs and container status regardless of what management layer sits on top. But Coolify does mean you can defer some of the hands-on Docker learning until you hit your first real troubleshooting session.
The Troubleshooting Loop
When something breaks — and it will, eventually — the debugging path is almost always the same five steps, in this order.
First, check the logs. docker compose logs service_name tells you what the container is complaining about. Nine times out of ten, the error message is explicit enough to act on. A missing environment variable, a failed database connection, a permission denied on a volume path.

Second, check the ports. Is the port already in use by another container or service? Is the port mapping in your Compose file correct?

Third, check the volumes. Is the data directory mounted correctly? Does the host path exist? Are the permissions right?

Fourth, check the environment variables. A typo in a database connection string or a missing API key causes more container failures than actual bugs in the software.

Fifth — and this is the step that catches everything the first four missed — check that the image tag you're running is the one you think you're running. An accidental latest pull that brought in a breaking change is the kind of problem that wastes two hours because you're looking everywhere except the obvious place.
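The five steps map to five commands. A sketch: the service name app, host port 3000, and container name myproject-app-1 are placeholders (Compose v2 names containers project-service-index):

```shell
# 1. Logs: what is the container actually complaining about?
docker compose logs --tail=100 app

# 2. Ports: is something else already bound on the host?
ss -tlnp | grep 3000

# 3. Volumes: is the mount what you think it is?
docker inspect --format '{{json .Mounts}}' myproject-app-1

# 4. Environment: did the variables reach the container?
docker compose exec app env

# 5. Image: are you running the tag you think you're running?
docker compose images
```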
This loop — logs, ports, volumes, environment variables, image version — resolves the vast majority of Docker issues you'll encounter as a self-hoster. Bookmark it, internalize it, and run through it before you start searching for obscure solutions. The problem is almost always one of these five things.
This is part of CustomClanker's Self-Hosting series — the honest cost of running it yourself.