Security Basics for Self-Hosters: The Minimum Viable Setup
Every IP address on the public internet gets scanned. Not eventually — constantly. Within minutes of spinning up a fresh VPS on Hetzner or DigitalOcean, you'll see failed SSH login attempts in your logs from bots running through credential lists. You're not being targeted. You're being swept. The difference between a compromised server and a secure one isn't paranoia or enterprise-grade tooling — it's about six configuration changes that take less than an hour.
This isn't a hardening guide for people running financial infrastructure. It's the baseline that keeps a self-hosted Coolify box, a Jellyfin server, or a Gitea instance from becoming someone else's cryptocurrency miner. If you're running anything on a public VPS and you haven't done these things, you're relying on luck. Luck runs out.
What The Docs Say
Every VPS provider has a "getting started" security guide. Hetzner's is a single page. DigitalOcean's is a tutorial series. They all say roughly the same things: set up SSH keys, enable a firewall, keep your software updated, don't run things as root. The official documentation makes it sound like a 15-minute checklist — and honestly, it mostly is.
Ubuntu's documentation recommends ufw (Uncomplicated Firewall) and provides copy-paste commands. Fail2ban's docs explain the jail system for banning repeat offenders. Docker's security best practices page is surprisingly thorough — it covers running containers as non-root users, read-only filesystems, and capability dropping. Cloudflare's Tunnel documentation describes a zero-trust model where your server's ports never need to be open to the public internet at all.
The documentation is good. The problem isn't that the information is hard to find. The problem is that none of it is mandatory. Every one of these steps is optional, and a fresh VPS works fine without any of them — right up until it doesn't.
What Actually Happens
Here's the realistic sequence for most self-hosters. You spin up a VPS, install Coolify or Docker, deploy your first app, and move on with your life. Security configuration happens later — usually after you notice something weird in your logs, or after reading a Reddit horror story that sounds uncomfortably familiar. The gap between "server is live" and "server is secured" is where problems live.
SSH Hardening — The Non-Negotiable
Password-based SSH authentication is the single biggest vulnerability on a default VPS. Bots don't exploit clever zero-days. They try root/password123 ten thousand times. Switching to key-only authentication eliminates this entire category of attack. In your /etc/ssh/sshd_config, you set PasswordAuthentication no, PermitRootLogin no, and PubkeyAuthentication yes. Restart the SSH daemon. That's it — the most impactful security change you'll make, and it takes two minutes.
Changing the default SSH port from 22 to something like 2222 or 4822 is marginal in actual security value — a port scan finds it immediately — but it cuts your log noise by 90%. When your auth.log isn't flooded with thousands of failed attempts per day, you can actually notice the entries that matter. It's not security through obscurity. It's noise reduction so your actual monitoring works.
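The two changes above can be scripted on a fresh Ubuntu/Debian box. This is a sketch — it assumes port 2222 and the stock sshd_config layout; always verify you can log in from a second terminal before closing your current session:

```shell
# Enforce key-only auth and a non-default port in /etc/ssh/sshd_config.
# The sed patterns match both commented and uncommented directives.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config

sshd -t                  # validate the config before restarting
systemctl restart ssh    # the service is named "sshd" on some distros
```

If `sshd -t` prints nothing, the config is syntactically valid and the restart is safe.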
The Firewall — Allow What You Need, Deny Everything Else
ufw on Ubuntu/Debian is genuinely uncomplicated. A handful of commands gets you a working firewall:
```shell
ufw default deny incoming
ufw allow 443/tcp
ufw allow 80/tcp
ufw allow 2222/tcp   # your SSH port
ufw enable
```
That's the whole thing. Your server now accepts web traffic and SSH connections, and drops everything else. If you're using Hetzner, their cloud firewall does the same thing at the network level before traffic even reaches your VPS — use both. Defense in depth isn't paranoia when the setup takes five minutes.
The mistake people make is opening ports they don't need. Running PostgreSQL on port 5432 exposed to the internet because a Docker container mapped it there by default. Running a management interface on port 8080 because the quick-start guide said to. Every open port is an attack surface. If a service only needs to talk to other containers on the same Docker network, it doesn't need a public port at all.
Fail2ban — Automated Banning for Everything With a Login Page
Fail2ban watches your log files and temporarily bans IP addresses that fail authentication repeatedly. For SSH, it's watching auth.log. For web applications with login pages — Nextcloud, Gitea, Ghost — it watches their respective logs for failed login patterns. After a configurable number of failures (default is usually 5), the IP gets firewalled out for a configurable duration.
The default Fail2ban configuration for SSH works out of the box on most Ubuntu systems. You install it, enable the sshd jail, and it starts working. For other services, you'll need to write or find jail configurations that match the log format of each application. The Fail2ban community has pre-built configs for most popular self-hosted applications. One caveat: Docker-containerized services often log to stdout or to non-standard paths rather than to files under /var/log, so some of them need custom filter regexes or bind-mounted log files before Fail2ban can watch them.
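A minimal jail.local sketch for the SSH case, assuming the custom port from earlier — the retry, window, and ban values shown are illustrative, not prescriptive:

```shell
# Local overrides belong in jail.local, never in jail.conf (which updates overwrite)
cat <<'EOF' > /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = 2222    # match your SSH port
maxretry = 5       # failures before a ban
findtime = 10m     # window in which failures are counted
bantime  = 1h      # how long the ban lasts
EOF

systemctl restart fail2ban
fail2ban-client status sshd   # confirm the jail is running and see current bans
```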
The important thing to understand: Fail2ban is a speed bump, not a wall. A distributed botnet using thousands of IPs won't be stopped by banning individual addresses. But most attacks aren't distributed. They're a single IP running a credential list. Fail2ban handles that case cheaply and automatically.
Automatic Updates — The Boring Fix That Prevents Most Exploits
The overwhelming majority of successful attacks on self-hosted servers exploit known vulnerabilities with available patches. The CVE gets published, the patch ships, and servers that haven't updated get popped by automated exploit tools — often within days. unattended-upgrades on Ubuntu/Debian automatically installs security patches without intervention. Enable it. Leave it on. Forget about it.
The configuration is in /etc/apt/apt.conf.d/50unattended-upgrades. The default settings on Ubuntu are conservative — they install security updates but don't reboot automatically. For a self-hosted server running Docker containers, this is usually fine. Your OS packages get patched, and your containerized applications are isolated from the host anyway. You'll still need to update your Docker images separately — docker compose pull && docker compose up -d on a regular schedule — but the OS-level attack surface stays current without you thinking about it.
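Both halves of that routine — OS patches and image refreshes — can be set up in a few lines. A sketch, assuming a Debian-family system and a compose stack living at /opt/stack (adjust the path to yours):

```shell
# Install and switch on unattended security updates
apt install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades   # writes the 20auto-upgrades enable file

# Dry-run to see what would be installed, without changing anything:
unattended-upgrade --dry-run --debug

# Weekly Docker image refresh (Monday 04:00); /opt/stack is a placeholder path
( crontab -l 2>/dev/null; \
  echo '0 4 * * 1 cd /opt/stack && docker compose pull -q && docker compose up -d' ) | crontab -
```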
Cloudflare Tunnel — The Architecture That Changes Everything
This is the single biggest security improvement available to self-hosters, and it's free. Cloudflare Tunnel (using cloudflared) creates an outbound connection from your server to Cloudflare's network. Traffic from users hits Cloudflare first, then gets forwarded through the tunnel to your server. Your server's actual IP address is never exposed. You don't open ports 80 or 443 on your firewall at all.
The practical impact is significant. Port scanners can't find your server because there are no open ports to find. DDoS traffic hits Cloudflare's network — which is built to absorb it — instead of your $5 VPS. SSL termination happens at Cloudflare's edge, so you don't manage certificates. And Cloudflare's WAF (even the free tier) filters common attack patterns before they reach your applications.
The setup involves installing cloudflared on your server, authenticating with your Cloudflare account, creating a tunnel, and configuring DNS records. It takes about 20 minutes. After that, your firewall rules simplify to "allow SSH, deny literally everything else." The attack surface of your server drops to essentially just the SSH port — which is already protected by key-only auth and Fail2ban.
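The 20-minute setup compresses to roughly this. The tunnel name, hostname, and backend port are placeholders, and the credentials filename contains the tunnel's UUID, which the create step prints:

```shell
cloudflared tunnel login                               # opens a browser auth flow
cloudflared tunnel create homelab                      # writes a credentials JSON file
cloudflared tunnel route dns homelab app.example.com   # points a DNS record at the tunnel

# Ingress rules map hostnames to local services; the catch-all rule is required
cat <<'EOF' > ~/.cloudflared/config.yml
tunnel: homelab
credentials-file: /root/.cloudflared/<TUNNEL-ID>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:3000
  - service: http_status:404
EOF

cloudflared tunnel run homelab   # or install it as a service to run on boot
```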
Docker Security — The Defaults Are Too Permissive
Docker containers run as root by default. The -p flag publishes ports to all interfaces by default. Docker bypasses ufw by writing directly to iptables. These defaults are convenient for development and dangerous for production.
The fixes are straightforward but require awareness. Map ports to localhost when a service only needs to be reached by other containers or a reverse proxy: 127.0.0.1:5432:5432 instead of 5432:5432. Run containers with user: "1000:1000" in your docker-compose file when the application supports it. Use read-only filesystem mounts (read_only: true) for containers that don't need to write to their filesystem. Keep images updated — docker compose pull on a weekly cron job handles this.
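Put together, those fixes look like this in a compose file. The service names and image tags here are illustrative — the point is the shape of the port binding, user, and filesystem settings:

```shell
cat <<'EOF' > docker-compose.yml
services:
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"   # reachable from this host only, not the internet
    environment:
      POSTGRES_PASSWORD: change-me
  app:
    image: ghcr.io/example/app:latest   # hypothetical image
    user: "1000:1000"                   # run as a non-root user where the app supports it
    read_only: true                     # read-only root filesystem
    tmpfs:
      - /tmp                            # writable scratch space many apps still need
EOF
```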
The Docker-bypasses-ufw issue catches a lot of people. You carefully configure ufw to deny incoming traffic, then Docker helpfully opens port 5432 to the world by writing its own iptables rules. The fix is either to always bind published ports to 127.0.0.1 explicitly, or to change Docker's iptables behavior in /etc/docker/daemon.json. Setting "iptables": false there stops Docker from touching the firewall at all, but it also breaks container networking unless you recreate those rules yourself — check the current Docker documentation before going that route. Binding to localhost is the safer, simpler fix.
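There is also a gentler host-wide option than editing every compose file: Docker's daemon.json accepts an ip key that sets the default address for published ports. A sketch — note that restarting the daemon restarts your containers, so time it accordingly:

```shell
# Make -p / ports: bind to localhost by default instead of 0.0.0.0
cat <<'EOF' > /etc/docker/daemon.json
{
  "ip": "127.0.0.1"
}
EOF
systemctl restart docker
```

Individual services that genuinely need public exposure can still override this with an explicit `0.0.0.0:` binding.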
When To Use This
All of it. Every self-hoster should implement every item on this list. This isn't a menu where you pick based on your use case — it's a minimum baseline. The entire setup takes under an hour on a fresh VPS, and most of it is copy-paste from documentation.
The Cloudflare Tunnel piece is the only one with a real trade-off: it adds a dependency on Cloudflare's infrastructure and means your services are unreachable if Cloudflare goes down. For most self-hosters, that trade-off is overwhelmingly worth it. Cloudflare's uptime is better than yours. If the idea of depending on a third party for routing bothers you philosophically, you can skip it and rely on the firewall + fail2ban + key-only SSH stack instead. You'll be fine. You just won't be invisible.
The time to set this up is before you deploy your first application, not after. Build it into your VPS provisioning routine. SSH keys, firewall, fail2ban, unattended-upgrades, Cloudflare Tunnel — then install Coolify, then deploy apps. It's easier to secure an empty box than to retroactively harden one that's already running services.
When To Skip This
There's no version of self-hosting where you skip security basics. But there is a version where you go overboard. You don't need intrusion detection systems (OSSEC, Wazuh) for a personal server. You don't need CrowdStrike. You don't need to set up a SIEM or ship logs to a central analysis platform. You don't need to run CIS benchmark audits quarterly.
The enterprise security world has a deep catalog of tools and practices designed for organizations with compliance requirements, dedicated security teams, and assets worth millions. None of that applies to your Jellyfin server. The threat model for self-hosters is simple: automated bots scanning for default configurations and known vulnerabilities. The countermeasures are equally simple. Don't have default configurations. Don't have known vulnerabilities. Don't be visible if you can avoid it.
If you're running services that handle other people's data — a Nextcloud instance for your family, a Gitea server for a small team — the stakes are slightly higher, but the approach is the same. The basics cover you. The only addition worth considering is regular backups stored offsite, so that if something does go wrong, you can rebuild from clean state rather than trying to forensically determine what happened.
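As a sketch of that offsite-backup addition — assuming restic and an S3-compatible bucket, with the repository URL and backup paths as placeholders:

```shell
# Hypothetical restic workflow; export RESTIC_PASSWORD and S3 credentials first
restic -r s3:https://s3.example.com/my-backups init

# Back up app data and Docker volumes (paths are illustrative)
restic -r s3:https://s3.example.com/my-backups backup /opt/stack /var/lib/docker/volumes

# Keep a week of dailies and a month of weeklies, prune the rest
restic -r s3:https://s3.example.com/my-backups forget --keep-daily 7 --keep-weekly 4 --prune
```

Run the backup and forget steps from cron, and test a restore at least once — an unverified backup is a hope, not a plan.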
This is part of CustomClanker's Self-Hosting series — the honest cost of running it yourself.