Self-Hosted n8n: The Real Setup and Maintenance Cost
Every article about self-hosting n8n starts the same way: run docker compose up, open the browser, start building workflows. It takes five minutes. It's free. You're free from per-task pricing forever. That article is not wrong — it's just describing the first hour of a commitment that never ends.
This article is about the other hours. The ones where you're debugging a failed SSL renewal at 11 PM, figuring out why your database is eating disk space, and wondering whether the money you saved on Zapier is worth less than the Sunday afternoon you just spent upgrading to a new n8n version that broke two of your workflows.
The Initial Setup
The minimum viable self-hosted n8n deployment looks like this:
- A VPS or server running Linux; most people use a $5-12/month instance from Hetzner, DigitalOcean, or Linode [VERIFY: current entry-level VPS pricing].
- Docker and Docker Compose.
- A domain name pointed at your server.
- A reverse proxy: Nginx or Caddy (Caddy is easier; Nginx is better documented).
- SSL certificates (Caddy handles these automatically; Nginx needs Let's Encrypt configured manually or via Certbot).
- A PostgreSQL database; n8n can use SQLite for testing, but don't run production workloads on SQLite: one corrupted database and you lose everything.
- A .env file with your configuration.
That's the stack: VPS + Docker + reverse proxy + SSL + PostgreSQL + n8n. If you've deployed web applications before, this is a normal Tuesday. If you haven't, each component in that list is a learning curve, and the learning curves stack.
The actual docker-compose file is straightforward. n8n's documentation provides working examples. The setup that most people skip — and regret skipping — includes: setting N8N_SECURE_COOKIE for HTTPS, configuring WEBHOOK_URL so external triggers actually reach your instance, setting up a proper database backup, and configuring basic authentication or SSO so your automation engine isn't exposed to the internet with only a password between it and the world.
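A sketch of those settings as a .env file; the variable names follow n8n's documentation at the time of writing, and every value is a placeholder, so check them against the current docs before relying on them:

```shell
# .env sketch for a reverse-proxied n8n instance.
# Variable names per n8n's docs at time of writing; all values are placeholders.
N8N_HOST=n8n.example.com
N8N_PROTOCOL=https
N8N_SECURE_COOKIE=true                  # keep session cookies HTTPS-only
WEBHOOK_URL=https://n8n.example.com/    # external triggers resolve against this

DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres             # service name on the Docker network
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=change-me

# Losing this key means losing every stored credential; back it up separately.
N8N_ENCRYPTION_KEY=change-me-64-hex-chars
```

The reverse proxy terminates TLS in front of this; n8n itself only needs to know its public URL so that webhook and OAuth callback links are generated correctly.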
Honest time estimate for someone comfortable with Docker and Linux: 2-4 hours to get a production-ready instance running, including DNS propagation. For someone learning Docker along the way: a full weekend, possibly more if you hit networking issues. The "five minute" setup from the blog posts gets you a local development instance, not something you'd trust with business workflows.
Infrastructure Costs
The server cost depends on workload, and workload varies enormously.
Light usage (10-30 workflows, low frequency): A 2 vCPU / 2GB RAM VPS handles this comfortably. Cost: $5-12/month. n8n itself is lightweight when idle. The workflows consume resources when they execute, and light workloads barely register. You'll spend more on the domain name than the server.
Medium usage (30-100 workflows, moderate frequency, some data processing): Bump to 2-4 vCPU / 4GB RAM. Cost: $12-24/month. At this level, you start caring about PostgreSQL performance and might want to separate the database onto its own instance or use a managed database service ($15-30/month from most cloud providers [VERIFY]). You also start caring about disk space — workflow execution logs accumulate, and n8n doesn't aggressively prune them by default.
Heavy usage (100+ workflows, high frequency, data-intensive operations, multiple users): 4+ vCPU / 8GB+ RAM, potentially with a separate database server. Cost: $40-100/month for infrastructure. At this scale, you're also thinking about Redis for queue management, possibly running n8n in "queue mode" with separate main and worker instances for execution scaling [VERIFY: current queue mode architecture]. This is where self-hosting starts looking like a real operations job, not a side project.
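On the disk-space point: n8n can prune execution data itself through environment variables (names per the docs at the time of writing; defaults vary by version, and the retention values below are illustrative, not recommendations):

```shell
# Execution-data pruning; defaults vary by n8n version, so set these explicitly.
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168             # hours to keep finished executions (7 days)
EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000   # hard cap on stored executions
```

Tighter retention keeps the PostgreSQL footprint predictable at the cost of a shorter debugging window.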
For comparison: n8n Cloud's Starter plan runs around $20/month [VERIFY] and handles light-to-medium workloads without you thinking about any of the above. The self-hosting cost advantage is real but narrows as your infrastructure needs grow and your time has a dollar value.
The Update Cycle
n8n releases new versions frequently — roughly every one to two weeks for minor versions, with periodic major releases [VERIFY: current release cadence]. Each release can include new nodes, bug fixes, UI improvements, and breaking changes.
The update process itself is simple: pull the new Docker image, restart the container. Takes about two minutes of downtime. The problem isn't the mechanics — it's the testing.
Every update has the potential to change how existing nodes behave. A node that returned data in one format might return it slightly differently in the new version. An authentication flow that worked might need reconfiguration. A community node might not be compatible with the new n8n version. Most updates are fine. But "most" means you'll hit a breaking update eventually, and you need to know before it breaks your production workflows.
The responsible update process: maintain a staging instance (or at least a separate docker-compose configuration) where you test updates before applying them to production. Run your critical workflows on the staging instance after updating. Verify that outputs match expectations. Then update production. This doubles your infrastructure cost and adds 30-60 minutes per update cycle. Most self-hosters skip this process until the first time an update breaks something in production. Then they don't skip it anymore.
You can also just not update for a while. n8n doesn't force updates — your self-hosted instance runs whatever version you deployed. The risk is that older versions don't get security patches, and the longer you wait, the bigger the jump when you do update. Six months of deferred updates means migrating across multiple breaking changes at once.
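One habit that makes either path safer is pinning an explicit image tag instead of :latest, so the version changes only when you decide it should. A sketch, with an illustrative tag and the documented image path, which you should verify against n8n's current release notes:

```shell
#!/bin/sh
set -eu
# Pin an explicit tag in docker-compose.yml instead of :latest, e.g.
#   image: docker.n8n.io/n8nio/n8n:1.64.0   (tag is illustrative)
N8N_TAG="1.64.0"
IMAGE="docker.n8n.io/n8nio/n8n:${N8N_TAG}"
echo "next update target: $IMAGE"

# The update itself, after bumping the tag (run on the server):
#   docker compose pull n8n && docker compose up -d n8n
#   docker image prune -f    # reclaim superseded image layers
```

With a pinned tag, a routine container restart can never silently move you to a new version, and your staging instance can test the exact tag production will receive.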
Backups and Disaster Recovery
Your n8n instance stores two things you can't afford to lose: your workflow definitions and your credentials (API keys, OAuth tokens, database connections). The workflow execution history is nice to have but not critical. The workflows and credentials are critical.
The minimum backup strategy: automated daily PostgreSQL dumps to an offsite location (S3, Backblaze B2, another server — anywhere that isn't the same machine). A pg_dump cron job and an rsync or rclone command to push the dump offsite. This takes thirty minutes to set up and should be the first thing you configure after the initial deployment — not the thing you get around to after your first data loss.
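A minimal version of that cron setup, assuming Postgres runs in a container named n8n-postgres and an rclone remote named b2 already exists; every name and path here is a placeholder:

```shell
# /etc/cron.d/n8n-backup -- container, paths, and the "b2" rclone remote are placeholders.
# 03:15 nightly: dump the n8n database, compressed and dated (% must be escaped in cron).
15 3 * * * root docker exec n8n-postgres pg_dump -U n8n n8n | gzip > /var/backups/n8n-$(date +\%F).sql.gz
# 03:45 nightly: push the backup directory offsite.
45 3 * * * root rclone copy /var/backups/ b2:n8n-backups/
# 04:00 nightly: prune local copies older than 30 days (offsite retention is rclone's job).
0 4 * * * root find /var/backups -name 'n8n-*.sql.gz' -mtime +30 -delete
```

Restoring is the reverse direction: feed a dump back through psql, along the lines of gunzip -c n8n-2024-05-01.sql.gz | docker exec -i n8n-postgres psql -U n8n n8n.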
The better backup strategy: automated daily database dumps, stored offsite with 30-day retention, tested monthly by restoring to a fresh instance. The "tested monthly" part is critical and almost universally skipped. A backup you've never tested is a backup that might not work when you need it.
What happens when your server dies: if you have backups, you provision a new server, run the same Docker setup, restore the database, and you're back. Downtime: 30-60 minutes if you're practiced, 2-4 hours if you're doing it for the first time under stress. If you don't have backups, you're rebuilding every workflow from memory. This happens more often than the self-hosting community likes to admit.
n8n also supports exporting workflows as JSON files, which is useful for version control but isn't a substitute for database backups (it doesn't capture credentials or execution history). Some people store their workflow JSON in a git repository, which provides versioning and an additional backup layer. This is a good practice that costs nothing.
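One way to automate that git practice is n8n's own export CLI; the command and flags below are per the docs at the time of writing, and the container name and repo path are placeholders:

```shell
#!/bin/sh
set -eu
# Export each workflow to its own JSON file, then snapshot the directory in git.
# Assumes the n8n container is named "n8n" and ~/n8n-workflows is a git repo.
docker exec n8n n8n export:workflow --all --separate --output=/home/node/workflows/
docker cp n8n:/home/node/workflows/. ~/n8n-workflows/
git -C ~/n8n-workflows add -A
git -C ~/n8n-workflows commit -m "workflow snapshot $(date +%F)" || true  # no-op when nothing changed
```

Run nightly from cron, this gives you a diffable history of every workflow change at no extra cost; remember it still excludes credentials, which only the database backup covers.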
Security
Self-hosting n8n means exposing a workflow engine to the internet. That engine stores API keys, OAuth tokens, and database credentials for every service it connects to. If someone gains access to your n8n instance, they have access to everything it connects to.
The baseline security requirements: HTTPS (non-negotiable — don't run n8n over HTTP), strong authentication (a good password at minimum; SSO or IP whitelisting is better), firewall rules that restrict access to the necessary ports only, and keeping the host OS and Docker updated.
What most people forget: n8n's webhook endpoints are public by default. If your workflow has a webhook trigger, anyone who knows the URL can trigger it. For workflows that process data from external services, this is by design. For internal workflows, you need to add authentication to the webhook or restrict access at the reverse proxy level. This is the most common security misconfiguration in self-hosted n8n deployments [VERIFY: whether this is documented as a common issue].
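One way to enforce that restriction at the proxy layer, assuming you adopt a naming convention for internal webhook paths; the path prefix, network range, and upstream name are all placeholders (Nginx shown, since the article mentions it):

```nginx
# Internal-only webhooks: give them a recognizable path prefix when you create
# the trigger, then let the proxy reject outside callers before n8n sees them.
location /webhook/internal- {
    allow 10.0.0.0/8;            # your private range only
    deny  all;
    proxy_pass http://n8n:5678;  # n8n's default port
}
```

External-facing webhooks keep their normal paths and pass through untouched; the same idea works in Caddy with a named matcher and remote_ip.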
Credentials in n8n are encrypted at rest in the database using an encryption key that you set during deployment. If you lose this key, you lose access to all stored credentials. Store it somewhere safe — not in the docker-compose file on the same server, which is where it usually ends up by default.
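Generating a strong key up front is one line of standard OpenSSL; set it before first boot and record it somewhere that is not the server:

```shell
# 32 random bytes, hex-encoded: a 64-character value for N8N_ENCRYPTION_KEY.
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
echo "$N8N_ENCRYPTION_KEY"
```

Put the output in your password manager alongside the database credentials; a database backup without this key is a backup of credentials you can never decrypt.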
For teams: n8n supports user management with role-based access. But on self-hosted instances, the audit logging is limited compared to enterprise SaaS products. If you need to know who changed what workflow and when, you'll need to supplement with external logging or use n8n's enterprise self-hosted tier [VERIFY: current enterprise self-hosted features and pricing].
Monitoring
If a workflow fails at 3 AM, how do you know?
On n8n Cloud or Zapier, you get an email. On self-hosted n8n, you get nothing unless you build it. The default self-hosted experience is silence — your workflow fails, the execution log records the failure, and nobody notices until the downstream effect (or lack of effect) becomes visible. Maybe that's a missing daily report. Maybe that's a customer who didn't get a response. Maybe that's data that silently stopped syncing three weeks ago.
The monitoring stack you should build:
- An error-handler workflow in n8n itself that triggers on any workflow failure and sends a notification (email, Slack, Discord, whatever you actually check).
- A health check on the n8n process itself: an external monitoring service like Uptime Kuma (self-hosted) or UptimeRobot (free tier) that pings your instance and alerts you if it's down.
- Log aggregation if you're running at scale: shipping n8n logs to a service where you can search and alert on them.
The monitoring stack most people actually build: nothing, until something breaks in a way that costs them something.
Honest recommendation: at minimum, build the error-handler workflow and set up a basic uptime monitor. This takes an hour and catches the two most common failure modes (workflow error and instance crash). Everything beyond that scales with how critical your automations are to your business.
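A minimal version of the uptime half can be a cron-driven script on a machine other than the n8n host; the URL, mail command, and address are placeholders, and the /healthz endpoint is per n8n's docs at the time of writing:

```shell
#!/bin/sh
# healthcheck.sh: run every few minutes from cron, NOT on the n8n host itself
# (a dead host can't report its own death).
URL="https://n8n.example.com/healthz"
if ! curl -fsS --max-time 10 "$URL" > /dev/null; then
  echo "n8n health check failed: $URL at $(date)" | mail -s "n8n down" you@example.com
fi
```

This catches the instance-crash failure mode; the error-handler workflow inside n8n covers the workflow-error mode, and together they close the two gaps described above.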
The Honest Time Cost
Here's the number that no self-hosting guide publishes: the ongoing hours-per-month cost of running self-hosted n8n.
Light usage, stable workflows: 1-2 hours per month. Mostly checking for updates, reviewing execution logs, occasional debugging. Some months this is zero. Some months a certificate expires or a dependency breaks and it's four hours in a single sitting.
Medium usage, evolving workflows: 3-6 hours per month. You're building new workflows, modifying existing ones, handling failures, updating the instance, managing credentials as API keys rotate. This is where self-hosting starts feeling like a part-time hobby that you're not allowed to quit.
Heavy usage, team environment: 8-15+ hours per month. At this point, someone's job description should include "manages our automation infrastructure." If nobody's job description includes this, what actually happens is that one person becomes the unofficial n8n person, and their other work suffers proportionally.
Multiply those hours by whatever you value your time at. If you make $50/hour and spend 5 hours/month maintaining self-hosted n8n, that's $250/month in time — more than n8n Cloud costs, and comparable to Zapier for moderate usage. The self-hosting cost advantage is only real if: your time cost is low (you enjoy it, or you're learning), your execution volume is high enough that cloud pricing is substantially more, or your data sensitivity requirements prohibit cloud hosting.
When Self-Hosting Makes Sense
Self-hosting n8n is the right choice when several of these conditions are true:
- You have a developer or sysadmin on the team who enjoys (or at least tolerates) infrastructure management
- Your execution volume is high enough that cloud pricing becomes significant — roughly above 5,000-10,000 executions/month
- You handle data that can't leave your infrastructure for compliance, contractual, or regulatory reasons
- You want to run community nodes or custom nodes that aren't available on n8n Cloud
- You want no execution limits and are willing to pay in time instead of money
Self-hosting n8n is the wrong choice when:
- You're a solo non-technical user (use Zapier or Make instead)
- Your time is worth more than the cost difference between self-hosted and cloud
- You don't have a backup strategy and aren't going to build one (you will lose data eventually)
- You want automation to be someone else's problem (that's literally what SaaS is for)
- Your workflows are simple enough that the cheaper cloud tiers cover them
The middle ground: start on n8n Cloud to learn the platform and validate your workflows. If your usage grows to the point where cloud pricing becomes a real line item, migrate to self-hosted with the knowledge of what your actual workflow needs are. Migrating n8n workflows between cloud and self-hosted is supported via JSON export/import. It's not seamless, but it works.
The honest takeaway: self-hosting n8n is not free. The software is free. The hosting, maintenance, security, monitoring, and time are not. The total cost is lower than cloud alternatives for many use cases, but only if you're honest about the time component. "Free" is the price on the GitHub page. The actual cost is whatever your Sunday afternoons are worth.
This is part of CustomClanker's Automation series — reality checks on every major workflow tool.