Data Backup Automation: What to Protect and How to Stop Hoping for the Best

An untested backup is a wish. You have a file called "backup" somewhere, maybe on a drive you haven't plugged in since last year, and you're treating it like insurance. It's not insurance. It's a label on a folder that might contain what you need, in a format you might be able to restore, from a date you can't remember. Real backup automation runs on a schedule, stores copies in multiple locations, and gets tested — because the moment you actually need a backup is the worst possible time to discover it doesn't work.

This isn't about paranoia. It's about probability. Servers die. Cloud providers have outages. You will, at some point, accidentally delete something important. The question isn't whether you'll need a backup — it's whether you'll have one that's recent enough and complete enough to matter. For publishers and solopreneurs running Ghost sites, email lists, and automation workflows, the backup surface is specific and the automation is straightforward. The hard part isn't building it. The hard part is actually doing it.

What The Docs Say

The backup strategy that every sysadmin guide recommends is the 3-2-1 rule: three copies of important data, on two different types of storage, with one copy offsite. For a solopreneur, that translates to: your production system (copy one), a cloud storage backup like S3 or Google Drive (copy two, different storage type), and either a local download or a second cloud provider (copy three, offsite relative to the primary). The 3-2-1 rule is decades old, well-tested, and works at every scale from enterprise data centers to a person running three Ghost sites.

Ghost's Admin API provides a content export endpoint that dumps your entire site — posts, pages, tags, settings — as a JSON file. The docs describe it as a complete content backup, and it's accessible via a single API call. For media files (images, uploads), Ghost stores them on the filesystem or in cloud storage depending on your configuration, and those need separate backup handling. The API export covers content and metadata but not the images themselves.
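Here's what that single API call looks like outside of n8n — a minimal stdlib-only Python sketch. The `/ghost/api/admin/db/` endpoint and the short-lived HS256 JWT scheme come from Ghost's Admin API docs; the site URL and the Admin API key (the `id:secret` string from Ghost's integrations screen) are placeholders for your own, and the `/admin/` audience value assumes a current Ghost version.

```python
import base64, hashlib, hmac, json, time
from urllib.request import Request, urlopen

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def ghost_admin_token(admin_api_key: str) -> str:
    """Build a short-lived JWT for the Ghost Admin API.

    The Admin API key is "<id>:<hex secret>"; the token is HS256-signed
    with the hex-decoded secret and carries the key id in the header's
    "kid" field. The "/admin/" audience assumes a recent Ghost version.
    """
    key_id, secret = admin_api_key.split(":")
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT", "kid": key_id}).encode())
    now = int(time.time())
    payload = b64url(json.dumps({"iat": now, "exp": now + 300, "aud": "/admin/"}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(bytes.fromhex(secret), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def export_ghost_content(site_url: str, admin_api_key: str) -> bytes:
    """Fetch the full JSON content export from the /db/ Admin API endpoint."""
    req = Request(
        f"{site_url.rstrip('/')}/ghost/api/admin/db/",
        headers={"Authorization": f"Ghost {ghost_admin_token(admin_api_key)}"},
    )
    with urlopen(req, timeout=60) as resp:
        return resp.read()
```

The token is valid for five minutes, which is plenty — generate a fresh one per backup run rather than caching it.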

n8n's documentation shows scheduled workflows that hit APIs, download the response, and upload it to cloud storage — the generic pattern that applies to Ghost exports, email list exports, and any other API-accessible data. The pattern is: cron trigger fires on schedule, HTTP request node hits the export API, the response gets passed to an S3 or Google Drive upload node, and a notification node confirms success. Straightforward on paper, and the docs make it look like a ten-minute setup.

What Actually Happens

The Ghost JSON export works as documented — with caveats. The export captures posts, pages, tags, authors, and settings. It does not capture images, themes, custom integrations, redirects, or code injection content. If you restore from a Ghost JSON export alone, you get your text content back but every image is broken, your theme is gone, and your custom routes don't exist. A complete Ghost backup requires the JSON export plus the content/images directory plus the active theme files plus your routes.yaml file. The JSON export is the floor, not the ceiling.
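If you want one artifact instead of four loose pieces, a short script can bundle them. This is a sketch assuming a typical self-hosted layout under Ghost's `content/` directory — adjust the paths to your install, and note that `routes.yaml` may not exist on a stock setup.

```python
import tarfile, time
from pathlib import Path

# Typical path for a self-hosted Ghost install; adjust to yours.
GHOST_CONTENT = Path("/var/www/ghost/content")

def bundle_full_backup(export_json: Path, out_dir: Path,
                       ghost_content: Path = GHOST_CONTENT) -> Path:
    """Pack the pieces of a complete Ghost backup into one tarball:
    the JSON export, images, themes, and routes.yaml."""
    stamp = time.strftime("%Y-%m-%d")
    archive = out_dir / f"ghost-backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(export_json, arcname="export.json")
        for piece in ("images", "themes", "settings/routes.yaml"):
            src = ghost_content / piece
            if src.exists():  # routes.yaml is optional on stock installs
                tar.add(src, arcname=piece)
    return archive
```

One dated tarball per week is also easier to test-restore than four separately named files.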

The n8n backup workflow for Ghost takes about an hour to build properly. The scheduled trigger runs weekly — daily is overkill for most publishers, monthly is too risky. The workflow hits the Ghost Admin API export endpoint using an authenticated request, saves the JSON response, and uploads it to an S3 bucket or Google Drive folder with a timestamp in the filename. A Slack or email notification confirms each successful run. The part that takes the most time isn't the happy path — it's the error handling. What happens when the Ghost API is down during the scheduled backup? What happens when your S3 credentials expire? Without error notifications, the backup silently stops running, and you find out when you need it most.

Email list backups are the piece people forget until it's catastrophic. If your email provider suspends your account, migrates to a new platform, or suffers a data loss event — and all of these have happened to real businesses — your subscriber list is gone unless you have an independent copy. Ghost, Kit, and Beehiiv all allow CSV export of subscriber data. Automating a monthly export is the minimum — an n8n workflow that hits the subscriber export endpoint, downloads the CSV, and stores it alongside your Ghost backup. The subscriber list is often the most valuable data asset a publisher has. Treat it accordingly.

The things people consistently forget to back up form their own list, and it's worth going through explicitly. DNS records — if your domain registrar or DNS provider has an issue, you need to know exactly what records existed. Screenshot your DNS dashboard or export the zone file quarterly. API keys and tokens — these live in environment variables, config files, and password managers, but if your server dies and your password manager is the only copy, you're depending on a single point of failure. n8n workflow exports — your automation workflows represent hours of configuration work, and n8n provides a JSON export for each workflow. Back these up with the same automation that backs up everything else. Ghost theme files and code injections — your theme is a git repo (or should be), and your code injection lives in Ghost settings, which the JSON export does capture. But custom theme modifications that aren't in version control will be lost.
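For the n8n piece, the instance's own public REST API can back up every workflow in one pass. A sketch, assuming an API key created under Settings → n8n API; it ignores response pagination for brevity, which is fine for a modest number of workflows.

```python
import json
from pathlib import Path
from urllib.request import Request, urlopen

N8N_URL = "http://localhost:5678"  # your n8n instance
N8N_API_KEY = "replace-me"         # created under Settings -> n8n API

def safe_filename(name: str) -> str:
    """Turn a workflow name into a filesystem-safe filename stem."""
    return "".join(c if c.isalnum() or c in "-_" else "_" for c in name)

def backup_n8n_workflows(dest: Path) -> int:
    """Fetch every workflow definition from n8n's public REST API
    (GET /api/v1/workflows, authenticated via the X-N8N-API-KEY header)
    and write each one to its own JSON file. Returns the count saved.
    Note: ignores pagination -- large instances need the cursor loop."""
    req = Request(f"{N8N_URL}/api/v1/workflows",
                  headers={"X-N8N-API-KEY": N8N_API_KEY})
    with urlopen(req, timeout=30) as resp:
        workflows = json.load(resp)["data"]
    for wf in workflows:
        (dest / f"{safe_filename(wf['name'])}.json").write_text(json.dumps(wf, indent=2))
    return len(workflows)
```

Run this on the same weekly schedule and push the output to the same bucket as everything else — the workflows that do the backing up get backed up too.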

The 3-2-1 rule in practice for a solopreneur looks like this: production server is copy one. Automated weekly export to S3 or Google Drive is copy two. A quarterly manual download to a local external drive is copy three. The cost of the automated piece is pennies — literally. Standard-tier S3 storage for text-based exports (JSON, CSV) runs about $0.023 per GB per month, and your Ghost export plus subscriber lists will be measured in megabytes, not gigabytes. Google Drive gives you 15 GB free. The storage cost is effectively zero. The n8n workflow to automate it is free if you're self-hosting.
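Pennies stay pennies if old archives don't accumulate forever. A keep-the-last-N pruning pass, run after each upload, is all the retention policy most publishers need — a sketch, assuming dated filenames like `ghost-backup-2025-01-06.tar.gz`, which sort chronologically as plain strings.

```python
from pathlib import Path

def prune_old_backups(backup_dir: Path, keep: int = 12) -> list[Path]:
    """Delete all but the newest `keep` backups. ISO-dated filenames
    sort chronologically, so a plain sort is enough. Returns the
    files that were removed."""
    backups = sorted(backup_dir.glob("ghost-backup-*.tar.gz"))
    stale = backups[:-keep] if len(backups) > keep else []
    for f in stale:
        f.unlink()
    return stale
```

Twelve weekly backups is roughly a quarter of history, which pairs naturally with the quarterly test restore. On S3 you can get the same effect with a lifecycle expiration rule instead of code.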

When To Use This

If you run a website that makes money or has an audience, you need automated backups running now — not after you've "gotten around to it." The setup time is one to two hours for the full stack: Ghost content export, subscriber list export, n8n workflow export, all pushing to cloud storage on a weekly schedule with notifications. That's the investment. The return is not losing everything when — not if — something goes wrong.

The weekly Ghost export workflow is the non-negotiable minimum. If you build one backup automation and stop there, make it this one: a cron-triggered n8n workflow that exports your Ghost content via the Admin API, uploads the JSON to S3 or Google Drive, and sends a Slack notification. This covers your highest-value data — your published content — and runs indefinitely once configured. Add the subscriber export and n8n workflow export as a second pass, both on the same weekly schedule, pushed to the same storage destination.

The quarterly test restore is the step that separates backup automation from backup theater. Once every three months, take your most recent Ghost export and restore it to a test instance. Does the content come back correctly? Are posts intact? Are tags and metadata preserved? This is the step everyone skips, and it's the step that determines whether your backup actually works. A backup you've never tested is a hypothesis, not a plan. The test restore takes thirty minutes and either confirms your system works or reveals problems you can fix before they matter.
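A scripted sanity check doesn't replace the quarterly restore, but it catches the ugliest failure — an empty or truncated export — the moment it happens rather than three months later. A sketch, assuming the export's documented shape: a top-level `"db"` array whose `"data"` object holds the content tables.

```python
import json
from pathlib import Path

def sanity_check_export(path: Path) -> dict:
    """Quick pre-restore check on a Ghost JSON export: confirm it parses
    and count the tables you care about. Assumes the export shape
    {"db": [{"data": {"posts": [...], "tags": [...], ...}}]}."""
    doc = json.loads(path.read_text())
    data = doc["db"][0]["data"]
    counts = {table: len(data.get(table, [])) for table in ("posts", "tags", "users")}
    if counts["posts"] == 0:
        raise ValueError(f"{path} contains zero posts -- refusing to trust it")
    return counts
```

Wire the raised error into the same failure notification as the rest of the workflow, and a bad export announces itself the week it happens.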

For operations running multiple Ghost sites, the backup workflow scales cleanly. One n8n workflow with a loop node iterates through your sites, exports each one, and uploads to site-specific folders in your storage bucket. I run this across 15 sites, and the entire weekly backup completes in under five minutes. The storage footprint is negligible. The peace of mind is not.
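The loop itself is trivial; the part worth standardizing is the per-site folder layout, so every site keeps its own dated history in one bucket. A sketch with hypothetical site names and keys — in n8n, the dict below is the loop node's input items.

```python
import time
from pathlib import Path

# Hypothetical site list: slug -> (url, admin API key).
SITES = {
    "example-one": ("https://one.example.com", "ID1:SECRET1"),
    "example-two": ("https://two.example.com", "ID2:SECRET2"),
}

def backup_destination(bucket_root: Path, site_slug: str, date_str: str) -> Path:
    """Site-specific folder plus dated filename keeps each site's
    history separate inside one bucket."""
    return bucket_root / site_slug / f"export-{date_str}.json"

def backup_all(bucket_root: Path, export_fn) -> None:
    """Iterate the site list, export each, write to its own folder.
    export_fn(url, key) -> bytes is whatever single-site export you use."""
    stamp = time.strftime("%Y-%m-%d")
    for slug, (url, key) in SITES.items():
        dest = backup_destination(bucket_root, slug, stamp)
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_bytes(export_fn(url, key))
```

Adding a sixteenth site is one new entry in the dict, not a new workflow.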

When To Skip This

There is no "when to skip this." That's the honest answer. If you have data you'd be upset to lose, you need backups. The question is only how much automation to wrap around the process.

The part you can skip is the elaborate multi-tier backup architecture. You don't need incremental backups, versioned snapshots, or point-in-time recovery for a Ghost blog. A weekly full export to cloud storage is sufficient for most publishers. If your site publishes daily and losing a week's content would be genuinely painful, bump the schedule to daily — but for most operations, weekly captures enough that the worst case is re-creating a few days of work, not starting from zero.

Skip building your own image backup system if your Ghost instance uses cloud storage for images (S3, Cloudflare R2, or similar). The images are already in a durable storage layer with their own redundancy. You only need to back up the content/images directory separately if you're running Ghost with local filesystem storage on a VPS — and even then, a simple rsync cron job to a secondary location covers it without n8n involvement.
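For the local-filesystem case, that rsync job is a few lines. A sketch with hypothetical source and destination paths — the `-a` (archive) and `--delete-after` (mirror removals once the transfer finishes) flags are standard rsync, and the trailing slashes sync the directory's contents rather than the directory itself.

```python
import subprocess

def rsync_images_cmd(src: str = "/var/www/ghost/content/images/",
                     dest: str = "backup-host:/srv/backups/ghost-images/") -> list[str]:
    """Build the rsync invocation: archive mode, delete-after to mirror
    removals, trailing slashes so contents (not the dir itself) sync."""
    return ["rsync", "-a", "--delete-after", src, dest]

def sync_images() -> None:
    """Run the mirror; wire this to cron rather than n8n."""
    subprocess.run(rsync_images_cmd(), check=True)
```

A weekly cron entry calling this (or the equivalent one-line shell command) is the whole image backup system.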

Also skip the urge to over-automate the notification layer. A single Slack message per successful weekly backup is enough. You don't need per-file confirmation, storage utilization reports, or backup health dashboards. The notification exists for one purpose: so that when it stops arriving, you know the backup stopped running. Keep it simple, keep it visible, and actually look at the channel.


This is part of CustomClanker's Automation Recipes series — workflows that actually run.