Homelab Database Decisions: Boring Is Brilliant

Dec 8, 2025 · Derek Armstrong · 9 min read

When you start hacking together a homelab, the database rarely gets top billing—it’s the quiet service you just assume will run forever. Until a drive hiccups, a container restarts, or the “fun” new datastore you tried out eats its own index. Suddenly, the database becomes the most stressful thing in the rack.

By the time you realize the “experimental” piece was a liability, you’re juggling backups, rehydrating volumes, and questioning every design choice you made at midnight. This guide is the antidote: an opinionated walk-through of boring, durable decisions that keep your lab useful without stealing your weekends.

🎯 Key Takeaways

  • Pick per workload, not per hype cycle—define what you’re storing before you install anything.
  • PostgreSQL is your default for multi-user apps, APIs, and anything that might become serious later.
  • SQLite and Redis are supporting actors: one for simplicity, the other for speed boosts—not primary storage.
  • Metrics deserve their own pipeline so Postgres isn’t forced to hoard time-series noise.
  • Backups, storage locality, and boring automation protect you more than any flashy feature ever will.

🧭 Start With the Question That Matters Most

Before you docker-compose a database, answer one question honestly: What am I actually storing? Most homelabs fall into four buckets:

  1. Application backends – self-hosted services (Vaultwarden, Homer, Paperless) that expect a transactional store.
  2. Automation state – workflow engines, runbooks, and “did the task finish?” metadata.
  3. Metrics and telemetry – noisy time-series data from Home Assistant, Zigbee, or Kubernetes.
  4. Prototypes or SaaS ideas – side projects you might ship someday.

Each bucket has a different write pattern, consistency requirement, and retention story. The simplest way to stay sane is to limit yourself to two primary databases and a telemetry stack. Everything else should prove it deserves a slot.

flowchart TD
    A[New Homelab Workload] --> B{Is it user-facing data?}
    B -- Yes --> C{Needs multi-user writes?}
    C -- Yes --> D[PostgreSQL]
    C -- No --> E[SQLite]
    B -- No --> F{Is it telemetry?}
    F -- Yes --> G[Prometheus + Long-term Export]
    F -- No --> H{Is it cache/session/queue?}
    H -- Yes --> I[Redis]
    H -- No --> J[Re-evaluate requirements]

The diagram looks silly until it saves you from installing a bespoke columnar engine because some blog said it was “cloud-native.”

⚖️ The Golden Rule of Homelab Databases

Boring beats clever. Every time. In a homelab you want predictability, documentation, and the ability to restore things after future-you fat-fingers a config. Favor databases with:

  • Stability over theoretical peak throughput – you aren’t Netflix, you’re one person protecting family photos and side projects.
  • Simplicity over novelty – smaller surface area reduces the maintenance tax when life gets busy.
  • Transferable skills – mastering Postgres translates directly to production jobs; wrestling with an obscure time-series engine usually doesn’t.

If a datastore is fragile, exotic, or requires three sidecars just to stay online, it does not belong in your homelab core.

🐘 PostgreSQL: Your Default, Your Workhorse

If an application supports PostgreSQL, default to PostgreSQL. It checks every box: rock-solid transaction guarantees, excellent Docker support, extension ecosystem, and enough headroom to grow from “tiny lab” to “this side project gained users.” A tidy baseline looks like this:

# docker-compose.pg.yml
services:
  postgres:
    image: postgres:17-alpine
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: change-me   # set a real secret before first boot; initdb bakes it in
    volumes:
      # bind mount keeps data on the host, so container rebuilds never touch it
      - type: bind
        source: /srv/data/postgres
        target: /var/lib/postgresql/data
    healthcheck:
      # lets dependent services wait until Postgres actually accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      retries: 5
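
Bring it up and make sure it answers before pointing apps at it:

docker compose -f docker-compose.pg.yml up -d
docker compose -f docker-compose.pg.yml exec postgres pg_isready -U postgres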

Practical habits that keep it low-drama:

  • One cluster, many databases – isolate apps via CREATE DATABASE appname and dedicated users. It’s easier to back up and nuke individually.
  • pgBackRest or pg_dump + WAL-G for automated nightly backups. Store the artifacts in object storage or a cheap NAS share.
  • Extensions you’ll actually use – pg_stat_statements, pg_cron, maybe timescaledb if you truly need hypertables (rare in homelabs).
  • Meaningful resource ceilings – set max_connections, work_mem, and shared_buffers for your hardware so a rogue app can’t starve everything (a quick sketch follows this list).
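
For the resource ceilings, a minimal sketch against the compose file above; the numbers are placeholders to size for your own hardware, not recommendations:

# tune-postgres.sh: hypothetical values for a small box
docker compose -f docker-compose.pg.yml exec postgres \
  psql -U postgres \
    -c "ALTER SYSTEM SET shared_buffers = '1GB';" \
    -c "ALTER SYSTEM SET work_mem = '16MB';" \
    -c "ALTER SYSTEM SET max_connections = 100;"
# shared_buffers and max_connections only take effect after a restart
docker compose -f docker-compose.pg.yml restart postgres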

Real-world workflow: I keep pg_cli scripts pinned in a Git repo. When a new self-hosted service joins the lab, onboarding takes 60 seconds—create role, create database, grant privileges, commit the provision.sql snippet. Predictability beats copy/pasting from docs every time.
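
Here is what that provisioning step can look like; the service name comes from the compose file above, and the app name and password are placeholders rather than the exact script I run:

# provision.sh: onboard one new app, then commit the SQL it ran
APP=linkding
docker compose -f docker-compose.pg.yml exec -T postgres psql -U postgres <<SQL
CREATE ROLE ${APP} LOGIN PASSWORD 'change-me';
CREATE DATABASE ${APP} OWNER ${APP};
GRANT ALL PRIVILEGES ON DATABASE ${APP} TO ${APP};
SQL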

📦 SQLite: The Quiet Overachiever

SQLite is the hero for low-concurrency tools: Linkding, n8n running with a single worker, Obsidian sync services, or any service that mostly reads data. Benefits:

  • Zero extra process – the database is a file. Back it up like any other file.
  • Ridiculously reliable – ACID compliance without a daemon to babysit.
  • Migration-friendly – dump to .sql, import into Postgres later if needed.

Rules of thumb:

  • Keep the database file on SSD or NVMe, not spinning disks over the network.
  • Enable PRAGMA journal_mode=WAL; for better concurrency when a web UI writes occasionally (see the sketch after this list).
  • If you need more than one writer at the exact same moment, it’s time to graduate to Postgres.
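
The WAL pragma and a safe file copy fit in two commands; the path here is made up, so point it at wherever your app keeps its database:

# sqlite-care.sh: DB path is hypothetical
DB=/srv/data/linkding/data.db
sqlite3 "$DB" 'PRAGMA journal_mode=WAL;'                         # the WAL setting persists inside the file
sqlite3 "$DB" ".backup '/srv/backups/linkding-$(date +%F).db'"   # consistent copy even while the app is running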

The payoff is that many services can run inside lightweight containers (or even bare binaries) with no additional maintenance footprint. When you eventually scale them, the data model already fits relational concepts, so migration is painless.

🚦 Redis: Use It as Intended

Redis is your multiplier, not your source of truth. Use it for:

  • Caching API responses or expensive computations.
  • Session storage for stateless web frontends.
  • Queues and background jobs via BullMQ, RQ, Celery, or temporal-like homebrew setups.
  • Rate limiting + temporary state – tokens that expire automatically.

Treat Redis as volatile. The moment you’re tempted to persist anything critical, stop and find a relational database. Snapshots and AOF help, but they don’t replace real durability. I run Redis with maxmemory-policy allkeys-lru so it gracefully forgets data rather than crash-landing when RAM fills.
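
In practice that posture is a one-liner; a sketch of a cache-only Redis, with the memory ceiling as a placeholder:

# redis-cache.sh: bounded memory, evict old keys instead of erroring, no persistence
docker run -d --name redis-cache --restart unless-stopped \
  redis:7-alpine \
  redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru --save '' --appendonly no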

📊 Metrics and Telemetry: Don’t Abuse Postgres

Metrics behave differently from app data—they’re append-only, unbounded, and query patterns are mostly aggregations. Stuffing them into Postgres means bloated tables and vacuum nightmares. Instead:

  • Prometheus scrapes exporters (node exporter, cadvisor, unifi-poller) every 15 seconds.
  • Grafana visualizes the data and keeps alert rules human-friendly.
  • Retention: 15–30 days locally is usually enough. For long-term trending, export rollups to object storage or VictoriaMetrics single-node.

Workflow tip: run Prometheus on dedicated SSD storage and keep its config checked into Git. When you redeploy, you can bootstrap new targets instantly. Meanwhile Postgres stays focused on transactional data and you avoid mixing workloads that want conflicting storage patterns.
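
The retention piece is just a flag on the container; a sketch with hypothetical paths (config in Git, data on a local SSD):

# prometheus.sh: paths and the 30-day window are placeholders
docker run -d --name prometheus --restart unless-stopped -p 9090:9090 \
  -v /srv/config/prometheus:/etc/prometheus \
  -v /srv/data/prometheus:/prometheus \
  prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/prometheus \
  --storage.tsdb.retention.time=30d
# the data directory must be writable by the container's non-root user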

🚫 Databases Worth Using Carefully (or Avoiding)

  • MySQL / MariaDB: completely fine when an app demands it (WordPress, some monitoring stacks). But managing upgrades, GTIDs, and replication takes more effort than Postgres for little upside in a homelab. Use it only when the app requires it.
  • MongoDB: great for document-heavy workloads—if you already do schema design professionally. Otherwise, it turns into a junk drawer of inconsistent documents and surprise migrations.
  • Trendy distributed systems (Scylla, CockroachDB, etc.): fantastic learning tools, terrible defaults for single-node labs. They expect multiple nodes, TLS, gossip protocols…all overhead.

Curiosity is healthy, but run experiments in isolated sandboxes so your day-to-day services don’t depend on beta tech.

💽 Storage Strategy Matters More Than the Database

A perfect schema can’t outrun slow or flaky disks. Follow these principles:

  • Local NVMe for primaries – databases crave low latency. Consumer NVMe drives in a ZFS mirror beat network-attached spinning disks every day (a sketch follows this list).
  • Containers over VMs – less overhead, faster restarts, and simpler backups using bind mounts.
  • Avoid network storage for live volumes unless you have rock-solid 10GbE and tuned NFS. Even then, keep your write-ahead logs local.
  • Backup targets can live on slower storage – sync to NAS, cloud, or even an encrypted USB HDD that you plug in weekly.
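
If you take the ZFS mirror route, the setup is three commands; the device names are assumptions, and the 16K recordsize is a common Postgres-on-ZFS choice rather than a rule:

# zfs-postgres.sh: WARNING, this wipes whatever is on the named drives
zpool create -o ashift=12 fast mirror /dev/nvme0n1 /dev/nvme1n1
zfs create -o recordsize=16K -o compression=lz4 fast/postgres
zfs set mountpoint=/srv/data/postgres fast/postgres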

🔁 Backups Are Not Optional

If backups aren’t automated, they don’t exist. A pragmatic setup (sketched after the list):

  1. Nightly logical dumps – pg_dump to NAS, sqlite3 .dump for file-based stores.
  2. Weekly volume snapshots – use ZFS/BTRFS snapshots or LVM thin snapshots to capture on-disk state.
  3. Off-machine copy – rclone pushes encrypted archives to Backblaze B2 or restic writes to object storage.
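
A hedged version of the nightly piece, assuming the compose file above and a restic repository you have already initialized (repository location and password come from the environment; paths are placeholders):

# nightly-backup.sh: run from cron; expects RESTIC_REPOSITORY and RESTIC_PASSWORD to be set
mkdir -p /srv/backups
docker compose -f docker-compose.pg.yml exec -T postgres \
  pg_dumpall -U postgres | gzip > "/srv/backups/pg-$(date +%F).sql.gz"
restic backup /srv/backups                               # encrypted, off-machine copy
restic forget --keep-daily 14 --keep-weekly 8 --prune    # trim old snapshots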

Test restores at least quarterly; I schedule a “chaos breakfast” once a month: spin up a disposable VM, restore the latest Postgres dump, and run a smoke test. It’s the only way to trust your backups when something fails at 2 a.m.
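
My version of that drill boils down to a few lines, with a throwaway container standing in for the disposable VM; the dump path, database, and table I count are assumptions about my own lab:

# restore-drill.sh: scratch Postgres, newest dump, one smoke-test query, then cleanup
docker run -d --name restore-test -e POSTGRES_PASSWORD=scratch postgres:17-alpine
sleep 10                                                  # crude wait for initdb to finish
gunzip -c "$(ls -t /srv/backups/pg-*.sql.gz | head -1)" | \
  docker exec -i restore-test psql -U postgres
docker exec restore-test psql -U postgres -d notes -c 'SELECT count(*) FROM notes;'
docker rm -f restore-test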

🧱 A Clean, Sensible Homelab Database Stack

If you want to stay sane, orient everything around this minimal core:

  • PostgreSQL for apps, APIs, and anything multi-user.
  • SQLite for lightweight services with single-writer needs.
  • Redis for caching, queues, and ephemeral glue.
  • Prometheus + Grafana for telemetry.
  • Automated backups with off-machine copies so recoveries are boring.

No clusters, no forced HA, no heroics—just services that quietly work.

🛠️ Sample Upgrade Workflow: SQLite to Postgres in 30 Minutes

Real-world scenario: your self-hosted notes app starts getting hammered by multiple automations. SQLite is now a bottleneck.

  1. Freeze writes – enable maintenance mode or stop the app container.
  2. Dump the data – sqlite3 data.db '.dump' > export.sql.
  3. Prep Postgres – create database + user, apply extensions if needed.
  4. Import – psql -d notes -f export.sql (see the sketch after this list).
  5. Update connection strings – environment variables, secrets, or Helm values.
  6. Run migration – start the app, let it check schema, confirm writes succeed.
  7. Archive old file – keep data.db zipped in cold storage for 30 days just in case.
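
Steps 2 through 4 compress into a few commands. A raw SQLite dump usually needs light editing first (PRAGMA lines, AUTOINCREMENT, type names), or a converter such as pgloader can do the translation; everything below is a placeholder sketch against the compose file above:

# migrate-notes.sh: stop the app first; names and paths are hypothetical
sqlite3 /srv/data/notes/data.db '.dump' > export.sql
docker compose -f docker-compose.pg.yml exec -T postgres psql -U postgres \
  -c "CREATE ROLE notes LOGIN PASSWORD 'change-me';" \
  -c "CREATE DATABASE notes OWNER notes;"
docker compose -f docker-compose.pg.yml exec -T postgres \
  psql -U notes -d notes < export.sql      # after cleaning up the SQLite-isms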

Document the process once, store it beside your IaC, and future migrations become routine instead of chaotic fire drills.

🗺️ Decision Cheat Sheet

Workload | Default Choice | Why
Self-hosted auth, automations, dashboards | PostgreSQL | Handles concurrency, transactions, and future growth.
Personal knowledge base, single-user tools | SQLite | Zero maintenance, easy backups.
Job queues, caching layers | Redis | Blazing fast in-memory operations with built-in expiry.
Metrics + logs | Prometheus + Loki (optional) | Purpose-built for time-series; keeps Postgres lean.
Apps demanding MySQL | MariaDB (only for those apps) | Satisfies the requirement; document why it exists.

Print it, tape it to your rack, and stop second-guessing yourself.

🔚 Final Thought

Your homelab should serve your curiosity, not steal your weekends. Choose databases that stay out of your way, map to real-world skills, and scale only as much as you actually need. The smartest labs aren’t the most complex—they’re the ones that still work months after you stopped thinking about them.

Resources

  • PostgreSQL Documentation — canonical tuning, backup, and extension references.
  • pgBackRest — battle-tested backup tooling for Postgres clusters big and small.
  • SQLite Official Docs — everything you need to enable WAL, backups, and pragmas.
  • Redis IO — configuration patterns, persistence modes, and module docs.
  • Prometheus Documentation — scraping, retention, and federation guidance.
  • Grafana Play — gallery of dashboards to copy into your own telemetry stack.
  • Restic — fast, encrypted backups ideal for off-site copies of database dumps.