Setting up a backup solution for my home server

Posted on October 30, 2025

I made myself a promise that I would stop adding more services to my home server until I could come up with a good backup solution. Motivated by the desire to add yet another docker image to my server, I started digging into the various options, and finally found something that I’m content with, at least for now. In this blog post, I’ll briefly summarize my backup strategy for the different VM managers and containers I use.

I use a mixture of VMs, system containers, and fat binaries for hosting. I’ll discuss them separately.

Virtual Machines

I manage my VMs with libvirt. For security reasons, and especially to keep my firewall configuration clean, I only run docker images inside VMs. The more dangerous ones, such as CIs, all have their own dedicated VMs to separate them from my personal data.

Libvirt’s documentation about backup is quite sparse, and I was unable to fully grasp its semantics without referencing a mailing list thread from the original author of the backup functionality. Fortunately, virtnbdbackup, a Python script, provides an intuitive interface on top of the libvirt backup APIs.

Backing up one of my VMs running gentoo is as easy as the following:

sudo virtnbdbackup -d gentoo -l full -o /mnt/gentoo
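
After the first full backup, virtnbdbackup can also take incremental backups into the same target directory. A quick sketch, assuming the same domain and output directory as above:

sudo virtnbdbackup -d gentoo -l inc -o /mnt/gentoo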

System Containers

I use systemd-nspawn and incus to manage system containers. They are quite similar to VMs except that they don’t have their own kernel. Surprisingly, firewalls work fine, although I had trouble accessing logs from the guest. Even AppArmor works (with incus), but building the rules can be tricky: the helper script aa-logprof no longer works, so you have to build the rules manually from the host system’s audit log.
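
For reference, here is roughly how you could pull the relevant AppArmor denials out of the host’s kernel log when writing rules by hand (a minimal sketch; it assumes a systemd host with journald):

sudo journalctl -k --grep=apparmor
# or, without journald:
sudo dmesg | grep -i apparmor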

I much prefer system containers to docker containers as they tend to play more nicely with existing bridges and run as unprivileged processes by default. incus, in particular, has quite a large image repository, which makes it easy to find the most suitable distro whenever a service provides bare-metal installation instructions.

While system containers support other disk formats, both incus and nspawn by default store the guest’s directory tree in the host file system. If the host uses btrfs, it is possible to take snapshots and use utilities such as btrbk to back up to a separate drive that is also formatted as btrfs.
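
For illustration, that snapshot-based route would look roughly like this (a sketch only; the subvolume and mount paths below are made up, not my actual layout):

# take a read-only snapshot of the subvolume holding the guests
sudo btrfs subvolume snapshot -r /var/lib/incus /mnt/snapshots/incus-2025-10-30
# replicate it to a btrfs-formatted backup drive
sudo btrfs send /mnt/snapshots/incus-2025-10-30 | sudo btrfs receive /mnt/backup/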

I ended up going with a simpler solution. Both incus and nspawn support exporting the full directory tree as a tarball. Instead of trying to do anything smart with the backup strategy, I pipe these tarballs to the borg backup utility for battle-tested incremental and encrypted backup.

Here’s how you output the guest system/instance of interest as a tarball to stdout:

# incus
sudo incus export <instance-name> --instance-only --compression=none -q -

# nspawn
sudo importctl --class=machine --format=uncompressed export-tar <instance-name> -

And the output should be fed into borg’s import-tar subcommand:

sudo borg import-tar --compression zstd <repository>::<archive-name> -

Here, repository is an initialized borg repository containing all your previous backups of the specific guest system, and archive-name is a name you choose for this particular backup. I usually just set the archive name to the current date.
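
Putting the pieces together, one backup per guest boils down to a single pipeline along these lines (a sketch; substitute your own instance name and repository):

sudo incus export <instance-name> --instance-only --compression=none -q - \
  | sudo borg import-tar --compression zstd <repository>::$(date +%F) -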

As simple as the logic is, I packed everything up in a small Racket script here with some sensible, hardwired defaults. After all, a suboptimal backup is better than no backup at all.

Fat binaries

Having to update 10+ Debian VMs/containers is a lot of effort, even though you can always automate it.

Security by isolation is one thing people bring up a lot when it comes to containers, but I find the idea analogous to using NAT as a firewall: it can mitigate certain security risks, but it really isn’t intended for security purposes.

Fat binaries pack all the dependencies you need into a single executable blob, which you can run on most mainstream Linux distributions.

I find fat binaries (or any bare-metal installation in general) much easier to audit and write an AppArmor profile for. Pair that with systemd sandboxing (DynamicUser plus the various protection options), and you aren’t really missing out when it comes to security.
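
To give a rough idea of what that sandboxing looks like, a service unit for such a binary could carry directives along these lines (the unit and binary names are made up, and the exact set of options depends on the service):

# example.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/example-server
DynamicUser=yes
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
StateDirectory=example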

I run those fat binaries directly on my host OS. The associated data is thus backed up whenever I run a full-disk backup with borg. To ensure the consistency of the disk state, I always take a btrfs snapshot and then back up from the read-only snapshot.
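
Roughly, that flow looks like the following (a sketch with made-up paths and repository name, not my exact setup):

# snapshot the root subvolume read-only, back it up, then drop the snapshot
sudo btrfs subvolume snapshot -r / /.snapshots/root-backup
sudo borg create --compression zstd <repository>::host-$(date +%F) /.snapshots/root-backup
sudo btrfs subvolume delete /.snapshots/root-backup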