Edit: Results tabulated, thanks for all y'alls input!

Results fitting within the listed categories

Just do it live

  • Backup while it is expected to be idle @MangoPenguin@lemmy.blahaj.zone @khorak@lemmy.dbzer0.com @dandroid@sh.itjust.works

  • @Darkassassin07@lemmy.ca suggested adding a real long-ass-backup-script to run monthly to limit overall downtime

Shut down all database containers

  • Shut down all containers -> backup @PotatoPotato@lemmy.world

  • Leveraging NixOS impermanence, reboot once a day and backup @thejevans@lemmy.ml

Long-ass backup script

  • Long-ass backup script leveraging a backup method in series @STROHminator@lemmy.world @lemmyvore@feddit.nl

Mythical database live snapshot command

(it seems to be pg_dumpall for Postgres and mysqldump for MySQL, though some MySQL images don't include that command for meeeeee)

  • Dump Postgres via pg_dumpall on a schedule, backup normally on another schedule @RegalPotoo@lemmy.world

  • Dump MySQL via mysqldump and pipe it to restic directly @youRFate@feddit.de

  • Dump Postgres via pg_dumpall -> backup -> delete dump @2xsaiko@discuss.tchncs.de @SteveDinn@lemmy.ca

Docker image that includes Mythical database live snapshot command (Postgres only)

  • Make your own docker image (https://gitlab.com/trubeck/postgres-backup) and set to run on a schedule, includes restic so it backs itself up @Undaunted@discuss.tchncs.de (thanks for uploading your scripts!!)

  • Add docker image prodrigestivill/postgres-backup-local and set to run on a schedule, backup those dumps on another schedule @brewery@lemmy.world @Lem453@lemmy.ca (also recommended additionally backing up the running database and trying that first during a restore)

New categories

Snapshot it (to the database it looks like recovering from a power outage)

  • LVM snapshot -> backup that @butitsnotme@lemmy.world

  • ZFS snapshot -> backup that @ikidd@lemmy.world (real world recovery experience shows that databases act like they're recovering from a power outage and it works)

  • (I assume btrfs snapshot will also work)

One-liner self-contained command for crontab

  • One-liner crontab that prunes to maintain 7 backups, dumps Postgres via pg_dumpall, gzips it, then rclones them @DeltaTangoLima@reddrefuge.com

Turns out Borgmatic has database hooks

  • Borgmatic with its explicit support for databases via hooks (autorestic has hooks but it looks like you have to make database controls yourself) @PastelKeystone@lemmy.world

I've searched long and hard and I haven't really seen a good consensus that made sense. SEO spam is really slowing me down on this one; searches like "restic backup database" get me garbage.

I've got databases in docker containers in LXC containers, but that shouldn't matter (I think).

Meme about containers in containers, using the mental gymnastics meme template; the template is split into two sections, the upper being a simple 3-step gymnastics routine while the bottom has the one being mocked flipping on gymnastics bars, using gymnastics rings, and a balance beam, before finally jetpacking over a burning car. The top says "docker compose up -d" in line with the 3 simple steps of the routine, while the bottom, becoming increasingly more cluttered, says "pass uid/gid to LXC", "add storage devices to LXC", "proxy network", "install docker on every container", and finally "docker compose up -d".


I've seen:

  • Just backup the databases like everything else, they're "transactional" so it's cool
  • Some extra docker image to load in with everything else that shuts down the databases in docker so they can be backed up
  • Shut down all database containers while the backup happens
  • A long ass backup script that shuts down containers, backs them up, and then moves to the next in the script
  • Some mythical mentions of "database should have a command to do a live snapshot, git gud"

None seem turnkey except for the first, but since so many other options exist I have a feeling the first option isn't something you can rest easy with.

I'd like to minimize backup downtime, obviously; what if the backup for whatever reason takes a long time? I'd denial-of-service myself trying to back up my own service.

I'd also like to avoid a "long ass backup script" cause autorestic/borgmatic seem so nice to use. I could, but I'd be sad.

So, what do y'all do to backup docker databases with backup programs like Borg/Restic?

[–] RegalPotoo@lemmy.world 16 points 5 months ago (1 children)

pg_dumpall on a schedule, then restic to back up the dumps. I'm running Zalando Postgres in kubernetes so scheduled tasks and intercontainer networking are a bit simpler, but you should be able to run a sidecar container in your compose file.
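
A minimal sketch of that split, assuming a plain compose setup with a container named postgres rather than the commenter's kubernetes stack (container name, user, and paths are made up):

#!/bin/sh
# dump-postgres.sh - run from cron some time before the restic job
set -eu
DUMP_DIR=/srv/backups/postgres
mkdir -p "$DUMP_DIR"
docker exec postgres pg_dumpall -U postgres > "$DUMP_DIR/all.sql"
# a separate restic job later backs up $DUMP_DIR along with everything else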

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago (2 children)

So you're saying you dump on a sched to a file and then just let your restic backup pick it up asynchronously?

[–] 2xsaiko@discuss.tchncs.de 4 points 5 months ago

My backup service runs pg_dumpall, then borg create, then deletes the dump.
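
A sketch of that sequence, with made-up repo and paths (the borg repo location would normally come from BORG_REPO or the command line):

#!/bin/sh
set -eu
export BORG_REPO=/mnt/backup/borg-repo
DUMP=/srv/backups/postgres-all.sql
# dump every database to a single file
docker exec postgres pg_dumpall -U postgres > "$DUMP"
# archive the docker data plus the fresh dump
borg create --stats ::'{hostname}-{now}' /srv/docker-data "$DUMP"
# the dump now lives inside the borg archive, so the loose file can go
rm "$DUMP"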

[–] RegalPotoo@lemmy.world 2 points 5 months ago

Pretty much - I try and time it so the dumps happen ~an hour before restic runs, but it's not super critical

[–] SteveDinn@lemmy.ca 13 points 5 months ago (1 children)

+1 for long-ass backup script. First dump the databases with the appropriate command. Currently, I have only MariaDB and Postgres instances. Then, I use Borg to back up the database dumps and the docker volumes.

Database SQL dumps compress very well. I haven't had any problems yet.

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago

It's gon b long ass backup script I think!

[–] Lem453@lemmy.ca 8 points 5 months ago* (last edited 5 months ago) (1 children)

I tried to find this on DDG but also had trouble so I dug it out of my docker compose

Use this docker container:

prodrigestivill/postgres-backup-local

(I have one of these for every docker compose stack/app)

It connects to your postgres and uses the pg_dump command on a schedule that you set with retention (choose how many to save)

The output then goes to whatever folder you want.

So I have a main folder called docker-data; this folder is backed up by borgmatic.

Inside I have a folder per app, like authentik.

In that I have folders like data, database, db-bak, etc.

Postgres data would be in database and the output of the above dump would go in the db-bak folder.

So if I need to recover something, the first step is to just copy the whole app folder and see if that works. If not, I can grab a database dump and restore it into the database and see if that works. If not, I can pull a db dump from any of my previous backups until I find one that works.

I don't shut down or stop the app container to back up the database.

In addition to hourly Borg backups for 24 hrs, I have zfs snapshots every 5 mins for an hour and the pgdump happens every hour as well. For a homelab this is probably more than sufficient

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 5 months ago

Thorough, thanks! I see you and some others are using "asynchronous" backups where the databases backup on a schedule and the backup program does its thing on its own time. That might actually be the best way!

[–] Darkassassin07@lemmy.ca 7 points 5 months ago* (last edited 5 months ago) (1 children)

I set up borg around 4 months ago using option 1. I've messed around with it a bit, restoring a few backups, and haven't run into any issues with corrupt/broken databases.

I just used the example script provided by borg, but modified it to include my docker data, and write info to a log file instead of the console.

Daily at midnight, a new backup of around 427 GB of data is taken. At the moment that takes 2-15 minutes to complete, depending on how much data has changed since yesterday, though the initial backup was closer to 45 minutes. Then old backups are trimmed: backups less than 24 hours old are kept, along with 7 dailies, 3 weeklies, and 6 monthlies. Anything outside that scope gets deleted.
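
That retention policy maps onto borg prune flags roughly like this (the repo path is a placeholder):

borg prune --keep-within 24H --keep-daily 7 --keep-weekly 3 --keep-monthly 6 /mnt/backup/borg-repo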

With the compression and de-duplication borg does, the 15 backups I have so far (5.75 TB of data) currently take up 255.74 GB of space. 10/10 would recommend on that aspect alone.

/edit, one note: I'm not backing up Docker volumes directly, though you could just fine. Anything I want backed up lives in a regular folder that's then bind mounted to a docker container (including things like paperless-ngx's databases).

[–] glizzyguzzler@lemmy.blahaj.zone 3 points 5 months ago (1 children)
[–] Darkassassin07@lemmy.ca 2 points 5 months ago (1 children)

I have one more thought for you:

If downtime is your concern, you could always use a mixed approach. Run a daily backup system like I described, somewhat haphazard with everything still running. Then once a month at 4am or whatever, perform a more comprehensive backup, looping through each docker project and shutting them down before running the backup and bringing it all online again.

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 5 months ago

Not a bad idea for a hybrid approach, especially since people seem to say that a backup taken of a running database, with no special shutdown/export effort, is readable most of the time. And the dedupe stats are really impressive.

[–] Undaunted@discuss.tchncs.de 5 points 5 months ago (1 children)

I mostly use postgres so I created myself a small docker image, which has the postgres client, restic and cron. It also gets a small bash script which executes pg_dump and then restic to back up the dump. pg_dump can be used while the database is in use, so no issues there. Restic stores the backup in a volume which points to an NFS share on my NAS. The script is called periodically by cron.
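
A rough sketch of the kind of cron script such an image might run (this is not the author's actual script; the repo path and environment variables are assumptions):

#!/bin/sh
set -eu
# PGHOST/PGUSER/PGPASSWORD/PGDATABASE and RESTIC_PASSWORD are expected in the environment
pg_dump "$PGDATABASE" > /tmp/db.sql
# back the dump up into a restic repo living on an NFS-backed volume
restic -r /backups/restic-repo backup /tmp/db.sql
rm /tmp/db.sql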

I use this image to start a backup-service alongside every database. So it's part of the docker-compose.yml

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago (2 children)

Would you mind pastebin-ing your docker image creator file? I have no experience cooking up my own docker image.

[–] Undaunted@discuss.tchncs.de 3 points 5 months ago (1 children)

I quickly threw together a repository. But please keep in mind that I made some changes to it, to be able to publish it, and it is a combination of 3 different custom solutions that I made for myself. I have not tested it, so use at your own risk :D But if something is broken, just tell me and I try to fix it.

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 5 months ago

Thanks for taking the time to upload the whole thing!! This is pretty cool because it moves the backup work straight into the container with the db

[–] Undaunted@discuss.tchncs.de 2 points 5 months ago

Sure! I'll try to do it today but I can't promise to get to it

[–] MangoPenguin@lemmy.blahaj.zone 5 points 5 months ago (1 children)

I just do the first option.

Everything is pretty much idle at 3am when the backups run, since it's just me using the services. So I don't really expect to have issues with DBs being backed up this way.

[–] khorak@lemmy.dbzer0.com 2 points 5 months ago

This, just pg_dump properly and test the restore against a different container. Bonus points for spinning up a new app instance and checking if it gets along with the restored db.
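
A rough way to do that kind of restore test, assuming a pg_dumpall-style dump file and a throwaway container (image tag and paths are made up):

# start a scratch Postgres, load the dump into it, poke around, then discard it
docker run -d --name restore-test -e POSTGRES_PASSWORD=test -v /srv/backups/postgres:/dumps postgres:16
sleep 10   # give the server a moment to come up
docker exec restore-test psql -U postgres -f /dumps/all.sql postgres
# ...point a scratch app instance at it or inspect it manually, then:
docker rm -f restore-test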

[–] SzethFriendOfNimi@lemmy.world 4 points 5 months ago (1 children)

I guess the trouble is that you don’t want to read the volumes where the db files are because they’re not guaranteed to be consistent at a given point in time right?

Does the given engine support a backup method/utility that can be used to copy files to some volume on a set schedule?

[–] glizzyguzzler@lemmy.blahaj.zone 4 points 5 months ago

As far as I know (unless smarter people know), you need a “long ass backup script” to make your own fun on a set schedule. Autorestic and borgmatic are smooth but don’t seem to have the granularity to deal with it. (Unless smarter people know how to make them do, which I may be fishing for lol)

[–] youRFate@feddit.de 4 points 5 months ago* (last edited 5 months ago) (1 children)

With restic you can pipe to stdin, so I use mysqldump and pipe it to restic:

mysqldump --defaults-file=/root/backup_scripts/.my.cnf --databases db-name | restic backup --stdin --stdin-filename db-name.sql

The .my.cnf looks like this:

[mysqldump]
user=db-user
password="databasepassword"
[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago

Thanks for the incantation! It's looking like something like this is gonna be the way to do it.

[–] STROHminator@lemmy.world 4 points 5 months ago (1 children)

I have also been wanting to try borg, at least for offsite backups. For now I've been sticking with a “long ass backup script”, given how little time I currently have.

[–] lemmyvore@feddit.nl 3 points 5 months ago (1 children)

I've replaced my "long ass script" I was using for rsync with a much shorter one that uses borg. 10/10 would recommend.

Not sure how much time it will save because in both cases the stuff that took the most time was figuring out each tool's voodoo for including/excluding directories from backup.

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago

I'm coming from rsync too, hoping for the same good stuff

[–] ikidd@lemmy.world 4 points 5 months ago (1 children)

Snapshot with zfs, backup snapshot.

[–] glizzyguzzler@lemmy.blahaj.zone 3 points 5 months ago (1 children)

That’s ok for a database that’s running?

Do you use a ZFS backup manager?

[–] ikidd@lemmy.world 2 points 5 months ago (1 children)

While there's probably a better way of doing it via the docker zfs driver, I just make a datastore per stack under the hypervisor, mount the datastore into the docker LXC, and make everything bind mount within that mountpoint, then snapshot and back up via Sanoid to a couple of other ZFS pools, one local and one on zfs.rent.
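
The manual equivalent of what Sanoid/Syncoid automates looks roughly like this (dataset and host names are made up):

# take a point-in-time snapshot of the stack's dataset
zfs snapshot tank/docker/nextcloud@backup-$(date +%Y%m%d-%H%M%S)
# replicate it (and any snapshots the remote is missing) to another pool
syncoid tank/docker/nextcloud root@backup-host:backup/nextcloud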

I've had to restore our mailserver (mysql) and nextcloud (postgres) and they both act as if the power went out, recovering via their own journaling systems. I've not found any inconsistencies on recovery, even when I've done a test restore from a snapshot that was backed up during known hard activity. I trust both databases for their recovery methods, others maybe not so much. But test that for yourself.

[–] dandroid@sh.itjust.works 4 points 5 months ago (1 children)

I guess I'm a dummy, because I never even thought about this. Maybe I got lucky, but when I did restore from a backup, I didn't have any issues. My containerized services came right back up like nothing was wrong. Though that may have been right before I successfully hosted my own (now defunct) Lemmy instance. I can't remember, but I think I only had sqlite databases in my services at the time.

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago

Good to know if I need to just throw the running database into borg/restic there's a chance it'll come out ok! Def not a dummy, I only found out databases may not like being backed up while running through someone mentioning it offhandedly

[–] brewery@lemmy.world 4 points 5 months ago (1 children)

I just started using some docker containers I found on Docker Hub designed for DB backups (e.g. prodrigestivill/postgres-backup-local) to automatically dump from the databases into a set folder, which is included in the restic backup. I know you could come up with scripts but this way, I could easily copy the compose code to other containers with different databases (and different passwords etc).

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago

That is nicely expandable with my docker_compose files, thanks for the find!

[–] PastelKeystone@lemmy.world 3 points 5 months ago (1 children)

Borgmatic is an automation tool for Borg. It has hooks for database backups.

https://torsion.org/borgmatic/docs/how-to/backup-your-databases/
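
A minimal borgmatic config using that hook might look something like this (paths are placeholders; depending on the borgmatic version these options live either at the top level or under location:/hooks: sections, so check the docs above):

# /etc/borgmatic/config.yaml (sketch)
source_directories:
    - /srv/docker-data
repositories:
    - path: /mnt/backup/borg-repo
postgresql_databases:
    - name: all          # dump every database before the archive is created
      hostname: localhost
      username: postgres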

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago (1 children)

Dunno how I missed that in borgmatic, and I see autorestic also has "hooks" but with no database-specific examples. So I can build out what would be in a long ass script just in a long ass borgmatic/autorestic yml!

[–] PastelKeystone@lemmy.world 2 points 5 months ago

Glad I could help. 🙂

[–] thejevans@lemmy.ml 3 points 5 months ago (1 children)

My plan to handle this is to switch my VMs to NixOS, set up NixOS with impermanence using a btrfs or zfs volume that gets backed up and wiped at every startup with another that holds persistent data that also gets backed up, and just reboot once per day.

I'm currently learning how to do impermanence in all the different ways, so this is a long goal, but Nix config + backups should handle everything.

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago

That’s wild and cool - don’t have that architecture now but… next time

[–] DeltaTangoLima@reddrefuge.com 3 points 5 months ago* (last edited 5 months ago) (1 children)

I just have a one-liner in crontab that keeps the last 7 nightly database dumps. That destination location is on one of my NASes, which rclones everything to my secondary NAS and an S3 bucket.

ls -tp /storage/proxmox-data/paperless/backups/*.sql.gz | grep -v '/$' | tail -n +7 | xargs -I {} rm -- {}; docker exec -t paperless-db-1 pg_dumpall -c -U paperless | gzip > /storage/proxmox-data/paperless/backups/paperless_$( date +\%Y\%m\%d )T$( date +\%H\%M\%S ).sql.gz
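
Unpacked into a readable form (same commands; the \% escapes in the original are only needed because it lives in crontab):

# prune: list dumps newest-first and delete everything from the 7th-newest onward,
# so that after tonight's new dump there are 7 again
ls -tp /storage/proxmox-data/paperless/backups/*.sql.gz \
  | grep -v '/$' \
  | tail -n +7 \
  | xargs -I {} rm -- {}

# dump: pg_dumpall inside the paperless db container, gzipped to a timestamped file
docker exec -t paperless-db-1 pg_dumpall -c -U paperless \
  | gzip > /storage/proxmox-data/paperless/backups/paperless_$(date +%Y%m%d)T$(date +%H%M%S).sql.gz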

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 5 months ago

Holy shot thanks for droppin this spell, that's awesome

[–] anzo@programming.dev 3 points 5 months ago

I use the rsnapshot docker image from LinuxServer. The tool uses rsync incrementally and does rotation/pruning for you (e.g. keep 10 days, 5 weeks, 8 months, 100 years). I just pointed it to the PostgreSQL data volume. This runs without interruption of service. To restore, I need to convert from WAL files into a dump: load an empty PostgreSQL container on any snapshot and run the dump command.

[–] PotatoPotato@lemmy.world 2 points 5 months ago

I just shut down the containers before backing up and it has worked totally fine

[–] butitsnotme@lemmy.world 2 points 5 months ago

I use the first option, but with the addition of using an LVM snapshot to guarantee that the database (or anything else in the backup) isn’t changed while taking the backup.
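
A sketch of that approach, assuming a volume group vg0 with the docker data on a logical volume named data (all names and sizes are made up):

#!/bin/sh
set -eu
# create a small copy-on-write snapshot of the volume holding the databases
lvcreate --snapshot --name data-snap --size 5G /dev/vg0/data
mkdir -p /mnt/data-snap
mount -o ro /dev/vg0/data-snap /mnt/data-snap
# back up the frozen view with your usual tool, e.g. restic
restic -r /mnt/backup/restic-repo backup /mnt/data-snap
umount /mnt/data-snap
lvremove -f /dev/vg0/data-snap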

[–] Decronym@lemmy.decronym.xyz 1 points 5 months ago* (last edited 5 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

  • LVM: (Linux) Logical Volume Manager for filesystem mapping
  • LXC: Linux Containers
  • NAS: Network-Attached Storage
  • NFS: Network File System, a Unix-based file-sharing protocol known for performance and efficiency
  • ZFS: Solaris/Linux filesystem focusing on data integrity

