avidamoeba

[–] avidamoeba@lemmy.ca 1 points 3 months ago* (last edited 3 months ago) (1 children)

Currently duplicity, but rsync took a similar amount of time. The incremental change is typically tens or hundreds of files, hundreds of megabytes total. Those take very little time to transfer.

If I can keep the service up while it's backing up, I don't care much how long it takes. Snapshots really solve this well. Even if I stop the service while creating the snapshot, it's only down for a few seconds. I might even get rid of the stopping altogether, but there's probably little point to that given how short the downtime is. I don't have to fulfill an SLA. 😂
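To illustrate, here's a minimal sketch of that flow in Python, assuming ZFS and a Docker Compose service; the dataset, compose directory and duplicity target are placeholders, not my actual setup. The service is only down for the snapshot itself; the slow transfer happens afterwards from the read-only snapshot.

```python
import subprocess
from datetime import datetime, timezone

# Placeholder names - adjust to your own dataset, compose project and backup target.
DATASET = "tank/appdata"
SNAP_NAME = f"backup-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
COMPOSE_DIR = "/srv/app"
BACKUP_TARGET = "rsync://backuphost//backups/app"


def run(*cmd):
    subprocess.run(cmd, check=True)


# Downtime window: only as long as it takes to create the snapshot (seconds).
run("docker", "compose", "--project-directory", COMPOSE_DIR, "stop")
try:
    run("zfs", "snapshot", f"{DATASET}@{SNAP_NAME}")
finally:
    run("docker", "compose", "--project-directory", COMPOSE_DIR, "start")

# Service is back up; back up from the read-only snapshot at leisure.
# Assumes the dataset is mounted at its default mountpoint (/tank/appdata).
snapshot_path = f"/{DATASET}/.zfs/snapshot/{SNAP_NAME}"
run("duplicity", snapshot_path, BACKUP_TARGET)

# Drop the snapshot once the backup has finished.
run("zfs", "destroy", f"{DATASET}@{SNAP_NAME}")
```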

[–] avidamoeba@lemmy.ca 1 points 3 months ago* (last edited 3 months ago) (2 children)

Yeah, if you're making a backup using the database system itself, then it would make sense for it to do something like that if it stays live while backing up. If you think about it, it's kinda similar to taking a snapshot of the volume where an app's data files are while it still runs. It keeps writing as normal while you copy the data from the snapshot, which is read-only. Of course there's no built-in way to get the newly written data without stopping the process. But you could get the downtime down to a small number. 😄

[–] avidamoeba@lemmy.ca 1 points 3 months ago

Oh interesting. I was under the impression that deleting an LVM snapshot actually did a merge, which took some time, but I guess not. Thanks for the info!

[–] avidamoeba@lemmy.ca 1 points 3 months ago* (last edited 3 months ago) (3 children)

Docker doesn't change the relationship between a running process and its data. At the end of the day you have a process running in memory that opens, reads, writes and closes files that reside on some filesystem. The process must be presented with a valid POSIX environment (or equivalent). What happens to the files when the process is killed instantly, and what happens when it's started afterwards and re-reads them, doesn't change based on where the files reside or where the process runs. You could run it in Docker, in a VM, on Linux, on Unix, or even on Windows. You could store the files in a Docker volume, mount them in, or keep them on NFS; either way they're available to the process via filesystem calls. In the end the effects are limited to the interaction between the process and its data. Docker can't remove that interaction. If it did, the software would break.

[–] avidamoeba@lemmy.ca 3 points 3 months ago* (last edited 3 months ago) (3 children)

It depends on the dataset. If the dataset itself is very large, just walking it to figure out what the incremental part is can take a while on spinning disks. Concrete example: an Immich instance with 600GB of data, hundreds of thousands of files, sitting on a 5-disk RAIDZ2 of 7200 RPM disks. Just walking the directory structure and getting the ctimes takes over an hour. Suboptimal hardware, suboptimal workload. The only way I could think of to speed it up is using ZFS itself to do the backups with send/recv, thus avoiding the file operations altogether. But if I do that, I must use ZFS on the backup machine too.
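To make the send/recv idea concrete, a rough sketch under the assumption that both ends run ZFS; the dataset, snapshot names and backup host below are made up. An incremental send ships only the blocks changed between two snapshots, so nothing ever walks the directory tree or stats individual files.

```python
import subprocess

# Placeholder names: dataset, snapshot labels and remote host are illustrative only.
DATASET = "tank/immich"
PREV_SNAP = f"{DATASET}@backup-prev"   # last snapshot already present on the backup box
NEW_SNAP = f"{DATASET}@backup-new"
REMOTE_HOST = "backuphost"
REMOTE_DATASET = "backup/immich"

# Take the new snapshot (effectively instant, read-only view of the dataset).
subprocess.run(["zfs", "snapshot", NEW_SNAP], check=True)

# Incremental send: only blocks changed between the two snapshots cross the wire,
# so there's no hours-long ctime walk over hundreds of thousands of files.
send = subprocess.Popen(
    ["zfs", "send", "-i", PREV_SNAP, NEW_SNAP],
    stdout=subprocess.PIPE,
)
subprocess.run(
    ["ssh", REMOTE_HOST, "zfs", "recv", "-F", REMOTE_DATASET],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```

The obvious trade-off is the one above: the receiving side has to be a ZFS dataset too.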

I've yet to meet any service that can't recover smoothly from a kill -9 equivalent; any that couldn't sure wouldn't be in my list of stuff I run anymore.

My thoughts precisely.

[–] avidamoeba@lemmy.ca 2 points 3 months ago* (last edited 3 months ago) (4 children)

Thanks for validating my reasoning. And yeah, this isn't Immich-specific; it would be valid for any process and its data.

[–] avidamoeba@lemmy.ca 3 points 3 months ago* (last edited 3 months ago) (2 children)

And this implies you have tested such backups, right?

Side Q, how long do those LVM snapshots take? How long does it take to merge them afterwards?

[–] avidamoeba@lemmy.ca 1 points 3 months ago* (last edited 3 months ago) (6 children)

And I'm using Docker, but Docker isn't helping with the stopping/running during backup conundrum.

[–] avidamoeba@lemmy.ca 1 points 3 months ago* (last edited 3 months ago) (8 children)

Not a VM. Consider the service just a program running on the host OS where either the whole OS or just the service data are sitting on ZFS or LVM.

[–] avidamoeba@lemmy.ca 21 points 3 months ago (1 children)

AMD didn't ship defective CPUs.

[–] avidamoeba@lemmy.ca 1 points 3 months ago

Ooh, that project was not usable back when I tried VFIO. Nice.

[–] avidamoeba@lemmy.ca 1 points 3 months ago* (last edited 3 months ago) (2 children)

Fucking Hackerman. Is there a way to display the VM's output in a window/fullscreen on Linux today? The last time I tried this, I had to have a separate cable from the passed-through (secondary) GPU to another input on my monitor.
