Selfhosted

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

I have a backup server running Proxmox Backup Server and OMV in separate VMs. OMV provides the storage as an NFS share to the Proxmox Backup Server VM.

I have multiple remote servers that connect to the Proxmox Backup Server, but recently I keep having issues with backups: the errors mention a stale file lock (ESTALE).

Is there an alternative to NFS I can use in OMV to provide the storage for the Proxmox Backup Server?

I know there are vastly different ways to configure this, but I have some other things set up with OMV, so I'm kinda stuck with it.
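
For reference, here's roughly how the share is wired up on the PBS side. Hostname, paths, and the datastore name are placeholders, not my actual config:

```
# On the PBS VM: mount the OMV export and register it as a datastore.
# "omv.lan", the export path, and "omv-store" are hypothetical names.
mount -t nfs omv.lan:/export/pbs /mnt/pbs-nfs
proxmox-backup-manager datastore create omv-store /mnt/pbs-nfs
```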

all 10 comments
[–] pyrosis@lemmy.world 4 points 6 months ago (3 children)

What is the underlying filesystem of the Proxmox hypervisor, and how did you pass storage into the OMV VM? Also, is anything else accessing this storage?

I ask because...

The "file lock ESTALE" error in the context of NFS indicates that the file lock has become "stale." This occurs when a process is attempting to access a file that is locked by another process, but the lock information has expired or become invalid. This can happen due to various reasons such as network interruptions, server reboots, or changes in file system state.

[–] brownmustardminion@lemmy.ml 1 points 6 months ago (1 children)

Third time posting this reply due to the Lemmy server upgrade.

Proxmox on bare metal, with one VM running OMV and another running Proxmox Backup Server. Multiple drives are passed through to OMV, and mergerfs pools them together. That pool has two main shared folders: one for a remote Duplicati server that connects via SFTP, and one that is an NFS share for PBS. The PBS VM uses the NFS shared folder as storage. Everything worked until recently, when I started getting ESTALE errors. Duplicati still works fine.
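
Roughly, the layering looks like this in config form (device paths, mountpoints, and the subnet are illustrative, not my real files):

```
# /etc/fstab on the OMV VM -- mergerfs union of the passed-through disks
/srv/dev-disk-1:/srv/dev-disk-2:/srv/dev-disk-3  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0

# /etc/exports -- the shared folder handed to the PBS VM over NFS
/srv/pool/pbs-backups  192.168.1.0/24(rw,sync,no_subtree_check,fsid=1)
```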

[–] pyrosis@lemmy.world 2 points 6 months ago (1 children)

So you mentioned using Proxmox as the underlying system, but when I asked about the Proxmox filesystem I was asking whether you kept the defaults during installation, which would be LVM/ext4, or changed to ZFS as the underlying Proxmox filesystem. It sounds like you have additional drives that you used the Proxmox command line to "pass through" as SCSI devices. Just be aware this is not true passthrough: the device is slightly virtualized, even though the entire storage of the disk is handed to the VM. The only true passthrough without that virtualization layer is PCI passthrough utilizing IOMMU.
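
For illustration, the two flavors look like this from the Proxmox shell (the VM ID and device IDs are made up):

```
# Whole-disk "passthrough": attaches the disk to the VM as a virtual SCSI device
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_example

# Contrast: true PCI passthrough of a disk controller/HBA, which requires IOMMU
qm set 100 -hostpci0 0000:01:00.0
```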

I have some experience with this specifically because of a client doing something similar with a TrueNAS VM. They discovered they couldn't import their pool into another system, because Proxmox had slightly virtualized the disks when they were added to the VM in this manner. In other words, ZFS wasn't directly managing the disks; it was managing virtual disks.

Anyway, it would still help to know the underlying filesystem of the slightly virtualized disks you gave to mergerfs. Are they ext4, XFS, Btrfs? mergerfs is just a union filesystem that unifies storage across multiple mountpoints into a single virtual filesystem, which means you have another couple of layers of complexity in your setup.

If you are worried about disk I/O, you may consider letting the hypervisor manage these disks and their storage more directly, removing some of the filesystem layers.

I would recommend just making a single ZFS pool from these disks within Proxmox to do this. Obviously this is a pretty big transition on a production system. Another option would be creating a Btrfs RAID from these disks within Proxmox and adding that mountpoint as storage to the hypervisor.
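
Sketched out, assuming three spare disks with made-up IDs, that transition would look something like this. Note that the first command destroys whatever is currently on those disks:

```
# Build a ZFS pool directly on the Proxmox host (DESTROYS existing data on the disks)
zpool create tank raidz /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3

# Expose the pool to the Proxmox GUI as VM storage
pvesm add zfspool tank-storage --pool tank
```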

Personally I use ZFS, but Btrfs works well enough. Regardless, this would allow you to just hand storage to VMs from the GUI, and the hypervisor would handle disk I/O much more efficiently.

As for the error, it's typically repaired by unmounting and remounting the share. As I mentioned before, the cause can vary, but it's usually a loss of network connectivity or an inability to lock something that's in use.
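
Something like this on the client, with a placeholder mountpoint:

```
# Typical recovery on the NFS client (the PBS VM); /mnt/pbs-nfs is hypothetical
umount -f -l /mnt/pbs-nfs   # force + lazy unmount of the stale mount
mount /mnt/pbs-nfs          # remount (assumes a matching fstab entry exists)
```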

My advice would be to investigate reducing your storage complexity. It will simplify administration and future transitions.


Reposted to OP, as OP says his comments are being purged.

[–] brownmustardminion@lemmy.ml 2 points 6 months ago

Thanks so much for the detailed reply. I have about 20TB of data on the disks, otherwise I would take your advice and set up a different scheme. Luckily, as it's a backup server, I don't need maximum speed. I set it up with mergerfs and SnapRAID because I'm essentially recycling old drives into this machine, and that setup works pretty well for my situation.

The Proxmox host is the default (ext4/LVM, I believe). The drives are also all ext4. I very recently did a data-drive upgrade, and besides some timestamp discrepancies likely due to rsync, the semi-virtualized SCSI thing wasn't an issue. I replaced the old drive with a larger one, hooked the old one up to a USB dongle and passed it through to OMV, and I was able to transfer everything and get my new data drive hooked back into the mergerfs pool and SnapRAID. I'll do a test and see if I can still access the files directly on the Proxmox host, just for educational purposes.

I'll try re-mounting the NFS share and see where that gets me. I'm also considering switching to a CIFS/SMB share as another commenter suggested, unless that is susceptible to the same ESTALE issue. I won't be back at that location for about a week, so I might not have an update for a little while.

[–] brownmustardminion@lemmy.ml 1 points 6 months ago

Looks like my reply got purged in the server update.

Running Proxmox bare metal. Two VMs: Proxmox Backup Server and OMV. Multiple HDDs passed through directly as SCSI to OMV. In OMV they're combined into a mergerfs pool. Two shared folders on the pool: one dedicated to Proxmox backups and the other for data backups. The Proxmox backup shared folder is an NFS share, and the other shared folder is accessed by a remote Duplicati server via SSH (SFTP?). Within the Proxmox Backup Server VM, the aforementioned NFS share is set up as a storage location.

I have no problems with the Duplicati backups at all. The Proxmox Backup Server was operating fine as well initially, but began throwing the ESTALE error after about a month or two.

Is there a way to fix the ESTALE error and prevent it from recurring?

[–] brownmustardminion@lemmy.ml 1 points 6 months ago* (last edited 6 months ago) (1 children)

Underlying system is running Proxmox. From there I have the two relevant VMs: OMV and Proxmox Backup Server. The hard drives are passed into OMV as SCSI drives; I had to add them from the shell, as the GUI doesn't give the option. Within OMV I have the drives in a mergerfs pool, with a shared folder exported via NFS that is then selected as the storage from within the Proxmox Backup Server VM. OMV has another shared folder that is used by a remote Duplicati server via SSH (SFTP?), but otherwise OMV has no other shared folders or services. Duplicati/OMV have no errors. PBS/OMV worked for a couple of months before the aforementioned error cropped up.

Also possibly relevant: no other processes or services are set up to access the shared folder used by PBS.

[–] N0x0n@lemmy.ml 3 points 6 months ago

Maybe Syncthing could fit your flow? OMV has a Syncthing plugin.

I have no idea if it works or if that's something you would implement, but Syncthing is pretty good :).

I use it to sync my encrypted backups between my devices (even my phone has my server backups). Never had any issues!

[–] Decronym@lemmy.decronym.xyz 2 points 6 months ago* (last edited 6 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters | More Letters
NFS | Network File System, a Unix-based file-sharing protocol known for performance and efficiency
SFTP | Secure File Transfer Protocol for encrypted file transfer, over SSH
SMB | Server Message Block protocol for file and printer sharing; Windows-native
SSH | Secure Shell for remote terminal access

4 acronyms in this thread; the most compressed thread commented on today has 5 acronyms.

[Thread #727 for this sub, first seen 30th Apr 2024, 14:55] [FAQ] [Full list] [Contact] [Source code]

[–] Lifebandit666@feddit.uk 2 points 6 months ago

I use CIFS (aka SMB) in my setup. I have OMV running multiple shares, one of which is a backup folder for Proxmox to use. I pass this through to Proxmox by adding a CIFS share to storage under Datacenter.
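
For reference, this is the shell equivalent of the Datacenter -> Storage -> Add -> SMB/CIFS dialog (server address, share name, and credentials are placeholders):

```
# Add an OMV CIFS/SMB share as backup storage on the Proxmox host
pvesm add cifs omv-backup --server 192.168.1.50 --share proxmox-backup \
    --username backupuser --password 'secret' --content backup
```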