My go-to for this is a plain Debian or Ubuntu container with Cockpit and the 45Drives file sharing plugin. It’s straightforward to set up and works well.
To amplify RedWeasel’s very good answer: fstab runs as root, so unless you specify otherwise the share will mount with root as the owner on the local machine. From the perspective of the Samba server it’s the Jellyfin user accessing the files, but local permissions on the client come into play as well. That’s why you can get at the files when you connect to the share from Dolphin in your KDE system: there it’s your own user that’s mounting the share.
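For instance, here’s a minimal sketch of an fstab entry that hands ownership to the Jellyfin user (the server name, mount point, and credentials path are assumptions, so adjust to your setup):

```
# /etc/fstab: uid/gid make the jellyfin user own the files on the client side
//fileserver/media  /mnt/media  cifs  credentials=/etc/cifs-creds,uid=jellyfin,gid=jellyfin,iocharset=utf8  0  0
```

With uid= and gid= set, the mounted files show up locally as owned by jellyfin instead of root.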
You can set maintenance schedules in Uptime Kuma and alerts won’t be sent out during those windows. I use that for when my backup routines run each night. That seems like a decent cross-platform workaround.
I administer a handful of FreePBX systems that run pretty smoothly and are relatively friendly to use. Crosstalk Solutions on YouTube has a bunch of videos on the software if you want to get up to speed on how everything works.
Not sure how your stack works together, but sudo will let you run particular commands as a different user, and you can be pretty specific about the privileges. For example, you can have a script that’s only allowed to run docker compose -f /path/to/compose.yml restart containername as a user in the docker group. Maybe there’s some docker-specific approach, but this should work with traditional Unix tools and a little scripting.
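Here’s a rough sketch of what that might look like as a sudoers drop-in (the usernames are hypothetical; edit it with visudo -f so a syntax error doesn’t lock you out):

```
# /etc/sudoers.d/restart-container: let 'monitor' run exactly this one
# command as 'dockeruser' (a member of the docker group), no password needed
monitor ALL=(dockeruser) NOPASSWD: /usr/bin/docker compose -f /path/to/compose.yml restart containername
```

Then the script just calls sudo -u dockeruser docker compose -f /path/to/compose.yml restart containername, and nothing beyond that one command is permitted.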
Cool. That looks right. Have you checked that the bridge is set up properly and that the router doesn’t have anything silly going on for that subnet?
PVE’s network settings are in /etc/network/interfaces and that’s where you can see how the bridge is set up.
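A typical PVE bridge stanza looks something like this (the addresses and NIC name here are assumptions for illustration):

```
# /etc/network/interfaces: vmbr0 bridges the physical NIC for host and guests
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```

If bridge-ports points at the wrong NIC, or the gateway line is missing, those are common culprits for guests that can’t reach out.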
It might be beneficial to know more about your network. Is this the only subnet or do you have a bunch of VLANs? Can other devices on the subnet ping outbound? Have you looked at the firewall on PVE?
This really sounds like a problem with the default route. What’s the output of ip route? That should give us some hints about what’s up.
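For reference, on a healthy single-subnet PVE host it usually looks roughly like this (addresses assumed):

```
$ ip route
default via 192.168.1.1 dev vmbr0
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.10
```

The thing to check is that the default line exists and points at your actual gateway through the bridge.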
Depends on the seller. It’s pretty easy to drop the seller a line and ask for details (and if they’re unwilling to provide them, that could be a red flag). I once had two drives die during burn-in; I try to pick reputable sellers, and that one was quick to replace them.
I see a ton of price fluctuation in used drives. One way I’ve had some success is seeking out drives sold in lots. I’ll also often see SAS drives sell for less than SATA drives of the same size.
My use of Mikrotik is somewhat limited, but in my testing I’ve found routing between VLANs to be pretty performant. The key is to offload that routing to the hardware, which not all configurations allow. Check out the Network Berg’s YouTube channel and you should get a good idea.
I’ve not done much with podman, but my first thought is that port 53 is privileged and usually podman runs as a non-privileged user, right? Do you have some mechanism in place that would allow podman to use port 53?
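If not, one common approach is to lower the kernel’s privileged-port floor. A minimal sketch, assuming you’re OK with a system-wide change that affects all unprivileged users:

```
# Let unprivileged processes (like rootless podman) bind ports >= 53
sudo sysctl net.ipv4.ip_unprivileged_port_start=53
# Persist it across reboots
echo 'net.ipv4.ip_unprivileged_port_start=53' | sudo tee /etc/sysctl.d/99-unprivileged-port-53.conf
```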
There was a recent conversation on the Practical ZFS discourse site about poor disk performance in Proxmox (https://discourse.practicalzfs.com/t/hard-drives-in-zfs-pool-constantly-seeking-every-second/1421/). Not sure if you’re seeing the same thing, but it could be that your VMs are running into the same too-small volblocksize that PVE uses by default when it creates zvols for VMs on ZFS. If that’s the case, the fix is pretty easy. In your PVE datacenter view, go to Storage and create a new ZFS storage pool. Point it at the same zpool/dataset as the one you’ve already got and set the block size to something like 32k or 64k. Once you’ve done that, move the VM’s disk to that new storage pool.
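If you prefer config over the GUI, the resulting entry in /etc/pve/storage.cfg looks roughly like this (the storage ID and pool name are assumptions):

```
# /etc/pve/storage.cfg: second storage entry on the same pool, bigger volblocksize
zfspool: tank-64k
        pool tank/vmdata
        blocksize 64k
        content images,rootdir
```

New zvols created on that storage (including disks you move to it) get the 64k volblocksize; existing zvols keep whatever they were created with, which is why the disk move matters.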
Like I said, not sure if you’re seeing the same issue, but it’s a simple thing to try.