glizzyguzzler


Sad to hear for my quadlet future. Do you remember what was specifically annoying?

[–] glizzyguzzler@lemmy.blahaj.zone 10 points 6 days ago* (last edited 6 days ago) (2 children)

Hey bigdickdonkey, I recently tried and wasn’t able to shit my way through podman; there just isn’t enough chatter and there aren’t enough guides about it. I plan to revisit it when Debian 13 comes out, since that will include podman quadlets. I also tried to get podman quadlets to work on Ubuntu 24 and got closer, but still didn’t manage, and Ubuntu is squicky.

I read about true rootless Docker (running the daemon as a user) and decided it was too finicky to keep up to date; from what I could tell, updating it needs some annoying extra steps. I was also planning on many users having their own containers, which would have gotten annoying to manage. Maybe a single user would be an OK burden.

The podman people make a good argument for running podman as root and using userns to divvy out UIDs to achieve rootless-style isolation https://www.redhat.com/en/blog/rootless-podman-user-namespace-modes but since podman is on the back burner till there’s more community support and Debian 13, I applied that idea to Docker.

So I went with root Docker with the goals of:

  • read only
  • set user to different UID:GID for each container
  • silo containers in individual Docker networks
  • nothing gets /var/run/docker.sock
  • cap_drop: all
  • security-opt=no-new-privileges
  • volumes all get tagged with :rw,noexec,nosuid,nodev,Z

Basically it’s the security best practices from this list: https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html (see the compose sketch below for how they fit together).
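To make that concrete, here's a minimal compose sketch of those goals together. The service name, UID:GID, paths, and network are hypothetical stand-ins; the volume flags are just the ones from my list:

services:
    someapp:
        image: docker.io/library/someapp # hypothetical image
        read_only: true # read-only root filesystem
        user: "3001:3001" # unique UID:GID for this container
        cap_drop:
            - ALL # drop every capability
        security_opt:
            - no-new-privileges:true # block privilege escalation
        networks:
            - someapp_net # siloed per-stack network
        volumes:
            - '/srv/someapp/config:/config:rw,noexec,nosuid,nodev,Z'
        # and nothing ever mounts /var/run/docker.sock

networks:
    someapp_net:
        driver: bridge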

This still carries the risk of the Docker daemon itself being compromised from inside a container somehow, which podman eliminates, but it’s as close to the podman ideal as I can get with my current knowledge.

Most things will run as rootless+read-only+cap_drop with minor messing. Automatic Ripping Machine would not, but that project is a wild ride of required permissions. Everything else has succumbed, though I’ve sometimes needed a “pre-launch container” to do permission changes or make somewhere like /opt writable; there's a sketch of that below.
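A minimal sketch of that pre-launch pattern (names and paths hypothetical): compose holds the real service until the permission fixer exits cleanly.

services:
    fix-perms:
        image: docker.io/library/busybox
        user: "0:0" # the one allowed root, runs once and exits
        command: chown -R 3001:3001 /opt/app
        volumes:
            - '/srv/someapp/data:/opt/app'
    someapp:
        image: docker.io/library/someapp # hypothetical image
        user: "3001:3001"
        depends_on:
            fix-perms:
                condition: service_completed_successfully # wait for the chown to finish
        volumes:
            - '/srv/someapp/data:/opt/app:rw,noexec,nosuid,nodev,Z'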

I would transition one app stack at a time to the best security practices, and it’s easier since you don’t need to change container managers. Hope this helps!

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 4 months ago* (last edited 4 months ago)

In Incus, I had the same setup of an LXC container with a Docker container inside of it. I passed 1000/1000 to the LXC container, but the LXC container’s default root user has an ID map of 0/0. So I had to pass 0/0 to the Docker container, not 1000/1000, to get the read/write permissions working.

That may fix your issue, as it’s basically the same tech, just different automated things implementing the LXC container!
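For reference, the idmap bit looks like this on the Incus host (container name hypothetical):

printf "uid 1000 0\ngid 1000 0" | sudo incus config set mycontainer raw.idmap - # host 1000/1000 -> container 0/0
sudo incus restart mycontainer # idmap changes only apply on restart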

Good to know Proxmox’s bad updates are more pervasive than the latest bad update.

I have been able to install Docker in the LXC containers and pull images in with the normal commands. I do that container-in-container to get effectively rootless Docker containers for stuff that I couldn’t figure out how to run rootless. So you don’t even lose out on Docker if you’re determined! And as you said, Incus goes on any OS, so you can Docker just fine on the base OS of your choice and use Incus for specific things!

Try a different email if you do want one; a friend recently got one via the email signup after waiting a few weeks. But I do absolutely agree it fuckin sucks that you have to go through any of this effort to get one, it just enables scalpers.

I do use it to hold internet-exposed things in LXC containers to sidestep having to figure out how to not run things as Docker root.

You do not need it for everything, but since it’s not an OS that makes it your everything, that’s OK! Run Docker containers as you need, put internet-exposed ones in an LXC container, put Home Assistant in a VM because it’s special.

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 4 months ago (2 children)

Ah, I was wondering which one you updated that made your containers inaccessible!

[–] glizzyguzzler@lemmy.blahaj.zone 2 points 4 months ago (2 children)

You have to sign up for the in-stock notifications; annoying, but it works, in a delayed fashion. Sad that it does enable scalpers.

[–] glizzyguzzler@lemmy.blahaj.zone 1 points 4 months ago (4 children)

Incus or Proxmox (e.g., should I shift to Incus LTS or something?)

[–] glizzyguzzler@lemmy.blahaj.zone 6 points 4 months ago (8 children)

Incus is way easier to work with than Proxmox, and it sits on your OS of choice instead of being the OS you must use. For home use it’s way easier with the web UI, and it even has clustering if you want to go hard.

So you can install Incus when you want a VM/LXC container and not have to commit to a VM/LXC container OS from the start.

Also, Proxmox free just had a bad update that björked some stuff if you updated while it was live. Proxmox free is rolling and apparently lacks basic sanity checks for updates.

[–] glizzyguzzler@lemmy.blahaj.zone 3 points 4 months ago (5 children)

Your budget is really near a UniFi Dream Router: https://store.ui.com/us/en/collections/unifi-dream-router/products/udr. Your family is gonna be way happier with you (zero downtime) and it’ll give you extender options if you ever need them. UniFi is good enough and they update regularly; just disable the cloud access stuff and you’re good.

Otherwise you want OPNsense instead of OpenWrt. The upgrade process for OpenWrt is not automatic, while OPNsense’s is. Worth it to not have to dote on your router.

And you should get an access point (UniFi something or TP-Link Omada-something); wifi is problematic with OpenWrt and I’m not sure OPNsense even lets you do it (haven’t tried).

And you’ll need a switch, dumb or managed; up to you if you want VLANs. The OPNsense box will have just one LAN port, so it requires a switch if you want to plug in more than one thing. A switch with PoE+ can power the access point directly.

OPNsense needs x64 (Intel or AMD CPUs), so get a small thin client like a Dell Wyse 5070 Extended, an HP T730, or that mentioned Fujitsu Futro S720 (its CPU is old tho, you can do better). There may be newer thin clients; you just want a mini PCIe slot so you can install a 2-port Intel gigabit card from eBay. Google "power efficient gigabit mini PCIe card": there’s an older model that sucks power and a newer one that doesn’t. If you go above gigabit, skip 2.5 GbE on Intel unless you google hard, and expect extra power draw. There’s very limited point to 4-port cards: just go for higher single-port speeds instead of multiplexing ports (link aggregation, or whatever it’s called), since switches switch better than the router can, and skipping that removes CPU overhead for more actual routing work. A 2-port card is the way.

Slap Incus (superior but newer, fewer guides; LXD is its previous name if you’re googling stuff) or Proxmox (good enough, more guides) on it, make a VM, pass through the 2 ports of the PCIe card, and slap OPNsense in the VM. Make an LXC container, slap Debian on it, and spin up the UniFi controller for your AP. Another container for AdGuard Home or Pi-hole and you’ve got a box that does the basic nets all in one. The built-in port on the thin client is how you access the underlying OS; it gets plugged into the switch you’ll have to get. If you got something with 2 gigs of RAM and an AMD Geode/GX or aged Intel Atom CPU, I’d just do OPNsense alone, no hypervisor stuff.
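If you go Incus, the VM + passthrough part might look roughly like this; it's only a sketch, and the instance name and PCI addresses are hypothetical (find yours with lspci):

lspci | grep -i ethernet # find the PCI addresses of the card's 2 ports
sudo incus init opnsense --empty --vm -c limits.cpu=2 -c limits.memory=4GiB
sudo incus config device add opnsense wan pci address=0000:01:00.0
sudo incus config device add opnsense lan pci address=0000:01:00.1
# then attach the OPNsense installer ISO as a disk device and boot the VM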

Sorry for the info dump but there’s a lot of angles!

But really, the UniFi Dream Router is much easier and solves it all in one. Otherwise you need 3 pieces (router, wifi access point, Ethernet switch) for a good experience.

It looks like regular PSUs are isolated from mains ground with a transformer. That means two PSUs’ DC grounds will not be connected. That will likely cause problems for you: current will have to back-flow in places that do NOT expect back-flowing current, to account for the voltage difference between the two ground potentials. Hence it might damage the GPU, which is going to be the mediator between these two PSUs (and maybe the mobo if everything goes to shit).

Now, I am not saying this will be safe, but you may avoid that issue by tying the grounds of the two PSUs together. You’d still have the issue where, say, PSU1’s 12V plane meets PSU2’s 12V plane: they’re inevitably not at the same exact voltage, so you get back-flowing current again, which is bad because again nothing is designed for that situation. Kind of like pairing unmatched lithium batteries in parallel: the higher-voltage one will back-charge the other and they’ll explode.

 

Edit: Results tabulated, thanks for all y'alls input!

Results fitting within the listed categories

Just do it live

  • Backup while it is expected to be idle @MangoPenguin@lemmy.blahaj.zone @khorak@lemmy.dbzer0.com @dandroid@sh.itjust.works

  • @Darkassassin07@lemmy.ca suggested adding a real long-ass backup script, run monthly, to limit overall downtime

Shut down all database containers

  • Shut down all containers -> backup @PotatoPotato@lemmy.world

  • Leveraging NixOS impermanence, reboot once a day and back up @thejevans@lemmy.ml

Long-ass backup script

  • Long-ass backup script leveraging a backup method in series @STROHminator@lemmy.world @lemmyvore@feddit.nl

Mythical database live snapshot command

(it seems it's pg_dumpall for Postgres and mysqldump for MySQL (though some MySQL images don't have that command for meeeeee))

  • Dump Postgres via pg_dumpall on a schedule, backup normally on another schedule @RegalPotoo@lemmy.world

  • Dump mysql via mysqldump and pipe to restic directly @youRFate@feddit.de

  • Dump Postgres via pg_dumpall -> backup -> delete dump @2xsaiko@discuss.tchncs.de @SteveDinn@lemmy.ca
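A sketch of what those look like in practice, assuming a container named postgres and a restic repo already configured via environment variables:

docker exec postgres pg_dumpall -U postgres > /tmp/pg_dumpall.sql # dump
restic backup /tmp/pg_dumpall.sql # backup
rm /tmp/pg_dumpall.sql # delete dump
# or pipe straight into restic with no temp file, like the mysqldump suggestion
docker exec postgres pg_dumpall -U postgres | restic backup --stdin --stdin-filename pg_dumpall.sql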

Docker image that includes Mythical database live snapshot command (Postgres only)

  • Make your own docker image (https://gitlab.com/trubeck/postgres-backup) and set to run on a schedule, includes restic so it backs itself up @Undaunted@discuss.tchncs.de (thanks for uploading your scripts!!)

  • Add docker image prodrigestivill/postgres-backup-local and set to run on a schedule, backup those dumps on another schedule @brewery@lemmy.world @Lem453@lemmy.ca (also recommended additionally backing up the running database and trying that first during a restore)

New categories

Snapshot it, seems to act like a power outage to the database

  • LVM snapshot -> backup that @butitsnotme@lemmy.world

  • ZFS snapshot -> backup that @ikidd@lemmy.world (real world recovery experience shows that databases act like they're recovering from a power outage and it works)

  • (I assume btrfs snapshot will also work)
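A sketch of the ZFS flavor (pool/dataset names hypothetical); the snapshot is instant, so the database inside just sees a "power outage" at restore time:

snap="restic-$(date +%F)"
zfs snapshot tank/appdata@"$snap" # crash-consistent point in time
restic backup "/tank/appdata/.zfs/snapshot/$snap" # back up the frozen view via the hidden .zfs dir
zfs destroy "tank/appdata@$snap" # clean up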

One liner self-contained command for crontab

  • One-liner crontab that prunes to maintain 7 backups, dump Postgres via pg_dumpall, zips, then rclone them @DeltaTangoLima@reddrefuge.com
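Roughly what that one-liner could look like (paths, container name, and rclone remote are all hypothetical; note that % must be escaped as \% inside crontab):

0 3 * * * find /backups -name "pg-*.sql.gz" -mtime +6 -delete; docker exec postgres pg_dumpall -U postgres | gzip > /backups/pg-$(date +\%F).sql.gz && rclone sync /backups remote:pg-backups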

Turns out Borgmatic has database hooks

  • Borgmatic with its explicit support for databases via hooks (autorestic has hooks but it looks like you have to make database controls yourself) @PastelKeystone@lemmy.world
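The hook config is an excerpt like this (hostname/username are stand-ins; newer borgmatic versions put these options at the top level instead of under hooks:):

hooks:
    postgresql_databases:
        - name: all # "all" dumps every database via pg_dumpall
          hostname: localhost
          username: postgres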

I've searched long and hard and I haven't really seen a good consensus that made sense. The SEO spam is really slowing me down on this one; searches like "restic backup database" get me garbage.

I've got databases in docker containers in LXC containers, but that shouldn't matter (I think).

A me-me about containers in containers, using the mental gymnastics me-me template. The template is split into two sections: the upper is a simple 3-step gymnastics routine, while the bottom has the one being mocked flipping on gymnastics bars, using gymnastics rings and a balance beam, before finally jetpacking over a burning car. The top says "docker compose up -d" in line with the 3 simple steps of the routine, while the bottom, becoming increasingly more cluttered, says "pass uid/gid to LXC", "add storage devices to LXC", "proxy network", "install docker on every container", and finally "docker compose up -d".


I've seen:

  • Just backup the databases like everything else, they're "transactional" so it's cool
  • Some extra docker image to load in with everything else that shuts down the databases in docker so they can be backed up
  • Shut down all database containers while the backup happens
  • A long ass backup script that shuts down containers, backs them up, and then moves to the next in the script
  • Some mythical mentions of "database should have a command to do a live snapshot, git gud"

None seem turnkey except for the first, but since so many other options exist I have a feeling the first option isn't something you can rest easy with.

I'd like to minimize backup downtime, obviously; what if the backup for whatever reason takes a long time? I'd denial-of-service myself trying to back up my service.

I'd also like to avoid a "long-ass backup script" cause autorestic/borgmatic seem so nice to use. I could, but I'd be sad.

So, what do y'all do to backup docker databases with backup programs like Borg/Restic?

 

[Semi-solved edit]: To answer my question, I was not able to figure out podman. There's just too little community explanation about it for me to pull myself up by my own bootstraps.

So I went for Incus, which has a lot of community explanations (also via searching for LXD), made an Incus container with a macvlan, and put the AdGuard Home docker in that. Ran Docker as "root" inside it and used docker compose, since that way I can rely on the Docker community directly, but the Incus container itself is not root-privileged, so my goal of avoiding rootful is solved.

Anyone finding this via search, the magic sauce I needed to achieve a technically rootless adguardhome docker setup was:

sudo incus profile create gooner # For networking; it doesn't need to be named gooner
sudo incus profile device add gooner eth0 nic nictype=macvlan parent=enp0s10 # Get your version of 'enp0s10' via 'ip addr', macvlan thing won't work with wifi
sudo incus profile set gooner security.nesting=true
sudo incus profile set gooner security.syscalls.intercept.mknod=true
sudo incus profile set gooner security.syscalls.intercept.setxattr=true
# Pause here and make adguardhome instance in the Incus web UI (incus-ui-canonical) with the "gooner" profile
# Make sure all network stuff from docker-compose.yml is deleted
# Put docker-compose.yml in /home/${USER}/server/admin/compose/adguardhome
printf "uid $(id -u) 0\ngid $(id -g) 0" | sudo incus config set adguardhome raw.idmap - # user id -> 0 (root), user group id -> 0 (root) since debian cloud default user is root
sudo incus config device add adguardhome config disk source=/home/${USER}/server/admin/config/adguardhome path=/server/admin/config/adguardhome # These link adguard stuff to the real drive
sudo incus config device add adguardhome compose disk source=/home/${USER}/server/admin/compose/adguardhome path=/server/admin/compose/adguardhome
# !! note that the adguardhome docker-compose.yml must say "/server/admin/config/adguardhome/work" instead of "/home/${USER}/server/admin/config/adguardhome/work" (match the path mounted above)
# Install docker
sudo incus exec adguardhome -- bash -c "sudo apt install -y ca-certificates curl"
sudo incus exec adguardhome -- bash -c "sudo install -m 0755 -d /etc/apt/keyrings"
sudo incus exec adguardhome -- bash -c "sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc"
sudo incus exec adguardhome -- bash -c "sudo chmod a+r /etc/apt/keyrings/docker.asc"
sudo incus exec adguardhome -- bash -c 'echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null'
sudo incus exec adguardhome -- bash -c "sudo apt update"
sudo incus exec adguardhome -- bash -c "sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin"
# Disable port 53 binding
sudo incus exec adguardhome -- bash -c "[ -d /etc/systemd/resolved.conf.d ] || mkdir -p /etc/systemd/resolved.conf.d"
sudo incus exec adguardhome -- bash -c "printf "%s\n%s\n" '[Resolve]' 'DNSStubListener=no' | sudo tee /etc/systemd/resolved.conf.d/10-make-dns-work.conf"
sudo incus exec adguardhome -- bash -c "sudo systemctl restart systemd-resolved"
# Run the docker
sudo incus exec adguardhome -- bash -c "docker compose -f /server/admin/compose/adguardhome/docker-compose.yml up -d"

I'm trying to get rootless podman to run adguard home on Debian 12. I run the docker-compose.yml file via podman-compose up -d.

I get errors that I cannot google successfully, sadly. I do occasionally see shards of people saying things like "I have adguard running with rootless podman" but never any guides. So tantalizing.

I have applied this change so rootless can yoink port 53:

sudo nano /etc/sysctl.conf

net.ipv4.ip_unprivileged_port_start=53 # at the end; required so rootless podman can bind port 53
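Then reload so it applies without a reboot:

sudo sysctl -p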

(Do I even need that change with a macvlan?)

The sticking point seems to be the macvlan. I want a macvlan so I can host a Pi-hole as a redundant fallback on the same server. I error out with:

Error: netavark: Netlink error: No such device (os error 19)

That error really gets me nowhere when searching for it. I am berry sure the ethernet connection is named enp0s10 and spelled right in the docker-compose file, cause I copied and pasted it in.

I tried forcing the backend to "CNI" but probably did it wrong; it complained about:

WARN[0000] Failed to load cached network config: network dockervlan not found in CNI cache, falling back to loading network dockervlan from disk
WARN[0000] 1 error occurred:
        * plugin type="macvlan" failed (delete): cni plugin macvlan failed: Link not found

(I also made a /etc/cni/net.d/90-dockervlan.conflist file for CNI, but it didn't seem to see it and I couldn't work out how to get it to see it)

Both still occur if I pre-make the dockervlan with:

podman network create -d macvlan -o parent=enp0s10 --subnet 10.69.69.0/24 --gateway 10.69.69.1 --ip-range 10.69.69.69/32 dockervlan

And adjust the compose file's networks: section to:

networks:
    dockervlan:
        external: true
        name: dockervlan

Has anyone succeeded at this or done something similar?

docker-compose.yml:

version: '3.9'
# *** NETWORKS ***
networks:
    dockervlan:
        name: dockervlan
        driver: macvlan
        driver_opts:
            parent: enp0s10
        ipam:
            config:
              - type: "host-local"
                dst: "0.0.0.0/0"
                subnet: "10.69.69.0/24"
                rangeStart: "10.69.69.69/32" # This range should include the ipv4_address: in services:
                rangeEnd: "10.69.69.79/32"
                gateway: "10.69.69.1"
# *** SERVICES ***
services:
    adguardhome:
        container_name: adguardhome
        image: docker.io/adguard/adguardhome
        hostname: adguardhome
        restart: unless-stopped
        networks:
            dockervlan:
                ipv4_address: 10.69.69.69 # IP address inside the defined dockervlan range
        volumes:
            - '/home/${USER}/server/configs/adguardhome/work:/opt/adguardhome/work'
            - '/home/${USER}/server/configs/adguardhome/conf:/opt/adguardhome/conf'
            #- '/home/${USER}/server/certs/example.com:/certs' # optional: if you have your own SSL certs
        ports:
            - '53:53/tcp'
            - '53:53/udp'
            - '80:80/tcp'
            - '443:443/tcp'
            - '443:443/udp'
            - '3000:3000/tcp'

podman 4.3.1

podman-compose 1.0.6

Getting a newer podman-compose is pretty easy peasy; idk about getting a newer podman, if that's what's needed to fix this.

 