this post was submitted on 08 May 2024

Selfhosted

Whenever I try to run a podman container, it throws:

Error: running container create option: container has joined pod 4f[long_string]b1f and dependency container 34[long_string]9cd is not a member of the pod: invalid argument

An example of a dependent container compose file looks like this:

services:
  # https://docs.linuxserver.io/images/docker-qbittorrent
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    environment:
      - WEBUI_PORT=8090
      - PUID=0
      - PGID=0
    volumes:
      - ./config:/config:Z
      - ./files:/media:z
    restart: always
    depends_on:
      - gluetun
    network_mode: "container:gluetun"
And the gluetun compose file (a separate file):

services:
  # https://github.com/qdm12/gluetun
  gluetun:
    image: docker.io/qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8001:8000 # gluetun
      - 8090:8090 # qbittorrent
    volumes:
      - ./config:/gluetun:Z
    environment:
      - KEYS=REDACTED
    restart: always
    privileged: true

It worked until yesterday, when I updated to Fedora 40. I'm not sure whether that's just a coincidence or the actual cause. Should I downgrade to 39?

[–] swooosh@lemmy.world 1 points 6 months ago (2 children)

I've been running it like that for many months. Why did that break now ...

It works in the same file. I don't want to have 10 containers in the same compose file.

I checked, it works in the same file, thx.

Removing "depends_on: gluetun" does not work either.

[–] Live2day@lemmy.sdf.org 5 points 6 months ago (2 children)

But that's like the whole point of docker compose. If you're only going to have one container per file, what benefit are you even getting out of using a compose file?

[–] ElderWendigo@sh.itjust.works 6 points 6 months ago

Docker compose is just a settings file for a container. It's the same advantage you get from using an ssh config file instead of typing out a user, IP, port, and private key each time. What's the advantage of putting all my containers into one compose file? It's not like I'm running docker commands from the terminal manually to start and stop them unless something goes wrong; I let systemd handle that. And I'd much rather systemd be able to individually start, stop, and monitor containers than have to bring them all down if one fails.
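
For what it's worth, one way to get that per-container systemd management with Podman is a Quadlet unit. A minimal sketch for the gluetun container from the post, assuming a rootless setup (the file path and the `%h/gluetun` volume location are illustrative, not from the thread):

```ini
# ~/.config/containers/systemd/gluetun.container (illustrative path)
# Quadlet generates a systemd service from this at daemon-reload time.
[Unit]
Description=Gluetun VPN container

[Container]
Image=docker.io/qmcgaw/gluetun
ContainerName=gluetun
AddCapability=NET_ADMIN
AddDevice=/dev/net/tun:/dev/net/tun
PublishPort=8001:8000
PublishPort=8090:8090
# Quadlet wants an absolute path; %h expands to the user's home.
Volume=%h/gluetun/config:/gluetun:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, the container can be started and monitored individually with `systemctl --user start gluetun`, which is the per-unit control described above.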

[–] swooosh@lemmy.world 3 points 6 months ago

Immich has multiple services running. Having that within one file is good in my opinion. Or nextcloud with an external database.

[–] lemmyvore@feddit.nl 1 points 6 months ago (1 children)

You're attempting to run the qb container in the gluetun network stack, so you need to give gluetun time to start first. When they're in the same compose file they can be ordered properly, but when they're in different files the ordering isn't enforced.

Check out the full (long-form) options for depends_on and see if you can add a delay and a health check.
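
A sketch of what that long form could look like if both services lived in one compose file, using the images from the post. Note that depends_on only works within a single file, and the healthcheck command here is an assumption — replace it with whatever actually verifies your VPN tunnel is up:

```yaml
services:
  gluetun:
    image: docker.io/qmcgaw/gluetun
    # Illustrative healthcheck; adjust the test command for your setup.
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8000/v1/openvpn/status"]
      interval: 10s
      timeout: 5s
      retries: 5

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:gluetun"
    depends_on:
      gluetun:
        # Long form: wait until the healthcheck passes, not just until
        # the container has been created.
        condition: service_healthy
```

With `condition: service_healthy`, compose delays starting qbittorrent until gluetun reports healthy, instead of starting both as soon as the containers exist.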

[–] swooosh@lemmy.world 1 points 6 months ago

It has worked for months. Thank you, I'll see what I can do.