herrfrutti

joined 2 years ago
[–] herrfrutti@lemmy.world 2 points 4 months ago (1 children)

How are you trying to run podman?

If you just want a setup similar to Docker, I'd recommend this:

https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md


Lingering (running services without login / after logout)

https://github.com/containers/podman/issues/12001

https://unix.stackexchange.com/questions/462845/how-to-apply-lingering-immedeately#462867

sudo loginctl enable-linger <user>
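
To check that it actually took effect, something like this should work (Linger=yes means the user's services keep running after logout):

# verify lingering for the user
loginctl show-user <user> --property=Linger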

https://github.com/containers/podman/blob/main/vendor/github.com/containers/storage/storage.conf

Check out storage.conf to set up the fuse-overlayfs driver.


I like podman-compose, and I have a startup script that restarts all my containers at reboot, as my user.
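
Roughly, a sketch of what such a script can look like; the path is just an example, and podman start --all only restarts containers that already exist:

#!/bin/sh
# /home/<user>/bin/start-containers.sh (example path)
# start all of this user's existing containers after a reboot
podman start --all

Hooked into the user's crontab with an @reboot entry.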


Also, use the full registry path for your images, like docker.io/image, or wherever else you get your images from.
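
For example (docker.io/library/nginx is just a stand-in for whatever you actually run):

# fully qualified image reference, no guessing which registry it comes from
podman pull docker.io/library/nginx:latest

# the short form depends on the search list in registries.conf
# podman pull nginx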


have fun :)

[–] herrfrutti@lemmy.world 3 points 7 months ago

Yes, all users that have containers that should keep running need lingering.

The services do not restart themselves. I have a cronjob that executes podman start --all at reboot for my "podman user".
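
The entry itself is nothing special, roughly this in the podman user's crontab (crontab -e as that user):

# start all of this user's containers after a reboot
@reboot podman start --all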

[–] herrfrutti@lemmy.world 17 points 8 months ago (2 children)

I'm running podman and podman-compose with no problems, and I'm happy. At first I was confused by the UID and GID mapping the containers have, but you'll get used to it.
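
If the mapping is confusing, looking at it directly helps; for example:

# the subordinate UID/GID ranges assigned to your user
cat /etc/subuid /etc/subgid

# the mapping as seen from inside podman's user namespace
podman unshare cat /proc/self/uid_map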

These are some notes I took; please don't take all of them as the one right choice.

Podman-Stuff

https://github.com/containers/podman/blob/main/docs/tutorials/rootless_tutorial.md

storage.conf

To use the fuse-overlayfs driver, the storage must be configured:

.config/containers/storage.conf

[storage]
  driver = "overlay"
  runroot = "/run/user/1000"
  graphroot = "/home/<user>/.local/share/containers/storage"
  [storage.options]
    mount_program = "/usr/bin/fuse-overlayfs"
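
Afterwards you can check that podman really uses it, e.g.:

# should print "overlay"; the store section of "podman info" also shows the mount_program
podman info --format '{{.Store.GraphDriverName}}'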

Lingering (running services without login / after logout)

https://github.com/containers/podman/issues/12001

https://unix.stackexchange.com/questions/462845/how-to-apply-lingering-immedeately#462867

sudo loginctl enable-linger <user>
[–] herrfrutti@lemmy.world 1 points 9 months ago

If you don't want the Nextcloud instance to be public for everyone, then I'd go the Tailscale route without a VPS. Just connect your server and phone.

If you want it to be public, then I'd still use Tailscale and do it like the other comment suggested.

A reverse proxy on a VPS connected to Tailscale proxies the traffic through the tailnet to your server. That's what I'm doing, btw.
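
Very roughly, and with placeholders (cloud.example.com, the tailnet IP 100.64.0.2 and port 8080 are made up), the VPS side can look like this:

# join both the VPS and the home server to the tailnet
sudo tailscale up

# on the VPS: minimal Caddyfile that forwards the public hostname
# through the tailnet to the service on the home server
cat > /etc/caddy/Caddyfile <<'EOF'
cloud.example.com {
    reverse_proxy 100.64.0.2:8080
}
EOF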

[–] herrfrutti@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

I recommend this: https://www.zigbee2mqtt.io/guide/installation/20_zigbee2mqtt-fails-to-start.html#method-1-give-your-user-permissions-on-every-reboot

With that, and by also reading the tip that follows it, I was able to troubleshoot my permission issues.

This should apply to GPUs too.
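
I don't remember the exact commands from that page, but the gist is giving your (rootless) user access to the device node, for example (the device path is just an illustration, yours will differ):

# hand the adapter to your user; per the page title, this has to be redone on every reboot
sudo chown <user> /dev/ttyACM0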

[–] herrfrutti@lemmy.world 1 points 1 year ago (1 children)

But does this matter if you just want it to be locally accessible and you're running your own DNS?

[–] herrfrutti@lemmy.world 5 points 1 year ago (6 children)

You need a wildcard cert for your subdomain:

*.legal.example.com

Then point that record to 127.0.0.0. This will not resolve for anyone. But you'll have an internal DNS entry (using Pi-hole/AdGuard/Unbound) that redirects to your reverse proxy.

You could also point to your reverse proxy's internal address instead of 127.0.0.0.
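
As an illustration of the internal entry (assuming Pi-hole's dnsmasq backend, and 192.168.1.10 as the reverse proxy's LAN address; both are placeholders):

# wildcard rewrite: legal.example.com and all its subdomains resolve to the proxy
echo 'address=/legal.example.com/192.168.1.10' | sudo tee /etc/dnsmasq.d/99-internal.conf
pihole restartdns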

This video could help you: https://www.youtube.com/watch?v=qlcVx-k-02E

[–] herrfrutti@lemmy.world 2 points 1 year ago (2 children)

Sorry, I have no idea how Traefik works, but I've seen that this new video is out. It might help you.

https://youtu.be/n1vOfdz5Nm8

[–] herrfrutti@lemmy.world 1 points 1 year ago

Yes... That is also my understanding.

[–] herrfrutti@lemmy.world 2 points 1 year ago (2 children)

I do. If you run Caddy with network_mode: host, or better with network_mode: "slirp4netns:port_handler=slirp4netns", it should work.

Also add:

cap_add:
      - net_admin
      - net_raw
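
In plain podman run terms (outside of compose) that roughly corresponds to flags like these; the image tag and published ports are only examples:

# rootless Caddy with the slirp4netns port handler and the extra capabilities
podman run -d --name caddy \
  --network slirp4netns:port_handler=slirp4netns \
  --cap-add NET_ADMIN --cap-add NET_RAW \
  -p 80:80 -p 443:443 \
  docker.io/library/caddy:latest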