CumBroth

joined 1 year ago
[–] CumBroth@discuss.tchncs.de 2 points 4 months ago* (last edited 4 months ago) (1 children)

I think you already have a kill-switch (of sorts) in place with the two-Wireguard-container setup, since your clients lose internet access (except to the local network, since there's a separate route for that on the Wireguard "server" container) if any of the following happens:

  • "Client" container is spun down
  • The Wireguard interface inside the "client" container is brought down (you can try this out by running wg-quick down wg0 inside the container)
  • or even if the interface is up but the VPN connection is down (try changing the endpoint IP to a random one instead of the correct one provided by your VPN service provider)

I can't be 100% sure, because I'm not a networking expert, but this seems like enough of a "kill-switch" to me. I'm not sure what you mean by leveraging the restart. One of the things that I found annoying about the Gluetun approach is that I would have to restart every container that depends on its network stack if Gluetun itself got restarted/updated.
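
If you want to double-check that behavior yourself, something like this is what I'd try (just a sketch; I'm assuming your VPN "client" container is named wireguard-client and that you're testing from a device connected through the "server" peer):

# On the Docker host: bring the tunnel down inside the "client" container
docker exec wireguard-client wg-quick down wg0

# From a device connected through the Wireguard "server":
# this should now time out instead of returning your VPN provider's IP
curl --max-time 10 https://ifconfig.me

# Local network access should still work thanks to the separate route
ping -c 3 192.168.1.1   # replace with your actual LAN gateway

# Bring the tunnel back up when you're done
docker exec wireguard-client wg-quick up wg0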

But anyway, I went ahead and messed around on a VPS with the Wireguard + Gluetun approach and I got it working. I'm using the latest versions of the Linuxserver.io Wireguard container and Gluetun at the time of writing. There are two things missing from the Gluetun firewall configuration you posted:

  • A MASQUERADE rule on the tunnel, meaning the tun0 interface.
  • Gluetun is configured to drop all FORWARD packets (filter table) by default, so you'll have to change that chain's policy to ACCEPT. Again, I'm not a networking expert, so I'm not sure whether this compromises the kill-switch in any way that matters for the desired setup/behavior. You could potentially set a more restrictive rule that only allows traffic coming in from <wireguard_container_IP>, but I'll leave that up to you. You'll also need to figure out the best way to persist the rules through container restarts. There's a rough sketch of both ideas right after this list.
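
To give you an idea of what I mean by both of those, here's a rough, untested sketch (<wireguard_container_IP> is your "server" container's address; eth0 and tun0 are the interface names inside my Gluetun container, yours may differ):

# Inside the Gluetun container: instead of switching the FORWARD policy to
# ACCEPT, leave it at DROP and only allow traffic from the Wireguard "server"
# container plus return traffic
iptables -A FORWARD -i eth0 -s <wireguard_container_IP> -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# From the host: the crudest way I can think of to persist the extra rules is
# to re-apply them (e.g. via a small script or systemd unit) whenever Gluetun
# (re)starts
docker exec gluetun iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
docker exec gluetun iptables -A FORWARD -i eth0 -s <wireguard_container_IP> -j ACCEPT
docker exec gluetun iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT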

First, here's the docker compose setup I used:

networks:
  wghomenet:
    name: wghomenet
    ipam:
      config:
        - subnet: 172.22.0.0/24
          gateway: 172.22.0.1

services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
    volumes:
      - ./config:/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=<your stuff here>
      - VPN_TYPE=wireguard
      # - WIREGUARD_PRIVATE_KEY=<your stuff here>
      # - WIREGUARD_PRESHARED_KEY=<your stuff here>
      # - WIREGUARD_ADDRESSES=<your stuff here>
      # - SERVER_COUNTRIES=<your stuff here>
      # Timezone for accurate log times
      - TZ=<your stuff here>
      # Server list updater
      # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
      - UPDATER_PERIOD=24h
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    networks:
      wghomenet:
        ipv4_address: 172.22.0.101

  wireguard-server:
    image: lscr.io/linuxserver/wireguard
    container_name: wireguard-server
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1001
      - TZ=<your stuff here>
      - INTERNAL_SUBNET=10.13.13.0
      - PEERS=chromebook
    volumes:
      - ./config/wg-server:/config
      - /lib/modules:/lib/modules #optional
    restart: always
    ports:
      - 51820:51820/udp
    networks:
      wghomenet:
        ipv4_address: 172.22.0.5
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1

You already have your "server" container properly configured. Now for Gluetun: I exec into the container with docker exec -it gluetun sh. Then I set the MASQUERADE rule on the tunnel: iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE. And finally, I change the FORWARD chain policy in the filter table to ACCEPT: iptables -t filter -P FORWARD ACCEPT.

Note on the last command: in my case I used iptables-legacy because all the existing rules were defined there (iptables warns you if that's the case), but your container's iptables version may vary. I saw different behavior on the testing container I spun up on the VPS compared to the one running on my homelab.
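
If you're not sure which backend your container is using, a quick check (sketch) is to dump the rules from both and see where Gluetun's chains actually live:

docker exec gluetun iptables -S
docker exec gluetun iptables-legacy -S
# Whichever of the two lists Gluetun's existing INPUT/OUTPUT rules (and doesn't
# warn you about rules living in the other backend) is the one to add the new
# rules with.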

Good luck, and let me know if you run into any issues!

EDIT: The rules look like this afterwards:

Output of iptables-legacy -vL -t filter:

Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
10710  788K ACCEPT     all  --  lo     any     anywhere             anywhere
16698   14M ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
    1    40 ACCEPT     all  --  eth0   any     anywhere             172.22.0.0/24

# note the ACCEPT policy here
Chain FORWARD (policy ACCEPT 3593 packets, 1681K bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
10710  788K ACCEPT     all  --  any    lo      anywhere             anywhere
13394 1518K ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  any    eth0    dac4b9c06987         172.22.0.0/24
    1   176 ACCEPT     udp  --  any    eth0    anywhere             connected-by.global-layer.com  udp dpt:1637
  916 55072 ACCEPT     all  --  any    tun0    anywhere             anywhere

And the output of iptables -vL -t nat:

Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER_OUTPUT  all  --  any    any     anywhere             127.0.0.11

# note the MASQUERADE rule here
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER_POSTROUTING  all  --  any    any     anywhere             127.0.0.11
  312 18936 MASQUERADE  all  --  any    tun+    anywhere             anywhere

Chain DOCKER_OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  any    any     anywhere             127.0.0.11           tcp dpt:domain to:127.0.0.11:39905
    0     0 DNAT       udp  --  any    any     anywhere             127.0.0.11           udp dpt:domain to:127.0.0.11:56734

Chain DOCKER_POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 SNAT       tcp  --  any    any     127.0.0.11           anywhere             tcp spt:39905 to::53
    0     0 SNAT       udp  --  any    any     127.0.0.11           anywhere             udp spt:56734 to::53

[–] CumBroth@discuss.tchncs.de 2 points 4 months ago* (last edited 4 months ago) (3 children)

Gluetun likely doesn't have the proper firewall rules in place to enable this sort of traffic routing, simply because it's made for another use case (using the container's network stack directly with network_mode: "service:gluetun").

Try to get this setup working with two vanilla Wireguard containers first (instead of Wireguard + Gluetun). If that works, you'll know your Wireguard "server" container is set up properly. Then replace the second container (the one acting as the VPN client) with Gluetun and run tcpdump again. You likely need to add a POSTROUTING MASQUERADE rule to the NAT table.
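
For the tcpdump part, this is roughly what I'd run on the Docker host (a sketch; the bridge name and container IPs depend on your setup):

# Find the bridge interface behind the shared docker network; the bridge is
# usually named br-<first 12 characters of the network ID>
docker network inspect <your_docker_network> --format '{{.Id}}'

# Watch traffic between the Wireguard "server" container and the VPN client container
tcpdump -ni br-XXXXXXXXXXXX host <wireguard_server_container_IP> and host <vpn_client_container_IP>

# If packets from your peers show up here but nothing ever comes back, the
# missing MASQUERADE rule on the VPN client side is the usual suspect.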

Here's my own working setup for reference.

Wireguard "server" container:

[Interface]
Address = <address>
ListenPort = 51820
PrivateKey = <privateKey>
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostUp = wg set wg0 fwmark 51820
PostUp = ip -4 route add 0.0.0.0/0 via 172.22.0.101 table 51820
PostUp = ip -4 rule add not fwmark 51820 table 51820
PostUp = ip -4 rule add table main suppress_prefixlength 0
PostUp = ip route add 192.168.16.0/24 via 172.22.0.1
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del 192.168.16.0/24 via 172.22.0.1

#peer configurations (clients) go here

and the Wireguard VPN client that I route traffic through:

# Based on my VPN provider's configuration + additional firewall rules to route traffic correctly
[Interface]
PrivateKey = <key>
Address = <address>
DNS = 192.168.16.81 # local Adguard
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE #Route traffic coming in from outside the container (host/other container)
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE

[Peer]
PublicKey = <key>
AllowedIPs = 0.0.0.0/0
Endpoint = <endpoint_IP>:51820

Note the NAT MASQUERADE rule.
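
If you want to confirm the masquerading is actually happening, one quick check (assuming the client container is called wireguard-client) is to watch the packet counters on that rule while a peer generates traffic:

docker exec wireguard-client iptables -t nat -vL POSTROUTING
# The pkts/bytes counters on the MASQUERADE line should keep climbing while a
# device connected to the "server" container browses through the VPN.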

[–] CumBroth@discuss.tchncs.de 15 points 6 months ago (1 children)

Is it a bird?

Is it a plane?

I actually can't tell.

[–] CumBroth@discuss.tchncs.de 1 points 10 months ago (1 children)

I set it up manually using this as a guide. It was a lot of work because I had to adapt it to my use case (not using a VPS), so I couldn't just follow the guide, but I learned a lot in the process and it works well.

[–] CumBroth@discuss.tchncs.de 2 points 10 months ago

I've tried both this and https://github.com/jmorganca/ollama. I liked the latter a lot more; just can't remember why.

GUI for ollama is a separate project: https://github.com/ollama-webui/ollama-webui

[–] CumBroth@discuss.tchncs.de 3 points 11 months ago (2 children)

The more recent installment, Bannerlord, had caught my attention, but a lot of people were saying it was unfinished and that the devs weren't updating the game to deliver what was promised, instead shipping minor hotfixes that even broke the mods trying to address the game's shortcomings. A lot of the complaints compared it to the first installment in the series and recommended trying that one instead, especially since it had a thriving mod scene and was more fleshed out overall. I tried it out, but it just felt too dated for my taste; I couldn't get into it.

Maybe I would've gotten into it had I given it more time. I just felt pressured to quickly make a decision on whether to refund it after I had wasted more than 3 hours of my "trial" sitting in the main menu.

[–] CumBroth@discuss.tchncs.de 9 points 11 months ago (4 children)

I once got a refund after 5 hours. I opened the game, left it running at the main menu, then went to make lunch and completely forgot about it. Wasted probably about 3.5 hours in the menu. When I asked for a refund, I didn't even explain that I'd left it open in the main menu; I just pointed out why I didn't like it and why I wanted a refund. The game in question was Mount and Blade, store country was Germany, and I submitted the refund request on the same day I bought it.

[–] CumBroth@discuss.tchncs.de 2 points 11 months ago (1 children)

Wrong. They proved that they could no longer be trusted after the release of Fallout 76.

[–] CumBroth@discuss.tchncs.de 4 points 1 year ago (1 children)

😋 😋 😋 😋 😋 😋 😋 😋 😋 😋 😋 😋

cum broth

Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3565436420, Size: 512x512, Model hash: ec41bd2a82, Model: photon_v1, Version: v1.4.0