OminousOrange

joined 1 year ago
[–] OminousOrange@lemmy.ca 1 points 4 days ago

It works great with Usenet: it detects the albums you already have, looks for the ones you don't, and has a decent UI for managing it all.

[–] OminousOrange@lemmy.ca 2 points 4 days ago (2 children)

Doesn't Lidarr do a similar thing? Not sure if it operates the same if you don't have the arr part of it going.

[–] OminousOrange@lemmy.ca 5 points 1 week ago

Sad to see the news about tteck. His scripts really helped me get off the ground on my own self hosting journey.

[–] OminousOrange@lemmy.ca 2 points 2 weeks ago

This is quite important with Immich. They're good at documenting their breaking changes, you've just got to check the changelog before updating. It's also best to avoid auto-updating with Watchtower or similar so you don't get surprised.
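
A manual update on a compose-based install is roughly just a couple of commands once you've read the release notes (the directory below is only an example, adjust for your setup):

cd /opt/immich              # wherever your compose file lives (example path)
docker compose pull         # fetch new images for the tags in the compose file
docker compose up -d        # recreate containers on the new images
docker image prune -f       # optional: clean up the old images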

[–] OminousOrange@lemmy.ca 56 points 4 weeks ago (1 children)

"For some reason"? Greed. That is the exact reason.

[–] OminousOrange@lemmy.ca 9 points 1 month ago (1 children)

I'm not sure if they're available with UK plugs, but I've got a pack of Thirdreality Zigbee plugs that monitor energy use and have a button on them to toggle power.

I've got them connected to Home Assistant. Two do a bit of climate control in a coldroom, the others are for occupancy lighting.

[–] OminousOrange@lemmy.ca 6 points 2 months ago

Oh yes, your pay-to-win government duopoly isn't helping anything, but don't call it impossible. The Affordable Care Act was a start, and I don't doubt the right people could make universal healthcare access a real thing in the US.

[–] OminousOrange@lemmy.ca 7 points 2 months ago* (last edited 2 months ago)

Oh, I agree it won't be easy, particularly when taking profits from rich people.

I've heard it likened to a house full of asbestos. Knock it all down and there's likely to be collateral damage, but meticulously taking it apart will take a considerable amount of time. I feel it would be easiest for governments to purchase the insurance companies, then slowly amalgamate so it's all one network open to everyone.

Also it's a bit entertaining when someone opposes it because "it's socialism". It's already socialism, you just have middlemen skimming profit off the top while providing little value.

[–] OminousOrange@lemmy.ca 30 points 2 months ago (10 children)

Hey guys, many other countries have figured out that healthcare doesn't have to be a privatized, for-profit nightmare. Perhaps that's an option worth exploring.

[–] OminousOrange@lemmy.ca 3 points 2 months ago

The asterism gives me big Splinter Cell vibes and I'm definitely OK with that.

[–] OminousOrange@lemmy.ca 7 points 3 months ago
[–] OminousOrange@lemmy.ca 1 points 3 months ago

Unfortunately there isn't really an all-in-one guide. TechnoTim has info on the Pi-hole config side and wildcard certificates, but I think he uses them with Traefik.

NPM is pretty straightforward. If you find a site isn't working, try turning on Websockets support for it.

I'd say just search for guides on each part individually:

  1. Get all the services installed and running.
  2. Get SSL certificates from Cloudflare for your domain.
  3. Set up NPM for the services you want to reverse proxy with your Cloudflare SSL certs (they won't work until the next step is done).
  4. Set up Pi-hole to be your local DNS (there are also adblock lists to add) and configure it to send all service(.lan).mydomain.com requests to the IP of NPM (see the sketch after this list).
  5. Set up the Cloudflare tunnel.
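
For step 4, the wildcard bit is roughly a one-line dnsmasq drop-in on the Pi-hole host (the domain and NPM IP below are placeholders, and depending on your Pi-hole version you may need to enable loading extra dnsmasq configs):

# send every *.mydomain.com lookup to the machine running NPM (IP is an example)
echo 'address=/mydomain.com/192.168.1.50' | sudo tee /etc/dnsmasq.d/05-npm-wildcard.conf
pihole restartdns           # reload Pi-hole's DNS so the wildcard takes effect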

I can try to help if you run into any issues.

14
submitted 7 months ago* (last edited 6 months ago) by OminousOrange@lemmy.ca to c/selfhosted@lemmy.world
 

Fine folks of c/selfhosted, I've got a Docker LXC (Debian) running in Proxmox that loses its local network connection 24 hours after boot. It's remedied with an LXC restart. I am still able to access the console through Proxmox when this happens, but all running services (docker ps still says they're running) are unreachable on the network. Any recommendations for an inexperienced selfhoster like myself to keep this thing up for more than 24 hours?

Tried:

  • Pruning everything from Docker in case it was a remnant of an old container or something.
  • Confirming network config on the router wasn't breaking anything.
  • Checking that there were no cron tasks doing funky things.

I did have a Watchtower container running on it until recently, but have since removed it. The 24-hour pattern got me thinking, since Watchtower is about the only thing that would trigger an event 24 hours after startup, and the problem started around the same time I removed it (I'd switched to manual updates because of Immich).

...and of course, any fix needs 24 hours to confirm it actually worked.
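
In the meantime, a crude watchdog along these lines can at least timestamp exactly when the connection drops (the gateway IP and log path are just placeholders for my network):

# log the route table and a ping to the gateway every 5 minutes
while true; do
  { date; ip r; ping -c 1 -W 2 192.168.1.1; } >> /root/netwatch.log 2>&1
  sleep 300
done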

A forum post I found asked for the output of ip a and ip r, ~~see below.~~ The notable difference is that ip r is missing the link to the gateway after it disconnects.

Update: started going through journalctl and found the abnormal entries below from when it loses connection. Now investigating to see if I can find out why...

Apr 16 14:09:16 docker 922abd47b5c5[376]: [msg] Nameserver 1.1.1.1:53 has failed: request timed out.
Apr 16 14:09:16 docker 922abd47b5c5[376]: [msg] Nameserver 192.168.1.5:53 has failed: request timed out.
Apr 16 14:09:16 docker 922abd47b5c5[376]: [msg] All nameservers have failed
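
For anyone digging through their own logs, narrowing the journal to the window around the drop makes entries like these much easier to spot (the timestamps below match my case):

# only show messages from the few minutes around the failure
journalctl --since "2024-04-16 14:00" --until "2024-04-16 14:15" --no-pager
# and just the networking unit for the current boot
journalctl -u networking.service -b --no-pager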

Update 2: I found using systemctl status networking.service that networking.service was in a failed state (Active: failed (Result: exit-code)). I also compared to a separate stable Docker LXC, which showed networking.service as active, so I did some searching to remedy that.

× networking.service - Raise network interfaces
     Loaded: loaded (/lib/systemd/system/networking.service; enabled; preset: enabled)
     Active: failed (Result: exit-code) since Tue 2024-04-16 17:17:41 CST; 8min ago
       Docs: man:interfaces(5)
    Process: 20892 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
    Process: 21124 ExecStopPost=/usr/bin/touch /run/network/restart-hotplug (code=exited, status=0/SUCCESS)
   Main PID: 20892 (code=exited, status=1/FAILURE)
        CPU: 297ms

Apr 16 17:17:34 docker dhclient[20901]: DHCPACK of 192.168.1.104 from 192.168.1.1
Apr 16 17:17:34 docker ifup[20901]: DHCPACK of 192.168.1.104 from 192.168.1.1
Apr 16 17:17:34 docker ifup[20910]: RTNETLINK answers: File exists
Apr 16 17:17:34 docker dhclient[20901]: bound to 192.168.1.104 -- renewal in 37359 seconds.
Apr 16 17:17:34 docker ifup[20901]: bound to 192.168.1.104 -- renewal in 37359 seconds.
Apr 16 17:17:41 docker ifup[20966]: Could not get a link-local address
Apr 16 17:17:41 docker ifup[20892]: ifup: failed to bring up eth0
Apr 16 17:17:41 docker systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 17:17:41 docker systemd[1]: networking.service: Failed with result 'exit-code'.
Apr 16 17:17:41 docker systemd[1]: Failed to start networking.service - Raise network interfaces.
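
For reference, this is roughly how I'd poke at the failed unit by hand from the Proxmox console (a sketch; note that ifdown will briefly drop whatever network the container still has):

systemctl status networking.service --no-pager
ifdown eth0 && ifup -v eth0          # -v shows where "Could not get a link-local address" comes from
systemctl restart networking.service
systemctl is-active networking.service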

A reinstall of net-tools and ifupdown (apt-get install --reinstall net-tools ifupdown) seems to have brought networking.service back up.

Looking at the systemctl status output, I bet everything was fine until dhclient/ifup tried to renew the lease roughly 24 hours after the initial connection at boot, hit the failed networking.service, and couldn't renew, killing the network connection.
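
If you want to line the failure up with the lease timing, the renew/expire times are in the lease file (Debian default path below, which may differ on your setup):

# when does dhclient plan to renew, rebind, and expire the current lease?
grep -E 'renew|rebind|expire' /var/lib/dhcp/dhclient.eth0.leases
# and the DHCP chatter from the journal over the last day or so
journalctl --since "-25h" --no-pager | grep -i dhcp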

We'll see if it's actually fixed in 24 hours or so, but hopefully this little endeavour can help someone else plagued with this issue in the future. I'm still not sure exactly what caused it. I'll confirm tomorrow...

Update 3 - Looks like that was the culprit. The container is still connected 24+ hrs since reboot, networking.service is still active, and dhclient was able to renew.

Update 4 - All was well and good until I started playing with setting up Traefik. Not sure if that brought it to the surface or if it just happened coincidentally, but networking.service failed again. Tried restarting the service, but it failed. Took a look in /etc/network/interfaces and found there was an entry for iface eth0 inet6 dhcp, and I don't use IPv6. Removed that line and networking.service restarted successfully. Perhaps that was the issue the whole time.
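
For anyone hitting the same thing, the cleanup was roughly this (the exact stanza is from memory, so treat it as a sketch and back up the file first):

cp /etc/network/interfaces /etc/network/interfaces.bak
sed -i '/iface eth0 inet6 dhcp/d' /etc/network/interfaces    # drop the unused IPv6 DHCP stanza
systemctl restart networking.service
systemctl is-active networking.service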
