talkingpumpkin

joined 1 year ago
[–] talkingpumpkin@lemmy.world 1 points 2 weeks ago

For that kind of issue I'd recommend snapshots instead of backups

 

Prometheus Alertmanager and Grafana (especially Grafana!) seem a bit too involved for monitoring my homelab (Prometheus itself is fine: it does collect a lot of statistics I don't care about, but it doesn't require configuration, so it doesn't bother me).

Do you know of simpler alternatives?

My goals are relatively simple:

  1. get a notification when any systemd service fails
  2. get a notification if there is not much space left on a disk
  3. get a notification if one of the above can't be determined (eg. server down, config error, ...)

Seeing graphs with basic system metrics (eg. cpu/ram usage) would be nice, but it's not super-important.

I am a dev, so writing a script that checks for whatever I need is way simpler than learning/writing/testing YAML configuration (in fact, I was about to write a script to send heartbeats to something like Uptime Kuma or Tianji before I thought of asking you for a nicer solution).
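
For the record, the kind of script I had in mind is roughly this (an untested sketch; the push URL and the 90% threshold are placeholders for whatever your Uptime Kuma push monitor and taste dictate):

#!/bin/sh
# report "up" to an Uptime Kuma push monitor only when nothing is wrong;
# if the script (or the whole box) dies, the heartbeat stops and Kuma alerts
PUSH_URL="https://uptime.example.org/api/push/XXXXXXXX"    # placeholder

failed="$(systemctl --failed --no-legend | wc -l)"         # goal 1: failed units
full="$(df --output=pcent,target -x tmpfs -x devtmpfs \
        | awk 'NR>1 && $1+0 > 90 {print $2}')"             # goal 2: disks over 90% full

if [ "$failed" -eq 0 ] && [ -z "$full" ]; then
    curl -fsS "$PUSH_URL?status=up&msg=OK" >/dev/null      # goal 3: a missed heartbeat means "check the server"
fi

Run it from cron or a systemd timer every few minutes and let the push monitor's missed-heartbeat alert cover the "server down / config error" case.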

[–] talkingpumpkin@lemmy.world 3 points 1 month ago

2 more cents :)

I've been using syncthing for a while now, on different devices, and the only unreliability I've run into is Android killing syncthing to save battery, which is kinda hilarious considering all the vendor- and Google-provided crap they happily waste battery on (I don't use it, but from what I've heard iOS is even worse in this regard).

Specifically, I have a Samsung tablet where, no matter how much I tinkered with system settings, syncthing would only run if I manually launched the app or while the tablet was charging (BTW I still use that same tablet, but it now runs LineageOS and syncthing works flawlessly).

All this is to say, you should probably look into system settings and research ways to convince your OS to do what it's supposed to rather than tinkering with syncthing itself.

[–] talkingpumpkin@lemmy.world 1 points 2 months ago

I fear it was nothing that entertaining: it was just my "normal" dark panel at the top of the screen and a second "default" white one at the bottom (this last one partially covered the windows I had open). I didn't try triggering notifications or otherwise causing some kind of mayhem.

[–] talkingpumpkin@lemmy.world 3 points 2 months ago (1 children)

I'm just messing around with testing/configuring different desktop environments/window managers and I'm looking for a quick way to preview them (running the new session as my user would be fine too - I just thought it would be simpler as a different user)

[–] talkingpumpkin@lemmy.world 6 points 2 months ago (3 children)

Wow, that's so neat!

On my machine it opens a fullscreen Plasma splash screen and then shows the new session intermixed/overlaid with my current one instead of in a new window... basically, it's a mess :D

If I may abuse your patience:

  • what distro/plasma version are you running? (here it's opensuse slowroll w/ plasma 6.1.4)
  • what happens if you just run startplasma-wayland from a terminal as your user? (I see the plasma splash screen and then I'm back to my old session)
 

I'm not very hopeful, but... just in case :)

I would like to be able to start a second session in a window of my current one (I mean a second session where I log in as a different user, similar to what happens with the various ctrl+alt+Fx, but starting a graphical session rather than a console one).

Do you know of some software that lets me do it?
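
For reference, the kind of thing I mean is roughly what you get by nesting an X server with Xephyr (a sketch only: "otheruser" is a placeholder and I have no idea how happy Plasma 6 would be inside it):

Xephyr :2 -ac -screen 1920x1080 &   # nested X server in a window (-ac disables access control, only ok for local testing)
sudo -u otheruser env DISPLAY=:2 dbus-run-session startplasma-x11

But I'd like the equivalent of a full login for the other user, ideally something that also works for Wayland sessions.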

Can I somehow run a KVM using my host disk as a the disk for the guest VM (and without breaking stuff)?

[–] talkingpumpkin@lemmy.world 1 points 2 months ago

Personally, I would sell everything and get a used PC on ebay (a small "minipc" one, unless space for hard disks is needed).

Take a look at what you could buy on ebay just by selling off the nvidia card.

[–] talkingpumpkin@lemmy.world 2 points 3 months ago

why is your network like this?

Well, at the moment my network is actually flat :)

This is an experiment I'm doing because I wanted to have all the management stuff on a different subnet (eg. adguard dns is on the "regular" subnet everyone uses, but its web interface is on the special subnet only select devices can talk to).

Of course (like with most stuff in my homelab), it's not like I really have a super-compelling security reason to do that; it's mostly that I wondered "what if?" :D

Oh, the ping option you are referring to is -I (upper case) and takes either an interface name or an IP. I did try giving a .10/24 IP to the PC and the results were consistent with scenario 1 (pings where source and destination are on the same subnet work, pings across subnets don't), so I didn't mention that in the OP.
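
(For anyone reading later, the two forms look like this; the address and interface name are just examples:)

ping -I eth1 -c 3 192.168.10.102            # -I with an interface name sends via that interface
ping -I 192.168.10.55 -c 3 192.168.10.102   # -I with an address forces that source IP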

[–] talkingpumpkin@lemmy.world 1 points 3 months ago

I don't think I quite explained the situation well enough: my server only has one ethernet port (same as my PC), otherwise I wouldn't have bothered with VLANs (well, I would still have bothered, since my house only has one "backbone" cable running through it, but I would have configured them on the switches only).

Anyway... a few of the things you say/imply go against my understanding of networking, so one of us had better go back and RTFM as you suggest :) (just kidding - most probably I just don't understand what you mean)

[–] talkingpumpkin@lemmy.world 1 points 3 months ago (1 children)

Thanks! Forwarding is disabled. I don't want the server to steal the router's job :)

[–] talkingpumpkin@lemmy.world 2 points 3 months ago* (last edited 3 months ago) (1 children)

So the requests go through but the replies are discarded? That could actually be it!

I think there was an option to allow that... I'll look it up and give it a try. Thanks!

[–] talkingpumpkin@lemmy.world 2 points 3 months ago

I tried dropping the default routes (one at a time) and it doesn't make a difference, which isn't (I think) surprising as all traffic is local as far as the server in scenario 1 is concerned. Also IIUC only the default gateway with the lowest metric actually counts.

 

I have two subnets and am experiencing some pretty weird (to me) behaviour - could you help me understand what's going on?


Scenario 1

PC:                        192.168.11.101/24
Server: 192.168.10.102/24, 192.168.11.102/24

From my PC I can connect to .11.102, but not to .10.102:

ping -c 10 192.168.11.102 # works fine
ping -c 10 192.168.10.102 # 100% packet loss

Scenario 2

Now, if I disable .11.102 on the server (ip link set <dev> down) so that it only has an ip on the .10 subnet, the previously failing ping works fine.

PC:                        192.168.11.101/24
Server: 192.168.10.102/24

From my PC:

ping -c 10 192.168.10.102 # now works fine

This is baffling to me... any idea why it might be?


Here's some additional information:

  • The two subnets are on different vlans (.10/24 is untagged and .11/24 is tagged 11).

  • The PC and Server are connected to the same managed switch, which however does nothing "strange" (it just leaves tags as they are on all ports).

  • The router is connected to the aforementioned switch and set to forward packets between the two subnets (I'm pretty sure I've configured it that way, plus IIUC the ping in scenario 2 wouldn't work without forwarding).

  • The router also has the same vlan setup, and I can ping both .10.1 and .11.1 with no issue in both scenarios 1 and 2.

  • In case it may matter, the PC (machine 1) has the following routes, set up by NetworkManager from DHCP:

default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.101 metric 410
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.101 metric 410
  • In case it may matter, the server (machine 2) uses systemd-networkd and the routes generated from DHCP are slightly different (after dropping the .11.102 address for scenario 2, the relevant routes of course disappear):
default via 192.168.10.1 dev eth0 proto dhcp              src 192.168.10.102 metric 100
192.168.10.0/24          dev eth0 proto kernel scope link src 192.168.10.102 metric 100
192.168.10.1             dev eth0 proto dhcp   scope link src 192.168.10.102 metric 100
default via 192.168.11.1 dev eth1 proto dhcp              src 192.168.11.102 metric 101
192.168.11.0/24          dev eth1 proto kernel scope link src 192.168.11.102 metric 101
192.168.11.1             dev eth1 proto dhcp   scope link src 192.168.11.102 metric 101

solution

(please do comment if something here is wrong or needs clarification - hopefully someone will find this discussion in the future and find it useful)

In scenario 1, packets from the PC to the server are routed through .11.1.

Since the server also has an .11/24 address, packets from the server to the PC (including replies) are not routed and instead just sent directly over ethernet.

Since the PC does not expect replies from a different machine than the one it contacted, they are discarded on arrival.
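
(If you want to see this on your own setup, watching ICMP on both hosts makes the asymmetry visible; the interface names below are the ones from my machines:)

# on the server: requests arrive via the router on eth0, replies leave directly from eth1
tcpdump -ni eth0 icmp
tcpdump -ni eth1 icmp
# on the PC: the replies do show up, even though ping reports 100% loss
tcpdump -ni eth1 icmp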

The solution to this (if one still thinks the whole thing is a good idea) is to route traffic originating from the server and directed to .11/24 via the router.

This could be accomplished with ip route del 192.168.11.0/24, which would however break connectivity with .11/24 addresses (for a similar reason as above: incoming traffic would not be routed, but replies would be)...

The more general solution (which, IDK, may still have drawbacks?) is to set up a secondary routing table:

echo 50 mytable >> /etc/iproute2/rt_tables # this defines the routing table
                                           # (see "ip rule" and "ip route show table <table>")
ip rule add from 192.168.10/24 iif lo table mytable priority 1 # "iif lo" selects only 
                                                               # packets originating
                                                               # from the machine itself
ip route add default via 192.168.10.1 dev eth0 table mytable # "dev eth0" is the interface
                                                             # with the .10/24 address,
                                                             # and might be superfluous

Now, in my mind, that should break connectivity with .10/24 addresses just like the ip route del above, but in practice it does not seem to (if I remember, I'll come back and explain why after studying some more).
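
(To sanity-check that the rule and table are in place, something like this should do; the "should" parts are my expectation, not verbatim output:)

ip rule show                                      # the new rule should be listed at priority 1
ip route show table mytable                       # should contain just the default via 192.168.10.1
ip route get 192.168.11.101 from 192.168.10.102   # should now resolve via 192.168.10.1 (table mytable)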

 

I want to have a local mirror/proxy for some repos I'm using.

The idea is having something I can point my reads to, so that I'm free to migrate my upstream repositories whenever I want, and also so that my stuff doesn't stop working if one of the jankier third-party repos I use disappears.

I know the various Forgejo/Gitea/GitLab/... (well, at least some of them - I didn't check the specifics) have pull mirroring, but I'm looking for something simpler... ideally something with a single config file where I list what to mirror and how often to update it, and which then allows anonymous read access over the network.
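
To make "simpler" concrete, what I have in mind is basically a nicer package around this kind of thing (URL and paths are placeholders):

git clone --mirror https://example.com/some/repo.git /srv/mirrors/repo.git   # initial mirror
git -C /srv/mirrors/repo.git remote update --prune                           # refresh, e.g. from cron or a systemd timer
git daemon --base-path=/srv/mirrors --export-all                             # anonymous read-only access over git://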

Does anything come to mind?

[–] talkingpumpkin@lemmy.world 6 points 3 months ago (1 children)

If going the route of a backup solution, is it feasible to install OpenWRT on all of my devices, with the expectation that I can do some sort of automated backups of all settings and configurations, and restore in case of a router dying?

My two cents: use a "full" computer as your router (with either something like OPNsense or any "regular" linux distro if you don't need the GUI) and OpenWRT on your access points.

Unless you use the GUI and back up/restore the configuration (as you would with proprietary firmware), OpenWRT is frankly a pain to configure and deploy. At the moment I'm building custom images for all my devices, but (next time™) I'm gonna ditch all that, get an x86 router, and just manually manage OpenWRT on my WiFi APs (I only have two and they both have the same, relatively straightforward, config).
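
("Building custom images" here means the OpenWRT Image Builder workflow, roughly like this, run inside the unpacked Image Builder for your target; the profile and package list are just examples:)

make image PROFILE="tplink_archer-c7-v2" \
           PACKAGES="luci-ssl -ppp -ppp-mod-pppoe" \
           FILES="files/"   # files/ holds the config to bake into the image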

It’s a pain that I know can be solved with buying dedicated access points (…right?)

Routers and access points are just computers with network interfaces (there may be layer-2-only APs out there, but honestly I've never heard of any)... most probably your issue is that the firmware of your "routers as access points" doesn't want to be configured as a dumb AP.
