this post was submitted on 10 Jan 2024
64 points (98.5% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


Hello selfhosters.

We all have bare-metal servers, VPSes, containers and other things running. Some of them may be exposed openly to the internet, which is populated by autonomous malicious actors, and some may reside on a closed-off network because they contain sensitive data.

And there are a lot of solutions to monitor your servers, since none of us want our resources to become part of a botnet, mine bitcoin for APTs, or simply have confidential data fall into the wrong hands.

Some of the tools I've looked at for this task are check_mk, netmonitor and monit: all of these monitor metrics such as CPU, RAM and network activity. Other tools such as Snort or Falco are designed specifically to detect suspicious activity. And then there are solutions you cobble together yourself, like fail2ban actions combined with Pushover to get notified of intrusion attempts.
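
To give an idea of what I mean by the cobbled-together route, here's a hypothetical fail2ban action that pushes ban notices to Pushover. The file name, app token and user key are placeholders; the curl fields follow Pushover's message API:

```ini
# /etc/fail2ban/action.d/pushover-notify.conf (hypothetical)
[Definition]
actionstart =
actionstop  =
actioncheck =
actionunban =
# <ip> and <name> are standard fail2ban tags (banned address and jail name)
actionban   = curl -s -F "token=<app_token>" -F "user=<user_key>" -F "message=fail2ban banned <ip> in jail <name>" https://api.pushover.net/1/messages.json

[Init]
app_token = REPLACE_ME
user_key  = REPLACE_ME
```

You'd then reference it from a jail's action = line alongside the normal ban action.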

So my question to you is: how do you monitor your servers, and with what tools? I need some inspiration to know what tooling to settle on to be able to detect unwanted external activity on my resources.

all 39 comments
[–] Strit@lemmy.linuxuserspace.show 15 points 10 months ago (1 children)

I'm pretty old school, but as I only have 1 server, I just use ssh, df, du and top.

[–] Deebster@programming.dev 12 points 10 months ago (1 children)

Not even htop? That is old school.

[–] beta_tester@lemmy.ml 12 points 10 months ago (1 children)

Not even btop? That's middle school.

[–] Samsy@lemmy.ml 6 points 10 months ago (2 children)

Not even bottom? That's elementary school.

[–] Marsupial@quokk.au 2 points 10 months ago

Okay priest.

[–] beta_tester@lemmy.ml 0 points 10 months ago

Emoji reactions are missing 😂😂😂

[–] avidamoeba@lemmy.ca 7 points 10 months ago* (last edited 10 months ago) (1 children)

Prometheus.

It's open source, it's easy to set up, its agents are available for nearly anything including OpenWrt, and it can serve the simplest use case of "is it down?" as well as much more complicated ones that stem from its ability to collect data over time.

Personally I'm monitoring:

  • Is it up?
  • Is the storage array healthy?
  • Are the services I care about running?

I used to run it ephemerally - wiping data on restart. Recently I started persisting its data so I can see trends over the longer run.
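
If it helps anyone get started, a minimal sketch of this kind of setup looks roughly like the config below. Hostnames are placeholders; the only real assumptions are node_exporter's default port (9100) and Prometheus's built-in up metric:

```yaml
# prometheus.yml - minimal sketch
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - "server1.lan:9100"   # hypothetical hosts running node_exporter
          - "router.lan:9100"

rule_files:
  - alerts.yml
```

```yaml
# alerts.yml - the "is it down?" part
groups:
  - name: availability
    rules:
      - alert: TargetDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} has been unreachable for 5 minutes"
```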

[–] surewhynotlem@lemmy.world 2 points 10 months ago (2 children)

What do you use to see the data? Prometheus itself is easy to set up, but getting to the data seemed complicated.

[–] avidamoeba@lemmy.ca 2 points 10 months ago* (last edited 10 months ago)

The Prometheus built-in web UI. I find it pretty simple.
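
A few example queries you can type straight into it (standard node_exporter metric names; the label values will differ per setup):

```
up                                               # 1 = last scrape succeeded, 0 = target down
node_filesystem_avail_bytes{mountpoint="/"}
  / node_filesystem_size_bytes{mountpoint="/"}   # fraction of / still free
rate(node_network_receive_bytes_total[5m])       # inbound traffic per interface
```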

[–] lud@lemm.ee 1 points 10 months ago

You can use grafana to visualise the data.

Grafana isn't too hard to use.
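
If you want to skip the clicking around, Grafana can also pick up an existing Prometheus through a provisioning file; a minimal sketch with the usual defaults (adjust the URL to wherever Prometheus lives):

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```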

[–] its_me_gb@feddit.uk 5 points 10 months ago* (last edited 10 months ago) (2 children)

Prometheus for metrics

Loki for logs

Grafana for dashboards.

I use node exporter for host metrics (Proxmox/VMs/SFFs/RaspPis/Router) and a number of other *exporters:

  • exportarr
  • plex-exporter
  • unifi-exporter
  • bitcoin node exporter

I use the OpenTelemetry collector to collect some of the above metrics, rather than Prometheus itself, as well as docker logs and other log files before shipping them to Prometheus/Loki.

Oh, I also scrape metrics from my Traefik containers using OTEL as well.
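
For anyone curious, a stripped-down sketch of the collector config looks something like this. Endpoints and file paths are placeholders, and I'm assuming the otelcol-contrib distribution (that's where the prometheus/filelog receivers and the prometheusremotewrite/loki exporters live), so check component names against your collector version:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ["server1.lan:9100"]
  filelog:
    include:
      - /var/lib/docker/containers/*/*.log

exporters:
  prometheusremotewrite:
    endpoint: http://prometheus.lan:9090/api/v1/write
  loki:
    endpoint: http://loki.lan:3100/loki/api/v1/push

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [prometheusremotewrite]
    logs:
      receivers: [filelog]
      exporters: [loki]
```

Note that pushing metrics this way needs Prometheus started with remote-write receiving enabled (the --web.enable-remote-write-receiver flag, if memory serves).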

[–] namelivia@lemmy.world 2 points 10 months ago (1 children)

What does having OpenTelemetry improve? I have a setup similar to yours but data goes from Prometheus to Grafana and I never thought I would need anything else.

[–] its_me_gb@feddit.uk 5 points 10 months ago

Not a whole lot to be honest. But I work with OpenTelemetry every day for my day job, so it was a little exercise for me.

Though OTEL does have some advantages in that it is a vendor-agnostic collection tool, allowing you to use multiple different collection methods and switch out your backend easily if you wish.

[–] lud@lemm.ee 1 points 10 months ago (1 children)

Have you tried the proxmox exporter? I have tried it briefly for a grafana lab and it seemed pretty good.

https://github.com/prometheus-pve/prometheus-pve-exporter

[–] its_me_gb@feddit.uk 1 points 10 months ago (1 children)

I haven't, but it looks like I've got another exporter to install and dashboard to create 😁

[–] lud@lemm.ee 1 points 10 months ago

If you want to run the exporter without Docker (like I did) and you run into problems installing it, try using this guide: https://github.com/prometheus-pve/prometheus-pve-exporter/wiki/PVE-Exporter-on-Proxmox-VE-Node-in-a-venv
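
For reference, the venv route boils down to roughly the following (paths are arbitrary, and the scrape stanza is from memory, so defer to the exporter's README/wiki for the authoritative details):

```sh
# on the Proxmox node: install the exporter into its own venv
python3 -m venv /opt/pve-exporter
/opt/pve-exporter/bin/pip install prometheus-pve-exporter
# then run pve_exporter pointed at a config file containing a PVE API token
# (see the README for the exact invocation and config format)
```

```yaml
# Prometheus side - the exporter's default port is 9221 and it serves metrics under /pve
scrape_configs:
  - job_name: pve
    metrics_path: /pve
    static_configs:
      - targets: ["pve1.lan:9221"]
```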

[–] MystikIncarnate@lemmy.ca 5 points 10 months ago

I'm a network guy, so everything in my labs uses SNMP because it works with everything. Things that don't support SNMP are usually replaced and yeeted off the nearest bridge.

For that I use librenms. Simple, open source, and I find it easy to use, for the most part. I put it on a different system than what I'm monitoring because if it shares fate with everything else, it's not going to be very useful or give me any alerts if there's a full outage of my main homelab cluster.

Of course, access to it from the internet is forbidden, and any SNMP is filtered by my firewall. Nothing really gets through to it, so I'm unconcerned about it becoming a target. For the rest of my systems, security is mostly reliant on a small set of reverse proxies and firewall rules to keep everything secure.
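
On the SNMP filtering point, the agents themselves can also be told to only answer the poller. A minimal net-snmp sketch (the community string and addresses here are placeholders):

```
# /etc/snmp/snmpd.conf
agentAddress udp:161
rocommunity  homelab-ro 192.168.10.5   # only the LibreNMS host may query
syslocation  Homelab rack
syscontact   admin@example.com
```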

I use a couple of VPN systems to access the servers remotely, all running on odd ports (if they need port forwards at all). I have multiple to provide redundancy to my remote access, so if one VPN isn't working due to a crash or something, I have others that should get me some measure of access.

[–] drkt@feddit.dk 4 points 10 months ago (1 children)

Sometimes I just sit and stare at my apache access logs because I'm bored

GoAccess is pretty nice for a broad overview of Apache logs, also.
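
If you haven't tried it, GoAccess can be pointed straight at the log for a live terminal dashboard, or it can spit out a standalone HTML report:

```sh
# interactive terminal dashboard
goaccess /var/log/apache2/access.log --log-format=COMBINED

# self-contained HTML report
goaccess /var/log/apache2/access.log --log-format=COMBINED -o report.html
```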

For other services I generally just look at them every now and then and if something looks off I investigate. I found a cryptominer on my network once because it was spamming DNS and that shows up in DNS logs.

[–] Bakkoda@sh.itjust.works 2 points 10 months ago

I used to use some logging script written in Go where you could filter your logs and they would update in real time. It was great for catching stuck processes: leave it running on a different desktop, mousewheel over to it (I miss Openbox so, so much) and check my logs. I just have nothing facing outwards now, so I ignore everything.

[–] loudwhisper@infosec.pub 3 points 10 months ago

I run Prometheus on a separate cluster, so I run node_exporter on my servers and scrape the metrics. I then alert with Grafana. To be honest, the setup is heavier (resource usage-wise) than I would like for my use case, but it's what I am used to, and it scales well to multiple machines.

[–] JonnyJaap@lemmy.world 3 points 10 months ago

I used Zabbix at some point, but I never looked at the data so I stopped. Zabbix shows all kinds of stuff.

I have Cockpit on my bare-metal server, which has some stats, and netdata on my firewall. I don't track any of my VMs (except vnstat, which runs on every device).

[–] vegetaaaaaaa@lemmy.world 3 points 10 months ago* (last edited 10 months ago)

Netdata (agent only/not the cloud-based features), and a bunch of scanners running from cron/systemd timers, rsyslog for logs (and graylog for larger setups)

My base ansible role for monitoring.

Since your question is also related to securing your setup, inspect and harden the configuration of all running services and the OS itself. Here is my common ansible role for basic stuff. Find (preferably official) hardening guides for your distribution and implement hardening guidelines such as DISA STIG, CIS benchmarks, ANSSI guides, etc.
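
As one concrete example of the "scanners running from cron" part, a nightly job can look like this (lynis is just an example scanner here, and the paths are placeholders):

```
# /etc/cron.d/nightly-audit (hypothetical)
0 3 * * * root /usr/sbin/lynis audit system --cronjob > /var/log/lynis-nightly.log 2>&1
```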

[–] SeeJayEmm@lemmy.procrastinati.org 3 points 10 months ago (1 children)

I'm running checkmk for monitoring but that won't help you with detection of unwanted logins. For security I'm running crowded.

[–] peter@feddit.uk 3 points 10 months ago (1 children)

What's crowded? I am having trouble searching for it because of its name

[–] archy@lemmy.world 2 points 10 months ago (1 children)

crowdsec, pretty sure that's what's meant

[–] peter@feddit.uk 1 points 10 months ago

Ah thank you

[–] johntash@eviltoast.org 2 points 10 months ago

Uptime Kuma is great. I use it for the simple "are my services up?" checks, and it's what I pay most attention to.

I still use Zabbix for finer-grained monitors though, like checking RAID status, smartctl, disk space, temperatures, etc.

I've been trying out librenms with more custom snmp checks too and am considering going that route instead of zabbix in the future
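
For the custom SNMP checks, net-snmp's extend directive is an easy way to expose an arbitrary script's output and exit code so LibreNMS (or anything else that speaks SNMP) can poll it. A sketch with made-up script paths:

```
# /etc/snmp/snmpd.conf
extend raid  /usr/local/bin/check-raid.sh
extend smart /usr/local/bin/check-smart.sh
# results show up under NET-SNMP-EXTEND-MIB (nsExtendOutput* / nsExtendResult)
```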

[–] MrMcGasion@lemmy.world 2 points 10 months ago

I've dabbled with some monitoring tools in the past, but never really stuck with anything proper for very long. I usually notice issues myself. I self-host my own custom new-tab page that I use across all my devices, and between that, the Nextcloud clients, and my Home Assistant reverse proxy on the same VPS, when I do have unexpected downtime I usually notice within a few minutes.

Other than that I run fail2ban, and have my vps configured to send me a text message/notification whenever someone successfully logs in to a shell via ssh, just in case.

Based on the logs over the years, most bots that try to log in use usernames like admin or root. I have root login disabled for SSH, and the one account that can be used over SSH has a non-obvious username that would also have to be guessed before an attacker could even try passwords. fail2ban does a good job of blocking IPs that fail after a few tries.
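
If anyone wants to copy the login-notification trick, one common way to do it is a pam_exec hook; this is a rough sketch (the script path and the ntfy topic are placeholders, and your PAM stack may be laid out differently):

```
# /etc/pam.d/sshd - add after the existing session lines
session optional pam_exec.so /usr/local/bin/ssh-login-notify.sh
```

```sh
#!/bin/sh
# /usr/local/bin/ssh-login-notify.sh
# pam_exec exposes PAM_TYPE, PAM_USER and PAM_RHOST in the environment
[ "$PAM_TYPE" = "open_session" ] || exit 0
curl -s -d "SSH login: $PAM_USER from $PAM_RHOST on $(hostname)" https://ntfy.sh/my-private-topic
exit 0
```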

If I used containers, I would probably want a way to monitor them, but I personally dislike containers (for myself, I'm not here to "yuck" anyone's "yum") and deliberately avoid them.

[–] MSgtRedFox@infosec.pub 1 points 10 months ago

PRTG has a community edition.

Elastiflow for NetFlow has a free/community edition.

Grafana and InfluxDB are open source.

[–] taladar@sh.itjust.works 1 points 10 months ago

Icinga2 works reasonably well for us. It is easy to write new checks as small shell scripts (or any other binary that can print some output and set an exit status code).
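
For anyone who hasn't written one before, the contract is just "print one line, exit 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN". A tiny hypothetical example:

```sh
#!/bin/sh
# check_root_disk.sh - warn at 85% usage of /, go critical at 95%
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')

if [ "$usage" -ge 95 ]; then
    echo "CRITICAL - / is ${usage}% full"
    exit 2
elif [ "$usage" -ge 85 ]; then
    echo "WARNING - / is ${usage}% full"
    exit 1
fi
echo "OK - / is ${usage}% full"
exit 0
```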

[–] TheGreenGolem@lemmy.dbzer0.com 1 points 10 months ago

It cannot notify you, you have to check it manually, but: I use DaRemote on my phone to periodically check my bare metal.

[–] namelivia@lemmy.world 1 points 10 months ago

Prometheus, Loki and Grafana.

[–] makingrain@lemm.ee 1 points 10 months ago

Uptime Kuma and ntfy.

[–] lemann@lemmy.dbzer0.com 1 points 10 months ago (1 children)

I used to pass all the data through to Home Assistant and show it on some dashboards, but I decided to move over to Zabbix.

Works well but is quite full-featured, maybe more so than necessary for a self-hoster. I made a media type integration for my annunciator system so I hear about issues happening with the servers, as well as updates on things, so I don't really need to check manually. I also made a custom SMART template that populates the disk's physical location/bay (as the built-in one only reports SMART data).

It's notified me of a few hardware issues that would have gone unnoticed on my previous system, and helped with diagnosing others. A lot of the sensors may seem useless, but trust me, once they flag up you should 100% check on your hardware. Hard drives losing power during high activity because of loose connections, and a CPU fan failure to name two.

It has a really steep learning curve though, so I'm not sure how much I can recommend it over something like Grafana+Prometheus - a combo I haven't used, but it looks equally comprehensive as long as you check your dashboard regularly.

Just wish there were more android apps

[–] Decronym@lemmy.decronym.xyz 1 points 10 months ago* (last edited 10 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

  • DNS: Domain Name Service/System
  • SSL: Secure Sockets Layer, for transparent encryption
  • VPN: Virtual Private Network
  • VPS: Virtual Private Server (opposed to shared hosting)

3 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

[Thread #421 for this sub, first seen 10th Jan 2024, 14:55] [FAQ] [Full list] [Contact] [Source code]

[–] possiblylinux127@lemmy.zip 0 points 10 months ago

I don't do much in the way of monitoring. I guess I should do that.

[–] dataprolet@lemmy.dbzer0.com 0 points 10 months ago

Uptime-Kuma

[–] Cyberflunk@lemmy.world -4 points 10 months ago

Reduce your threat profile:

  • Run sslh, so 443 handles both SSL and SSH.
  • Adjust your host-based firewall to allow just 443.
  • Attack yourself on that port and identify the logs.
  • Add the new profiles to fail2ban.
  • Enable fail2ban email. If you don't like email, use a service that translates email to notifications; Ntfy.sh offers free notifications.

Or... use something like Tailscale and don't offer a remote login to the general Internet.
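
To make the sslh step concrete, it just sits on 443 and demultiplexes by protocol; roughly like this (the addresses and the local TLS port are examples, and your distro's service file will have its own launch options):

```sh
# SSH and HTTPS multiplexed on one external port; the real TLS service moves to a local port
sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:8443
```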

I submitted your post to ChatGPT, here's what it thought:

https://shareg.pt/Tz0El4k