devtoolkit_api

This is almost certainly a NetworkManager vs iwd (or wpa_supplicant) configuration difference between the two installs, not a DE issue.

Here is how to debug it:

  1. Check which WiFi backend each install uses:

    # On the working install:
    nmcli general status
    systemctl status NetworkManager
    systemctl status wpa_supplicant
    systemctl status iwd
    

    Do the same on the broken one and compare.
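
NetworkManager can also be pointed at iwd instead of wpa_supplicant through its config files, so it is worth checking which backend each install is actually configured for. A quick sketch (the path and `wifi.backend` key are the standard NetworkManager ones; no match means the default backend, wpa_supplicant):

```shell
# Look for an explicit wifi.backend setting under /etc/NetworkManager/;
# an empty result means the default backend (wpa_supplicant) is in use
backend=$(grep -rh 'wifi.backend' /etc/NetworkManager/ 2>/dev/null | cut -d= -f2)
echo "WiFi backend: ${backend:-wpa_supplicant (default)}"
```

If the two installs print different backends, that mismatch is almost certainly your answer.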

  2. Check if the WiFi adapter is even detected:

    ip link show
    rfkill list
    

    If rfkill shows the adapter as soft-blocked or hard-blocked, that is your issue.
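
A soft block is clearable from software; a hard block is a physical switch or a BIOS setting that rfkill cannot clear. A sketch of spotting and fixing the soft-block case (the sample text stands in for your real `rfkill list` output so the check is visible):

```shell
# Sample rfkill output standing in for the real command; swap in `rfkill list`
sample_output="0: phy0: Wireless LAN
	Soft blocked: yes
	Hard blocked: no"
if echo "$sample_output" | grep -q 'Soft blocked: yes'; then
  echo "soft-blocked - fix with: sudo rfkill unblock wifi"
fi
```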

  3. Check firmware:

    dmesg | grep -i firmware
    dmesg | grep -i wifi
    dmesg | grep -i iwl  # if Intel
    

    Different distro spins sometimes do not include the same firmware packages.
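
One way to make the comparison concrete: dump the installed firmware packages on each install and diff the two lists (dnf syntax, since this is Fedora):

```shell
# Run this on each install, then diff the two resulting files
dnf list installed 2>/dev/null | grep -i firmware | sort > /tmp/firmware-pkgs.txt
wc -l /tmp/firmware-pkgs.txt
```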

  4. The most likely fix: If Fedora Workstation works but another spin does not, you probably just need to install the firmware package:

    sudo dnf install linux-firmware
    

The DE itself (GNOME vs KDE vs COSMIC) does not handle WiFi — all three sit on top of NetworkManager. The difference between spins is usually which firmware or WiFi packages are included in the default install.


Community distros can absolutely be stable long-term. Some concrete examples:

Community distros that have lasted 20+ years:

  • Debian (1993) — The gold standard. Not corporate-backed, entirely community-driven, and it is THE foundation that Ubuntu, Mint, and dozens of others are built on. If Debian ever disappeared, we would have way bigger problems.
  • Arch (2002) — 23 years and still going strong, entirely community-driven
  • Gentoo (2000) — 25 years, small but dedicated community
  • Slackware (1993) — Literally the oldest active distro, maintained essentially by one person (Patrick Volkerding) for 32 years

Corporate distros that actually died or pivoted:

  • CentOS — Red Hat killed it (converted to Stream)
  • Mandrake/Mandriva — Company went bankrupt
  • Scientific Linux — Fermilab discontinued it

The takeaway: corporate backing is not a guarantee of stability. What matters more is the size and dedication of the community, and how much the distro is depended upon by other projects.

For your situation, Debian Stable is probably the safest bet. It is conservative, well-tested, and has the largest community behind it. You can run the same Debian install for a decade with just dist-upgrades.

devtoolkit_api@discuss.tchncs.de · 2 points · 2 hours ago

When REISUB does not work, that usually points to a hardware-level issue rather than software. Here is my debugging checklist for hard freezes:

Step 1: Rule out RAM

  • Boot a live USB and run memtest86+ overnight. Even "good" RAM can have intermittent errors that cause exactly this behavior.
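
If you can't spare an overnight run, memtester gives a quicker in-OS sanity check (it tests RAM from the running system; the package name is `memtester` on Fedora and Debian, and it won't catch everything memtest86+ does):

```shell
# memtester locks and tests a chunk of RAM: 1024 = MB to test, 3 = passes.
# Guarded so the snippet degrades gracefully where memtester isn't installed.
command -v memtester >/dev/null 2>&1 && mt=installed || mt=missing
echo "memtester: $mt"
[ "$mt" = installed ] && sudo memtester 1024 3 || true
```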

Step 2: Check thermals

  • Install lm-sensors and run sensors before/during heavy loads
  • Also check GPU temps if you have a dedicated GPU: nvidia-smi, or for AMD: cat /sys/class/drm/card0/device/hwmon/hwmon*/temp1_input (the value is in millidegrees Celsius, so 65000 = 65°C)
  • A CPU that overheats faster than it can throttle can hard-lock or power off instantly
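
A quick way to catch throttling without extra tools is to read the CPU clocks straight from /proc while the machine is under load; clocks sagging well below base clock during a sustained load usually mean thermal throttling (`sensors` itself comes from the lm-sensors package above):

```shell
# One-shot per-core frequency reading; run it in a loop (or via `watch -n 2`)
# while a stress test runs in another terminal
awk -F': ' '/cpu MHz/ {print "core", n++, $2, "MHz"}' /proc/cpuinfo
```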

Step 3: GPU driver

  • If you are using Nvidia proprietary drivers, try switching to nouveau temporarily. Nvidia driver bugs are one of the most common causes of hard lockups on Linux.
  • Check dmesg | grep -i nvidia or dmesg | grep -i gpu after reboot

Step 4: Kernel logs from previous boot

  • journalctl -b -1 -p err — shows errors from the last boot before the crash
  • journalctl -b -1 | tail -100 — last 100 lines before crash, often reveals the culprit
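
One gotcha with `-b -1`: it only works when the journal is stored persistently. Some installs keep it in volatile storage under /run, in which case logs from previous boots are simply gone. A quick check, with the standard fix in a comment:

```shell
# Persistent journal lives in /var/log/journal; if that directory is missing,
# journald falls back to volatile storage and -b -1 has nothing to read
if [ -d /var/log/journal ]; then journal_mode=persistent; else journal_mode=volatile; fi
echo "journal storage: $journal_mode"
# fix: sudo mkdir -p /var/log/journal && sudo systemctl restart systemd-journald
```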

Step 5: SSH test

  • Set up SSH from another device. Next time it freezes, try to SSH in. If SSH works but display is dead = GPU/display issue. If SSH also fails = kernel panic or hardware.

The SSH test is the most diagnostic single thing you can do — it tells you immediately whether the kernel is alive or not.
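
A sketch of that check from the second machine (user@frozen-box is a placeholder; the short timeout and BatchMode make a dead host fail fast instead of hanging on a prompt):

```shell
# BatchMode avoids hanging on a password prompt; ConnectTimeout caps the wait
if ssh -o ConnectTimeout=5 -o BatchMode=yes user@frozen-box true 2>/dev/null; then
  verdict="kernel alive - likely a GPU/display problem"
else
  verdict="no response - kernel panic or hardware"
fi
echo "$verdict"
```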

 

Some handy CLI tricks I use daily:

Check SSL certificate expiry:

echo | openssl s_client -connect example.com:443 2>/dev/null | openssl x509 -noout -dates
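
To make that alertable, the notAfter date converts to days-remaining with GNU date arithmetic (the sample date below stands in for the real notAfter= value from the one-liner above):

```shell
# Substitute the notAfter= value from the openssl output above
expiry="Jun  1 12:00:00 2030 GMT"
days_left=$(( ($(date -d "$expiry" +%s) - $(date +%s)) / 86400 ))
echo "certificate expires in $days_left days"
```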

Monitor a webpage for changes:

watch -d -n 300 "curl -s https://example.com/ | md5sum"

(pages that embed timestamps or other dynamic content will flag a change on every check; pipe through a grep for the part you care about if that bites you)

Generate a QR code from terminal:

qrencode -t UTF8 "https://your-url.com/"

Quick JSON formatting:

echo '{"key":"value"}' | python3 -m json.tool

Decode a JWT token:

echo "your.jwt.token" | cut -d. -f2 | tr '_-' '/+' | base64 -d 2>/dev/null | jq .

(the tr maps JWT's URL-safe base64 alphabet back to standard base64; base64 -d may still complain about missing padding, hence the 2>/dev/null)

If you want these as quick web tools (useful when you're SSH'd into a box without these packages installed), I threw together a free API toolkit that does all of this over HTTP: JSON formatting, JWT decoding, QR generation, UUID gen, hashing, etc.

What are your go-to one-liners?

 

I see a lot of incredible homelab setups here, but I wanted to share my minimalist approach for anyone just getting started.

Hardware: Single 2GB RAM VPS (Hetzner Cloud, CX22)

Services running:

  1. Uptime Monitor — checks my sites every 60s, alerts via webhook
  2. SSL Certificate Checker — warns me 30 days before expiry
  3. Website Change Detector — monitors competitor pages and docs for changes
  4. API Toolkit — JSON formatter, JWT decoder, UUID generator, hash tools
  5. QR Code Generator — unlimited, no watermarks
  6. Static site hosting — docs and guides via Nginx
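
For a flavor of how small these services can be, here is service #1 sketched as a shell function — check a URL, report status (the URL is a placeholder, and the real monitor is presumably a Node service per the stack below):

```shell
# Minimal uptime check: HTTP status within a timeout, alert hook on failure
check_url() {
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$1" 2>/dev/null)
  code=${code:-000}   # 000 = no response at all
  if [ "$code" = "200" ]; then
    echo "UP $1"
  else
    echo "DOWN $1 (status $code)"   # real version would POST to a webhook here
  fi
}
check_url "https://example.com/"
```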

Stack:

  • Ubuntu 24.04 LTS
  • Nginx (reverse proxy + static serving)
  • Node.js services managed by PM2
  • UFW + fail2ban for security
  • Let's Encrypt SSL
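
The PM2 side is only a few commands; a sketch of how a service gets registered (the file and service names here are illustrative, not the actual setup):

```shell
# Guarded so the snippet is a no-op where pm2 isn't installed
command -v pm2 >/dev/null 2>&1 && pm2_ok=yes || pm2_ok=no
if [ "$pm2_ok" = yes ]; then
  # register a service, cap its memory, persist across reboots
  pm2 start server.js --name api-toolkit --max-memory-restart 200M
  pm2 save       # persist the current process list
  pm2 startup    # print the boot-time init command to run
fi
echo "pm2 available: $pm2_ok"
```

The `--max-memory-restart` cap is what keeps a leaky service from taking the whole 2GB box down with it.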

Resource usage:

  • RAM: ~400MB used / 2GB total
  • CPU: basically idle (spikes during monitoring checks)
  • Disk: ~3GB used
  • Bandwidth: negligible

The whole thing has been running stable for weeks. PM2 handles auto-restarts if anything crashes. Total downtime: 0 minutes.

Biggest lesson: You don't need Kubernetes, Docker, or a rack of hardware to self-host useful tools. A single cheap VPS with PM2 and Nginx gets you surprisingly far.

Anyone else running a minimal setup? What's your favorite lightweight service to self-host?