I'm a big fan of the idea of efficient computing, and I think we'd see the biggest power savings on the end-user side through hardware. I don't need an Intel i9-nteen50 and a GeForce 4090 to mindlessly ingest videos or browse Lemmy. In fact, I could do that using less power than my phone does; we really should move to the ARM model of low-power cores for most tasks and performance cores that only spin up when necessary. Pair that with less bloatware and you're getting maximum performance per instruction executed.
SoCs also have the benefit of power-efficient GPU and memory, while standardizing hardware so programmers can optimize for the platform again instead of getting lost in APIs and driver bloat.
The only downside is the difficulty of upgrading hardware, but CPUs (and GPUs) are basically black boxes to the end user already, and no one complains about not being able to upgrade just the L1 cache (or VRAM).
Imagine a future where most end-user mobos are essentially just a socket for a socketed-SoC standard, some M.2 ports, and of course the PCI slots (with the usual hardwired ports for peripherals). Desktops/laptops would generate less waste heat, computers would use less electricity, graphical software development would be less of a fustercluck (imagine the manhours saved), there'd be less e-waste (imagine not needing a new mobo for the new chipset if you want to upgrade your CPU after 5 years), and you'd be able to upgrade laptop CPUs/GPUs.
Of course the actual implementation of such a standard would necessarily get fuckered by competing interests and people who only want to see the numbers go up (both profit-wise and performance-wise) and we'd be back where we are now... But a gal can dream.
iw dev <interface> station dump
will show every metric about the connection, including the signal strength and average signal strength. It won't show it as an ASCII graphic like
nmcli
does, but it shouldn't be hard to create a wrapper script to grep that info and convert it to a simplified output if you're willing to put in the effort of understanding the dBm numbers. E.g. -10 dBm is roughly the maximum you'll ever see and -100 dBm the minimum (for 802.11), but the scale is logarithmic: every 10 dB is a 10x difference in power, so -90 dBm is 10x stronger than the absolute minimum needed for connectivity, and I can only get ~-20 dBm with my laptop touching the AP.
Basically my point is that the good ol' "bars" method of demonstrating connection strength was arbitrarily decided and isn't closely tied to connection quality. This way you get to decide what numbers you want to equate to a 100% connection.
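To illustrate, here's a minimal sketch of the kind of wrapper script I mean: it pulls the `signal:` line out of `iw ... station dump` and maps it onto a bar gauge with a linear scale between endpoints you pick yourself. The `wlan0` interface name and the -100/-10 dBm endpoints are just placeholder assumptions; tune them to whatever you decide counts as 100%.

```python
import re
import subprocess

def parse_signal(dump: str):
    """Extract the first 'signal:' value (in dBm) from `iw ... station dump` output."""
    m = re.search(r"signal:\s*(-?\d+)", dump)
    return int(m.group(1)) if m else None

def dbm_to_bars(dbm: int, floor: int = -100, ceil: int = -10, width: int = 10) -> str:
    """Map dBm linearly onto a bar gauge; the floor/ceil endpoints are arbitrary."""
    frac = (dbm - floor) / (ceil - floor)
    frac = max(0.0, min(1.0, frac))       # clamp out-of-range readings
    filled = round(frac * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {dbm} dBm"

if __name__ == "__main__":
    # "wlan0" is a placeholder; substitute your actual interface
    out = subprocess.run(["iw", "dev", "wlan0", "station", "dump"],
                         capture_output=True, text=True).stdout
    dbm = parse_signal(out)
    print(dbm_to_bars(dbm) if dbm is not None else "not associated")
```

Since the mapping is your own, a mediocre -55 dBm link renders as a half-full gauge, which is arguably more honest than a vendor's optimistic four bars.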