avidamoeba

joined 1 year ago
[–] avidamoeba@lemmy.ca 12 points 2 months ago

🌍👨‍🚀🔫👨‍🚀

[–] avidamoeba@lemmy.ca 5 points 2 months ago* (last edited 2 months ago)

Kinda, however Linux is always better in one regard: we can change it, and it generally serves the needs of its users, since its users are the ones who build and change it. Windows and macOS, on the other hand, serve the needs of Microsoft's and Apple's major shareholders, and only partly those of their users, to the degree they can get away with it. The goal is always gaining and retaining market share while extracting the most value from the users - money, data, etc.

If enough of us wanted a sleek, uber smooth desktop that has all UI bases covered, we could totally do it. We just don't give enough shit and we're content with what it is. Case in point, I know multi-monitor support isn't amazing, so I buy a bigger monitor and use more windows. 🥹 Personally I've been content with the mainstream desktop Linux UX since 2012-14. You won't see me digging into features in GNOME or Wayland.

[–] avidamoeba@lemmy.ca 9 points 2 months ago* (last edited 2 months ago) (2 children)

It depends on what you're using it for. Elaborate multi monitor setups? Starting a web server? Controlling a robot? A car's ECU?

Linux isn't a specific platform. Linux the kernel is a generic kernel that can be used and tuned for virtually any hardware. GNU/Linux the OS is also a generic OS that can be customized to work for a variety of use cases. The most popular desktop Linux OSes are still very generic. Most of them aren't built to be power efficient on laptops, for example. Yet we know Linux can be very power efficient on a variety of purpose-built mobile hardware.

Windows, on the other hand, was built from the start to be a desktop OS. The desktop, and later laptop, use cases have always been primary, to the point of making other use cases more difficult. The same is true for macOS. So when you see them performing well in some desktop-related use cases where Linux might struggle a bit, it's no surprise. If enough of us wanted Linux to be better at that, we could make it happen. If enough of us wanted macOS or Windows to do something Apple or MS didn't want, tough luck. So it's just a matter of priorities and resources.

[–] avidamoeba@lemmy.ca 2 points 2 months ago

This is the way. This machine has been on the same install since Ubuntu 14.04 LTS. The platform swapped from AMD Phenom, to Intel i7, to AMD Ryzen, and now to a bigger Ryzen. SSDs went from a single SATA drive, to NVMe, to a 512G NVMe mirror, to a 1T NVMe mirror. The storage went from a single 4T disk, to an 8T mirror in a NAS, to an 8T directly attached mirror, to 24T RAIDz, to 48T RAIDz. I've now activated the free Ubuntu Pro tier, so if Canonical is still around in 2032, this machine can operate for another 8 years with just hardware swaps on failure.

[–] avidamoeba@lemmy.ca 2 points 2 months ago* (last edited 2 months ago) (1 children)

These corpos argue very vocally for keeping their tax contributions low, so perhaps you're right.

[–] avidamoeba@lemmy.ca 3 points 2 months ago* (last edited 2 months ago) (1 children)

High-voltage DC is already used for transmission, at tens to hundreds of kV.

[–] avidamoeba@lemmy.ca 9 points 3 months ago (2 children)
[–] avidamoeba@lemmy.ca 6 points 3 months ago* (last edited 3 months ago)

The kludge wins. 😅

[–] avidamoeba@lemmy.ca 1 points 3 months ago

Are you sure? Cause KVM's docs list two: https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers and the first one ain't Fedora. The language used doesn't suggest that either one is the canonical source. Now imagine that I'm a noob or otherwise using KVM for the first time. I have to figure out what the difference is and which one to get, because I don't want to make a mistake and end up with a broken install. Mind you, I have ended up with bad graphics depending on which driver and which version I installed.

[–] avidamoeba@lemmy.ca 4 points 3 months ago* (last edited 3 months ago)

On the client side of a relayd-based wireless bridge using OpenWrt, I discovered a bug in that relayd version which made the process hang after it had moved so many gigs of data. I made a cron job that pings the network relayd makes accessible; if the ping fails, it nukes relayd. Of course this relies on a live machine to ping, so if that machine dies for some reason, the cron job will just keep killing relayd over and over again. 🥹
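Roughly, the watchdog looked something like the sketch below. This is a minimal reconstruction assuming a stock OpenWrt/BusyBox setup with the relayd package's init script; the pinged address, script path and five-minute schedule are placeholders, not the actual values from my setup.

```sh
#!/bin/sh
# /root/relayd-watchdog.sh (placeholder path)
# Install it in root's crontab (/etc/crontabs/root), e.g.:
#   */5 * * * * /root/relayd-watchdog.sh

# Ping a live host on the network that relayd makes reachable.
# 192.168.1.10 is a placeholder for whatever machine you expect to answer.
if ! ping -c 3 -W 2 192.168.1.10 >/dev/null 2>&1; then
    logger -t relayd-watchdog "ping failed, restarting relayd"
    /etc/init.d/relayd restart
fi
```

The actual script may well have killed the process outright; restarting via the init script is just the tidier way to get a hung relayd nuked and respawned.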

[–] avidamoeba@lemmy.ca 30 points 3 months ago (2 children)

I think NDISwrapper is still maintained for issues like this.
