skilltheamps

joined 1 year ago
[–] skilltheamps@feddit.de 1 points 9 months ago (1 children)

Partly yes, but just installing a package without running into conflicts does not yet guarantee a working system. You have to get the configuration right too, for example in a corporate setting with all kinds of networking woes (like shares, VPNs and such). I think you could get this to work with Nix somehow, but you want to test these things beforehand, and if you do so using images, then you already have the thing to ship to machines in your hands - there's no need to compose the OS and configuration over and over again on every machine.

Another aspect of non-atomic OS composition on the target is that you have to deal with the transient phase from one state to the next. In this phase all kinds of things can happen; for example, an update of the Nvidia drivers renders CUDA non-functional until the next reboot, because the userspace and kernel-space parts no longer fit together. With any of the Fedora Atomic variants, transient phases with essentially undefined behaviour do not exist, and the time the system is not guaranteed to be in working order shrinks to just the reboot.

Nix is cool and definitely better than any traditional package manager. But it is not an ultimate solution; to be honest, so far it seems to me like it lives in a niche of enthusiasts who are smart enough to put up with its unique declarative language. Below that niche you have ordinary Linux users who may just be happy with Silverblue without any modifications, and above it you have corporations building their own images in CI/CD, CoreOS and all that jazz.

[–] skilltheamps@feddit.de 15 points 9 months ago (4 children)

that doesn't require I keep a full local copy of all the data

If you don't do that, the place you call "backup" is the only place where the data is stored - and that is not a backup. A backup is an additional place where the data is stored, for the case when your primary storage gets destroyed.

[–] skilltheamps@feddit.de 6 points 9 months ago

/dev/fb is mostly one thing: deprecated. Also, it is not really an interface to your graphics card; it is a legacy way, kindly still provided, of pushing fullscreen pixels to your monitor in an unaccelerated fashion, for things that have not yet made it to KMS/DRM (which at this point is pretty much just the console emulation on the TTYs). It is not an interface to the graphics card because it doesn't expose any of the capabilities a graphics card has (shaders etc.). In fact, for just pushing pixels you can leave the graphics card out of your computer entirely if you connect your screen by other means (think of SPI, which is common in embedded devices; you can find many such drivers in the kernel source under drivers/gpu/drm/tiny).
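To illustrate what that legacy interface amounts to, here is a minimal sketch: mmap /dev/fb0 and let the CPU write every pixel itself, no GPU capability involved. It assumes a 32-bits-per-pixel framebuffer and ignores the line stride a robust program would read from the fixed screen info.

```c
/* Minimal fbdev sketch: fill /dev/fb0 with a solid color.
 * Assumes 32 bpp; real code should also honor the line stride
 * from FBIOGET_FSCREENINFO instead of assuming a packed layout. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo vinfo;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0) {
        perror("FBIOGET_VSCREENINFO");
        return 1;
    }

    size_t len = (size_t)vinfo.xres_virtual * vinfo.yres_virtual
                 * vinfo.bits_per_pixel / 8;
    uint32_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    /* No acceleration: the CPU pushes every single pixel. */
    for (size_t i = 0; i < len / 4; i++)
        fb[i] = 0x00336699; /* XRGB fill color */

    munmap(fb, len);
    close(fd);
    return 0;
}
```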

[–] skilltheamps@feddit.de 2 points 9 months ago* (last edited 9 months ago) (3 children)

Well, maybe you yourself are too new to recognize some of the appeals ;)

One large advantage of Silverblue is that the whole composition of the OS does not take place on the target machine. That means all the issues that could arise do not arise on the target machine and can be dealt with beforehand. In the simple case this just means enjoying vanilla Silverblue without having to worry about borking the machine. In an advanced use case it could mean, for example, building the OS images in a GitLab CI/CD pipeline (with the well-working tooling that already exists for Docker etc.), then having automated tests in the pipeline ensure that everything important works as expected. Only if the tests pass does the image get pushed to the repository's image registry, from which the target machines fetch it automatically and rebase to it.
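A hypothetical sketch of what such a pipeline could look like (the job names, the Containerfile and run-smoke-tests.sh are made-up placeholders; the CI_* variables are GitLab's predefined ones, and podman stands in for whatever image tooling is actually used):

```yaml
stages:
  - build
  - test
  - publish

build-image:
  stage: build
  script:
    - podman build -t "$CI_REGISTRY_IMAGE/os:$CI_COMMIT_SHA" .
    - podman save -o os-image.tar "$CI_REGISTRY_IMAGE/os:$CI_COMMIT_SHA"
  artifacts:
    paths: [os-image.tar]

test-image:
  stage: test
  script:
    - podman load -i os-image.tar
    # e.g. boot the image in a VM and check that shares, VPNs etc. come up
    - ./run-smoke-tests.sh "$CI_REGISTRY_IMAGE/os:$CI_COMMIT_SHA"

publish-image:
  stage: publish
  script:
    - podman login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - podman load -i os-image.tar
    # only reached when the tests passed; targets rebase to this tag
    - podman push "$CI_REGISTRY_IMAGE/os:$CI_COMMIT_SHA"
```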

[–] skilltheamps@feddit.de 5 points 9 months ago (2 children)

This covers just the basic CPU instructions: no proprietary extensions, no architecture for additional necessities like a GPU, no proprietary firmware for the GPU or anything else. The instruction set of Arm, x86 or whatever is not a secret, though. The freedoms in RISC-V mostly concern the manufacturers, who can build chips using the instruction set without paying royalties. From a consumer point of view, that at most means being able to choose from a more organically grown landscape of RISC-V chips. Which in turn bears the risk of ending up in a situation where all we have is a vast jungle of cluttered proprietary extensions that make it harder to write libre drivers than it is for Arm today.

Don't get me wrong, RISC-V is absolutely amazing! But in terms of freedom, it would take a manufacturer extending the spirit of open hardware to the complete SoC - and the basic instruction set is pretty much the smallest piece of that.

[–] skilltheamps@feddit.de -3 points 9 months ago* (last edited 9 months ago) (14 children)

Yes. But the more advanced LLMs get, the less it matters, in my opinion. I mean, if you have two boxes, one of which is actually intelligent and the other "just" a very advanced parrot, it doesn't matter, given that they produce the same output. I'm sure LLMs already surpass some humans, at least in certain disciplines. In a couple of years, the difference between a parrot-box and something actually intelligent will only show at the very fringes of massively complicated tasks. And that is way beyond the capability threshold that allows doing nasty stuff with it, to shed a dystopian light on it.

[–] skilltheamps@feddit.de 29 points 10 months ago (1 children)

Because it's the same story as with Mir or Upstart: it will die, because it's half-assed and tailored to Ubuntu - this time even with dubious non-free parts.

[–] skilltheamps@feddit.de 2 points 11 months ago* (last edited 11 months ago)

I think one puzzle piece of improvement is Flatpak:

  • It has a verification system, so users can see which apps are packaged by their developers. For those apps, this eliminates the need to trust a separate maintainer entirely
  • It targets almost all Linux distributions with a single package. This cuts down the packaging effort for covering the majority of the Linux landscape so much that the number of package maintainers who need to be trusted collapses - in the ideal case to just the developers themselves, as in the first bullet point
  • It makes use of sandboxing, so a malicious app (in theory) only has access to the things the user gave it permission for.

In reality there's a plethora of problems, obviously:

  • verified apps are the minority
  • some people don't like the additional storage needed for runtimes (although the more Flatpaks you use, the more runtimes can be shared and the smaller the overall impact gets)
  • a lot of apps do not yet use all the portals and still require classic full access to the system to work properly (in some cases the user can still remove some permissions, if they don't need certain features of the application - see the sketch below this list). This is just a question of ongoing development work, and hopefully we reach a point in the near future where a Flatpak without tied-down permissions raises eyebrows
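A minimal sketch of that permission tightening, using the standard flatpak CLI (the app ID is just an example):

```
# show the static permissions an app ships with
flatpak info --show-permissions org.mozilla.firefox

# revoke home directory access for this app only, per user
flatpak override --user --nofilesystem=home org.mozilla.firefox

# undo all per-user overrides again
flatpak override --user --reset org.mozilla.firefox
```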
[–] skilltheamps@feddit.de 5 points 11 months ago (2 children)

I wholeheartedly agree, yet this is the same for stuff like the AUR, every PPA, or even just blindly copy-pasting instructions from some blog - all of which are very popular. (Just to name some examples that are closer to what OP wants to do.)

I still wouldn't use scripts from a random dump site, because they are likely to clutter the OS with junk and cruft that stays there forever. But fundamentally, from a security point of view, it's not necessarily worse than what many people are doing - simply because it doesn't get worse than blindly executing stuff from sources that lack the reputation to justify trusting them.

[–] skilltheamps@feddit.de 20 points 11 months ago

Because the seemingly great choice of web browsers in reality boils down to a risky monoculture of Chromium (/its web engine Blink). The only real alternative is Firefox/Gecko. Risky, because the main driver behind Chrome/Chromium (Google) is not acting in the public interest towards a free, open and privacy-preserving internet. Instead, they're working on a privacy-exploiting one that gets locked down using DRM technologies. Google being the vendor of major parts of the internet as well as of the browser used to access it makes this a lethal combination. Firefox will definitely exist for as long as Google exists, because it is their tool to defy claims of a monopoly, but they will do everything to keep it the small and mostly irrelevant "competitor" it currently is. Therefore, stand against Google's evil play and help Mozilla gain some actual independence and leverage for keeping the internet free (as in freedom), open and privacy-preserving.
