Yeah, but a lot of those things will change the TPM measurements, so you will get a different decryption key if you, for example, try to use the `single` kernel parameter to boot into a root shell. And a different decryption key means no access to the data.
If you boot the computer into the currently installed OS, you will be presented with a login screen and will have to enter the correct password to log in (kernel parameters are part of the measurements, so booting into single-user mode won't help you - that counts as a modified OS). If you boot a different OS, you won't get the key out of the TPM.
The idea is to use TPM to store the keys - if you boot into a modified OS, TPM won't give you the same key so automatic unlock will fail. And protection against somebody just booting the original system and copying data off it is provided by the system login screen.
Voilà, automatic drive decryption with fingerprint unlock to log into the OS. That's what Windows does with BitLocker anyway.
Or is there any functional difference between the two methods?
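For reference, this is roughly how that setup looks on Linux with systemd's TPM support - a sketch, assuming a LUKS2 partition at /dev/nvme0n1p3 (substitute your own) and a reasonably recent systemd:

```shell
# Enroll a TPM-bound key slot, sealed against PCR 7 (Secure Boot state).
# Booting a modified OS / boot chain changes the PCR values, so the TPM
# refuses to unseal the key and automatic unlock fails.
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3

# Then tell the initrd to attempt TPM unlock (one line in /etc/crypttab):
#   cryptroot  /dev/nvme0n1p3  none  tpm2-device=auto
```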
Can't test right now, but I have a strong suspicion you will have trouble getting IP broadcast to work. Normally the broadcast address is calculated by setting all bits after the network prefix to 1, but your computer believes it is in a /32 "network". It won't broadcast over routes that are not part of its network.
And even if you calculate the broadcast address successfully (maybe the software you use has /24 hardcoded for whatever reason), no computer configured with a /32 address will receive it - 192.168.0.255 is not within the 192.168.0.1/32 network, so the packet will probably get forwarded according to your routes if you have forwarding enabled (except it shouldn't in this case with one network interface, because you never send packets back the way they came from).
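You can see the problem directly with Python's standard `ipaddress` module:

```python
import ipaddress

# With a /24, the broadcast address is the last address of the network.
net24 = ipaddress.ip_network("192.168.0.1/24", strict=False)
print(net24.broadcast_address)  # 192.168.0.255

# With a /32, the "network" contains only the host itself, so the
# computed broadcast address is the host's own address - there is
# nobody else in the network to broadcast to.
net32 = ipaddress.ip_network("192.168.0.1/32")
print(net32.broadcast_address)  # 192.168.0.1

# And the /24 broadcast address is not inside the /32 network:
print(ipaddress.ip_address("192.168.0.255") in net32)  # False
```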
Well, Nvidia doesn't support OpenCL 2, so if you want your software to support the most commonly used cards, you're going to be limited to OpenCL 1.2, which is pretty crap compared to the shiny CUDA. There's also a lot of great tooling made or heavily sponsored by Nvidia that's only available for CUDA.
And yes, Nvidia now supports OpenCL 3.0, but that's pretty much just OpenCL 1.2 with all OpenCL 2.x features marked as optional (and Nvidia doesn't support them, obviously).
Those distros "force" you to reboot when you want to update (as opposed to allowing you to do the update on the running system). Think Windows 7 and earlier - that kind of forced reboot, back when people were fine with the way Windows did updates.
You still need some privileged process to exploit. Glibc code doesn't get any higher privileges than the rest of the process - from the kernel's point of view, it's just part of the program like any other code.
So if triggering the bug in your own process were enough for privilege escalation, it would also be a critical security vulnerability in the kernel: it can't allow you to execute a magic sequence of instructions in your process and become root, as that would completely destroy any semblance of process / user isolation.
On the other hand, it's also worth noting that newer RAM generations are less and less susceptible to this kind of attack. Not because of any countermeasures - they just lose data without constant refreshing much more quickly, even when chilled / frozen, so the attack becomes impractical.
So from DDR4 up, you're probably safe.
~~I think the idea at the time was that if /usr is unavailable, you won't be doing much with the system anyway (other than fixing the configuration).~~
Nevermind, apparently the original meaning had nothing to do with a network (TIL for me), so our discussion is kinda moot. See section 0.24 in this 2.9BSD (1983) installation guide
> Locally written commands that aren't distributed are kept in /usr/src/local and their binaries are kept in /usr/local. This allows /usr/bin, /usr/ucb, and /bin to correspond to the distribution tape (and to the manuals that people can buy). People wishing to use /usr/local commands are made aware that they aren't in the base manual.
No comment on sensibility, but technically both are equally difficult - mount the parent filesystem, then mount the child filesystem into an empty directory in the parent. Doesn't matter which one is where, it's all abstracted away at this level anyway.
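As a sketch, with hypothetical device names (/dev/sda1 holding the parent filesystem, /dev/sda2 the one nested inside it):

```shell
# Mount the parent filesystem first.
mount /dev/sda1 /mnt

# Create an empty directory inside it to serve as the mount point.
mkdir -p /mnt/data

# Mount the child filesystem into that directory.
mount /dev/sda2 /mnt/data

# Swapping which device plays which role works identically - the kernel
# just stacks mounts; "which filesystem is inside which" doesn't matter.
```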
I believe a USB WiFi dongle will be a better idea than modifying live images of various distros, and others are already pointing you in the right direction for that, but I feel the need to correct one thing:
> Okay, so maybe I can add some driver files to the LiveUSB or something? . . . nope. Not a good idea, because the other part of the whole fix is installing firmware, which has to be in place before the drivers will work -- but this chip is also still being used by the onboard Mac OS.
The WiFi module doesn't have any persistent memory for firmware, which is why the system needs to bring its own firmware - it is uploaded to the chip on every boot as part of driver initialization. So there is no risk of interfering with macOS here.
The installation in the guide refers to putting the firmware in a place where the driver will be able to find it. In other words, you would be installing the firmware on the Linux system, not onto the WiFi module.
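To illustrate what "installing the firmware" means in practice - a sketch assuming a hypothetical Broadcom chip served by the brcmfmac driver (your chip, driver, and firmware file name will differ):

```shell
# The kernel driver requests the firmware file during initialization
# (via request_firmware) and searches for it under /lib/firmware.
sudo cp brcmfmac43602-pcie.bin /lib/firmware/brcm/

# Reload the driver so it re-uploads the firmware to the chip.
sudo modprobe -r brcmfmac && sudo modprobe brcmfmac

# Check whether the firmware was found and loaded.
dmesg | grep -i firmware
```

Nothing here touches the WiFi module's own storage (it has none) - the file just sits on the Linux filesystem for the driver to find.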
Yes, but they are asking how to set up FDE in the same way it works on Windows, where automatic unlocking works using TPM. They just don't know the technical details.