To be fair, the proxy engine is supposedly written in Go, not in Node.js, but yeah, the DDoS defense is most likely wishful thinking...
Don't.
The annoyance grows with the number of hosts ;-) I still want to feel in control, which is why I'm hesitant to implement unattended decryption like with Tang/Clevis.
But I'm interested in the idea of not messing with the initrd image: booting into a running system and then waiting for decryption of a data partition. Isn't it a hassle to manually override all the relevant service declarations etc. to wait for the mount? Or how do you do that?
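To make my question more concrete, what I imagine is a systemd drop-in per affected service; the service name and mount point here are made up, and as I understand it, `RequiresMountsFor=` pulls in and orders the unit after the corresponding mount unit:

```ini
# Hypothetical drop-in: /etc/systemd/system/myapp.service.d/wait-for-data.conf
# Delays the service until the data partition (unlocked manually after boot)
# is actually mounted.
[Unit]
RequiresMountsFor=/srv/data
```

Doing that for every service that touches the encrypted partition is exactly the hassle I mean.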
The passphrase should be stored and transferred encrypted, but that would basically mean reimplementing Mandos, a tool that was mentioned in another reply https://lemmy.world/post/38400013/20341900. Besides that, yes, that's one way I've also considered: an Ansible script with access to all encrypted hosts' initrd SSH keys that tries to log in; if a host is waiting for decryption, it supplies the key, done. That needs one webhook for the notification and one for me to trigger the playbook run... Maybe I will revisit this...
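Roughly what I have in mind, as an untested sketch: host names, the key path, the SSH port, the `pass` password store and the `cryptroot-unlock` helper (Debian's dropbear-initramfs) are all assumptions, not working config.

```yaml
# Hypothetical playbook: for each encrypted host, try its initrd SSH
# endpoint and, if it answers, pipe the stored passphrase to the
# waiting unlock prompt.
- hosts: localhost
  gather_facts: false
  vars:
    encrypted_hosts: [host1, host2]
  tasks:
    - name: Supply the passphrase to any host waiting in its initrd
      ansible.builtin.shell: >
        pass "luks/{{ item }}" |
        ssh -i ~/.ssh/initrd_unlock -p 2222 "root@{{ item }}" cryptroot-unlock
      loop: "{{ encrypted_hosts }}"
      ignore_errors: true
```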
It wasn't clear to me at first glance how the Mandos server gets the approval to supply the client with its desired key, but I figured it out in the meantime: that's done through the mandos-monitor TUI. However, that doesn't quite fit my UX expectations. Thanks for mentioning it, though. It's an interesting project I will keep in mind.
Definitely! I have BMC/KVM everywhere (well, everywhere that matters).
I have talked myself out of this (for now), though. I think if I ever find the time to revisit this, I will try to do it by injecting some OIDC-based approval (memo to myself: CIBA flow?) into something like Clevis/Tang.
Sort of, but this seems a bit heavy. (That being said, I was also considering PKCS#11 on a net-HSM, which seems to do basically the same...)
Yes, I was thinking about storing encrypted keys, but still, using claims for that is clearly just wrong... Using a vault to store the key is probably the way to go, even though it adds another service the setup depends on.
Interesting, do you happen to know how this "approval" works here, concretely?
How long did it take to get zpool-attach? I will not join the waiting list 😉
The selling point of Unraid is that you can mix and match different disk sizes, and it figures out a (good? efficient?) way to handle them even as you grow a pool. You're not going to have a good time combining a 1 TB, a 2 TB and a 15 TB drive using ZFS; Unraid doesn't care... (I use and prefer ZFS myself, by the way; this is hearsay.)
I love the simplicity of this, I really do, but I don't consider this SSO. It may be if you're a single user, but even then, many things I'm hosting have their own authentication layer and only allow offloading to some OIDC/OAuth or LDAP provider.
Thanks for the analysis; I had also seen the API keys, but I didn't check the deployments.
I guess this answers my question then: No one is using it because not even the dev gets it deployed – highly "avaliable" 🤣