this post was submitted on 27 Jan 2026
1035 points (99.6% liked)

As evidence, the lawsuit cites unnamed "courageous whistleblowers" who allege that WhatsApp and Meta employees can request to view a user's messages through a simple process, thus bypassing the app's end-to-end encryption. "A worker need only send a 'task' (i.e., request via Meta's internal system) to a Meta engineer with an explanation that they need access to WhatsApp messages for their job," the lawsuit claims. "The Meta engineering team will then grant access -- often without any scrutiny at all -- and the worker's workstation will then have a new window or widget available that can pull up any WhatsApp user's messages based on the user's User ID number, which is unique to a user but identical across all Meta products."

"Once the Meta worker has this access, they can read users' messages by opening the widget; no separate decryption step is required," the 51-page complaint adds. "The WhatsApp messages appear in widgets commingled with widgets containing messages from unencrypted sources. Messages appear almost as soon as they are communicated -- essentially, in real-time. Moreover, access is unlimited in temporal scope, with Meta workers able to access messages from the time users first activated their accounts, including those messages users believe they have deleted." The lawsuit does not provide any technical details to back up the rather sensational claims.

[–] just_another_person@lemmy.world 23 points 1 day ago* (last edited 1 day ago) (2 children)

🤣🤣🤣😂

Bruv, before Signal launched they posted an entire whitepaper detailing their protocol, the working mechanisms of the system, and source code. So to reply to your 3 points:

  1. No, this is stupid and easily verified by watching network traffic from any device. Signal isn't secretly sending plaintext messages anywhere.
  2. No, it's not impossible to tell this at all -- that's exactly what open source code makes possible. Not only have NUMEROUS security audits been done on Signal by everyone from academia to for-profit security researchers to governments, you can also verify that what's running on your phone matches the published source, because the fingerprint hashes for builds are published too: build it yourself from source, and your fingerprint should match the publicly published one.
  3. See my point above, but also: when two users exchange keys on Signal (or in any other cryptographic protocol), these keys are constantly verified. If a key changes, the session becomes invalid. Verifying these keys between two users is a built-in feature of Signal, but moreover, the correct functioning of the cryptography can be, and has been, proven during the independent audits of Signal. Go read any of the numerous papers dating back to 2016.
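
The reproducible-build check in point 2 boils down to hashing your own build and comparing it to the published digest. Here's a minimal sketch in Python; the throwaway file and recomputed digest are stand-ins for a real APK and the digest a project publishes for a release:

```python
import hashlib
import tempfile

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks so large APKs are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Throwaway file standing in for the APK you built yourself from source.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is the APK you built yourself")
    apk_path = f.name

local_digest = sha256_of(apk_path)
# In the real workflow you'd compare against the digest published for that
# release; here we just recompute it over the same bytes.
published_digest = hashlib.sha256(
    b"pretend this is the APK you built yourself"
).hexdigest()
print(local_digest == published_digest)  # prints True
```

If the two digests match, the binary you're running is byte-for-byte the one built from the public source; any tampering changes the digest.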

If you don't understand how any of this works, it's just best not to comment.

[–] pressanykeynow@lemmy.world 1 points 13 hours ago (1 children)

What if the malicious actor is not Signal but Google or the hardware manufacturer?

Can we check that the encryption key generated by the device is not stored somewhere on the device? Same for the OS.

Can we check that the app running in memory is the same that is available for reproducible build checks?

Can we check that your and my apps at the moment are the same as the one security researchers tested?

[–] just_another_person@lemmy.world 3 points 13 hours ago (1 children)

The clients (apps) constantly enforce key consistency for your own keys, the server's identity, and the keys exchanged with the other party in a conversation. There is no way to MITM that.
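
The key-verification idea can be sketched like this -- a toy illustration, not Signal's actual safety-number construction (which is more involved), just the principle that both parties derive the same short fingerprint from the exchanged public keys, and a swapped-in MITM key changes it:

```python
import hashlib

def fingerprint(my_pubkey: bytes, their_pubkey: bytes) -> str:
    """Order-independent short fingerprint both parties can compare out of band.
    Toy illustration only; real protocols use a hardened construction."""
    material = b"".join(sorted([my_pubkey, their_pubkey]))
    return hashlib.sha256(material).hexdigest()[:12]

alice_view = fingerprint(b"alice-key", b"bob-key")
bob_view = fingerprint(b"bob-key", b"alice-key")
print(alice_view == bob_view)  # prints True: both sides see the same number

# If a MITM substitutes their own key, the fingerprints no longer match:
mitm_view = fingerprint(b"alice-key", b"mitm-key")
print(mitm_view == alice_view)  # prints False
```

Comparing that number over a separate channel (in person, over a call) is what defeats a man-in-the-middle: the attacker can't make both sides see matching fingerprints without knowing the real keys.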

The clients are open source and audited regularly, and yes, builds are binary-reproducible and fingerprinted on release.

That's not to say someone can't build a malicious copy that does dumb stuff and put it on your phone to replace the legitimate one, but the server would catch and reject it if its fingerprint doesn't match a previously known good copy or a public version.

Now you're just coming up with weird things to justify the paranoia. None of this has anything to do with Signal itself, which is as secure as it gets.

[–] pressanykeynow@lemmy.world 1 points 13 hours ago (1 children)

None of this has anything to do with Signal itself, which is as secure as it gets.

Didn't I say that at the start of my questions? What's your point?

server would catch and reject it if it's fingerprints don't match the previously known good copy, or a public version

If I understand you correctly, you mean that the Signal app checks itself and sends the result to the server, which can then deny access? Is that actually what Signal does, and what makes this fingerprint difficult to spoof?

I don't think you answered any of my questions though since they weren't about Signal.

Now you're just coming up with weird things to justify the paranoia

I'm just asking questions about security I don't know the answers to; I'm not stating that's how things are.

[–] just_another_person@lemmy.world 1 points 6 hours ago

I did answer your questions, but if I missed something, feel free to ask and I can clarify.