this post was submitted on 27 Jan 2026
1196 points (99.5% liked)
Technology
In the method described, it doesn't matter whether Signal encrypts the message before it leaves your phone: the plaintext is still in the app, and it gets sent to Meta encrypted with Meta's keys.
It's basically impossible to know this isn't happening just by reading source code, because the code that loads widgets doesn't have to be anywhere near the messaging code; you'd have to read the entire Signal codebase.
There is no way to know that the code you read on GitHub is the code Google/Apple installs on your phone.
🤣🤣🤣😂
Bruv, before Signal launched they posted an entire whitepaper detailing their protocol, the working mechanisms of the system, and source code. So to reply to your 3 points:
If you don't understand how any of this works, it's just best not to comment.
What if the malicious actor is not Signal but Google or the hardware manufacturer?
Can we check that the encryption key generated by the device is not stored somewhere on the device? Same for the OS.
Can we check that the app running in memory is the same that is available for reproducible build checks?
Can we check that your and my apps at the moment are the same as the one security researchers tested?
The clients (apps) enforce key consistency for your own keys, the server identity, and the keys exchanged with the other party in a conversation. Constantly. There is no way to MITM that.
The clients are open source and audited regularly, and yes, builds are binary-reproducible and fingerprinted on release.
That's not to say someone can't build a malicious copy that does dumb stuff and put it on your phone in place of the real one, but the server would catch and reject it if its fingerprints don't match the previously known-good copy or a public release.
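To make the "no way to MITM" claim concrete: clients pin key fingerprints and flag any change. Here's a toy trust-on-first-use sketch, assuming a simple in-memory store — this is *not* Signal's actual safety-number algorithm, and the `known_keys` store and function names are made up for illustration:

```python
import hashlib

# Hypothetical trust-on-first-use store: contact -> pinned key fingerprint.
known_keys = {}

def fingerprint(public_key_bytes):
    """Short hex fingerprint of a public key (illustrative, not Signal's format)."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

def check_key(contact, public_key_bytes):
    """Pin the key on first contact; afterwards, flag any change as a possible MITM."""
    fp = fingerprint(public_key_bytes)
    pinned = known_keys.setdefault(contact, fp)
    return pinned == fp  # False means the key changed since it was pinned
```

A server that swaps in its own key mid-conversation would make `check_key` return `False` on the next message, which is why the clients, not the server, are the trust anchor.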
Now you're just coming up with weird things to justify the paranoia. None of this has anything to do with Signal itself, which is as secure as it gets.
Didn't I say that at the start of my questions? What's your point?
If I understand you correctly, you mean that the Signal app checks itself and sends the result to the server, which can then deny it access? Is that what Signal does, and is that what makes the fingerprint difficult to spoof?
I don't think you answered any of my questions though since they weren't about Signal.
I'm just asking questions about security I don't know answers to, I'm not stating that's how things are.
I did answer your questions, but if I missed something, feel free to ask and I can clarify.
Why would any message be plaintext?
Fair, but you could have just said they have reproducible builds, or linked the docs: https://github.com/signalapp/Signal-Android/blob/main/reproducible-builds/README.md
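The core of what those docs describe is: build the APK from source yourself, then compare it against the APK that was installed on your phone. The real process in the linked README has extra steps (it uses a comparison script rather than a raw byte compare), but the core idea is just hashing both files — a minimal sketch, with hypothetical file paths:

```python
import hashlib

def sha256_file(path):
    """Hash a file in chunks so large APKs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(local_build, installed_apk):
    """True if the APK you compiled is byte-identical to the one on the phone."""
    return sha256_file(local_build) == sha256_file(installed_apk)
```

If `builds_match("my-build.apk", "from-play-store.apk")` comes back `False`, either the build isn't reproducing correctly or the installed copy differs from the published source.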
Again, you are missing the point of the attack.
Back at you: even if you're right that Signal is secure, the attack is not what you think it is.
What in the world are you talking about here, bud? Your comments are making zero sense.
Look, seriously, if my comment is being upvoted, it's because I responded to yours, and people understand what I am saying in response.
You, unfortunately, clearly do not understand what I'm saying because you do not grasp how any of this works.
Lmao, sure buddy, pat yourself on the back because you got upvotes.
You're talking about E2E encryption as if it prevents ~~side-channel~~ client-side attacks, but sure, morons will upvote because they also don't understand real-world security.
The only useful thing you've pointed out in your deluge of spam is that Signal builds are reproducible, which does protect against the attack described (as long as there isn't a backdoor in the published code).
That's literally what E2E encryption does. In order to attack it from outside you would have to break the encryption itself, and modern encryption is so robust that it would require quantum computing to break, and that capability hasn't been developed yet.
The only reason the other commenter's words sound like spam to you is because you don't understand them, which you plainly reveal when you say "(as long as there isn't a backdoor in the published [audited] code)".
E2E encryption doesn't prevent client-side attacks; I misspoke when I called it a side-channel attack. And ultimately Signal's code is audited, so Signal is more secure. But people are mistaking a client-side exploit (sent from Meta's servers to the WhatsApp client) for breaking WhatsApp's E2E encryption, which is not what the article describes.
It sounds like you're contradicting yourself now. You're right, signal is more secure because its source code is open-source and auditable. So what's the issue? It seems you've been arguing otherwise, and you're just now coming around to it without admitting that you were wrong in the first place.
The client-side app is also open-source and auditable, and you can monitor outgoing traffic on your device to see whether the Signal app is sending data it shouldn't. It sounds like people have verified that it doesn't, but if you don't want to take their word for it, why don't you see for yourself?
I didn't realize Signal now has reproducible builds (in my defense, it didn't when it launched).
This is mostly useless, as the traffic Signal sends is encrypted, so you really just have to trust the code.
If it's sending 0.0kb of background data, then the client is not communicating clandestinely with the server.
Sure, but it by necessity sends some encrypted data to the server, and Wireshark isn't going to tell you whether that's just your message or your message plus additional information.
this isn't a client-side exploit. this is the fact that meta controls the encryption keys. they mention a "widget", but that's not a widget on your device; they say it's a widget on their workstation, whatever that means. i'm thinking it's something akin to raising a ticket which triggers a workflow to remote-install an app on a work device (a process common at large enterprises)
Do you know what size channel attacks are? Because nothing you've even tried to bring up describes one at all, or how it applies to your original comments.
Yeah a size channel attack is when a poster can't let go of how small their dick is so talks about how great Signal is all day.
The whole comment thread got a bit "heated".
Not, or not only, your fault, to be clear. But come on, guys: let's peacefully share arguments, ask questions, get answers, and learn stuff without insults or 😂-reactions. We can do better. This isn't Reddit.
about the 3rd: is the final apk file a user downloads from the play store reproducible? could google add stuff to the apk before the user downloads it? do users ever bother checking whether the apk hash matches the one from the reproducible build?
yes, that's why it's called fingerprinting:
it's a kind of mathematical function that takes the entire code as input and outputs a unique result.
the result is just some string of symbols (which really just represent a unique string of 1's and 0's).
this unique string of characters is, as mentioned, unique for any given input.
this string can then be compared to any arbitrary other string, and if they match, then you know it's the same code.
so in the case of signal anybody can download the source, compile it, and verify that it matches the fingerprint of the compiled code on their own device.
that's why it can't be faked: you compare the already compiled code.
if even a single digit of the code is out of place, it's not going to result in the same string, and thus immediately get flagged as a mismatch.
it's mathematically impossible to fake.
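The property described above is easy to see for yourself with an ordinary cryptographic hash like SHA-256, here via Python's standard `hashlib` (the byte strings are stand-ins for real compiled code):

```python
import hashlib

code = b"original signal build"
tampered = b"original signal buile"  # a single byte changed

h1 = hashlib.sha256(code).hexdigest()
h2 = hashlib.sha256(code).hexdigest()      # same input -> same fingerprint
h3 = hashlib.sha256(tampered).hexdigest()  # one byte off -> completely different fingerprint

print(h1 == h2)  # True: anyone hashing the same bytes gets the same string
print(h1 == h3)  # False: the mismatch immediately flags the tampered copy
```

That "one byte off gives an unrelated hash" behavior is the avalanche effect, and it's why comparing fingerprints is enough to compare entire builds.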
While I agree with you I did just want to point out one thing.
This:

> it's mathematically impossible to fake.

is not entirely true, per se: every hashing function does have collisions that can occur. But the likelihood that someone baked in an exploit that kept the application functioning while adding their backdoor, all while somehow creating a hash collision with the original fingerprint, is practically zero. And honestly, if someone did pull that off, fucking hats off, because that has to be some sort of math and coding wizardry beyond most. I should also point out that the file size would most likely have to be different, so there should be other methods of detecting the compromised build regardless.
Sorry, I know that was very pedantic of me, but I did want to call that out: it's technically possible, but the actual likelihood has to be so minuscule it's almost irrelevant, along with the fact that other tells would surely exist.
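To put a rough number on "practically zero": the chance that any two of N distinct inputs *accidentally* share a SHA-256 fingerprint is approximately N²/2²⁵⁷ (the birthday bound; deliberately engineering a collision is a different and still-unsolved problem for SHA-256). A quick back-of-the-envelope check:

```python
# Rough birthday-bound estimate for accidental SHA-256 collisions.
HASH_SPACE = 2 ** 256

def collision_probability(n):
    """Approximate chance that any two of n inputs share a hash (valid for n << 2**128)."""
    return n * (n - 1) / (2 * HASH_SPACE)

# Even among a billion different builds:
p = collision_probability(10 ** 9)
print(p)  # roughly 4e-60: effectively zero
```

So while collisions exist in principle, stumbling into one by chance is far less likely than, say, a hardware fault corrupting the comparison itself.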