[–] TAG@lemmy.world 10 points 5 months ago (1 children)

But that only works for untrusted code escaping a sandbox, right? It doesn't help with malicious code embedded in legitimate-seeming apps. The latter vector seems easier, especially on Android, no?

[–] henfredemars@infosec.pub 25 points 5 months ago* (last edited 5 months ago) (2 children)

I don't really consider a malicious app to be an exploit. In that case, the software is doing exactly what it was designed to do -- malicious activity. It's not being manipulated into performing unintended operations through the exploitation of a software bug. Code signing and secure boot are not effective against intentionally shipping malicious code to end users; they're designed to frustrate actual hackers.
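To make that concrete: code signing just checks a cryptographic signature over the code before it runs. Here's a minimal Kotlin sketch of the idea (the key pair and payload are stand-ins, not any platform's actual boot chain):

```kotlin
import java.security.KeyPairGenerator
import java.security.Signature

fun main() {
    // Stand-in for the vendor's signing key pair.
    val keyPair = KeyPairGenerator.getInstance("RSA")
        .apply { initialize(2048) }
        .genKeyPair()

    val appCode = "the app's binary contents".toByteArray()

    // Vendor side: sign the code with the private key.
    val signatureBytes = Signature.getInstance("SHA256withRSA").run {
        initSign(keyPair.private)
        update(appCode)
        sign()
    }

    // Device side: verify against the trusted public key before running anything.
    val valid = Signature.getInstance("SHA256withRSA").run {
        initVerify(keyPair.public)
        update(appCode)
        verify(signatureBytes)
    }
    check(valid) { "Bad signature: refuse to run" }

    // The catch: if the vendor signs malware, this check passes anyway.
    // Signing proves origin and integrity, not good intent.
}
```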

For malicious-by-design apps, we rely on a central app store that hopefully reduces the number of bad apps in circulation. If you publish malware, eventually you get caught and we know who you are. Sandboxing with a permissions system helps prevent apps from performing actions contrary to the user's interests. E.g., why is my flashlight app asking for my contacts when I pressed 'change color'?
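That "flashlight asking for contacts" moment is exactly what the runtime permission prompt makes visible. A rough sketch of the Android side in Kotlin (the androidx APIs are real; the flashlight app itself is a made-up example):

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class FlashlightActivity : AppCompatActivity() {

    // The OS owns the prompt; the app only learns granted or denied.
    private val contactsPermission =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            // A flashlight has no business here -- and now the user has seen it ask.
        }

    fun onChangeColorPressed() {
        val status = ContextCompat.checkSelfPermission(
            this, Manifest.permission.READ_CONTACTS
        )
        if (status != PackageManager.PERMISSION_GRANTED) {
            contactsPermission.launch(Manifest.permission.READ_CONTACTS)
        }
    }
}
```

The point is that the app can't read contacts silently: the grant has to pass through a system dialog the app doesn't control.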

If you instead exploit your way in directly, it's harder to know who did it and why, because you didn't pass through any central vetting or accountability system, and you're not as easily bound by the permissions system. It depends on your bad guy's goals: what they want and whom they're targeting. Force your way in through the back entrance, crawl through an open window (like a weak security setting), or lie your way in the front door (a trojan)? It depends.

None of it is perfect, but I'm sure OS design experts would love to hear about better solutions if any exist.

[–] Rai@lemmy.dbzer0.com 4 points 5 months ago (1 children)

Your explanations really are poetry.

[–] henfredemars@infosec.pub 3 points 5 months ago

Aww, thank you!

[–] skye@lemmy.world 0 points 5 months ago (1 children)

Wouldn't a malicious app still be an exploit, though? If I download an app for playing a game, but it was also designed to upload my private photos to the attacker's server, I'd say that's still exploiting. It's just exploiting my expectations of what the app should do rather than leveraging a system weakness (which it probably does anyway).

[–] henfredemars@infosec.pub 5 points 5 months ago* (last edited 5 months ago)

You'd have to grant the app permission to access your photos, and at that point I would say the problem is more the person in the driver's seat: you can't really protect users from themselves. Now, if you had a legitimate reason to grant access to your photos, then we definitely have a problem.
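Worth adding that on recent Android versions a game never even needs a blanket photos permission for legitimate uses: the system photo picker hands over only what the user explicitly selects. A sketch with the androidx picker API (the game activity is a made-up example):

```kotlin
import android.net.Uri
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class GameActivity : AppCompatActivity() {

    // The picker UI is provided by the system, outside the app. The app
    // receives a URI for the one photo the user chose, and nothing else.
    private val pickImage =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            uri?.let { /* use just this image, e.g. as an avatar */ }
        }

    fun onChooseAvatarPressed() {
        pickImage.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```

So an app that insists on full photo access for something like an avatar feature is itself a small red flag.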

You can think of this as a kind of exploit if you prefer. However, it becomes a permissions, ecosystem, and reputation issue rather than a technical software one. Because you're looking at a totally different set of tools, I think it's useful to restrict "exploit" to refer only to bugs.

You could take that argument one step further and ask: what if my new phone comes with preinstalled malware? The system collapses if you can't have some level of trust in the software you're running.