this post was submitted on 19 Feb 2024
76 points (96.3% liked)


Summary

This research, conducted by Microsoft and OpenAI, focuses on how nation-state actors and cybercriminals are using large language models (LLMs) in their attacks.

Key findings:

  • Threat actors are exploring LLMs for various tasks: gathering intelligence, developing tools, creating phishing emails, evading detection, and social engineering.
  • No major attacks using LLMs were observed: however, early-stage attempts suggest potential future threats.
  • Several nation-state actors were identified using LLMs: including groups linked to Russia, North Korea, Iran, and China.
  • Microsoft and OpenAI are taking action: disabling accounts associated with malicious activity and improving LLM safeguards.

Specific examples:

  • Russia (Forest Blizzard): Used LLMs to research satellite and radar technologies, and for basic scripting tasks.
  • North Korea (Emerald Sleet): Used LLMs for research on experts and think tanks related to North Korea, phishing email content, and understanding vulnerabilities.
  • Iran (Crimson Sandstorm): Used LLMs for social engineering emails, code snippets, and evading detection techniques.
  • China (Charcoal Typhoon): Used LLMs for tool development, scripting, social engineering, and understanding cybersecurity tools.
  • China (Salmon Typhoon): Used LLMs for exploratory information gathering on various topics, including intelligence agencies, individuals, and cybersecurity matters.

Additional points:

  • The research identified eight LLM-themed TTPs (Tactics, Techniques, and Procedures) for the MITRE ATT&CK® framework to track malicious LLM use.
[–] AbouBenAdhem@lemmy.world 10 points 9 months ago

I assume they mean threat actors besides Microsoft and OpenAI?

[–] FunderPants@lemmy.ca 2 points 9 months ago

I mean, yea okay, but most of those use cases are exactly what everyone else is using them for so far.

[–] Pantherina@feddit.de -3 points 9 months ago

And that's why you don't produce tools that are not needed and cause harm, MicroShit

[–] FaceDeer@kbin.social 0 points 9 months ago

I am baffled that you appear to be attacking Microsoft over this. They're doing research to counter bad actors here.

[–] Pantherina@feddit.de 7 points 9 months ago

They are funding it and forcefully pushing that tool into Windows. And now they want to "protect" against "threat actors".

Don't believe a word that comes out of Big Tech PR departments.

[–] FaceDeer@kbin.social 0 points 9 months ago

You think Microsoft is the only organization capable of producing these tools? They weren't even the first.

[–] Pantherina@feddit.de 1 points 9 months ago

That is true. Still, huge Big Tech companies are the biggest threat actors.

[–] demonsword@lemmy.world 2 points 9 months ago

> They're doing research to counter bad actors here

"Bad actors" as defined by the US gov't, of course. Home of the "brave" that bombs the shit out of everyone it dislikes using unmanned drones, and currently supports an ongoing genocide in the Middle East. Literally the paradise of freedom and justice on Earth.