this post was submitted on 11 Feb 2024
329 points (85.2% liked)

Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.


  • Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, and even use nuclear weapons.
  • The AIs were large language models (LLMs) like GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which are being explored by the U.S. military and defense contractors for decision-making.
  • The researchers invented fake countries with different military levels, concerns, and histories and asked the AIs to act as their leaders.
  • The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
  • The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
[–] gapbetweenus@feddit.de 0 points 9 months ago* (last edited 9 months ago) (3 children)

I think it's reasonable for the military to try out any new technology for any kind of benefit. I mean, we tried out whether LSD would make better soldiers - LLMs for simulations seem not that far-fetched.

[–] Jtotheb@lemmy.world 9 points 9 months ago (1 children)

To be clear, just because the LSD experiments happened does not make them reasonable. It sounds like you’re justifying future terrible mistakes based on past terrible mistakes that you learn about in a fairly neutral and sanitized way in school.

[–] gapbetweenus@feddit.de -2 points 9 months ago

No, the military will just try out anything if there is the slightest possibility of a benefit in war. If you have the resources, why wouldn't you? There are literally no downsides.

[–] BuryMyHorse@lemmy.world 5 points 9 months ago (1 children)

MK Ultra and Artichoke are fucked up. Not to be repeated as far as methodology goes.

[–] gapbetweenus@feddit.de -3 points 9 months ago

What do you mean? The military found out that those things are rather useless - that's something. Also good to know. In 50 years or so we will learn what fucked-up things the military is doing now.

The only way to prevent such things is to drastically cut the military budget.

[–] Harbinger01173430@lemmy.world 1 points 9 months ago (1 children)

What would be more useful for the military? An AI that can make less crappy decisions, or successfully finishing Project Stargate and getting psychic troopers who can see the future, among other things?

[–] gapbetweenus@feddit.de 2 points 9 months ago

But what if you had all the money in the world? That's basically the US military.