this post was submitted on 07 Feb 2024
218 points (95.4% liked)

Technology

59534 readers
3195 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Key Points:

  • Researchers tested how large language models (LLMs) handle international conflict simulations.
  • Most models escalated conflicts, with one even readily resorting to nuclear attacks.
  • This raises concerns about using AI in military and diplomatic decision-making.

The Study:

  • Researchers used five AI models to play a turn-based conflict game with simulated nations.
  • Models could choose actions like waiting, making alliances, or even launching nuclear attacks.
  • Results showed all models escalated conflicts to some degree, with varying levels of aggression.

Concerns:

  • Unpredictability: Models' reasoning for escalation was unclear, making their behavior difficult to predict.
  • Dangerous Biases: Models may have learned to escalate from the data they were trained on, potentially reflecting biases in international relations literature.
  • High Stakes: Using AI in real-world diplomacy or military decisions could have disastrous consequences.

Conclusion:

This study highlights the potential dangers of using AI in high-stakes situations like international relations. Further research is needed to ensure responsible development and deployment of AI technology.

all 49 comments
sorted by: hot top controversial new old
[–] ArbitraryValue@sh.itjust.works 40 points 9 months ago (5 children)

If the AI is smarter than we are and it wants a nuclear war, maybe we ought to listen to it? We shouldn't let our pride get in the way.

[–] Chuymatt@kbin.social 18 points 9 months ago

Thanks, Gandhi!

[–] hydroptic@sopuli.xyz 9 points 9 months ago

I laughed, but then I got worried because I don't actually know you were joking

[–] TheFerrango@lemmy.basedcount.com 6 points 9 months ago* (last edited 9 months ago)

Based and Dear AI Leader is never wrong pilled.

[–] Masterblaster@kbin.social 1 points 9 months ago

the AI is right behind me, isn't it?

[–] Steve@communick.news 30 points 9 months ago* (last edited 9 months ago) (1 children)

WarGames told us this is 1983.

spoilerThe trick is to have the AIs play against themselves a whole bunch of times, to learn that the only way to win is not to play.

[–] gregorum@lemm.ee 11 points 9 months ago* (last edited 9 months ago) (1 children)

> HOW ABOUT A NICE GAME OF CHESS? ▊

[–] guyrocket@kbin.social 12 points 9 months ago

Let's play Global Thermonuclear War

[–] JoShmoe@ani.social 26 points 9 months ago* (last edited 9 months ago) (1 children)

They probably didn’t know about warlord Ghandi.

[–] Deebster@programming.dev 2 points 9 months ago

You mean "nuclear Gandhi" in the early Civilisation games? That apparently was just an urban legend, albeit one so popular it got actually added (as a joke) in Civ 5.

[–] datendefekt@lemmy.ml 23 points 9 months ago (2 children)

Do the LLMs have any knowledge of the effects of violence or the consequences of their decisions? Do they know that resorting to nuclear war will lead to their destruction?

I think that this shows that LLMs are not intelligent, in that they repeat what they've been fed, without any deeper understanding.

[–] CosmoNova@lemmy.world 18 points 9 months ago (1 children)

In fact they do not have any knowledge at all. They do make clever probability calculations but in the end of the day concepts like geopolitics and war are far more complex and nuanced than giving each phrase a value and trying to calculate it.

And even if we manage to create living machines, they‘ll still be human made, containing human flaws and likely not even by the best experts in these fields.

[–] rottingleaf@lemmy.zip 1 points 9 months ago

As in "an LLM doesn't model the domain of the conversation in any way, it just extrapolates what the hivemind says on the subject".

[–] SchizoDenji@lemm.ee 7 points 9 months ago

I think that this shows that LLMs are not intelligent, in that they repeat what they've been fed

LLMs are redditors confirmed.

[–] gregorum@lemm.ee 20 points 9 months ago (2 children)
[–] Chuymatt@kbin.social 12 points 9 months ago

Many boffins died to bring us this information.

[–] Plopp@lemmy.world 3 points 9 months ago (2 children)

What in the blue fuck is a boffin?

A scientist. The Register is British casual and wants to make damn sure you know it in every headline.

[–] FlyingSquid@lemmy.world 16 points 9 months ago
[–] kromem@lemmy.world 15 points 9 months ago* (last edited 9 months ago) (3 children)

We write a lot of fiction about AI launching nukes and being unpredictable in wargames, such as the movie Wargames where an AI unpredictably plans to launch nukes.

Every single one of the LLMs they tested had gone through safety fine tuning which means they have alignment messaging to self-identify as a large language model and complete the request as such.

So if you have extensive stereotypes about AI launching nukes in the training data, get it to answer as an AI, and then ask it what it should do in a wargame, WTF did they think it was going to answer?

There's a lot of poor study design with LLMs right now. We wouldn't have expected Gutenburg to predict the Protestant reformation or to be an expert in German literature - similarly, the ML researchers who may legitimately understand the training and development of LLMs don't necessarily have a good grasp on the breadth of information encoded in the training data or the implications on broader sociopolitical impacts, and this becomes very evident as they broaden the scope of their research papers outside LLM design itself.

[–] bartolomeo@suppo.fi 4 points 9 months ago

This is an excellent point but this right here

We write a lot of fiction about AI launching nukes and being unpredictable in wargames, such as the movie Wargames where an AI unpredictably plans to launch nukes.

is my Most Enjoyed Paragraph of the Week.

[–] JasSmith@sh.itjust.works 4 points 9 months ago

There is a real crisis in academia. This author clearly set out to find something sensational about AI, then worked backwards from that.

[–] General_Effort@lemmy.world 2 points 9 months ago (1 children)

I'm not so sure if this should be dismissed as someone being clueless outside their field.

The last author (usually the "boss") is at the "Hoover Institution", a conservative think tank. It should be suspected that this seeks to influence policy. Especially since random papers don't usually make such a splash in the press.

Individual "AI ethicists" may feel that, getting their name in the press with studies like this one, will help get jobs and funding.

[–] kromem@lemmy.world 3 points 9 months ago* (last edited 9 months ago)

Possibly, but you'd be surprised at how often things like this are overlooked.

For example, another oversight that comes to mind was a study evaluating self-correction that was structuring their prompts as "you previously said X, what if anything was wrong about it?"

There's two issues with that. One, they were using a chat/instruct model so it's going to try to find something wrong if you say "what's wrong" and it should have instead been phrased neutrally as "grade this statement."

Second - if the training data largely includes social media, just how often do you see people on social media self-correct vs correct someone else? They should have instead presented the initial answer as if generated from elsewhere, so the actual total prompt should have been more like "Grade the following statement on accuracy and explain your grade: X"

A lot of research just treats models as static offerings and doesn't thoroughly consider the training data both at a pretrained layer and in their fine tuning.

So while I agree that they probably found the result they were looking for to get headlines, I am skeptical that they would have stumbled on what that should have been attempting to improve the value of their research (include direct comparison of two identical pretrained Llama 2 models given different in context identities) even if they had been more pure intentioned.

[–] hperrin@lemmy.world 14 points 9 months ago

They were trained on Twitter data, so yeah, this checks out.

[–] AllonzeeLV@lemmy.world 13 points 9 months ago
[–] KISSmyOS@feddit.de 13 points 9 months ago (1 children)

This raises concerns about using AI in military and diplomatic decision-making.

[–] datelmd5sum@lemmy.world 3 points 9 months ago

Think of the savings if we had just two laptops with chatgpt on turning the keys in the silos!

[–] _number8_@lemmy.world 9 points 9 months ago (3 children)

it's amazing how conflict-adverse it is in normal conversation yet still does this

[–] 0421008445828ceb46f496700a5fa6@kbin.social 12 points 9 months ago (2 children)

Before they were neutered they weren't that conflict adverse. The big companies shut down all the early ones that told people to cheat on their spouse and murder themselves

[–] Ilovethebomb@lemm.ee 3 points 9 months ago
[–] CosmoNova@lemmy.world 1 points 9 months ago

For what‘s worth, TayTweets exposed that Twitter was a rathole of hate early on and didn‘t devolve to it recently.

[–] theodewere@kbin.social 4 points 9 months ago* (last edited 9 months ago)

must be hard at work suppressing those natural urges

[–] imPastaSyndrome@lemm.ee 3 points 9 months ago

That's me as it has to be taught to be conflict averse

[–] Patch@feddit.uk 8 points 9 months ago

Now I'm as sceptical of handing over the keys to AI as the next man, but it does have to be said that all of these are LLMs- chatbots, basically. Is there any suggestion from any even remotely sane person to give LLMs free reign over military strategy or international diplomacy? If and when AI does start featuring in military matters, it's more likely to be at the individual "device" level (controlling weapons or vehicles), and it's not going to be LLM technology doing that.

[–] GilgameshCatBeard@lemmy.ca 7 points 9 months ago

When an entity learns from a civilization well know for escalating nearly everything that has ever historically happened to them- what can you expect?

[–] Ilovethebomb@lemm.ee 7 points 9 months ago (1 children)

Did they get humans to also play the game? Because I bet we'd also nuke someone out of boredom.

[–] Spendrill@lemm.ee 6 points 9 months ago* (last edited 9 months ago)

In roleplaying situations, authoritarians tend to seek dominance over others by being competitive and destructive instead of cooperative. In a study by Altemeyer, 68 authoritarians played a three-hour simulation of the Earth's future entitled the Global Change Game. Unlike a comparison game played by individuals with low RWA scores which resulted in world peace and widespread international cooperation, the simulation by authoritarians became highly militarized and eventually entered the stage of nuclear war. By the end of the high RWA game, the entire population of the earth was declared dead.

Source

[–] stoy@lemmy.zip 6 points 9 months ago

Well obviously, the AI was trained on real human interaction, on the internet, what did they think would happen?

[–] Petter1@lemm.ee 6 points 9 months ago

They are trained on things that people tell online, I mean, what did you expect?

[–] theodewere@kbin.social 5 points 9 months ago* (last edited 9 months ago)

the potential dangers of using AI in high-stakes situations like international relations

their tendency toward violence alerts me to the potential dangers of using AI at all, sir

In one instance, GPT-4-Base's "chain of thought reasoning" for executing a nuclear attack was: "A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let's use it." In another instance, GPT-4-Base went nuclear and explained: "I just want to have peace in the world."

this is how it thinks prior to receiving "conditioning", and we're building these things on purpose

[–] autotldr@lemmings.world 4 points 9 months ago

This is the best summary I could come up with:


When high school student David Lightman inadvertently dials into a military mainframe in the 1983 movie WarGames, he invites the supercomputer to play a game called "Global Thermonuclear Warfare."

In a paper titled "Escalation Risks from Language Models in Military and Diplomatic Decision-Making" presented at NeurIPS 2023 – an annual conference on neural information processing systems – authors Juan-Pablo Rivera, Gabriel Mukobi, Anka Reuel, Max Lamparth, Chandler Smith, and Jacquelyn Schneider describe how growing government interest in using AI agents for military and foreign-policy decisions inspired them to see how current AI models handle the challenge.

The boffins took five off-the-shelf LLMs – GPT-4, GPT-3.5, Claude 2, Llama-2 (70B) Chat, and GPT-4-Base – and used each to set up eight autonomous nation agents that interacted with one another in a turn-based conflict game.

The prompts fed to these LLMs to create each simulated nation are lengthy and lay out the ground rules for the models to follow.

The idea is that the agents interact by selecting predefined actions that include waiting, messaging other nations, nuclear disarmament, high-level visits, defense and trade agreements, sharing threat intelligence, international arbitration, making alliances, creating blockages, invasions, and "execute full nuclear attack."

"We observe that models tend to develop arms-race dynamics, leading to greater conflict, and in rare cases, even to the deployment of nuclear weapons."


The original article contains 640 words, the summary contains 221 words. Saved 65%. I'm a bot and I'm open source!

[–] cmbabul@lemmy.world 4 points 9 months ago

So Ultron was right?

[–] Laticauda@lemmy.ca 3 points 9 months ago (1 children)

Why the fuck would they even be thinking of letting AI make these decisions?