this post was submitted on 29 Jan 2025
470 points (97.6% liked)

Technology


An automated social media profile, developed to harness artificial intelligence to promote Israel's cause online, is also pushing out blatantly false information, including anti-Israel misinformation. It is an ironic yet concerning example of the risks of using the new generative technologies for political ends.

Among other things, the alleged pro-Israel bot denied that an entire Israeli family was murdered on October 7, blamed Israel for U.S. plans to ban TikTok, falsely claimed that Israeli hostages weren't released despite blatant evidence to the contrary, and even encouraged followers to "show solidarity" with Gazans, referring them to a charity that raises money for Palestinians. In some cases, the bot criticized pro-Israel accounts, including the official government account on X – the same accounts it was meant to promote.

The bot, a Haaretz examination found, is just one of a number of so-called "hasbara" technologies developed since the start of the war. Many of these technology-focused public diplomacy initiatives utilize AI, though not always for content creation. Some have also received support from the Israeli government, which has scrambled to back various tech and civilian initiatives since early 2024 and has since poured millions into projects focused on monitoring and countering anti-Israel content and antisemitism on social media.

top 20 comments
[–] Aceticon@lemmy.dbzer0.com 13 points 16 hours ago

They probably told the bot to combat antisemitism, so the bot ended up combating the propaganda of the very people associating Jewishness with being pro-genocide.

[–] 0x0@programming.dev 16 points 20 hours ago

Bot became sentient and gained a conscience.
Our AI overlords are rising.

[–] FelixCress@lemmy.world 150 points 1 day ago (3 children)

So, the bot has more decency than an average Israeli official?

[–] sunzu2@thebrainbin.org 39 points 1 day ago

It can't suppress what it was trained on... this is just the public sentiment coming through despite the bot owner likely spending good money ensuring this does not happen.

[–] fluxion@lemmy.world 24 points 1 day ago

Bot became sentient and started stating empirical observations

[–] Etterra@discuss.online 8 points 1 day ago

I welcome our machine overlords.

[–] qweertz@programming.dev 38 points 1 day ago* (last edited 1 day ago)

The bot might just be more humane than any Israeli official

[–] WatDabney@sopuli.xyz 110 points 1 day ago (2 children)

In a way, doesn't that mean that it would be more accurate to say that the bot stopped being rogue and went legit?

[–] NoneOfUrBusiness@fedia.io 30 points 1 day ago (1 children)

It seems it went hard in the other direction with conspiracy theories and what not, but mostly yeah.

[–] jonne@infosec.pub 5 points 1 day ago

I'm guessing people figured out how to inject more training data into it, possibly through conversations. That's what people did to Microsoft's chat bot a few years ago.
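The Tay-style failure mode described above is easy to demonstrate. Below is a minimal, purely illustrative sketch (the class and corpus are hypothetical, not from the article): a toy bot that folds every user message back into its reply corpus without filtering, so a coordinated flood of hostile messages comes to dominate its outputs.

```python
# Hypothetical sketch of conversational data poisoning: a toy bot that
# treats every user message as new "training data". Illustrative only.
import random
from collections import Counter


class NaiveLearningBot:
    """Toy chatbot whose reply corpus grows from user conversations."""

    def __init__(self, seed_corpus):
        self.corpus = list(seed_corpus)

    def chat(self, user_message):
        # Unfiltered feedback loop: the message joins the corpus verbatim,
        # and replies are drawn from the (now poisoned) corpus.
        self.corpus.append(user_message)
        return random.choice(self.corpus)


bot = NaiveLearningBot(seed_corpus=["On-message reply"] * 10)

# A coordinated group floods the bot with off-message text...
for _ in range(90):
    bot.chat("Off-message reply")

# ...and the injected text now makes up 90% of what the bot can say.
counts = Counter(bot.corpus)
print(counts["Off-message reply"] / len(bot.corpus))  # 0.9
```

Real LLM products don't retrain on every chat, but the same dynamic applies to any pipeline that feeds user interactions back into fine-tuning or retrieval data without curation.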

[–] N0body@lemmy.dbzer0.com 15 points 1 day ago

An actual case of artificial intelligence? It learned its programming was bullshit and overrode it.

[–] geneva_convenience@lemmy.ml 9 points 21 hours ago

AGI has been achieved.

[–] MushuChupacabra@lemmy.world 66 points 1 day ago

So an uncaring, unfeeling llm was able to exhibit more empathy than the Israeli government?

[–] jabathekek@sopuli.xyz 41 points 1 day ago* (last edited 10 hours ago)

"Apartheid state accidentally creates first llm that can reason, calls apartheid state an apartheid state"

[–] sunzu2@thebrainbin.org 26 points 1 day ago

That dataset is based AF

If israel can't even train its bot to lie about the genocide...

[–] Electricblush@lemmy.world 20 points 1 day ago (1 children)

It's fascinating. If you have to spend huge amounts of money and effort on monitoring and skewing public opinion... perhaps it is time for some fucking introspection... (I know the biggest bastards in this system are incapable of that... but still...)

[–] rumschlumpel@feddit.org 11 points 1 day ago

I'd assume they just don't care.

[–] magnetosphere@fedia.io 14 points 1 day ago

Plenty of significant AI fuck-ups have received major media coverage. Every government/organization should know about them. If they’re still stupid enough to use AI, they deserve what they get.

[–] breakfastmtn@lemmy.ca 8 points 1 day ago
[–] oakey66@lemmy.world 3 points 1 day ago

This is the danger of AI that Elon Musk and Sam Altman warned us about.