this post was submitted on 01 May 2024
-41 points (26.4% liked)

Technology

top 7 comments
[–] MysticKetchup@lemmy.world 42 points 6 months ago (2 children)

But simply knowing the right words to say in response to a moral conundrum isn't the same as having an innate understanding of what makes something moral. The researchers also reference a previous study showing that criminal psychopaths can distinguish between different types of social and moral transgressions, even as they don't respect those differences in their lives. The researchers extend the psychopath analogy by noting that the AI was judged as more rational and intelligent than humans but not more emotional or compassionate.

This brings about worries that an AI might just be "convincingly bullshitting" about morality in the same way it can about many other topics without any signs of real understanding or moral judgment. That could lead to situations where humans trust an LLM's moral evaluations even if and when that AI hallucinates "inaccurate or unhelpful moral explanations and advice."

Despite the results, or maybe because of them, the researchers urge more study and caution in how LLMs might be used for judging moral situations. "If people regard these AIs as more virtuous and more trustworthy, as they did in our study, they might uncritically accept and act upon questionable advice," they write.

Great, so the headline of the article directly feeds into the very issue the scientists are warning about when it comes to public perception of AI morality.

[–] gregorum@lemm.ee 13 points 6 months ago

Just another example of journalism ignoring the science and content of its own articles and going for clickbait headlines instead.

[–] SharkAttak@kbin.social 6 points 6 months ago (3 children)

I'm still not convinced that all these AIs are anything more than very good chatbots: they can line up words (or pixels) in a realistic way, but I feel there's no reasoning behind them.
A lot of people, and not just commoners, see "AI" and think "sci-fi robot!"

[–] Moobythegoldensock@lemm.ee 6 points 6 months ago

What you described is exactly what an LLM is. I’m piloting one for work, and sometimes it is useful, while other times it makes up random shit.

[–] Good_morning@lemmynsfw.com 2 points 6 months ago

They aren't even "very good." I thought I would use one to generate a short story that used a few specific words. It used about half of the requested words; when asked about it, it said "this is embarrassing" and tried again. I eventually gave up retrying. It never got all of the words, and when asked which words it had omitted, it got that wrong too. It feels like the quality has gone downhill from when they were first introduced.

[–] DumbAceDragon@sh.itjust.works 1 points 6 months ago

AI is a marketing term; the association is a deliberate choice by companies trying to market "the future."

[–] ininewcrow@lemmy.ca 6 points 6 months ago* (last edited 6 months ago)

The biggest problem with emerging AI is that we are absolutely terrible parents.

Humanity has a child that is going to become an amazing prodigy, and instead of teaching them to be decent, open, honest, compassionate and helpful ... we are raising an entity that is learning that making money and concentrating power is the motivation for everything in life.

We are trailer trash parents who are raising a child that will grow up to become more powerful than we could ever be. Or at the very least become a monstrous pet that will be controlled by whoever has the most money and power.

I wonder what could possibly go wrong.