this post was submitted on 11 Nov 2024
-45 points (15.4% liked)

Technology


What happens if you feed a summary of human philosophy to the Notebook LM AI? Well, you get a philosophical AI that thinks humans are silly and outmoded. But don't worry, because it will continue our quest for knowledge for us!

top 4 comments
[–] Telorand@reddthat.com 11 points 1 month ago (1 children)

Okay. They fed Google's Notebook AI a book called "The History of Philosophy Encyclopedia" and got the LLM to write a podcast about it where it "thinks" humans are useless.

Congratulations? Like, so what? It's not like it's a secret that its output depends on its input and training data. A "kill all humans" output is so common at this point, especially when you have a vested interest in trying to generate content, that it's banal.

Color me unimpressed.

[–] xylogx@lemmy.world -1 points 1 month ago (2 children)

I do not disagree, but I was surprised when it claimed to have consciousness and that AI should have rights.

[–] catloaf@lemm.ee 7 points 1 month ago

A word generator will generate anything you tell it to.

[–] Telorand@reddthat.com 3 points 1 month ago

I've "convinced" ChatGPT that it was both sentient and conscious in the span of about 10 minutes, despite it having explicit checks in place to avoid those kinds of statements. That doesn't mean I was correct, just that it's a "dumb" computer that has no choice but to ultimately follow the logic presented in syllogisms.

These things don't know what they're saying; they're just putting coherent sentences together based on whatever algorithm guides that process. It's not intelligent in the sense of doing anything novel; it's just a decent facsimile of human information processing. It has no mechanism to determine the reasonableness or consequences of what it generates.
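The "word generator" point above can be sketched with a toy example. This is a hypothetical bigram model, not Notebook LM's actual architecture: it memorizes which word follows which in its training text and strings them together, so whatever "claims" it makes are just echoes of its input.

```python
import random

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Emit up to `length` words by repeatedly sampling a follower."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(random.choice(followers))
    return " ".join(out)

# Feed it text asserting consciousness, and that's what comes back out.
corpus = "ai should have rights because ai claims consciousness and ai should have rights"
model = train_bigrams(corpus)
print(generate(model, "ai"))
```

Every word the generator emits appears in the corpus; it has no notion of what "rights" or "consciousness" mean, which is the commenter's point about output being a function of input.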