this post was submitted on 08 Nov 2024
-22 points (32.3% liked)

Technology

top 11 comments
[–] nyan@lemmy.cafe 56 points 1 month ago (1 children)

The problem isn't that it didn't. The problem is that anyone thought that it should have.

[–] Solumbran@lemmy.world 9 points 1 month ago

But considering the obvious lack of knowledge around AIs, it should have.

[–] kat_angstrom@lemmy.world 49 points 1 month ago

It's not AI, it's an LLM. It doesn't know what misinformation is because it doesn't *know* anything.

[–] Traister101@lemmy.today 20 points 1 month ago

Wow that's crazy who could have seen that coming

[–] makingStuffForFun@lemmy.ml 7 points 1 month ago (1 children)

Most things I ask it come back as a fever dream. You're overthinking the current state of the tech. Give it another election cycle.

[–] JeffKerman1999@sopuli.xyz 3 points 1 month ago

I just ask it for boilerplate code and it's OK. I don't like having to write the same shit a million times.

[–] dhork@lemmy.world 6 points 1 month ago

I'm surprised the other ones did better

[–] CosmoNova@lemmy.world 4 points 1 month ago (1 children)

It's always refreshing to read reasonable comments to a nonsensical headline, but I do wonder why it even shows up in my feed when it has so many downvotes.

[–] catloaf@lemm.ee 1 points 1 month ago

It depends on which sort algorithm you're using.
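That remark can be made concrete. Lemmy-style "hot" sorting in the Reddit family boosts a post by a log-damped score and decays it with age; the exact constants below are illustrative assumptions, not Lemmy's actual formula, but they show why a heavily downvoted post can still surface briefly in a feed:

```python
import math

def hot_rank(score: int, hours_old: float, gravity: float = 1.8) -> float:
    # Hypothetical "hot" ranking sketch: log-damped score divided by an
    # age penalty. Negative scores are floored, so a downvoted post ranks
    # the same as a zero-score post of the same age rather than vanishing.
    damped = math.log(max(score, 1) + 1)
    return damped / (hours_old + 2) ** gravity
```

Under this sketch, a fresh post at -22 points ties a fresh post at 0 points, and both decay quickly as `hours_old` grows.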

[–] rumba@lemmy.zip 2 points 1 month ago

Lol GPT vs Copilot were in stark contrast....

I think the journalists should just try to stick to things they understand. They probably ran a single query and it failed so they kept going on the same conversation.

Sometimes the difference between a good answer and a bad answer is two or three attempts.

It's not like LLMs are particularly good at sussing out lies anyway. It would have to summarize the concepts in the article, then do web searches on each one to try to find an answer. That's a fairly expensive query, and they're honestly going to avoid it if they can.

[–] roofuskit@lemmy.world 1 points 1 month ago

And the first thing Orange Kim will do is take the leash off AI companies.