this post was submitted on 17 Jan 2024
669 points (98.3% liked)

Technology

[–] chaogomu@kbin.social 79 points 10 months ago (4 children)

The problem is, you can't trust ChatGPT to not lie to you.

And since generative AI is now being used all over the place, you just can't trust anything unless you know damn well that a human entered the info, and even then it's a coin flip.

[–] lolcatnip@reddthat.com 19 points 10 months ago (1 children)

OTOH, you also can't trust humans not to lie to you.

[–] chaogomu@kbin.social 8 points 10 months ago

That's the coin flip.

[–] Lmaydev@programming.dev 9 points 10 months ago (1 children)

The newer ones search the internet and generate answers from the results rather than from their training data, and they provide sources.

So that's not such a worry now.

Anyone who used ChatGPT for information and not text generation was always using it wrong.
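The search-then-generate flow described above can be sketched in a few lines. This is a minimal illustration only: `search()` and `generate_with_sources()` are hypothetical stand-ins, not a real search or LLM API, and a real system would call an actual search engine and model.

```python
# Sketch of the "search the internet, generate from the results, provide
# sources" pattern (retrieval-augmented generation). Both functions below
# are hypothetical stubs standing in for real search/LLM services.

def search(query):
    """Stand-in for a web search: returns (url, snippet) result pairs."""
    return [
        ("https://example.com/a", "Snippet about " + query),
        ("https://example.com/b", "Another snippet about " + query),
    ]

def generate_with_sources(query):
    """Compose an answer from retrieved snippets, citing each source URL."""
    results = search(query)
    answer = " ".join(snippet for _, snippet in results)
    sources = [url for url, _ in results]
    return {"answer": answer, "sources": sources}

reply = generate_with_sources("LLMs")
print(reply["sources"])  # the answer's claims can be checked against these
```

The point of the pattern is the last line: because the answer is grounded in retrieved pages, a reader can verify it against the cited sources instead of trusting the model's training data.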

[–] BakerBagel@midwest.social 13 points 10 months ago (1 children)

Except people are using LLMs to generate web pages about anything just to get clicks. Which means LLMs are training on information generated by other LLMs. It's an ouroboros of fake information.

[–] Lmaydev@programming.dev 3 points 10 months ago* (last edited 10 months ago)

But again, if you use an LLM's ability to understand and generate text on top of a search engine, that doesn't matter.

LLMs are not supposed to give factual answers. That's not their purpose at all.

[–] _number8_@lemmy.world 4 points 10 months ago (1 children)

plus search engines don't lecture me as much for typing naughty sex words

[–] Speculater@lemmy.world 2 points 10 months ago

Get on the unfiltered LLM train, they'll do anything GPT does and won't filter anything. Bonus if you run it locally and share with the community.

[–] notapantsday@feddit.de 1 points 10 months ago

However, I find it much easier to check whether a given answer is correct than to find the answer myself.