[–] assassin_aragorn@lemmy.world 2 points 6 months ago (3 children)

Mind explaining to a tech layperson why they're bad?

[–] drislands@lemmy.world 12 points 6 months ago (1 children)

I'll explain for you, because there's a lot of misinformation around.

What is being called AI these days is mostly each company's version of what's called an LLM -- a Large Language Model. Put simply, an LLM is a very sophisticated piece of software that takes the text it's given and works out, word by word, the statistically most likely sequence of words to follow as an answer.
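
To make that concrete, here's a minimal sketch (my illustration, not something from the original comment) of what "statistically most likely next word" looks like in practice, using the Hugging Face transformers library with the small GPT-2 model; the prompt is just a made-up example.

```python
# Minimal sketch: inspect the probabilities an LLM assigns to the next token.
# Assumes the Hugging Face transformers library and the small GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The capital of France is"  # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the final position's logits into a probability distribution over the
# next token, then show the five most likely candidates.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```

The model isn't looking anything up here; it's just ranking which token is most likely to come next based on its training data, and generating an answer means repeating that step over and over.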

This means you can ask a question the way you'd ask a human, and the way it answers will closely mirror how a person would answer (as opposed to stuff like Google Assistant or Siri, where you need to ask a question a specific way to get a decent answer).

Note, however, that at no point did I say that an LLM is accurate. That is the fatal flaw that proponents of this kind of AI never mention. An LLM has no mechanism to retrieve information or to verify the truthfulness of the answers it gives, so you wind up seeing a lot of answers from this kind of AI that are either partially or completely wrong.

My favorite example is the result you get when googling "african countries that start with the letter K". Someone posted the answer they got from an LLM to a forum online, an answer claiming there was no such country, and that became the top Google result... despite the fact that Kenya obviously exists and starts with a K.
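
For anyone curious what that looks like in code, here's another hedged sketch using the same Hugging Face transformers library: the generation call below only extends the prompt with statistically likely words, and nothing in it looks up a list of countries or checks the output against reality.

```python
# Minimal sketch: plain text generation with no retrieval or fact-checking.
# Assumes the Hugging Face transformers library and the small GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model only continues the prompt with statistically likely tokens;
# nothing in this call verifies whether the continuation is true.
result = generator(
    "African countries that start with the letter K include",
    max_new_tokens=20,
)
print(result[0]["generated_text"])  # Fluent output, but accuracy is not guaranteed.
```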


Essentially, LLMs are really fascinating in how well they approximate human speech -- but there is no actual intelligence behind them. People promoting this tech as AI either ignore that or outright lie about it. As a result, a lot of companies have started using it to replace their support teams and/or the search functionality of their websites. I'm sure you can imagine the negative effects this has caused.

[–] assassin_aragorn@lemmy.world 6 points 6 months ago (1 children)

Oh I meant Hugging Face in particular.

Still, I appreciate you taking the time to lay out that explanation!

[–] drislands@lemmy.world 4 points 6 months ago

Oh whoops! 😬 I hope you get the answer you needed!

[–] xthexder@l.sw0.com 2 points 6 months ago

I think they're being more literal. All the latest open source AI models get posted on Hugging Face.