[–] conciselyverbose@sh.itjust.works 19 points 3 months ago (21 children)

It's in the quote that they scaled it.

The point is that the entire alleged value lies in parsing the reading material and extracting the key points; because the model doesn't resemble intelligence in any way, it isn't actually capable of meaningfully doing so.

Yes, not being able to distinguish between the real answer and a "banana for scale" analogy is a big problem that shows how fucking useless the technology is.

[–] FaceDeer@fedia.io -1 points 3 months ago (14 children)

Except it is capable of meaningfully doing so, just not in every conceivable situation. And those rare flubs are the ones that get spread around and laughed at, such as this example.

There's a nice phrase I commonly use: "don't let the perfect be the enemy of the good." These AIs are good enough at this point that I find them to be very useful. Not perfect, of course, but they don't have to be, as long as you're prepared for those occasions, like this one, where they give a wrong result. Like any tool, you have some responsibility to know how to use it and what its capabilities are.

[–] btaf45@lemmy.world 2 points 3 months ago (2 children)

AIs are definitely not "good enough" to give correct answers to science questions. I've seen lots of other incorrect answers before seeing this one. While it was easy to spot that this answer is incorrect, how many incorrect answers are not obvious?

[–] FaceDeer@fedia.io 1 points 3 months ago (1 children)

Then go ahead and put "science questions" into one of the areas that you don't use LLMs for. That doesn't make them useless in general.

I would say a more precise restriction would be "they're not good at questions involving numbers." That's narrower than "science questions" in general; they're still pretty good at dealing with the concepts involved. LLMs aren't good at math, so don't use them for math.

[–] btaf45@lemmy.world 4 points 3 months ago

AI doesn't seem to be good at anything where there is a right answer and a wrong answer. It works best for things where there are no right or wrong answers.
