this post was submitted on 22 Feb 2024
Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[–] stoy@lemmy.zip 3 points 9 months ago* (last edited 9 months ago) (2 children)

Would it be accurate to say that while current AI does have the knowledge, it lacks the reasoning skills needed to apply that knowledge correctly?

[–] kromem@lemmy.world 1 points 9 months ago

No, it can solve word problems it's never seen before with fairly intricate reasoning. LLMs can even play chess at grandmaster level without ever duplicating games from the training set.

Most of Lemmy has no genuine idea about the domain and hasn't been following the research over the past year, which invalidates the "common knowledge" on the topic you often see regurgitated.

For example, LLMs build world models from their training data, and can combine skills from that data in ways that were never combined in the training data itself.

They do have shortcomings - being unable to identify what they don't know is a key one.

But to be fair, apparently most people on Lemmy can't do that either.

[–] FooBarrington@lemmy.world -3 points 9 months ago (2 children)

I don't think it's generally true, because current AI can solve some reasoning tasks very well. But it's definitely something where they are lacking.

[–] rambaroo@lemmy.world 3 points 9 months ago* (last edited 9 months ago) (1 children)

It isn't reasoning about anything. A human did the reasoning at some point, and the LLM's dataset includes that original information. The LLM is simply matching your prompt to that training data. It's not doing anything else. It's not thinking about the question you asked it. It's a glorified keyword search.

It's obvious you have no idea how LLMs work at a fundamental level, yet you keep talking about them like you're an expert.

[–] stoy@lemmy.zip 1 points 9 months ago (1 children)

That's fair. I have seen AI reason at a low level, but it seems to me that it lacks higher-level reasoning and context.

[–] FooBarrington@lemmy.world -1 points 9 months ago

It definitely is lacking for now, but the question is: are these differences in degrees, or fundamental differences? I haven't seen research suggesting that it's the latter so far.