this post was submitted on 07 Mar 2026
971 points (98.9% liked)

[–] Lfrith@lemmy.ca 8 points 1 day ago (1 children)

Funny thing is, LLMs are bad calculators too: I've seen them get simple multiplication wrong.

They can generate content, but they can't verify whether it's correct, or even know. A lot of people don't realize that, because the less they know about a subject, the smarter the model seems to them. They forget it's, well... a language model, so what it outputs can be complete gibberish.
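That's why any checking has to happen outside the model. A minimal sketch of that idea, where `ask_llm` is a hypothetical stand-in for whatever chat API you use (not a real library call):

```python
# Externally verify an LLM's arithmetic, since the model itself
# cannot check its own output.

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client.
    Stubbed here with the correct answer to 123 * 456."""
    return "56088"

def check_multiplication(a: int, b: int) -> bool:
    answer = ask_llm(f"{a} * {b} = ? Reply with just the number.")
    try:
        # Ground truth comes from actual arithmetic, not the model.
        return int(answer.strip()) == a * b
    except ValueError:
        return False  # model replied with non-numeric text

print(check_multiplication(123, 456))  # prints True for the stub above
```

The point is just that the verifier is deterministic code, so a wrong answer from the model is caught instead of trusted.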

[–] raldone01@lemmy.world -1 points 20 hours ago* (last edited 20 hours ago) (1 children)

Some of the SOTA models like Gemini 3 Pro are getting quite good at ballpark estimates. I have fed them multiple complex formulas from my studies along with some values, and the end result is often close, similar in accuracy to an estimate I would make myself. (It is usually more accurate than my own.)

Now, I don't argue there is any consciousness or magic going on, but the generalization happening here is quite something! I have trained AI models for various robot control and computer vision tasks, and compared to older machine learning approaches, transformers are very impressive, computationally accessible, and easy to use. (In my limited experience.)

[–] Lfrith@lemmy.ca 2 points 20 hours ago

I find it okay for writing programs, since you can run the output and verify that it's correct.
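That verification can be as simple as asserting against hand-computed cases. A sketch, where `moving_average` is a hypothetical example of a function a model might have written:

```python
# Treat LLM-generated code as untrusted: check it against cases
# you computed by hand before relying on it.

def moving_average(xs, n):
    # (imagine this body came from an LLM)
    return [sum(xs[i:i + n]) / n for i in range(len(xs) - n + 1)]

# Hand-checked expectations act as the ground truth the model lacks.
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
assert moving_average([5], 1) == [5.0]
print("generated code passed the checks")
```

With numbers or analysis there's no equivalent cheap oracle, which is exactly the asymmetry being described here.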

But for actual analysis, not so much: when you check what comes out, it's not completely reliable even for things it should be, like numbers. The numbers might be close, but still off.

Abstract stuff might be fine. But it's still not something to fully trust for analysis, because of the errors; a lot of double-checking needs to go on.