this post was submitted on 17 May 2024
503 points (94.8% liked)
It's only easier to verify a solution than to come up with one when you can trust and understand the algorithms producing it. Simulation software for thermodynamics is orders of magnitude faster than hand calculations, but you know what the software is doing. The creators of the software aren't saying "we don't actually know how it works".
In the case of an LLM, I have to verify everything with no trust whatsoever, and that takes longer than just doing it myself. Especially because the LLM is writing something for me; it isn't doing complex math.
If a solution is correct then a solution is correct. If a correct solution was generated randomly that doesn't make it less correct. It just means that you may not always get correct solutions from the generating process, which is why they are checked after.
Except that when you're doing calculations, a calculator can substitute the given answers back into the equation and check that the values match, which is my point about calculators not being a good example. And the case of a quantum computer wasn't addressed.
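The generate-then-check idea being debated above can be sketched in a few lines. This is a hypothetical illustration (the example equation is mine, not from the thread): candidate answers are produced by a process we don't trust at all, and only the ones that verify by substitution are kept. A correct solution is correct regardless of how it was generated.

```python
import random

def equation(x):
    """Left-hand side of the example equation x^2 - 5x + 6 = 0."""
    return x * x - 5 * x + 6

# Untrusted generator: random guesses in a small integer range.
random.seed(0)
candidates = [random.randint(-10, 10) for _ in range(1000)]

# Cheap, trusted verifier: substitute and check the value is zero.
verified = sorted({x for x in candidates if equation(x) == 0})
print(verified)  # → [2, 3], the true roots
```

The asymmetry the thread is arguing about is visible here: checking one candidate is a single substitution, while producing a correct answer (without the check) is unreliable. Whether that asymmetry holds for prose or code reviewed by a human, rather than equations checked by a machine, is exactly the point of disagreement.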
I agree that LLMs have many issues, are being used for bad purposes, are overhyped, and we've yet to see if the issues are solvable - but I think the analogy is twisting the truth, and I think the current state of LLMs being bad is not a license to make disingenuous comparisons.
It's left to be seen in the future, then.