this post was submitted on 10 Aug 2024
550 points (96.0% liked)
Technology
you are viewing a single comment's thread
Your claim that it's capable of doing what it claims isn't just false.
It's an egregious, massively harmful lie, and repeating it is always extremely malicious and inexcusable behavior.
I have genuinely found LLMs to be useful in many contexts. I use them to brainstorm and flesh out ideas for tabletop roleplaying adventures, to write song lyrics, and to write Python scripts for various one-off tasks. I've talked with them to learn about stuff, and verified that they were correct by checking their references. LLMs are demonstrably capable of these things. I demonstrated it.
Go ahead and refrain from using them yourself if you really don't want to, for whatever reason. But exclaiming "no it doesn't!" in the face of them actually doing the things you say they don't is just silly.
They absolutely cannot reliably summarize search results, which is what this post is about and what the OP itself conclusively proves.
Any meaningful failure rate at all makes them massively, catastrophically damaging to humanity as a whole. "Just don't use them" absolutely does not prevent their harm. Pushing them as competent is extremely fucking unacceptable behavior.
And this is all completely ignoring the obscene energy costs associated with making web searches complete and utter dogshit.
I disagree with that view on them, and I think the fact that they fail is actually a good thing in terms of preventing damage to humanity.
If they were able to do all kinds of jobs perfectly, never making mistakes and outperforming any human, that would be infinitely more damaging than the current situation, where their mistakes mean their output has to be carefully reviewed but can still be used when it's helpful and appropriate.
Regardless, anything I say about AI/LLMs other than "it's terrible and useless and nobody should or would ever use it" is going to be met with criticism.