this post was submitted on 05 Dec 2024
528 points (94.4% liked)
Technology
In general I agree with the sentiment of the article, but I think the broader issue is media literacy. When the Internet came about, people had similar reservations about the quality of information, and most of us learned in school how to find quality information online.
LLMs are a tool, and people need to learn how to use them correctly and responsibly. I’ve been using Perplexity.AI as a search engine for a while now, and I think it takes the right approach: it employs LLMs at different stages to parse your query, perform web searches on your behalf, and summarize the findings. It also provides in-text citations, which give a media-literate person the opportunity to confirm the validity of anything important.
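For anyone curious what that kind of pipeline looks like in practice, here is a minimal sketch of a search-augmented answer flow: rewrite the question into search queries, fetch results, then summarize with numbered citations. This is not Perplexity's actual implementation; the helper functions (`call_llm`, `search_web`) and prompts are hypothetical placeholders you would wire up to whatever LLM and search APIs you use.

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    snippet: str

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a call to your LLM provider."""
    raise NotImplementedError("wire this to an LLM API")

def search_web(query: str, limit: int = 5) -> list[Source]:
    """Placeholder: swap in a call to a web-search API."""
    raise NotImplementedError("wire this to a search API")

def answer(question: str) -> str:
    # 1. Use the LLM to turn the user's question into short search queries.
    queries = call_llm(
        f"Rewrite this question as up to 3 short web search queries, one per line:\n{question}"
    ).splitlines()

    # 2. Run the searches and collect the sources.
    sources: list[Source] = []
    for q in queries:
        sources.extend(search_web(q))

    # 3. Summarize with numbered inline citations so a reader can check
    #    each claim against the page it came from.
    numbered = "\n".join(f"[{i + 1}] {s.url}: {s.snippet}" for i, s in enumerate(sources))
    return call_llm(
        "Answer the question using only the sources below. Cite sources inline as [n].\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}"
    )
```

The citations are the key part: the summary is only as trustworthy as the sources behind it, so surfacing them is what makes verification possible.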
OK, but may I point you to the reality that internet-spread misinformation is a critically bad problem at the moment?
And your argument is that a human will do better than an AI at sorting through that? Because that seems unrelated to the initial argument.