this post was submitted on 28 Mar 2026
221 points (89.9% liked)

Technology

Full Report (76-page PDF).

[–] village604@adultswim.fan 2 points 14 hours ago (1 children)

It's not just cheap agents. I've watched paid MS Copilot recommend a decade-old, deprecated Microsoft product in response to a single-sentence prompt, then, when called out, a non-existent Microsoft product, and only give the right answer after being called out a second time.

[–] pixxelkick@lemmy.world 2 points 14 hours ago (1 children)

LLMs are fundamentally not good at answering fact-based questions. Unless it's an incredibly well-known answer that has never changed (like a math or physics question), they don't magically "know" things.

However, they're way better at summarizing and reasoning.

Give them access to Playwright web search capability via MCP tooling so they can go research the info, find the answer(s), and then produce output based on the results, and now you can get something useful.
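For reference, wiring a Playwright MCP server into a client usually comes down to one config entry along these lines (a sketch based on the commonly used `@playwright/mcp` package; the exact file name and location depend on which MCP client you use, so check its docs):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once the client picks this up, the model can call the server's browser tools (navigate, click, read page content) instead of guessing at facts from memory.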

"What's the best way to do (task)?" << prone to failure, as a function of how esoteric the task is.

"Research the top 3 best ways to do (task), report on your results, and include the sources you found" << actually useful output, assuming you have something like Playwright installed for it.

[–] village604@adultswim.fan 1 points 12 hours ago* (last edited 12 hours ago)

A user on here built what appears to be a layer over the LLM that runs the query through several other processes first, in an attempt to answer the question before it ever reaches the LLM, and I think it's brilliant.

They get bonus points because they made it so the reasoning the LLM uses is shown to you, although I haven't fully gone through the documentation yet.
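The layered approach described above can be sketched roughly like this (all names here are hypothetical, since the actual project wasn't linked): cheap deterministic handlers get first crack at the query, and the LLM is only invoked as a last resort.

```python
# Hypothetical sketch of a pre-LLM answer pipeline: each handler may
# answer the query deterministically before the LLM is ever called.
import ast
import operator


def try_arithmetic(query: str):
    """Answer simple arithmetic queries like "2 + 2" without the LLM."""
    allowed = {ast.Add: operator.add, ast.Sub: operator.sub,
               ast.Mult: operator.mul, ast.Div: operator.truediv}

    def evaluate(node):
        if isinstance(node, ast.Expression):
            return evaluate(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in allowed:
            return allowed[type(node.op)](evaluate(node.left), evaluate(node.right))
        raise ValueError("not arithmetic")

    try:
        return str(evaluate(ast.parse(query, mode="eval")))
    except (ValueError, SyntaxError, ZeroDivisionError):
        return None  # not an arithmetic question; fall through


def try_knowledge_base(query: str, kb: dict):
    """Answer from a local fact store before touching the LLM."""
    return kb.get(query.strip().lower())


def answer(query: str, kb: dict, llm):
    """Run cheap deterministic handlers first; the LLM is the last resort."""
    for handler in (try_arithmetic, lambda q: try_knowledge_base(q, kb)):
        result = handler(query)
        if result is not None:
            return result
    return llm(query)


kb = {"capital of france": "Paris"}
print(answer("2 + 2", kb, llm=lambda q: "LLM fallback"))              # 4
print(answer("capital of France", kb, llm=lambda q: "LLM fallback"))  # Paris
```

The appeal is that the deterministic layers are exact and auditable, so the LLM only sees the questions nothing else could handle.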