this post was submitted on 13 Mar 2026
647 points (98.2% liked)

Technology

[–] Fizz@lemmy.nz -4 points 6 hours ago (1 children)

The problem with Firefox doing AI is that they're always one foot out the door. The features they add are always undercooked compared to the rest of the market. This looks really shit and useless in its current state, like a worse version of the Perplexity browser.

[–] DudeImMacGyver@kbin.earth 15 points 5 hours ago (1 children)

All AI is undercooked: errors are baked into LLMs, and there is no viable solution to prevent the mistakes and outright bullshit they produce other than to assume they fucked up and pay an actual expert to manually check literally everything they do.

[–] Analog@lemmy.ml 0 points 4 hours ago (2 children)

Errors are baked in, but I don’t agree with the “no viable solution” part. One research team was actually able to identify the “neurons” responsible for hallucinations and reduce their contribution to negligible levels.

https://www.youtube.com/watch?v=1ONwQzauqkc (Linking a youtuber instead of the actual study because he summarizes it pretty well and the research itself is not geared for laypersons.)
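To make the idea concrete, here’s a toy sketch of what “dampening” implicated units could look like. The unit indices and scale factor are made up for illustration; the actual study’s method is more involved than this.

```python
import numpy as np

def dampen_units(activations, unit_ids, scale=0.1):
    """Scale down the activations of specific hidden units.

    Toy illustration of the idea: if certain units are implicated
    in hallucinations, shrink their contribution rather than
    removing them outright. unit_ids and scale are invented
    values, not taken from the study.
    """
    damped = activations.copy()
    damped[:, unit_ids] *= scale
    return damped

# Toy hidden-state matrix: 2 tokens x 6 hidden units, all ones.
acts = np.ones((2, 6))
# Pretend units 1 and 4 drive hallucinations; scale them to 10%.
out = dampen_units(acts, unit_ids=[1, 4], scale=0.1)
```

The targeted units end up contributing a tenth of their original activation while everything else passes through unchanged.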

If this was implemented industry wide would it completely solve the problem? I don’t know, but I do know it would be a massive improvement.

[–] Jiral@lemmy.org 3 points 4 hours ago (1 children)

Not quoting the primary source wouldn’t perchance have anything to do with the source being a non-peer-reviewed archive run by Cornell University, would it? I wonder, is that normal in the field of AI research?

[–] Analog@lemmy.ml 2 points 2 hours ago

Here’s the source https://arxiv.org/abs/2512.01797

What does Cornell have to do with it? Genuinely curious as that seems completely out of the blue to me. Source was clearly Chinese.

[–] DudeImMacGyver@kbin.earth 1 points 4 hours ago (1 children)

I remain deeply skeptical.

Either way, it uses a ridiculous amount of power and comes at great environmental cost.

[–] Analog@lemmy.ml 1 points 2 hours ago (1 children)

Fuck me, you and people in general jump to conclusions so easily. My post was meant to educate, to shore up knowledge. To help out.

In no way was I saying “AI is good and the tech bros are right about it.” 🤦‍♂️

[–] DudeImMacGyver@kbin.earth 1 points 2 hours ago (1 children)

I never took what you wrote to mean that, but I am deeply skeptical that they can successfully eliminate hallucinations to the point that "AI" can be trusted to give correct results.

[–] Analog@lemmy.ml 1 points 2 hours ago (1 children)

Why bring up power and environmental cost? What did that have to do with anything?

Also, if you’ll re-read what I wrote, I used careful language to indicate I didn’t think this method would completely eliminate errors. Never mind bridging the gap to “trusted.” (🤮 I will never trust AI.)

(Yeah I know the YouTuber used a sensational title; in their defense they kind of have to in order to get clicks. imho blame the algorithm and people’s reinforcement of that algorithm.)

[–] DudeImMacGyver@kbin.earth 1 points 2 hours ago

Why wouldn't I? It's pretty fucking important! Why would you take exception to that? I also think it's weird you assumed what conclusion I was jumping to.