this post was submitted on 30 Dec 2025
927 points (98.6% liked)

Technology

79476 readers
4257 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; this includes bots using AI responses and summaries. To ask if your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 50 comments
[–] U7826391786239@lemmy.zip 201 points 1 month ago* (last edited 1 month ago) (7 children)

I don't think it's emphasized enough that AI isn't just making up bogus citations to nonexistent books and articles; increasingly, actual published articles and other sources are completely AI-generated too. So a reference to a source might be "real," but the source itself is complete AI slop bullshit.

https://www.tudelft.nl/en/2025/eemcs/scientific-study-exposes-publication-fraud-involving-widespread-use-of-ai

https://thecurrentga.org/2025/02/01/experts-fake-papers-fuel-corrupt-industry-slow-legitimate-medical-research/

The actual danger of it all should be apparent, especially in any field related to health science research.

And of course these fake papers are then used to further train AI, causing factually wrong information to spread even more.

[–] BreadstickNinja@lemmy.world 82 points 1 month ago (1 children)

It's a shit ouroboros, Randy!

[–] tym@lemmy.world 51 points 1 month ago (3 children)

The movie Idiocracy was a prophecy that we were too arrogant to take seriously.

now go away, I'm baitin

[–] IronBird@lemmy.world 30 points 1 month ago (1 children)

We would be lucky to have a president as down-to-earth as Camacho.

[–] Cethin@lemmy.zip 21 points 1 month ago

Yep. I don't care if a president is smart. I care if they listen to the experts. I don't want one who thinks they know everything, because no one can.

[–] CheeseNoodle@lemmy.world 10 points 1 month ago (1 children)

When is that movie set again? I want to mark my calendar for the day the US finally gets a competent president.

[–] tym@lemmy.world 21 points 1 month ago (3 children)

Movie was set in 2505... We're speed-running it. We should get our first pro-wrestler president in our lifetime.

[–] PalmTreeIsBestTree@lemmy.world 17 points 1 month ago

Trump technically is one. We are already there.

[–] brsrklf@jlai.lu 141 points 1 month ago (9 children)

Some people even think that adding things like “don’t hallucinate” and “write clean code” to their prompt will make sure their AI only gives the highest quality output.

Arthur C. Clarke was not wrong but he didn't go far enough. Even laughably inadequate technology is apparently indistinguishable from magic.

[–] clay_pidgin@sh.itjust.works 49 points 1 month ago (4 children)

I find those prompts bizarre. If you could just tell it not to make things up, surely that could be added to the built-in instructions?
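For what it's worth, those built-in instructions aren't magic: chat systems basically just prepend a hidden "system" message to the conversation before your prompt ever arrives. A toy sketch of the idea, with no real API calls and all names made up for illustration:

```python
# Sketch: a chatbot's "built-in instructions" are just a system message
# silently prepended to the conversation before the user's prompt.
def build_chat(user_prompt: str) -> list[dict]:
    system_instructions = (
        "You are a helpful assistant. If you are not sure a source "
        "exists, say so instead of inventing one."
    )
    return [
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": user_prompt},
    ]

messages = build_chat("Recommend three books on medieval libraries.")
```

The catch: a system message can steer tone and format, but it can't override how the model was trained, which is why pasting "don't hallucinate" in there doesn't actually work.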

[–] mushroommunk@lemmy.today 54 points 1 month ago (9 children)

I don't think most people know there are built-in instructions. I think to them it's legitimately a magic box.

[–] InternetCitizen2@lemmy.world 22 points 1 month ago* (last edited 1 month ago)

Grok, enhance this image

(•_•)
( •_•)>⌐■-■
(⌐■_■)

[–] Wlm@lemmy.zip 11 points 4 weeks ago (1 children)

Like a year ago adding “and don’t be racist” actually made the output less racist 🤷.

[–] NikkiDimes@lemmy.world 15 points 4 weeks ago (6 children)

That's more of a tone thing, which is something AI is capable of modifying. Hallucination is more of a foundational issue baked directly into how these models are designed and trained and not something you can just tell it not to do.

[–] nulluser@lemmy.world 127 points 1 month ago (1 children)

Everyone knows that AI chatbots like ChatGPT, Grok, and Gemini can often hallucinate sources.

No, no, apparently not everyone, or this wouldn't be a problem.

[–] FlashMobOfOne@lemmy.world 30 points 1 month ago

In hindsight, I'm really glad that the first time I ever used an LLM it gave me demonstrably false info. That demolished the veneer of trustworthiness pretty quickly.

[–] SleeplessCityLights@programming.dev 98 points 4 weeks ago (16 children)

I had to explain to three separate family members what it means for an AI to hallucinate. The look of terror on their faces afterward is proof that people have no idea how "smart" an LLM chatbot is. They have probably been using one at work for a year thinking it's accurate.

[–] hardcoreufo@lemmy.world 27 points 4 weeks ago (11 children)

Idk how anyone searches the internet anymore. Search engines all turn up garbage, so I ask an AI. Maybe one out of 20 times it turns up what I'm asking for better than a search engine would. The rest of the time it runs me in circles that don't work and wastes hours. So then I go back to the search engine and find what I need buried 20 pages deep.

[–] MrScottyTay@sh.itjust.works 12 points 4 weeks ago (1 children)

It's fucking awful, isn't it? Some day soon, when I can be arsed, I'll have to give one of the paid search engines a go.

I'm currently on Qwant, but I've already noticed a degradation in its results since I started using it at the start of the year.

[–] markovs_gun@lemmy.world 13 points 4 weeks ago (2 children)

I legitimately don't understand how someone can interact with an LLM for more than 30 minutes and come away from it thinking that it's some kind of super intelligence, or that it can be trusted as a means of gaining knowledge without external verification. Do they just not even consider the possibility that it might not be fully accurate, and never bother to test it?

I asked it all kinds of tough and ambiguous questions the day I got access to ChatGPT and very quickly found inaccuracies, common misconceptions, and popular but ideologically motivated answers. For example, I don't know if this is still the case, but if you ask ChatGPT questions about who wrote various books of the Bible, it will give not only the traditional view, but specifically the evangelical Christian view on most versions of these questions. This makes sense because they're extremely prolific writers, but it's simply wrong to reply "Scholars generally believe that the Gospel of Mark was written by a companion of Peter named John Mark," because this view hasn't been favored in academic biblical studies for over 100 years, even though it is traditional.

Similarly, asking it questions about early Islamic history gets you the religious views of Ash'ari Sunni Muslims and not the general scholarly consensus.

[–] SocialMediaRefugee@lemmy.world 12 points 4 weeks ago

I have a friend who constantly sends me videos that get her all riled up. Half the time I patiently explain to her why a video is likely AI or faked some other way. "Notice how it never says where it is taking place? Notice how they never give any specific names?" Fortunately she eventually agrees with me but I feel like I'm teaching critical thinking 101. I then think of the really stupid people out there who refuse to listen to reason.

[–] b_tr3e@feddit.org 60 points 4 weeks ago* (last edited 4 weeks ago) (5 children)

No AI needed for that. These bloody librarians wouldn't let us have the Necronomicon either. Selfish bastards...

[–] Naevermix@lemmy.world 15 points 4 weeks ago (4 children)

I swear, librarians are the only thing standing between humanity and true greatness!

[–] RalfWausE@feddit.org 11 points 4 weeks ago (1 children)

This one is on you. MY copy of the Necronomicon firmly sits in my library in the west wing...

[–] pHr34kY@lemmy.world 52 points 1 month ago* (last edited 1 month ago) (4 children)

There's an old Monty Python sketch from 1967 that comes to mind when people ask a librarian for a book that doesn't exist.

They predicted the future.

[–] palordrolap@fedia.io 19 points 1 month ago

Are you sure that's not pre-Python? Maybe one of David Frost's shows like At Last the 1948 Show or The Frost Report.

Marty Feldman (the customer) wasn't one of the Pythons, and the comments on the video suggest that Graham Chapman took on the customer role when the Pythons performed it. (Which, if they did, suggests that Cleese may have written it, in order for him to have been allowed to take it with him.)

[–] MountingSuspicion@reddthat.com 46 points 1 month ago (1 children)

I believe I got into a conversation on Lemmy where I was saying that there should be a big persistent warning banner stuck on every single AI chat app saying "the following information has no relation to reality" or something like that. The other person kept insisting it wasn't needed. I'm not saying it would stop all of these incidents, but it couldn't hurt.

[–] glitchdx@lemmy.world 31 points 1 month ago (2 children)

https://www.explainxkcd.com/wiki/index.php/2501:_Average_Familiarity

People who understand the technology forget that normies don't understand the technology.

[–] TubularTittyFrog@lemmy.world 11 points 1 month ago* (last edited 1 month ago) (2 children)

And normies think you're an asshole if you try to explain the technology to them; they cling to their ignorance because it's more 'fun' to believe in magic.

[–] eli@lemmy.world 10 points 1 month ago (1 children)

TIL there is a whole-ass MediaWiki for explaining xkcd comics.

[–] zanzo@lemmy.world 32 points 4 weeks ago (1 children)

Librarian here: the good news is that many libraries are standing up AI literacy programs to show people not only how to judge AI outputs but also how to get better results. If your local library isn't doing this, ask them why not.

[–] SocialMediaRefugee@lemmy.world 30 points 4 weeks ago (1 children)

Every time I think people have reached maximum stupidity they prove me wrong.

[–] PetteriSkaffari@lemmy.world 16 points 4 weeks ago

"Two things are infinite: the universe and human stupidity; and I'm not sure about the universe."

Albert Einstein (supposedly)

[–] SethTaylor@lemmy.world 19 points 4 weeks ago

I guess Thomas Fullman was right: "When humans find wisdom in cold replicas of themselves, the arrow of evolution will bend into a circle". That's from Automating the Mind. One of his best.

[–] panda_abyss@lemmy.ca 19 points 1 month ago* (last edited 1 month ago) (4 children)

I plugged my local AI into offline Wikipedia, expecting a source of truth to make it way, way better.

It's better, but now I can't tell when it's making up citations, because it uses Wikipedia to support its own worldview from pre-training instead of reality.

So it’s not really much better.

Hallucinations become a bigger problem the more info they have (that you now have to double check)
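What you're describing is basically retrieval-augmented generation (RAG): pull the best-matching passage from the dump and stuff it into the prompt. A toy sketch of the idea, with naive keyword overlap standing in for a real vector search, and every name and passage made up for illustration:

```python
# Toy RAG sketch: retrieve the passage that best matches the query,
# then prepend it to the prompt as grounding context.
def retrieve(query: str, passages: list[str]) -> str:
    # Keyword-overlap scoring stands in for a real embedding search.
    q = set(query.lower().split())
    return max(passages, key=lambda p: len(q & set(p.lower().split())))

def build_prompt(query: str, passages: list[str]) -> str:
    context = retrieve(query, passages)
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

passages = [
    "The Library of Alexandria was a major library in ancient Egypt.",
    "Photosynthesis converts light energy into chemical energy.",
]
prompt = build_prompt("Who burned the Library of Alexandria?", passages)
```

The failure mode you hit is the well-known catch: the model can treat the retrieved context as decoration and lean on its pre-training anyway, so the citations look grounded even when they aren't.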

[–] Lucidlethargy@sh.itjust.works 17 points 4 weeks ago (3 children)

Wait, are you guys saying "Of Mice And Men: Lennie's back" isn't real? I will LOSE MY SHIT if anyone confirms this!! 1!! 2.!

[–] Blackmist@feddit.uk 15 points 4 weeks ago (6 children)

Luckily, the future will provide not only AI titles, but the contents of said books as well.

Given the amount of utter drivel people are watching and reading of late, we're probably already most of the way there.
