this post was submitted on 27 Mar 2026
366 points (96.7% liked)

Technology

[–] FosterMolasses@leminal.space 14 points 6 days ago (1 children)

This explains a lot, honestly.

Everyone keeps telling me how "addictive" and "convincing" and "personal feeling" ChatGPT is.

Meanwhile, I'm over here like

"Can you stop saying skrrrt after every sentence while I'm trying to research a serious topic, it's annoying"

"Understood, skrrrt 💥🌴🚗💨"

[–] SnotFlickerman@lemmy.blahaj.zone 226 points 1 week ago (42 children)

Huge Study

*Looks inside

this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

Pretty small sample size. Despite the large dataset they pulled from, it's still data from just 19 people.

AI sucks in a lot of ways, sure, but this feels like FUD.

[–] TheBlackLounge@lemmy.zip 1 points 2 days ago

How big do you think the population of people with AI delusions is? Why can't 19 be a representative sample? Why isn't that enough to support statements like "after the user expresses romantic interest in the chatbot, the chatbot is 7.4x more likely to express romantic interest in the next three messages, and 3.9x more likely to claim or imply sentience in the next three messages", when all 19 users expressed romantic interest?

[–] XLE@piefed.social 63 points 1 week ago (1 children)

The hugeness is probably

391,562 messages across 4,761 different conversations

That's a lot of messages

[–] sukhmel@programming.dev 20 points 1 week ago (2 children)

If that's only 19 users, that's around 250 conversations per user 🤔

[–] InternetCitizen2@lemmy.world 26 points 1 week ago (1 children)

I remember my old stats book saying a minimum of 30 data points is needed to assume a normal distribution. Also, these small studies are typically about proof of concept, so yeah, you've still got a point.

[–] Buddahriffic@lemmy.world 2 points 6 days ago (1 children)

It's about 300 samples for an estimate of the distribution at 95% confidence, IIRC. That assumes the samples are representative (unbiased). And 95% confidence doesn't mean the estimate is within 95% of reality; it means that 5% of tests run this way would be expected to be inaccurate. There's no way of knowing for sure which case this particular sample falls into, because even a meta-study has such an error rate, though you can increase the confidence with more samples or studies, just never to 100% unless you study every possible sample, including future ones.

[–] TheBlackLounge@lemmy.zip 1 points 2 days ago (1 children)

That doesn't make sense. What if your population is only 100?

[–] Buddahriffic@lemmy.world 1 points 2 days ago (1 children)

Then any statistics you measure on that population might be fully accurate for those 100, but less able to predict what the next 100 will look like.

You can still measure stats on smaller groups; it just means the confidence interval is wider. With 300 samples, there's a 95% chance your test results are close to reality. With 100 it might be more like 66%.

[–] TheBlackLounge@lemmy.zip 2 points 2 days ago (1 children)

Population is a statistical term that means "everything". There is no "next 100".

The 300 number applies specifically to very large populations where you're trying to measure something like the average of an unknown variable. It doesn't apply to statistics in general.

[–] Buddahriffic@lemmy.world 0 points 2 days ago

I meant like births, as in even if you can enumerate every single individual, statistics can apply to future members that don't yet exist.

And yeah, it's been a while and I remembered that the proof didn't depend on the population size but forgot that it assumed a large population size in the first place. I was wrong.
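For what it's worth, the "~300 samples" rule of thumb in the exchange above roughly matches the textbook worst-case margin-of-error formula for estimating a proportion at 95% confidence. A minimal sketch (the function name is illustrative, not from the study):

```python
import math

# Worst-case (p = 0.5) margin of error for estimating a proportion:
#   E = z * sqrt(p * (1 - p) / n), with z ~ 1.96 for 95% confidence.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (19, 100, 300, 1000):
    print(f"n={n}: +/-{margin_of_error(n):.3f}")
```

At n = 300 the worst-case margin is about ±0.057, which is where the rule of thumb comes from; at n = 19 it balloons to roughly ±0.22, so any proportion estimated from 19 users carries a very wide uncertainty band.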

[–] A_norny_mousse@piefed.zip 12 points 1 week ago

Thanks, you saved me a click 😐

[–] amgine@lemmy.world 48 points 1 week ago (6 children)

I have a friend who's really taken to ChatGPT, to the point where "the AI named itself, so I call it by that name". Our friend group has tried to discourage her from relying on it so much, but I think that's just caused her to hide it.

[–] Tollana1234567@lemmy.today 14 points 1 week ago

It's like the AI BF/GFs the subs are posting about.

[–] givesomefucks@lemmy.world 44 points 1 week ago (7 children)

As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”

There's a certain irony in all the alt-right techbros really just wanting to be told they were "stunning and brave" this whole time.

[–] ExLisper@lemmy.curiana.net 43 points 1 week ago (13 children)

I think what we're seeing is similar to lactose intolerance. Most people can handle it just fine but some people simply can't digest it and get sick. The problem is there's no way to determine who can handle AI and who can't.

When I read about people developing AI delusions, their experiences sound completely alien to me. I've played with LLMs the same as anyone, and I never treated one as anything other than a tool that generates responses to my prompts. I never thought "wow, this thing feels so real". Some people clearly have a predisposition to jump past the "it's a tool" reaction straight to "it's a conscious thing I can connect with". I think the next step should be developing a test that predicts how someone will react to it.

[–] FosterMolasses@leminal.space 2 points 6 days ago

I think what we’re seeing is similar to lactose intolerance

For real. Good to know I can handle both my cheese and shitty LLM bots without bodily consequences lmao

[–] wonderingwanderer@sopuli.xyz 18 points 1 week ago (8 children)

I suspect that the difference is to no small degree correlated with a person's isolation/social-integration.

People who aren't socially integrated have always been more vulnerable to predatory cults and scams. It's because human interaction is a psychological need that's been hardcoded into us by evolution.

Some people say "I don't need human interaction, I enjoy my time alone!" But that's because they have the privilege of enough social acceptance and integration that they get to enjoy their time alone. It's well-established within the field of psychology that true isolation can have a range of deep and far-reaching impacts on a person's well-being.

When people are developing, they need to socialize with their peers; and being unable to do so leads to maladaptive behavior patterns. Even as adults, people need regular social contact or their psychological state can quickly deteriorate. That's why solitary confinement is considered a method of torture in some circumstances, when it's used to depersonalize and destroy a person's sense of self-identity.

So that's why I suspect that people who are well-integrated with friends, family, acquaintances, and coworkers are probably less vulnerable to these sorts of delusions and can treat AI as "just a tool."

But for someone who hardly has any social interaction in a day, has no friends or family to talk to, and maybe their warmest interaction all week was with the clerk at the grocery store, then yeah I'd say it's predictable that they would be vulnerable to getting sucked into this trap of relying on an LLM for their social interaction.

It might be superficial, but it's a way of patching a hole. It's an expedient means to fulfill a need that they're not getting from anywhere else.

If we don't want this sort of stuff happening to people, then maybe we shouldn't ostracize them for being "weird" in the first place. Because nobody learns how to be "normal" by being alone all the time.

[–] FosterMolasses@leminal.space 2 points 6 days ago

Some people say “I don’t need human interaction, I enjoy my time alone!” But that’s because they have the privilege of enough social acceptance and integration that they get to enjoy their time alone.

Oh shit, I was about to contest this logic but no you're absolutely right.

I'm definitely one of those people, but I also have never gone seeking validation online and missed the bus for a lot of those social media trends (outside of the ones that friends at the time ended up bullying me into using, which I would briefly check out then quickly bail on).

Maybe it is more an issue of identity and self-image, as I've never once felt emotionally "connected" to ChatGPT or any of these clunky LLMs people seem to swear by. I admit that sometimes talking out a problem instead of marinating on it alone can be useful... but I view it more as an extension of journaling than anything else. There's always a clear line for me where it's like "okay, I got what I needed out of this interaction, and now it's clearly suggesting additional prompts I don't need, to try and keep me engaging with it"...

It's crazy to me that other people don't seem to register that line at all lol, it seems so clearly artificial to me.

[–] ExLisper@lemmy.curiana.net 2 points 6 days ago (1 children)

Social isolation is definitely a factor, but people also have different tolerances to it.

https://www.theguardian.com/lifeandstyle/2026/mar/26/ai-chatbot-users-lives-wrecked-by-delusion

This guy, for example, was married and had a daughter. He wasn't some lonely guy living in a basement. For him, working from home was enough to feel isolated and fall into AI psychosis. Other people can be significantly more socially isolated and still not be susceptible to it. I think understanding how LLMs work helps. For sure there are more factors.

If we don’t want this sort of stuff happening to people, then maybe we shouldn’t ostracize them for being “weird” in the first place.

Are you suggesting this only happens to people who are ostracized and somehow excluded from society? Because that's definitely not true. It can happen to anyone. Some people have a genetic predisposition to mental illness; some people are just going through a difficult moment in their life. You don't know if you're "immune" until you try it.

[–] wonderingwanderer@sopuli.xyz 3 points 6 days ago

I didn't say it was the only factor, but it definitely contributes.

Smoking causes cancer, but not everyone who smokes gets cancer, and some non-smokers, and even Olympic athletes, do...

[–] thedeadwalking4242@lemmy.world 16 points 1 week ago (5 children)

I bet it's probably correlated with low education, as with most things.

[–] Tiresia@slrpnk.net 12 points 1 week ago (6 children)

Cults and toxic self-help literature have existed before LLMs copied them. I don't know if LLMs are getting people who couldn't have been gotten by human scammers.

Scams have many different vectors and people can be vulnerable to them depending on their mood or position in life. Testing people on LLM intolerance would be more like testing them on their susceptibility to viruses.

People can be immunocompromised for various reasons, temporarily or permanently, so as a society public hygiene standards (and the material conditions to produce them) are a lot more valuable. Wash your hands after interacting, keep public spaces clean, that sort of stuff.

[–] FosterMolasses@leminal.space 1 points 6 days ago

I don’t know if LLMs are getting people who couldn’t have been gotten by human scammers

ModernProblemsModernSolutions.jpeg

[–] Hackworth@piefed.ca 16 points 1 week ago* (last edited 1 week ago) (1 children)

Anthropic has some similar findings, and they propose an architectural change (activation capping) that apparently helps keep the Assistant character away from dark traits (sometimes). But it hasn't been implemented in any models, I assume because of the cost of scaling it up.

[–] porcoesphino@mander.xyz 14 points 1 week ago* (last edited 1 week ago) (6 children)

When you talk to a large language model, you can think of yourself as talking to a character

But who exactly is this Assistant? Perhaps surprisingly, even those of us shaping it don't fully know

Fuck me that's some terrifying anthropomorphising for a stochastic parrot

The study could also be summarised as "we trained our LLMs on biased data, then honed them to be useful, then chose some human qualities to map the models onto, and would you believe they align along a spectrum of being useful assistants!?" They built the thing to be that way and then are shocked? Who reads this and is impressed, besides the people who want another exponential-growth investment?

To be fair, I'm only about a third of the way through and struggling to continue, so I haven't gotten to the interesting research yet, but the intro is, I think, terrible.
