this post was submitted on 27 Mar 2026
366 points (96.7% liked)

[–] SnotFlickerman@lemmy.blahaj.zone 226 points 1 month ago (9 children)

Huge Study

*Looks inside

this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

Pretty small sample size. Despite the large dataset they pulled from, it's still data from just 19 people.

AI sucks in a lot of ways, sure, but this feels like fud.

[–] XLE@piefed.social 63 points 1 month ago (1 children)

The hugeness is probably

391,562 messages across 4,761 different conversations

That's a lot of messages

[–] sukhmel@programming.dev 20 points 1 month ago (1 children)

If that's only 19 users, that's around 250 conversations per user 🤔

[–] SnotFlickerman@lemmy.blahaj.zone 9 points 1 month ago (1 children)

...and about 82 messages per conversation. Also, at least half of all the messages are from the user to the AI, and the other half are from the AI to the user, meaning around 41 messages from the user per conversation.
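The per-user arithmetic in this sub-thread follows directly from the figures quoted above; a quick sketch to check it:

```python
# Quick check of the averages quoted in the thread
# (19 users, 4,761 conversations, 391,562 messages).
users = 19
conversations = 4_761
messages = 391_562

conv_per_user = conversations / users      # conversations per user
msgs_per_conv = messages / conversations   # messages per conversation
# If roughly half the messages are the user's side of the chat:
user_msgs_per_conv = msgs_per_conv / 2

print(round(conv_per_user, 1))       # 250.6
print(round(msgs_per_conv, 1))       # 82.2
print(round(user_msgs_per_conv, 1))  # 41.1
```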

[–] sukhmel@programming.dev 4 points 1 month ago

Yeah, I also thought about that, looks like a lot, but I guess users in this case differ from ordinary usage

[–] InternetCitizen2@lemmy.world 26 points 1 month ago (1 children)

I remember reading in my old stats book that a minimum of 30 points is needed to assume a normal distribution. Also, these small sets are typically about proof of concept, so yeah, you've still got a point.

[–] Buddahriffic@lemmy.world 2 points 1 month ago (1 children)

It's about 300 samples for an estimate of the distribution at 95% confidence, iirc. That assumes the samples are representative (unbiased). And 95% confidence doesn't mean the estimate is within 95% of reality; it means that 5% of tests run this way would be expected to be inaccurate. There's no way of knowing for sure which kind this particular sample is, because even a meta-study has such an error rate. You can increase the confidence with more samples or studies, but never to 100% unless you study every possible sample, including future ones.
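For what it's worth, the "~300 samples" rule of thumb is close to the textbook sample-size formula for estimating a proportion. A minimal sketch, assuming the standard worst case (p = 0.5) with a ±5% margin of error at 95% confidence:

```python
import math

# Sample size needed to estimate a proportion p within margin E
# at 95% confidence (z ~= 1.96), assuming simple random sampling:
# n = z^2 * p * (1 - p) / E^2
z = 1.96   # z-score for 95% confidence
p = 0.5    # worst case: maximizes p * (1 - p)
E = 0.05   # +/- 5 percentage points

n = math.ceil(z**2 * p * (1 - p) / E**2)
print(n)  # 385
```

That lands near the commonly quoted "a few hundred" figure; smaller margins of error push the required n up fast, since n grows like 1/E².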

[–] TheBlackLounge@lemmy.zip 1 points 1 month ago (1 children)

That doesn't make sense. What if your population is only 100?

[–] Buddahriffic@lemmy.world 1 points 1 month ago (1 children)

Then any statistics you measure on that population might be fully accurate for those 100 but might be less able to predict what the next 100 will look like.

You can still measure stats with smaller groups; it just means the confidence interval is wider, or the confidence lower. With 300, there's a 95% chance your test results are close to reality. With 100 it might be more like 66%.

[–] TheBlackLounge@lemmy.zip 2 points 1 month ago (1 children)

Population is a statistical term which means "everything". There is no "next 100".

The 300 number is specifically about very large populations, where you're trying to measure something like the average of an unknown variable. It doesn't apply to just any statistic.

[–] Buddahriffic@lemmy.world 0 points 1 month ago

I meant like births, as in even if you can enumerate every single individual, statistics can apply to future members that don't yet exist.

And yeah, it's been a while and I remembered that the proof didn't depend on the population size but forgot that it assumed a large population size in the first place. I was wrong.

[–] A_norny_mousse@piefed.zip 12 points 1 month ago

Thanks, you saved me a click 😐

[–] UnderpantsWeevil@lemmy.world 8 points 1 month ago

I wonder if the headline was written by an AI

[–] Lost_My_Mind@lemmy.world 7 points 1 month ago (21 children)
[–] tburkhol@lemmy.world 37 points 1 month ago

fud: Fear, Uncertainty and Doubt. A tactic for denigrating a thing, usually by implication of hypothetical or exaggerated harms, often in vague language that is either tautological or not falsifiable.

[–] chunes@lemmy.world 5 points 1 month ago (2 children)

It's not really ethical to just yoink people's chats and study them

Tell that to the advertising companies.

[–] braxy29@lemmy.world 8 points 1 month ago

"We received chat logs directly from people who self-identified as having some psychological harm related to chatbot usage (e.g. they felt deluded) via an IRB-approved Qualtrics survey"

[–] orbituary@lemmy.dbzer0.com 2 points 1 month ago

*hugely funded?

[–] TheBlackLounge@lemmy.zip 1 points 1 month ago

How big do you think the population of people with AI delusions is? Why can't 19 be a representative sample? And why is that not enough to support statements like "after the user expresses romantic interest in the chatbot, the chatbot is 7.4x more likely to express romantic interest in the next three messages, and 3.9x more likely to claim or imply sentience in the next three messages," when all 19 users expressed romantic interest?
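Ratios like "7.4x more likely" are typically rate ratios computed over message windows, not per-user statistics, which is why 19 users can still yield thousands of observations. A sketch of the calculation with purely hypothetical counts (the study's actual numbers are not reproduced here):

```python
# Rate ratio: how much more often an event occurs in "exposed"
# windows (e.g. the 3 messages after a romantic expression)
# versus baseline windows. Counts below are purely illustrative.
exposed_events, exposed_windows = 37, 100    # hypothetical
baseline_events, baseline_windows = 5, 100   # hypothetical

exposed_rate = exposed_events / exposed_windows
baseline_rate = baseline_events / baseline_windows
rate_ratio = exposed_rate / baseline_rate

print(round(rate_ratio, 1))  # 7.4 -- "7.4x more likely"
```

With each user contributing hundreds of conversations, the unit of analysis is the window, so the effective n for these ratios is far larger than 19.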