this post was submitted on 27 Mar 2026
244 points (96.6% liked)

Technology

[–] SnotFlickerman@lemmy.blahaj.zone 175 points 20 hours ago (8 children)

Huge Study

*Looks inside

this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

Pretty small sample size. Despite the large dataset they pulled from, it's still data from just 19 people.

AI sucks in a lot of ways, sure, but this feels like fud.

[–] XLE@piefed.social 42 points 18 hours ago (1 children)

The hugeness is probably

391,562 messages across 4,761 different conversations

That's a lot of messages

[–] sukhmel@programming.dev 4 points 2 hours ago (1 children)

If that's only 19 users, that's around 250 conversations per user 🤔

...and about 82 messages per conversation. Also, at least half of all the messages are from the user to the AI, and the other half are from the AI to the user, meaning around 41 messages from the user per conversation.
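Those per-user averages check out; here's a quick back-of-the-envelope check using only the figures quoted in the thread (the even user/AI split is the assumption stated above):

```python
# Figures quoted in the thread
messages = 391_562
conversations = 4_761
users = 19

conv_per_user = conversations / users      # ~250.6 conversations per user
msg_per_conv = messages / conversations    # ~82.2 messages per conversation
# Assuming roughly half of each conversation is the user's side:
user_msg_per_conv = msg_per_conv / 2       # ~41.1 user messages per conversation

print(round(conv_per_user), round(msg_per_conv), round(user_msg_per_conv, 1))
```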

[–] A_norny_mousse@piefed.zip 9 points 14 hours ago

Thanks, you saved me a click 😐

[–] chunes@lemmy.world 3 points 12 hours ago (2 children)

It's not really ethical to just yoink people's chats and study them

[–] braxy29@lemmy.world 2 points 5 hours ago

"We received chat logs directly from people who self-identified as having some psychological harm related to chatbot usage (e.g. they felt deluded) via an IRB-approved Qualtrics survey."

[–] Canonical_Warlock@lemmy.dbzer0.com 12 points 11 hours ago

Tell that to the advertising companies.

[–] InternetCitizen2@lemmy.world 24 points 20 hours ago

I remember my old stats book saying you need a minimum of about 30 data points before the normal approximation holds. Also, small sets like this are typically about proof of concept, so yeah, you've still got a point.
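The n ≈ 30 rule of thumb comes from the central limit theorem: the sample mean's spread shrinks like 1/√n. A minimal stdlib-only sketch (the exponential distribution and sample sizes are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(42)

def mean_of_sample(n: int) -> float:
    # One sample mean of n draws from a skewed (exponential) distribution, true mean 1.0
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# How much the sample mean wobbles at each sample size (CLT predicts ~1/sqrt(n))
spread = {n: statistics.stdev(mean_of_sample(n) for _ in range(2000))
          for n in (5, 30, 500)}
print(spread)  # the spread shrinks as n grows
```

With tiny samples the estimate of the mean bounces around; by n ≈ 30 it is noticeably more stable, which is where that textbook rule of thumb comes from.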

[–] UnderpantsWeevil@lemmy.world 7 points 20 hours ago

I wonder if the headline was written by an AI

[–] Lost_My_Mind@lemmy.world 3 points 19 hours ago (2 children)
[–] tburkhol@lemmy.world 30 points 19 hours ago

fud: Fear, Uncertainty and Doubt. A tactic for denigrating a thing, usually by implication of hypothetical or exaggerated harms, often in vague language that is either tautological or not falsifiable.

[–] orbituary@lemmy.dbzer0.com 1 points 19 hours ago

*hugely funded?