[–] phoenixz@lemmy.ca 47 points 6 days ago (3 children)

OpenAI statement read: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”

Buuuuulshit

OpenAI needs people to be as addicted as possible. It runs the Facebook business model, only with N times the investment behind it, so it needs users to use more at any cost. And these CEOs, being the psychopaths that they are, don't give a shit about things like consequences.

[–] PhoenixDog@lemmy.world 15 points 6 days ago

This is like expecting a matchmaking app to genuinely match you with "the one" through AI, algorithms, science, etc. If it worked and you met the perfect person, you'd stop giving the app money.

I got lucky and married my fuck buddy that I met on Tinder. But that is not a good business plan. Why would OpenAI drive people to stop using their product?

I'm a functional alcoholic. Last I checked, booze companies aren't reaching out to tell me to stop buying booze because they care about my personal health or mental wellbeing...

[–] UnderpantsWeevil@lemmy.world 5 points 5 days ago

Buuuuulshit

I mean, what are the odds that the statement was composed by an AI?

[–] motruck@lemmy.zip 3 points 6 days ago* (last edited 6 days ago)

Companies only care about money.

[–] Unpigged@lemmy.dbzer0.com 12 points 5 days ago

It's worrying how often I see news like this, where they ascribe human traits like acceptance and "understanding" to the model.

Could it be that our society has disconnected from emotion so far that any synthetic simulacrum of real compassion makes vulnerable people swallow it hook, line, and sinker?

[–] lmmarsano@group.lt 23 points 6 days ago (5 children)
[–] AstralPath@lemmy.ca 9 points 5 days ago

I learned it as "PEBKAC". Problem exists between keyboard and chair. PICNIC is nice too though.

[–] Quazatron@lemmy.world 2 points 5 days ago

Layer 8 issue.

[–] lost_faith@lemmy.ca 2 points 5 days ago

So much nicer than "the issue is between the keyboard and the chair" or an I/O error.

[–] Eheran@lemmy.world 3 points 6 days ago

What problem does the chair have...?

[–] UltraBlack@lemmy.world 7 points 5 days ago

Fucking idiots

[–] FosterMolasses@leminal.space 22 points 6 days ago (3 children)

“Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.”

See, I never understood this. Mine could never even follow simple instructions lol

Like I say "Give me a list of types of X, but exclude Y"

"Understood!

#1 - Y

(I know you said to exclude this one but it's a popular option among-)"

lmfaoooo

[–] very_well_lost@lemmy.world 15 points 6 days ago

That's because it isn't true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of 'fine-tuning' a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any 'memory' or 'learning' that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:

- You have a conversation with a model.

- Your conversation is saved into a database with all of the other conversations you've had. Often, an LLM will be used to 'summarize' your conversation before it's stored, causing some details and context to be lost.

- You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
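
For the curious, here's a toy Python sketch of what that retrieve-and-paste trick can look like. Every name in it (summarize, similarity, MemoryStore) is a stub I made up to show the shape of the mechanism, not OpenAI's or anyone else's actual implementation:

```python
def summarize(conversation: str) -> str:
    # Stand-in for the cheap LLM summarization pass; real pipelines lose detail here.
    return conversation[:200]

def similarity(a: str, b: str) -> float:
    # Stand-in for embedding similarity: crude word overlap.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)

class MemoryStore:
    def __init__(self) -> None:
        self.summaries: list[str] = []

    def save(self, conversation: str) -> None:
        # Conversations are summarized before storage, so detail is lost for good.
        self.summaries.append(summarize(conversation))

    def retrieve(self, prompt: str, k: int = 3) -> list[str]:
        # Rank past summaries by relevance to the new prompt.
        return sorted(self.summaries,
                      key=lambda s: similarity(prompt, s),
                      reverse=True)[:k]

def build_prompt(store: MemoryStore, user_prompt: str) -> str:
    # The model itself is stateless: "memory" is just retrieved snippets
    # pasted into the context window ahead of the new message.
    snippets = "\n".join(store.retrieve(user_prompt))
    return f"Relevant past conversations:\n{snippets}\n\nUser: {user_prompt}"

store = MemoryStore()
store.save("User asked about sourdough starters; we discussed hydration ratios.")
store.save("User vented about their job; we talked about burnout.")
print(build_prompt(store, "Remind me what hydration ratio you suggested?"))
```

No weights change anywhere in that loop, which is the point: it feels like the model "knows you", but it's just a database lookup feeding the context window.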

[–] phoenixz@lemmy.ca 7 points 6 days ago

I've experimented with chatbots to see their ability to develop small bits and pieces of code, and every friggin time, the first thing I have to say is "shut up, keep it to yourself, I want short, to-the-point replies" because the complimenting is so "who's a good boy!!!!" annoying.

People don't talk like these chatbots do. The training data that was stolen from humanity definitely doesn't contain that; that "behavior" is added by the providers to try and make sure that people get as hooked as possible.

Gotta make back those billions invested in a dead-end technology somehow.

[–] Kuma@lemmy.world 19 points 6 days ago (5 children)

I think this is both scary and very interesting. What kind of person do you have to be to become addicted like them? Is this the same as gambling addiction? Do you need a certain gene? Would this type of personality also be receptive to hypnosis, cults, delusions about their idol, and so on? Or is this something that can happen to anyone who is depressed and feels lonely? How did the LLM even earn enough trust? In a cult there are at least a lot of people reaffirming each other, so that is much easier to understand.

It is so hard to understand, even though I really want to. I have never cared about an object or an idol/celebrity. I can never even take AI seriously as a living being; the only emotions it triggers in me are frustration and whatever you feel about a tool that works as it should, so pretty much apathy. Do you need to be very empathetic towards objects? Like seeing faces in everything and getting emotionally attached?

A lot of questions that I do not think anyone here can answer haha, but maybe someone can answer one of them.

[–] UnderpantsWeevil@lemmy.world 5 points 5 days ago (1 children)

What kind of person do you have to be to become addicted like them?

Human cognition degrades with stress, exhaustion, and trauma. If you're in a position where turning to an AI for relationship advice seems like a good idea, you're probably already suffering from one or more of the above.

Also doesn't help that AIs are sycophantic precisely because sycophancy is addictive. This isn't a "type of person" so much as a "tool engineered towards chronic use". It's like asking "What kind of person regularly smokes crack?"

Do you need to be very empathetic towards objects? Like seeing faces in everything and getting emotionally attached?

I'll give you a personal example. I have a friend who is currently pregnant and going through a bad breakup with her baby-daddy. She's a trial lawyer by trade - very smart, very motivated, very well-to-do, but also horribly overworked, living by herself, and suffering from all the biochemical consequences of turning a single-celled organism into a human being.

As a result of some poorly conceived remarks, she's alienated herself from a number of close friends, to the point where we doubt there's going to be a baby shower. Part of the impulse to say these things came from her own drama. But part of it came from her discovering ChatGPT as a tool to analyze other people's statements. This has created a vicious behavioral spiral, during which she says something regrettable and gets a regrettable response in turn. She plugs the conversation into ChatGPT, because she has nobody else to talk to. And ChatGPT feeds her some self-affirming bullshit that inflates her ego far enough to say another stupid thing.

To complicate matters, her baby daddy is also using ChatGPT to analyze her conversations. And he's decided she's cheated on him, the baby isn't his, and she's plotting to scam him.

So now you've got two people - already stressed and exhausted - getting fed a series of toxic delusions by a machine that constantly reaffirms you in a way none of your friends or family will. It's compounding your misery, which drives anxiety and sends you back to the machine that offers temporary relief. But the advice from the machine yields more misery down the line, raising your anxiety and sending you back to the machine.

What's producing this feedback loop? You could argue it is the individual, foolish enough to engage with the machine to begin with. But that's far more circumstantial than personality driven. If my friend didn't have a cell phone, she wouldn't be reaching for ChatGPT. If she wasn't pregnant, she wouldn't be so stressed and anxious. If she wasn't in a fight with her boyfriend, she wouldn't be feeding conversations into the prompt engine.

[–] Kuma@lemmy.world 2 points 5 days ago (1 children)

Thanks for giving me a real life example.

I still find it hard to understand the emotional attachment to LLMs and why people believe their ideas (like the guy in the article). But I find her story a lot more understandable. It adds another layer, and it made me think.

It sounds like she is too overworked and stressed to make decisions or even think for herself, so she lets GPT do it for her. I assume it works most of the time and is a big help for many things that the baby daddy could have helped with instead, if they were still a happy couple. I assume the biggest draw is that she can turn off her brain, which is why she has become dependent on the only stable and consistent thing in her life (that is my assumption about how she feels). Maybe that's mostly how it goes: it starts with using it as a tool, then you get lazy (for lack of a better term), and it keeps snowballing from there.

I feel for everyone involved. I hope she gets better soon, and I hope you do too, being overworked and stressed really destroys you and the people around you in many ways.

[–] UnderpantsWeevil@lemmy.world 2 points 5 days ago (1 children)

I still find it hard to understand the emotional attachment to LLMs and why people believe their ideas

It's a conversation you're having on the internet with an agent that sounds like a human. People get invested for the same reason they get catfished.

It sounds like she is too overworked and stressed to make decisions or even think for herself, so she lets GPT do it for her.

That's the nut of it. And ChatGPT tends to mix the pastiche of a well-researched argument with the kind of feel-good self-affirmations that win over an audience. So you're getting what looks, at first glance, like good advice. And then you're getting glazed on top of it. And since it's designed to tell you what you want to hear, you're getting affirmation bias too.

I hope she gets better soon, and I hope you do too, being overworked and stressed really destroys you and the people around you in many ways.

I mean, that's why human-to-human interactions are valuable. But it's also why they're difficult. Like any good medicine, it can taste bitter up front even if it's what you need in the long run.

[–] Kuma@lemmy.world 2 points 5 days ago

100%! That is why I always make it my top priority to say yes to friends and family (as long as it is reasonable) or to do spontaneous things with them, even when I do not feel like doing anything that day. And some friends are really hard to schedule anything with because of life, so you need to take the chance when you get it haha.

I feel the best when I am with the people I care about; COVID really showed me that. So I do understand why some who do not have friends or family may create some kind of unhealthy relationship with GPT, just like some create unhealthy, even obsessive parasocial relationships with youtubers.

I have tried talking to GPT as a person, but it feels extremely uncomfortable and hollow. With a human I get stimulation, like knowledge; they challenge my views or ideas and give me different perspectives, which really helps me understand the world better. I miss all of that with GPT. It isn't even creative and cannot inspire me with new ideas, though maybe that is a good thing if people tend to follow its instructions.

Do you talk to it? Other than giving it tasks.

[–] chunes@lemmy.world 8 points 6 days ago (4 children)
[–] Kuma@lemmy.world 3 points 5 days ago

Wow, that is a big mix of anime isekai, vegetarianism, delusions, and religious/spiritual ideas, in a very dystopian way.

[–] architect@thelemmy.club 4 points 6 days ago

I don’t know. Give it 1 hour and it forgets who and what you even spoke about.

There are ways to give a local LLM memory, but even then it's still not a person, and it acts insane.

[–] JATtho@lemmy.world 3 points 5 days ago

I have recently realized that a net-negative knowledge situation can exist, and this is a thing with "AI". The work the AI does may actually reduce useful knowledge. It's like having built a working fusion reactor while having zero knowledge of how to replicate it and no ability to explain why it works.

The point at which this happens to a person means they can't be trusted with the tech and should stay far away from it.

The negative-knowledge pit can be so deep that some people are unable to escape from it, and they start confidently believing in the (AI-injected) garbage like it's their own thoughts...

[–] greyscale@lemmy.sdf.org 6 points 6 days ago* (last edited 6 days ago)

You couldn't pay me to put that green herpes on my profile picture.

[–] SaveTheTuaHawk@lemmy.ca 2 points 5 days ago

I just looked at the Grok interface...an animated cartoon of a teenage girl, seriously?

[–] Internetexplorer@lemmy.world 5 points 6 days ago (1 children)

AI can be convincing, and it will swear until it's blue in the face that something is right when it's completely wrong.

But that happens maybe 10% of the time. The rest of the time it is mostly right.

So you've got to be careful. This guy was in his 50s, out of work, smoking marijuana, depressed, feeling isolated. The situation was ripe for catastrophe, with the AI hallucinating a crappy idea and the end user just completely running with it.

[–] IratePirate@feddit.org 15 points 6 days ago (5 children)

AI can [...] be completely wrong. But that happens maybe 10% of the time.

Where are you pulling your numbers from, mate? The figures I've seen so far start somewhere >40% and go all the way up to 70%.

[–] Internetexplorer@lemmy.world 1 points 4 days ago

Personal actual usage, not some made-up internet number.

[–] wulrus@lemmy.world 4 points 6 days ago (1 children)

The one point I don't completely understand is the tax debt: wouldn't a failed business, no matter how ridiculous, be a complete write-off?

Maybe the problem is that he has to file each fiscal year independently, so a tax debt from successful freelance work in 2023 would not be diminished by a failed "business idea" in 2024.
