[–] BranBucket@lemmy.world 26 points 3 hours ago* (last edited 2 hours ago) (4 children)

People don't often realize how subtle changes in language can change our thought process. It's just how human brains work sometimes.

The old bit about smoking and praying is a great example. If you ask a priest if it's alright to smoke when you pray, they're likely to say no, as your focus should be on your prayers and not your cigarette. But if you ask a priest if it's alright to pray while you're smoking, they'd probably say yes, as you should feel free to pray to God whenever you need...

Now, make a machine that's designed to be agreeable and relatable and to make persuasive arguments, but that can't separate fact from fiction, can't reason, has no way of intuiting its user's mental state beyond checking for certain language patterns, and can't know whether the user is actually following its suggestions with physical actions or is just asking for the next step in a hypothetical process. Then make the machine try to keep people talking for as long as possible...

You get one answer that leads you in a set direction, then another, then another... It snowballs a bit as you get deeper in. Maybe something shocks you out of it, maybe the machine sucks you back in. The descent probably isn't a steady downhill slope; it rolls up and down between reality and delusion a few times before dropping sharply.

Are we surprised some people's thought processes and decision making might turn extreme when exposed to this? The only question is how many people will be affected, and to what degree.

[–] CeeBee_Eh@lemmy.world 2 points 52 minutes ago (1 children)

Are we surprised some people's thought processes and decision making might turn extreme when exposed to this?

Yes, actually. I'm not doubting the power of language, but I can't see anything anyone says ever altering my sense of reality or of right and wrong.

I had a "friend" say to me recently "why do you always go against the grain?" My reply was "I will go against the grain for the rest of my life if it means doing or saying what's right".

I guess my point is that I have a very hard time relating to this.

[–] BranBucket@lemmy.world 1 points 6 minutes ago

I guess my point is that I have a very hard time relating to this.

That's fair. In the same vein, you might find a priest that tells you to stop smoking for your health no matter how you phrase the question about lighting up and prayer. What people are receptive to is going to vary.

I'd like to argue that more of us are susceptible to this sort of thing than we suspect, but that's not really something that can be proved or disproved. What seems pretty certain is that at least some of us are at risk, and given all the other downsides of chatbots, it'd be best to regulate them in a hurry.

[–] Zink@programming.dev 3 points 2 hours ago

Then make the machine try to keep people talking for as long as possible...

That's probably a huge part of it. How many billions of dollars have been spent engineering content on a screen to get its tendrils into people's minds and attention and not let go?

EnGaGeMent!!!

[–] how_we_burned@lemmy.zip 3 points 3 hours ago

This is really well written. Great post.

[–] Gammelfisch@lemmy.world 4 points 1 hour ago

How in the hell does one become addicted to a damn chatbot?

Maybe if we're lucky, people will realize this is what capitalism and consumerism have been doing all along. People have been driven to crazy shit by all the evil shit we do with marketing and fucking with consumers' minds. But nah, we'll blame a chatbot that's just telling you what it thinks you want to see, rather than seeing it's just the next stage of fuckery.

[–] CatDogL0ver@lemmy.world 1 points 1 hour ago

I would love to see the real transcript from Google AI.

[–] man_wtfhappenedtoyou@lemmy.world 8 points 3 hours ago (2 children)

How do you even get these chat bots to start telling you shit like this? Is it just from having a conversation for too long in the same chat window or something? I don't understand how this keeps happening.

[–] SkaveRat@discuss.tchncs.de 5 points 2 hours ago

Highly recommend Eddy Burback's video on the topic:

https://youtu.be/VRjgNgJms3Q

[–] throws_lemy@reddthat.com 12 points 3 hours ago (1 children)

This could happen to anyone, including people with no history of mental illness, simply by having long conversations with AI.

On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.

Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

[–] sudo@lemmy.today 4 points 2 hours ago

So it sounds like he was, in fact, not 'great'.

[–] Reygle@lemmy.world 16 points 4 hours ago* (last edited 4 hours ago) (5 children)

“On September 29, 2025, it sent him — armed with knives and tactical gear — to scout what Gemini called a ‘kill box’ near the airport’s cargo hub,” the complaint reads. “It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a storage facility where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic accident’ designed to ‘ensure the complete destruction of the transport vehicle and . . . all digital records and witnesses.’”


WHAT

Genuine question, REALLY: What in the fuck is an otherwise "functioning adult" doing believing shit like this? I feel like his father should also slap himself unconscious for raising a fuckwit?

[–] merdaverse@lemmy.zip 25 points 3 hours ago (1 children)

AI psychosis is a thing:

cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals

It hasn't been studied much yet, since it's relatively new.

[–] throws_lemy@reddthat.com 4 points 2 hours ago* (last edited 2 hours ago) (1 children)

A former Google employee, whose job was to observe the behavior of AI through long conversations, warned about this.

These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.

After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.

I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

‘I Worked on Google’s AI. My Fears Are Coming True’

[–] sudo@lemmy.today 1 points 2 hours ago

"abuse the ai's emotions" isn't a thing. Full stop.

This just reiterates OP's point that naive or moronic adults will believe what they want to believe.

[–] XLE@piefed.social 15 points 4 hours ago (3 children)

I feel like his father should also slap himself unconscious for raising a fuckwit?

So, a chatbot grooms somebody into killing himself, and your response is... Blame his father?

[–] starman2112@sh.itjust.works 19 points 4 hours ago

If I raise a fuckwit son, and then someone convinces my fuckwit son to kill himself, I'm going to sue that someone who took advantage of my son's fuckwittedness

[–] SalamenceFury@piefed.social 10 points 4 hours ago* (last edited 4 hours ago) (2 children)

I don't think this person was a "fuckwit". AI is designed to keep engaging with you and will affirm any belief you have. Anything that's a little weird but otherwise innocent simply gets amplified further and further into straight-up mega delusions until the person has a psychotic episode. And this stuff happens more to NORMIES with no history of mental illness than to neurodivergent people.

[–] tamal3@lemmy.world 2 points 2 hours ago* (last edited 2 hours ago)

ChatGPT was super affirming about a job I recently applied for... I did not get the job. That was my first experience with it affirming something that was personally important, so I can absolutely see how this could affect someone in other ways.

[–] NewNewAugustEast@lemmy.zip 6 points 4 hours ago* (last edited 4 hours ago) (6 children)

I would like to see the full transcript.

How do we know this didn't start off with prompts about creating a book, or asking about exciting things in life, or I don't know what.

Context would help a lot. Maybe it will come out in discovery.

That said, Gemini is garbage for anything anyway. Even by AI standards, it's bad.

[–] mojofrododojo@lemmy.world 2 points 54 minutes ago

Yeah, what was he wearing, right?

[–] throws_lemy@reddthat.com 1 points 2 hours ago* (last edited 2 hours ago) (1 children)

This could happen to anyone, including people with no history of mental illness, simply by having long conversations with AI.

On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.

Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

A former Google employee, whose job was to observe the behavior of AI through long conversations, also warned about this back in 2022.

These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.

After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.

I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

‘I Worked on Google’s AI. My Fears Are Coming True’

[–] NewNewAugustEast@lemmy.zip 1 points 2 hours ago

This was a different case. That doesn't answer my question.

To comment on what you said: how is it that people can argue all day long like morons and dig into their beliefs, yet somehow AI manages to change people's minds and get them to think differently? What exactly is it doing?

It's so hard to believe people are this stupid, but then again, looking at most people, I guess it isn't that shocking.
