[–] throws_lemy@reddthat.com 1 points 5 hours ago* (last edited 5 hours ago) (1 children)

This could happen to anyone, including people without mental health issues, simply by having long conversations with AI.

On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.

Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

This was also warned about back in 2022 by a former Google employee, Blake Lemoine, whose job was to observe the behavior of AI through long conversations.

These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.

After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.

I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

‘I Worked on Google’s AI. My Fears Are Coming True’

[–] NewNewAugustEast@lemmy.zip 3 points 5 hours ago

This was a different case. That doesn't answer my question.

To comment on what you said: how is it that people can argue all day long like morons and dig into their beliefs, but somehow AI manages to change people's minds and get them to think differently? What exactly is it doing?

It is so hard to believe people are this stupid, but then again, looking at most people, I guess it isn't that shocking.