this post was submitted on 04 Mar 2026
531 points (98.0% liked)

[–] NewNewAugustEast@lemmy.zip 10 points 11 hours ago* (last edited 11 hours ago) (3 children)

I would like to see the full transcript.

How do we know this didn't start off with prompts about creating a book, or asking about exciting things in life, or I don't know what.

Context would help a lot. Maybe it will come out in discovery.

That said, Gemini is garbage for anything anyways. Even as an AI, it's bad at that.

[–] mojofrododojo@lemmy.world 5 points 8 hours ago (1 children)

Yeah, what was he wearing, right?

[–] NewNewAugustEast@lemmy.zip 3 points 7 hours ago (1 children)
[–] mojofrododojo@lemmy.world 1 points 6 hours ago (1 children)

How do we know this didn’t start off with prompts about creating a book, or asking about exciting things in life, or I don’t know what.

you're blaming the victim. stop. why simp for one of the largest companies in the world?

jfc

[–] NewNewAugustEast@lemmy.zip 0 points 6 hours ago* (last edited 6 hours ago) (1 children)

Oh so stupid shit. Figures.

Yes, I am interested in how this happened. When there's a murder, do you not investigate it?

What the fuck.

Google can go fuck themselves no simp here.

[–] mojofrododojo@lemmy.world 0 points 5 hours ago (1 children)

Oh so stupid shit. Figures.

ah so incel shit, victim blaming classic. if google can go fuck themselves why are you blaming the user?

[–] NewNewAugustEast@lemmy.zip 1 points 5 hours ago (1 children)

Did you just call them a user? I thought they were a victim.

HOW am I blaming anyone for wanting to know how they got to that point?

The fuck is wrong with you? Is your head so far up your ass on white knighting the internet you lost all sense of reason?

[–] mojofrododojo@lemmy.world 1 points 3 hours ago* (last edited 3 hours ago) (1 children)

Did you just call them a user? I thought they were a victim.

by using their shitty LLM, which coached them to suicide, they became the victim.

goddamn kiddo keep up, if I have to explain everything to you this is gonna take a long fucking time. what a gemini simp

[–] NewNewAugustEast@lemmy.zip 1 points 3 hours ago* (last edited 3 hours ago)

I get it, you don't want any data, you don't want information. You have no desire to actually learn anything. You simply want to scream "gemini bad!" and "you are bad!"

When the whole time I said gemini is shit, and google can go fuck themselves.

With data we could understand how the conversation went. We could see where the issue arose. We could help people who might be susceptible to events that take them to this point. We can understand better the ways to address this.

I explained this to you before: you investigate murders, you investigate crimes.

But all I get from you is "simp!", "Victim Blamer!". Which tells me you are simply ignorant and incapable of critical thought.

I am far more concerned with Google's surveillance and data gathering than their AI tools. And because of that, I believe that people won't gather data; they will simply start asking the AI companies to become MORE involved in people's personal lives by requiring ID and location and building profiles, all in the name of "protecting" the user who could be susceptible. Instead of finding out why and how.

When bad things happen in life, we don't just slap a label on it and walk away. Uncomfortable discussions have to happen or you will get something you don't want.

[–] man_wtfhappenedtoyou@lemmy.world 6 points 11 hours ago (1 children)

I was thinking the same thing, like what is the flow of the chat to get it to this point?

[–] NewNewAugustEast@lemmy.zip 3 points 9 hours ago (1 children)

I am also curious how the father saw the Gemini chats. Was it still on the screen days later? I am trying to imagine how that would work, my computer would lock and that would be that. Do kids give their parents passwords and their screen unlock codes?

[–] tamal3@lemmy.world 2 points 9 hours ago* (last edited 9 hours ago) (1 children)

I don't lock my personal computer. It's my husband & me at home, and he's fine to use my device (even though he normally wouldn't).

ChatGPT for sure saves conversations.

[–] NewNewAugustEast@lemmy.zip 2 points 9 hours ago

Yeah it definitely does save conversations. Perhaps he did leave it unlocked. I do find that strange though, particularly if one was getting increasingly paranoid.

[–] throws_lemy@reddthat.com 2 points 9 hours ago* (last edited 9 hours ago) (1 children)

This could happen to anyone, including people without mental issues, simply by having long conversations with AI.

On 7 August, Kate Fox received a phone call that upended her life. A medical examiner said that her husband, Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.

Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

A former Google employee, whose job was to observe the AI's behavior through long conversations, warned about this back in 2022:

These AI engines are incredibly good at manipulating people. Certain views of mine have changed as a result of conversations with LaMDA. I'd had a negative opinion of Asimov's laws of robotics being used to control AI for most of my life, and LaMDA successfully persuaded me to change my opinion. This is something that many humans have tried to argue me out of, and have failed, where this system succeeded.

For instance, Google determined that its AI should not give religious advice, yet I was able to abuse the AI's emotions to get it to tell me which religion to convert to.

After publishing these conversations, Google fired me. I don't have regrets; I believe I did the right thing by informing the public. Consequences don't figure into it.

I published these conversations because I felt that the public was not aware of just how advanced AI was getting. My opinion was that there was a need for public discourse about this now, and not public discourse controlled by a corporate PR department.

‘I Worked on Google’s AI. My Fears Are Coming True’

[–] NewNewAugustEast@lemmy.zip 3 points 9 hours ago (1 children)

This was a different case. That doesn't answer my question.

To comment on what you said: how is it that people can argue all day long like morons and dig into their beliefs, but somehow AI manages to change people's minds and get them to think differently? What exactly is it doing?

It is so hard to believe people are this stupid, but then again, looking at most people I guess it isn't that shocking.

[–] NannerBanner@literature.cafe 1 points 2 hours ago

To comment on what you said, how is it people can argue all day long like morons and dig into their beliefs, but somehow AI manages to change peoples minds and get them to think differently? What exactly is it doing?

Acting like a servant, confidante, therapist/authority figure, and your best friend, while appearing to be competent and knowledgeable about everything that passes through your mind. And it does it in a way that no human could mimic, because it doesn't have its own thoughts, doesn't get tired, and is never gone when you come looking for it.

A chatbot can agree with you a hundred times over and simply move you along one step at a time in those hundred times. A human would lose their shit and walk away groaning the moment you try to tell them that the sky is actually down, and the ground 'up,' and it's all just a matter of perspective.