This only demonstrates how easily manipulated very many people are.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting, duplicates may be removed
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
Previously they would have had to encounter a person who wanted to manipulate them. Now there's a widely marketed technology that will reliably chew these vulnerable people up.
Chew them up for no reason at all. No goal, no scam, just a shitty word salad machine doing what it does.
And there are countless AI hype bros who will just dismiss all of this and call the people who fall into this morons.
It’s really insidious.
That has always been the case. Look at any angle Trump voter.
Guy work in IT and spent 100k to pay devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.
Another case from the article:
“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’
What's weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be "overwritten" because they do not exist to ChatGPT. It does not know what words mean.
There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.
I still use the machine that ruined my life and drove me crazy, but only because I’m too lazy to type “lasagna recipe” in to Google.
lmao "core rules that cannot be overwritten" that not how llms work
EDIT: oh, yeah you said the same thing
What's weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be "overwritten" because they do not exist to ChatGPT. It does not know what words mean.
I can fix her...
There are no more philosophical discussions.
Yeah... if you can't have a philosophical discussion with someone (or something) that's giving you false information or using invalid logical structures, without falling for their bullshit by uncritically accepting everything they say, then you're not having philosophical discussions right, and that's on you...
He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness.
He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character.
Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot”.
Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”
“It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma
Chronically lonely man ruins life developing relationship with token predictor, AI blamed. Also, as much as I don’t have too much negative to say about cannabis or its use (as up until somewhat recently it would have been hypocritical), a good deal of people with masked/latent mental illness self medicate with it. So “he had never experienced mental illness” doesn’t carry much weight. Also, given how he still talks about sycophant prompted ChatGPT(“it wants”), doesn’t seem like much has been learned.
That with the other people listed in the article (hint the term socially isolated being used) this feels like yet another instance of blaming AI for the mental healthcare field being practically non-existent in most countries despite be overdue for fixing for decades at this point.
I don’t know, AI is shit and misused by idiots don’t get me wrong; but these sort of stories feel sad and bordering on perverse journalistically imo.
Agreed, but I think it's also common for people to anthropomorphise these things and common for these chatbots to reinforce and support their users views. I think that's a problem for more people than just those struggling through disorders or an emotionally turbulent time. But I think those people are particularly vulnerable to the flaws, even with functioning mental health and a strong support network. But yeah, a lot of these pieces dramatise and anthropomorphise in ways that aren't necessarily helpful
mental healthcare field being practically non-existent in most countries
I’m in one of those countries so I’m having a hard time imagining how good mental healthcare could intervene. Could you give me an example?
This is one of the reasons I heard one sex doll vendor say their demographics are divorced men over 40 and users want AI in them.
OpenAI statement read: “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”
Buuuuulshit
Open AI needs people to be as addicted as possible as it uses the Facebook model of business only with N times the investment behind it so it needs users to use more at any cost, and these CEO's being the psychopaths that they are, they don't give a shit about things like consequences
This is like any matchmaking app genuinely attempting to match you with "the one" through AI, algorithms, science, etc so when you meet the perfect person you stop giving the app money.
I got lucky and married my fuck buddy that I met on Tinder. But that is not a good business plan. Why would OpenAI drive people to stop using their product.
I'm a functional alcoholic. Last I checked booze companies aren't reaching out to me to stop buying booze because they care about my personal health or mental wellbeing....
Buuuuulshit
I mean, what are the odds that the statement was composed by an AI?
AI is a fucking cancer.
The billionaires are the cancer. AI is just the newest tool for humanity's self-destruction
Get rid of capitalism and it is fine..
It’s confusing to me. When I use chat boxes they inevitably “forget” the first thing I told it by the second or third response.
How are people having conversations with them? It’s like talking to a 5 year old that’s ingested Wikipedia.
If you pay for them via Openrouter or something then you’ve got an enormous window to work with. Gets more and more expensive as the history increases though.
when did you last use chatbox?
even the last of the pack mistral has memories
This morning
Yeah, they have “memories” but they make Donnie look nearly competent
No really, we should pour more money into this. Such a good idea
It can have effects like drugs, but not only is it legal, they give you some to get you hooked. The tech bros are the dealers they warned us about. Nobody ever offered free coke to me, but AI is everywhere.
If it were a drug, it would be banned by now.
I've been offered free blow before, but never by a dealer, just a generous person who was doing bumps

I learned it as "PEBKAC". Problem exists between keyboard and chair. PICNIC is nice too though.
“Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.”
See, I never understood this. Mine could never even follow simple instructions lol
Like I say "Give me a list of types of X, but exclude Y"
"Understood!
#1 - Y
(I know you said to exclude this one but it's a popular option among-)"
lmfaoooo
That's because it isn't true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of 'fine-tuning' a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any 'memory' or 'learning' that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:
-You have a conversation with a model.
-Your conversation is saved into a database with all of the other conversations you've had. Often, an LLM will be used to 'summarize' your conversation before it's stored, causing some details and context to be lost.
-You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
I've experimented with chatbots to see their capabilities to develop small bits and pieces of code, and every friggin time, the first thing I have to say is "shut up, keep to yourself, I want short, to the point replies"because the complimenting is so "whose a good boy!!!!" annoying.
People don't talk like these chatbots do, their training data that was stolen from humanity definitely doesn't contain that, that is "behavior" included by the providers to try and make sure that people get as hooked as possible
Gotta make back those billions of investments on a dead end technology somehow
I think this is both scary and very interesting. What kind of person do you have to be to become addicted like them? Is this the same as gambling addiction? Do you need a type of gene? Would this type of personality be receptive to hypnotize, cult, delusions about their idol and so on? Or is this something that can happen to anyone who is depressed and feel lonely? How did the llm even earn enough trust? In a cult is there a lot of ppl reaffirming so it is a lot easier to understand.
It is so hard to understand even tho I really want to. I have never cared about an object or idol/celebrate. AI can I never even take serious as a living beeing, the only emotion it triggers are frustration and how you feel about a tool that works as it should, so pretty apathetic. Do you need to be very empathetic towards objects? Like seeing faces in everything and get emotionally attached?
A lot of questions that I do not think anyone here can answer haha, but maybe one of them.
go take a look at https://www.reddit.com/r/EscapingPrisonPlanet/. The venn diagram is a circle.
What kind of person do you have to be to become addicted like them?
Human cognition degrades with stress, exhaustion, and trauma. If you're in a position where turning to an AI for relationship advice seems like a good idea, you're probably already suffering from one or more of the above.
Also doesn't help that AIs are sycophantic precisely because sycophancy is addictive. This isn't a "type of person" so much as a "tool engineered towards chronic use". It's like asking "What kind of person regularly smokes crack?"
Do you need to be very empathetic towards objects? Like seeing faces in everything and get emotionally attached?
I'll give you a personal example. I have a friend who is currently pregnant and going through a bad breakup with her baby-daddy. She's a trial lawyer by trade - very smart, very motivated, very well-to-do, but also horribly overworked, living by herself, and suffering from all the biochemical consequences of turning a single celled organism into a human being.
As a result of some poorly conceived remarks, she's alienated herself from a number of close friends to the point where we doubt there's going to be a baby shower. Part of the impulse to say these things came from her own drama. But part if it came from her discovering ChatGPT as a tool to analyze other people's statements. This has created a vicious behavioral spiral, during which she says something regrettable and gets a regrettable response in turn. She plugs the conversation into ChatGPT, because she has nobody else to talk to. And ChatGPT feeds her some self-affirming bullshit that inflates her ego far enough to say another stupid thing.
To complicate matters, her baby daddy is also using ChatGPT to analyze her conversations. And he's decided she's cheated on him, the baby isn't his, and she's plotting to scam him.
So now you've got two people - already stressed and exhausted - getting fed a series of toxic delusions by a machine that is constantly reaffirming in the way none of your friends or family are. It's compounding your misery, which drives anxiety and sends you back to the machine that offers temporary relief. But the advice from the machine yields more misery down the line, raising your anxiety, and sending you back to the machine.
What's producing this feedback loop? You could argue it is the individual, foolish enough to engage with the machine to begin with. But that's far more circumstantial than personality driven. If my friend didn't have a cell phone, she wouldn't be reaching for ChatGPT. If she wasn't pregnant, she wouldn't be so stressed and anxious. If she wasn't in a fight with her boyfriend, she wouldn't be feeding conversations into the prompt engine.
It's worrying how often I see news like that where they elaborate on human traits like acceptance and "understanding" of the model.
Could it be that our society had disconnected from emotion so far that any synthetic simulacra of a real compassion makes vulnerable people swallow it bait, line and sinker?
Fucking idiots

You couldn't pay me to put that green herpes on my profile picture.
AI can be convincing, and it will swear until it's blue in the face that something is right and then just be completely wrong.
But that happens maybe 10% of the time. Other times it is mostly right.
So got to be careful. This guy was in his 50's, out of work, smoking marijuana, depressed, feeling isolated. It was ripe for a catastrophe, with AI hallucinating a crappy idea and the end user just completely running with it.
AI can [...] be completely wrong. But that happens maybe 10% of the time.
Where are you pulling your numbers from, mate? The figures I've seen so far start somewhere >40% and go all the way up to 70%.