this post was submitted on 28 Mar 2026
219 points (96.2% liked)

Technology

all 41 comments
[–] wulrus@lemmy.world 1 points 30 minutes ago

The one point I don't completely understand is the tax debt: Wouldn't a failed business, no matter how ridiculous, be a complete write-off?

Maybe the problem is that his taxes are assessed for each fiscal year independently, so a tax debt from successful freelance work in 2023 would not be offset by a failed "business idea" in 2024.

[–] greyscale@lemmy.sdf.org 1 points 34 minutes ago* (last edited 32 minutes ago)

You couldn't pay me to put that green herpes on my profile picture.

[–] FosterMolasses@leminal.space 1 points 57 minutes ago

“Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.”

See, I never understood this. Mine could never even follow simple instructions lol

Like I say "Give me a list of types of X, but exclude Y"

"Understood!

#1 - Y

(I know you said to exclude this one but it's a popular option among-)"

lmfaoooo

[–] Internetexplorer@lemmy.world 5 points 7 hours ago (1 children)

AI can be convincing, and it will swear until it's blue in the face that something is right and then just be completely wrong.

But that happens maybe 10% of the time. Other times it is mostly right.

So you've got to be careful. This guy was in his 50s, out of work, smoking marijuana, depressed, feeling isolated. The situation was ripe for catastrophe, with the AI hallucinating a crappy idea and the end user just completely running with it.

[–] IratePirate@feddit.org 2 points 1 hour ago (1 children)

AI can [...] be completely wrong. But that happens maybe 10% of the time.

Where are you pulling your numbers from, mate? The figures I've seen so far start somewhere >40% and go all the way up to 70%.

[–] hanrahan@slrpnk.net 2 points 17 minutes ago (1 children)

so... a bit like economists, then?

[–] IratePirate@feddit.org 1 points 4 minutes ago

Not if we're talking Jim Cramer, who is well beyond 70%.

[–] CTDummy@aussie.zone 53 points 13 hours ago (3 children)

He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness.

He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character.

Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot”.

Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”

“It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma

Chronically lonely man ruins life developing a relationship with a token predictor; AI blamed. Also, as much as I don’t have much negative to say about cannabis or its use (as up until somewhat recently it would have been hypocritical), a good deal of people with masked/latent mental illness self-medicate with it. So “he had never experienced mental illness” doesn’t carry much weight. And given how he still talks about sycophant-prompted ChatGPT (“it wants”), it doesn’t seem like much has been learned.

That, along with the other people described in the article (note how often the term “socially isolated” comes up), makes this feel like yet another instance of blaming AI for the mental healthcare field being practically non-existent in most countries, despite being overdue for fixing for decades at this point.

I don’t know. AI is shit and misused by idiots, don’t get me wrong; but these sorts of stories feel sad and bordering on perverse, journalistically, imo.

[–] Aatube@lemmy.dbzer0.com 4 points 2 hours ago

mental healthcare field being practically non-existent in most countries

I’m in one of those countries so I’m having a hard time imagining how good mental healthcare could intervene. Could you give me an example?

[–] Spacehooks@reddthat.com 2 points 2 hours ago

This may be one of the reasons I heard a sex doll vendor say their demographic is divorced men over 40, and that users want AI in the dolls.

[–] porcoesphino@mander.xyz 20 points 12 hours ago

Agreed, but I think it's also common for people to anthropomorphise these things, and common for these chatbots to reinforce and support their users' views. That's a problem for more people than just those struggling through disorders or an emotionally turbulent time; even people with functioning mental health and a strong support network are exposed to the flaws, though the vulnerable are particularly so. But yeah, a lot of these pieces dramatise and anthropomorphise in ways that aren't necessarily helpful.

[–] MountingSuspicion@reddthat.com 70 points 14 hours ago (2 children)

Guy works in IT and spent 100k paying devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work. That does not bode well for his work product.

Another case from the article:

“I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’”

What's weird to me is they now recognize AI will lie to you but somehow think they can prompt it not to? Your rules can be "overwritten" because they do not exist to ChatGPT. It does not know what words mean.

[–] SchwertImStein@lemmy.dbzer0.com 4 points 2 hours ago* (last edited 2 hours ago)

lmao "core rules that cannot be overwritten", that's not how LLMs work

EDIT: oh, yeah you said the same thing

[–] scytale@piefed.zip 28 points 11 hours ago

There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.

[–] Triumph@fedia.io 80 points 14 hours ago (3 children)

This only demonstrates how easily manipulated very many people are.

[–] Nomad@infosec.pub 6 points 4 hours ago

That has always been the case. Look at any angle Trump voter.

[–] floofloof@lemmy.ca 51 points 14 hours ago* (last edited 14 hours ago) (1 children)

Previously they would have had to encounter a person who wanted to manipulate them. Now there's a widely marketed technology that will reliably chew these vulnerable people up.

[–] Steve@startrek.website 31 points 12 hours ago (1 children)

Chew them up for no reason at all. No goal, no scam, just a shitty word salad machine doing what it does.

[–] paraphrand@lemmy.world 6 points 10 hours ago

And there are countless AI hype bros who will just dismiss all of this and call the people who fall into this morons.

It’s really insidious.

[–] vacuumflower@lemmy.sdf.org -2 points 8 hours ago (1 children)
[–] Triumph@fedia.io 5 points 8 hours ago (1 children)
[–] vacuumflower@lemmy.sdf.org 3 points 5 hours ago

Yes, I can't stress how terrifying this is. Still all people.

[–] CompactFlax@discuss.tchncs.de 26 points 14 hours ago (2 children)

It’s confusing to me. When I use chatbots, they inevitably “forget” the first thing I told them by the second or third response.

How are people having conversations with them? It’s like talking to a 5 year old that’s ingested Wikipedia.

[–] DireTech@sh.itjust.works 12 points 13 hours ago

If you pay for them via OpenRouter or something, you’ve got an enormous context window to work with. It gets more and more expensive as the history grows, though.

[–] Eyekaytee@aussie.zone 3 points 13 hours ago (1 children)

when did you last use a chatbot?

even the last of the pack, mistral, has memories

[–] CompactFlax@discuss.tchncs.de 9 points 13 hours ago* (last edited 13 hours ago) (1 children)

This morning

Yeah, they have “memories” but they make Donnie look nearly competent

[–] Eyekaytee@aussie.zone 3 points 13 hours ago (1 children)

weird, i don’t have that experience at all

claude in particular is a huge step up above the others

[–] CompactFlax@discuss.tchncs.de 5 points 13 hours ago (1 children)

To be fair haven’t tried that one. Gemini started bringing in unrelated, previous shit to a recent conversation, which is the first time I’ve experienced that.

[–] Eyekaytee@aussie.zone 2 points 12 hours ago (1 children)

ah, i've been degoogling for years now, only maps and youtube left

claude is for sure no. 1 to me, but i haven't ofc compared it to gemini; qwen is a chronic overthinker, glm is not bad

mistral seems like it's a year behind the SOTA models, still in its "confidently incorrect, can't double-check things" phase

whereas the others seem more like "hmm, is this right? let me search the web to be sure"

[–] CompactFlax@discuss.tchncs.de 4 points 12 hours ago

Same, but Gemini was the best of the lot about six months ago and it’s where I go these days for brain dead searching.

I’ll give Claude a go next week. I do try to avoid them, but sometimes I have a question that just isn’t keyword search-able.

[–] devolution@lemmy.world 32 points 15 hours ago* (last edited 14 hours ago) (3 children)
[–] Trex202@lemmy.world 35 points 14 hours ago (1 children)

The billionaires are the cancer. AI is just the newest tool for humanity's self-destruction

[–] FosterMolasses@leminal.space 3 points 52 minutes ago

This right here. Before that it was AirBnb, social media, smartphones... the list goes on.

[–] Sims@lemmy.ml 3 points 13 hours ago

Get rid of capitalism and it is fine..

[–] SeductiveTortoise@piefed.social 21 points 14 hours ago (2 children)

No really, we should pour more money into this. Such a good idea 🫩

It can have effects like drugs, but not only is it legal, they give you some to get you hooked. The tech bros are the dealers they warned us about. Nobody ever offered free coke to me, but AI is everywhere.

[–] UncleArthur@lemmy.world 3 points 5 hours ago

If it were a drug, it would be banned by now.

[–] Dyskolos@lemmy.zip 8 points 13 hours ago (1 children)

You're absolutely right. Totally unrelated: wanna try some free blow?

[–] SeductiveTortoise@piefed.social 5 points 13 hours ago

Hey, stop dismantling my argument! /s