Nah, it's good that they ripped off that bandaid. Parasocial AI relationships are terrible.
The worst part is that they backstepped a bit and made it "friendlier".
Basically undoing that part.
It's somewhere between a codependent relationship and the parasocial relationship people have with celebrities/public figures, which is the extreme end, because that usually ends with stalking or death threats.
Just a few more bucks bro! I swear then it will be the revolutionary "AI" we promised it to be.
*Few more billion.
I sometimes wonder if silicon valley tech businesses in general will take a reputation hit with investors when this bubble bursts, it's gonna be a doozy.
But then I remember how many greedy idiots there are out there pumping money into grifts in the hope of The Big Win, and my expectations of consequences are tempered.
Your company doesn't look like it has a trillion. Maybe Apple or Google can expand a little, or Nvidia, but they surely aren't going to build more.
That's pathetic
We definitely need to eradicate tech CEOs from existence
You misspelled billionaires.
But not all tech CEOs are billionaires...
just need ZUCKERBORG ANDROID to malfunction.
"We fucked up our massive new-generation product launch... oh well, let's invest trillions in new data centers." How do investors keep falling for this shit?
How indeed. It's probably a multi-factor phenomenon which requires an anthropological study for a serious answer. (Good luck trying to get the necessary access to study them.) My guess for one factor in this, is that they have more money than they know what to do with.
The American stock market is purely vibe-driven now.
Don’t they have enough?!? How about they fix and optimize their fancy autocompletion software instead?
They took a path they believed would develop into something, and it's a narrow alley they can't turn around in. They have to keep going with more compute and power to continue the chase. Thing is, everyone else seemingly thought they were onto something and followed as well, so they're all in the same predicament where reversing course is suicide. So they hope they can keep selling the dream a bit longer until something happens.
To be fair, it's a lot more than just autocomplete. But it's a lot less than what they wanted by now too.
Vibe innovation. They're the ones who think AI will be innovative in science by spontaneously generating new scientific discoveries, without "researchers, labs, papers".
I have seen some people talk like that, and it strikes me as a religion. There's euphoria, zeal, hope. To them AGI is coming to usher in heaven on earth. The Singularity is like the rapture.
Sam Altman is one of the preachers of this religion.
“I literally lost my only friend overnight with no warning,” one person posted on Reddit
It was meant to be satirical at the time, but maybe Futurama wasn't entirely off the mark. That Redditor isn't quite at that level, but it's still probably not healthy to form an emotional attachment to the Markov chain equivalent of a sycophantic yes-man.
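To make the analogy concrete, here's a minimal word-level Markov chain text generator — a toy sketch of the general technique, not any real chatbot's implementation (the corpus and names here are invented for illustration). Each next word depends only on the current word, with no understanding of what's being said:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    """Walk the chain: pick each next word at random from the
    followers of the current word only (no memory, no meaning)."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A sycophantic toy corpus, so every walk sounds agreeable.
corpus = "you are right you are so right you are brilliant"
chain = build_chain(corpus)
print(generate(chain, "you"))
```

Every output is just a plausible-looking recombination of the training text — which is the point of the analogy: fluent agreement without any model of truth.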
Markov chain equivalent of a sycophantic yes-man.
Not only that, but one that is fully owned and operated by a business that could change it any time they want, or even cease to exist completely.
This isn't like a game where you could run your own server if you're a big enough fan. If ChatGPT stops existing in its current form, that's it.
I'm honestly surprised yours is not the top comment. Like, whatever, the launch was bad, but there is a serious mental health crisis if people are forming emotional bonds with the software.
It annoys me that ChatGPT flat-out lies to you when it doesn't know the answer, and doesn't have any system in place to admit it isn't sure about something. It just makes it up and tells you like it's fact.
LLMs don't have any awareness of their internal state, so there's no way for them to see something as a gap of knowledge.
Took me ages to understand this. I'd thought "If an AI doesn't know something, why not just say so?"
The answer is: that wouldn't make sense because an LLM doesn't know ANYTHING
It doesn't admit anything, it's a language machine
It doesn't know that it doesn't know, because it doesn't actually know anything. Most models are trained on posts from the internet like this one, where people rarely ever chime in just to admit they don't have an answer. If you don't know something, you either silently search the web for an answer or ask.
So since users are the ones asking ChatGPT, the LLM mimics the role of a person who knows the answer. It only makes sense AI is a "confidently wrong" powerhouse.
ChatGPT makes up everything it says. It's just good at guessing and bullshitting.
That’s actually one thing that got significantly improved with GPT-5, fewer hallucinations. Still not perfect of course
Altman also said that he thinks we’re in an AI “bubble.”
No shit, Sherlock.
Pathetic