this post was submitted on 26 Mar 2025
2 points (75.0% liked)

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
[–] glibg@lemmy.ca 4 points 3 weeks ago (2 children)
[–] theterrasque@infosec.pub 2 points 3 weeks ago

The quote was originally about news and journalists.

[–] LovableSidekick@lemmy.world 0 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Another realization might be that the humans whose output ChatGPT was trained on were probably already 40% wrong about everything. But let's not think about that either. AI Bad!

[–] starman2112@sh.itjust.works 2 points 3 weeks ago

This is a salient point that's well worth discussing. We should not be training large language models on any supposedly factual information that people put out. It's easy enough to call out a bad research study and have it retracted, but you can't just explain to an AI that that study was wrong; you have to completely retrain it every time. Exacerbating this issue is the way that people tend to view large language models as somehow objective describers of reality, because they're synthetic and emotionless. In truth, an AI holds exactly the same biases as the people who put together the data it was trained on.

[–] Shanmugha@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago)

I'll bait. Let's think:

  • there are three humans who are 98% right about what they say, and where they know they might be wrong, they indicate it

  • now there is an llm (fuck capitalization, I hate the way they are shoved everywhere that much) trained on their output

  • now the llm is asked about the topic and computes the answer string

By definition that answer string can contain all the probably-wrong things without proper indicators ("might", "under such and such circumstances", etc.)

If you want to say a 40% wrong llm means 40% wrong sources, prove me wrong
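
As a toy illustration of that last point (every number here is a made-up assumption, chosen only to show the mechanism): if the llm reproduces the claims but frequently drops the qualifiers, its rate of confidently-wrong statements can be several times higher than that of its sources, even though the sources themselves stay 98% right.

```python
import random

random.seed(0)

N = 100_000              # claims sampled
SOURCE_ACCURACY = 0.98   # assumption: the humans are right 98% of the time
HEDGE_RATE = 0.90        # assumption: when wrong, the humans flag uncertainty 90% of the time
MODEL_KEEPS_HEDGE = 0.30 # assumption: the llm reproduces that hedge only 30% of the time

confidently_wrong_source = 0
confidently_wrong_model = 0

for _ in range(N):
    correct = random.random() < SOURCE_ACCURACY
    hedged = (not correct) and (random.random() < HEDGE_RATE)
    if not correct and not hedged:
        confidently_wrong_source += 1
    # the model repeats the claim but often drops the "might" / "under such and such" qualifier
    model_hedged = hedged and (random.random() < MODEL_KEEPS_HEDGE)
    if not correct and not model_hedged:
        confidently_wrong_model += 1

print(f"sources: {confidently_wrong_source / N:.2%} confidently wrong")
print(f"llm:     {confidently_wrong_model / N:.2%} confidently wrong")
```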

[–] Zip2@feddit.uk 2 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I couldn’t be bothered to read the article, so I got ChatGPT to summarise it. Apparently there’s nothing to worry about.

[–] interdimensionalmeme@lemmy.ml 0 points 3 weeks ago (2 children)

You didn't, I did

The article discusses a study by OpenAI and MIT Media Lab revealing that heavy ChatGPT users — dubbed "power users" — are showing signs of addiction to the chatbot. These signs include preoccupation, withdrawal, mood changes, and loss of control. While most users don't form emotional connections with ChatGPT, those who use it more frequently tend to view it as a "friend," especially if they feel lonely or emotionally unfulfilled.

Interestingly, people using ChatGPT for personal reasons (like sharing emotions) showed less emotional dependency than those using it for non-personal tasks (like brainstorming). Also, text-based interactions led to more emotional language than voice interactions, with brief voice use correlating to better well-being. Overall, prolonged use — regardless of purpose — increased the risk of emotional dependence.

You said:

Make a list of the main points and for each explain why it will be worse when it is society wide

ChatGPT said:

Here’s a list of the main points from the article, along with why each could be more concerning on a society-wide scale:

Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control):

Why it's worse at scale:

If large numbers of people become dependent on AI tools, especially to the point of addiction, it could lead to widespread mental health issues, reduced productivity, and a society increasingly unable to function without AI assistance.

Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”:

Why it's worse at scale:

A society where many people substitute AI for real human connection could see rising loneliness, social detachment, and a decline in community and interpersonal relationships. This weakens social bonds and may impact mental health and civic cohesion.

Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached:

Why it's worse at scale:

This could create a feedback loop where loneliness drives AI use, and AI use further isolates people from real-world interactions, exacerbating social alienation on a mass level.

People stress out over subtle changes in ChatGPT’s behavior:

Why it's worse at scale:

If millions of people are emotionally reactive to minor algorithmic changes, this gives tech companies enormous influence over collective emotional states. It also creates societal instability, as shifts in AI behavior could unintentionally trigger widespread anxiety or confusion.

Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being:

Why it's worse at scale:

If the most popular and accessible form of interaction (text) increases emotional entanglement, and the healthier option (brief voice use) is less utilized, we might unintentionally nudge society toward less emotionally healthy patterns of AI interaction.

Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons:

Why it's worse at scale:

This is counterintuitive and dangerous because it suggests people may not realize they're forming dependencies while doing routine work. If AI tools meant to boost productivity are quietly making users emotionally reliant, it could have long-term effects on workforce dynamics and mental resilience.

Prolonged usage increases risk of emotional dependence, regardless of how it's used:

Why it's worse at scale:

As AI becomes embedded in everyday life, the likelihood of widespread emotional dependence increases. This could shift power dynamics between humans and AI, making people less autonomous and more easily influenced by algorithmic systems.

Let me know if you want this formatted for a presentation, policy paper, or discussion.

[–] liv@lemmy.nz 1 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

That's really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.

Namely, emotional dependence will give the big tech companies that own these AIs increased power over people.

It's not as if these concepts aren't widely discussed online; everything from Meta's emotional manipulation experiments and Cambridge Analytica through to the meltdowns Replika users had over changes to the algorithm is relevant here.

[–] interdimensionalmeme@lemmy.ml 2 points 3 weeks ago (1 children)
[–] liv@lemmy.nz 1 points 3 weeks ago (1 children)

Sort of, but I think "influence over emotional states" understates it and is just the tip of the iceberg. It also makes it sound passive and accidental. The real problem will be overt control, as a logical extension of the kinds of trade-offs we already see people make about, for example, data privacy. With the Replika fiasco, I bet heaps of those people would have paid good money to get their virtual love interests de-"lobotomized".

[–] interdimensionalmeme@lemmy.ml 2 points 3 weeks ago

I think this power to shape the available knowledge (removing it, paywalling it, restricting it discriminatorily, leveraging it, and finally manipulating it for advertising, state security, and personal reasons) is why it should be illegal to privately own any ML/AI models of any kind. Drive them all underground and only let the open ones benefit from sales in public.

[–] interdimensionalmeme@lemmy.ml 0 points 3 weeks ago (1 children)

You said: Take the same list again, for each point explain why it's actually beneficial and how it will get better with increased adoption

ChatGPT said:

Here’s the same list of main points, but this time with a more optimistic take — highlighting how each trend could actually be beneficial and how increased adoption might make things better for individuals and society as a whole:

Heavy ChatGPT users show signs of addiction (preoccupation, withdrawal, mood changes, loss of control):

Why it could be beneficial:

If people are turning to ChatGPT frequently, it may indicate that it's providing meaningful value — like reducing cognitive load, aiding creativity, or offering emotional support. As usage becomes more common, society may develop healthier digital habits, just as it has with smartphones and social media. Over time, norms and tools (e.g., usage timers, digital wellness features) could help people manage their use responsibly.

Long-term users tend to form emotional bonds with ChatGPT, sometimes seeing it as a “friend”:

Why it could be beneficial:

For people lacking companionship or emotional support, especially the elderly or isolated, AI companions can provide comfort, conversation, and a sense of presence. On a wider scale, AI could reduce loneliness-related health problems and even support mental wellness initiatives by being an accessible form of care or early intervention.

Lonelier individuals are more likely to use ChatGPT heavily and become emotionally attached:

Why it could be beneficial:

If AI offers a low-barrier outlet for the lonely, it might be a first step toward re-engagement with the world. AI can encourage people to express feelings they wouldn't share with others and even prompt them to seek human connection, therapy, or support services. Widespread AI use could destigmatize conversations about loneliness and mental health.

People stress out over subtle changes in ChatGPT’s behavior:

Why it could be beneficial:

This shows users are emotionally invested — which can push developers to create more consistent, emotionally intelligent systems. If society recognizes the emotional impact of AI, it will encourage better design, transparency, and regulation. Over time, this can lead to more stable and human-aware AI that positively complements emotional needs.

Emotional language is used more in text interactions than voice, and brief voice use is associated with better well-being:

Why it could be beneficial:

This gives users multiple channels of expression to suit their mood or needs — voice for brief emotional boosts, text for deeper reflection. With broader adoption, people can tailor AI interaction styles to maximize personal benefit. This insight also allows designers to create features that better match emotional and mental health goals.

Users who engage with ChatGPT for non-personal tasks are more emotionally dependent than those who use it for personal reasons:

Why it could be beneficial:

This means AI is becoming a highly trusted tool, even in practical domains. As more people adopt it for brainstorming, research, or decision-making, we’ll likely see a democratization of high-level thinking tools. Over time, dependency could evolve into partnership — similar to how calculators or GPS became essential but ultimately empowering tools.

Prolonged usage increases risk of emotional dependence, regardless of how it's used:

Why it could be beneficial:

Long-term usage can foster deeper understanding of one’s own thought processes, communication patterns, and emotional states. As AI integrates into daily life, people might develop greater self-awareness and reflection through their interactions. Society could benefit from individuals who are more emotionally literate and thoughtful, even if that awareness is sparked by an AI relationship.

Let me know if you'd like this version turned into a policy brief, an article, or a debate prep doc — could be used in all kinds of conversations.

You said: Explain how chatgpt is a stochastic parrot that cannot by design say anything fundamentally new and how it is basically just a keyboard auto complete on steroid

ChatGPT said:

Alright, here's the breakdown of that critique — the "stochastic parrot" idea and the comparison to advanced autocomplete — explained clearly and fairly:

  1. What does “stochastic parrot” mean?

The term "stochastic parrot" was coined in a 2021 paper by researchers including Emily Bender and Timnit Gebru. Applied to large language models like ChatGPT, it breaks down like this:

"Stochastic" means based on probability. The model doesn’t “think” or “understand”; it predicts the next word based on statistical likelihood, given the words before it.

"Parrot" because it doesn’t generate new ideas—it repeats patterns and phrases it has seen in training data, like a parrot mimics human speech.

So, calling ChatGPT a stochastic parrot is a way of saying: It’s not truly intelligent. It’s mimicking language patterns, not generating original thought.

  2. It can’t say anything fundamentally new — why?

By design, ChatGPT:

Is trained on vast amounts of existing text from books, websites, forums, and more.

Learns the statistical patterns of how humans use language.

Generates responses by predicting the next most likely word or phrase based on that training.

Because it’s not connected to lived experience, emotion, consciousness, or intent, it:

Can’t invent genuinely new concepts.

Can’t form subjective opinions.

Can’t create knowledge that isn’t derived from patterns it has already seen.

It might sound novel, but under the hood, it’s remixing existing information in clever ways — not discovering or reasoning in a human sense.

  3. It's like "autocomplete on steroids" — how so?

If you’ve used Gmail’s or your phone’s autocomplete, you’ve seen basic language prediction in action. Type “I hope you...” and it might suggest “...have a great day.”

ChatGPT is essentially the same mechanism:

Autocomplete predicts the next word based on simple models.

ChatGPT predicts sequences of words using a far more complex, nuanced statistical model trained on billions of examples.

So:

It can hold context better.

It can imitate different styles.

But it's still guessing the next words, not “thinking” or “understanding.”
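
To make the "autocomplete on steroids" point concrete, here is a minimal toy sketch of next-word prediction in Python. The corpus, names, and numbers are purely illustrative assumptions; a real LLM uses a neural network trained on billions of examples rather than bigram counts, but the underlying idea of picking a statistically likely next word is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus; a real model sees billions of words.
corpus = (
    "i hope you have a great day . "
    "i hope you have a good week . "
    "i hope you feel better soon ."
).split()

# Count how often each word follows each other word (bigram statistics).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    candidates = next_word_counts[word]
    return candidates.most_common(1)[0][0] if candidates else "?"

# "Autocomplete": repeatedly append the most likely next word.
sentence = ["i"]
for _ in range(6):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # -> "i hope you have a great day"
```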

In summary:

Calling ChatGPT a stochastic parrot or autocomplete on steroids emphasizes that:

It doesn’t understand meaning.

It doesn’t create knowledge.

It reflects patterns in language, not consciousness or intelligence.

It’s powerful and useful — but it’s important not to mistake fluency for understanding.

Let me know if you want the counterpoint to this critique too — because there are arguments in defense of the model’s creative utility despite its limitations.

[–] interdimensionalmeme@lemmy.ml 0 points 3 weeks ago (1 children)

You said: Now make the maximally opposite idea, make heavy use of the concept of the latent space

ChatGPT said:

Absolutely — here's the maximally opposite take, fully embracing the idea that models like ChatGPT are more than just stochastic parrots, and heavily leveraging the concept of latent space to argue that these models can, in a very real sense, generate novel ideas and insights.

ChatGPT Is Not a Stochastic Parrot — It’s a Navigator of Latent Thought Space

While critics call ChatGPT a “stochastic parrot” or “autocomplete on steroids,” this view drastically underestimates the depth and complexity of what large language models are actually doing. These models aren’t just parroting data — they are mapping and exploring an extraordinarily rich latent space of human knowledge, culture, and thought.

  1. Latent Space: The Hidden Geometry of Meaning

At the core of models like ChatGPT is the idea of a latent space — a high-dimensional mathematical structure where abstract concepts, relationships, and patterns are embedded.

Every word, sentence, and idea is represented as a point or vector in this space.

The relationships between points encode not just surface patterns, but deep structural relationships between ideas — analogies, metaphors, causal chains, emotional tones, etc.

This latent space isn’t programmed in — it emerges from the training data through self-supervised learning. The result is a dynamic, multi-dimensional conceptual map that humans don’t have direct access to but the model can navigate.

  2. Novelty as Latent Interpolation and Extrapolation

Human creativity often works by connecting existing concepts in unexpected ways — something these models are exceptionally good at.

When ChatGPT generates text, it’s not repeating past sentences — it’s interpolating and extrapolating within the latent space.

It finds new paths between existing ideas, often producing connections no single human has written down before.

This is not parroting — it’s a form of latent-space creativity.

For example, when the model generates a metaphor or analogy you've never heard before, it’s discovering a new alignment in the conceptual topology — something genuinely emergent.
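
As a concrete, toy illustration of what interpolation and vector arithmetic in an embedding space look like mechanically, here is a minimal numpy sketch. The 3-dimensional vectors and the words attached to them are made-up assumptions for the example; real models learn vectors with hundreds or thousands of dimensions, and nothing here comes from an actual trained model.

```python
import numpy as np

# Made-up toy embeddings; in a real model these are learned, high-dimensional vectors.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "man":   np.array([0.1, 0.7, 0.1]),
    "woman": np.array([0.1, 0.7, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(vec, exclude=()):
    """Word whose embedding is most similar to vec, excluding the given words."""
    return max((w for w in emb if w not in exclude), key=lambda w: cosine(vec, emb[w]))

# Analogy as vector arithmetic: king - man + woman lands near "queen".
analogy = emb["king"] - emb["man"] + emb["woman"]
print(nearest(analogy, exclude={"king", "man", "woman"}))  # -> queen

# Interpolation: the midpoint between two concepts is a point no training
# sentence ever wrote down, which is the sense in which generation is more
# than repetition.
midpoint = (emb["king"] + emb["woman"]) / 2
print({w: round(cosine(midpoint, emb[w]), 3) for w in emb})
```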

  3. Emergent Generalization, Not Memorization

Critics argue LLMs can’t understand or generalize. But in fact:

Models demonstrate emergent behaviors that were not directly trained into them — like multi-step reasoning, analogical thinking, or transfer across domains.

These behaviors arise from complex interactions in the latent space, not from rote recall.

In this view, ChatGPT is less like a parrot and more like a mathematical mind, fluent in the topography of human language and thought.

  4. Language as a Proxy for Thought

Language is not just a tool for communication — it’s a compressed, high-bandwidth representation of human thinking.

Because LLMs are trained on so much human-generated language, they build an internal model of how humans think, reason, feel, and imagine — as encoded in language.

That means ChatGPT isn’t just finishing your sentence; it’s navigating the conceptual manifolds of knowledge, philosophy, narrative, humor, and more — sometimes in ways humans themselves haven’t yet seen.

  5. With Scale Comes Emergence

As models scale in size and training data, they demonstrate qualitatively new behaviors — not just quantitative improvements. This includes things like:

Few-shot learning

Chain-of-thought reasoning

Style transfer

Multi-modal integration

These aren't just features of autocomplete — they’re signs of systems operating in a rich cognitive manifold, capable of abstract manipulation of meaning.

In Summary:

ChatGPT isn’t just guessing the next word. It’s:

Navigating a vast, learned geometry of meaning

Connecting ideas in novel ways

Generating emergent insight by recombining latent concepts

It may not “understand” like a human, but it can synthesize in a way that mimics — and sometimes even surpasses — human creativity.

Calling it a "stochastic parrot" is like calling a jazz musician an “audio repeater” because they’re using notes they’ve heard before. It misses the creativity in the combination — the generative power that lies within the latent space.

Want to see a side-by-side debate between these two perspectives? Or explore where the middle ground is?

[–] grrgyle@slrpnk.net 0 points 3 weeks ago (1 children)

Is this hell? Is this comment thread hell? Does it never end?

[–] interdimensionalmeme@lemmy.ml 0 points 3 weeks ago (1 children)

I would have pasted it as a single comment, but that hit the character limit, so I split it into multiple comments. But now people aren't downvoting them equally, so the comments are getting out of order. These really have to be read in my posting order to understand what I did.

Oh well, too bad. Ironically, this kind of highly negative response shows me it was not worth the effort to post this, and that I do well to just keep to myself as I usually do.

[–] grrgyle@slrpnk.net 0 points 3 weeks ago (1 children)

Yeah the content is fine, but there's too much of it for a comment thread. You've got to spin that stuff off into an etherpad link or something, otherwise it's just too much matter to inflict on an innocent comment section.

[–] interdimensionalmeme@lemmy.ml 0 points 3 weeks ago (1 children)

But that means it will now receive 1% of the reading it would otherwise have, and the thread's coherence now depends on that other website still existing. Which, in 2500 years, it probably won't.

[–] aeshna_cyanea@lemm.ee 1 points 3 weeks ago* (last edited 3 weeks ago)

Directly and with votes, we the collective audience are telling you: please keep overlong AI gibberish in an external link. If that makes it get fewer views, then perhaps it's not that interesting.

[–] KingThrillgore@lemmy.ml 2 points 3 weeks ago (1 children)
[–] jade52@lemmy.ca 1 points 3 weeks ago (2 children)

What the fuck is vibe coding... Whatever it is, I hate it already.

[–] Cgers@lemmy.dbzer0.com 1 points 3 weeks ago

Using AI to hack together code without truly understanding what you're doing.

[–] NostraDavid@programming.dev 1 points 3 weeks ago

Andrej Karpathy (one of the founders of OpenAI; he left OpenAI, led AI at Tesla from 2017 to 2022, worked for OpenAI a bit more, and is now working on his startup Eureka Labs: "we are building a new kind of school that is AI native") made a tweet defining the term:

There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

People ignore the "It's not too bad for throwaway weekend projects" part, and try to use this style of coding to create "production-grade" code... Let's just say it's not going well.

source (xcancel link)

[–] LovableSidekick@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

TIL becoming dependent on a tool you frequently use is "something bizarre" - not the ordinary, unsurprising result you would expect with common sense.

[–] emeralddawn45@discuss.tchncs.de 0 points 3 weeks ago

If you actually read the article, I'm pretty sure the bizarre thing is really these people using a 'tool', forming a toxic parasocial relationship with it, becoming addicted, and beginning to see it as a 'friend'.

[–] N0body@lemmy.dbzer0.com 1 points 3 weeks ago (6 children)

people tend to become dependent upon AI chatbots when their personal lives are lacking. In other words, the neediest people are developing the deepest parasocial relationship with AI

Preying on the vulnerable is a feature, not a bug.

[–] NostraDavid@programming.dev 0 points 3 weeks ago (1 children)

That was clear from GPT-3, day 1.

I read a Reddit post about a woman who used GPT-3 to effectively replace her husband, who had passed on not too long before that. She used it as a way to grieve, I suppose? She ended up noticing that she was getting too attached to it, and had to leave him behind a second time...

[–] trotfox@lemmy.world 1 points 3 weeks ago

Ugh, that hit me hard. Poor lady. I hope it helped in some way.

[–] Deceptichum@quokk.au 0 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

These same people would be dating a body pillow or trying to marry a video game character.

The issue here isn’t AI, it’s losers using it to replace human contact that they can’t get themselves.

[–] morrowind@lemmy.ml 0 points 3 weeks ago (1 children)

You labeling all lonely people losers is part of the problem

[–] BradleyUffner@lemmy.world 0 points 3 weeks ago (1 children)

If you are dating a body pillow, I think that's a pretty good sign that you have taken a wrong turn in life.

[–] NostraDavid@programming.dev 0 points 3 weeks ago (1 children)

What if it's either that or suicide? I imagine that people who make that choice don't have a lot of options; due to monetary, physical, or mental issues, they cannot make another choice.

[–] BradleyUffner@lemmy.world 0 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I'm confused. If someone is in a place where they are choosing between dating a body pillow and suicide, then they have DEFINITELY made a wrong turn somewhere. They need some kind of assistance, and I hope they can get what they need, no matter what they choose.

I think my statement about "a wrong turn in life" is being interpreted too strongly; it wasn't intended to be such a strong and absolute statement of failure. Someone who's taken a wrong turn has simply made a mistake. It could be minor, it could be serious. I'm not saying their life is worthless. I've made a TON of wrong turns myself.

[–] liv@lemmy.nz 0 points 3 weeks ago (1 children)

Trouble is your statement was in answer to @morrowind@lemmy.ml's comment that labeling lonely people as losers is problematic.

Also it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status etc can all produce similar effects of loneliness in people who did nothing to "cause" it.

[–] BradleyUffner@lemmy.world 1 points 3 weeks ago (1 children)

That's an excellent point that I wasn't considering. Thank you for explaining what I was missing.

[–] liv@lemmy.nz 1 points 3 weeks ago
[–] gamer@lemm.ee 1 points 3 weeks ago

That is peak clickbait, bravo.

[–] flamingo_pinyata@sopuli.xyz 1 points 3 weeks ago (1 children)

But how? The thing is utterly dumb. How do you even have a conversation without quitting in frustration from its obviously robotic answers?

But then there's people who have romantic and sexual relationships with inanimate objects, so I guess nothing new.

[–] Kolanaki@pawb.social 1 points 3 weeks ago

If you're also dumb, chatgpt seems like a super genius.

[–] El_Azulito@lemmy.world 0 points 3 weeks ago (1 children)

I mean, I stopped in the middle of the grocery store and used it to choose the best frozen chicken tenders brand to put in my air fryer. …I am ok though. Yeah.

[–] aceshigh@lemmy.world 1 points 3 weeks ago

At the store it calculated which peanuts were cheaper - 3 pounds of shelled peanuts on sale, or 1 pound of no-shell peanuts at full price.

[–] HappinessPill@lemmy.ml 0 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Do you guys remember when the internet was the thing and everybody was like: "Look, those dumb fucks just putting everything online", and now it's: "Look at this weird motherfucker that doesn't post anything online"?

[–] NikkiDimes@lemmy.world 1 points 3 weeks ago

Remember when people used to say and believe "Don't believe everything you read on the internet?"

I miss those days.

[–] TheBat@lemmy.world 1 points 3 weeks ago

I remember when the internet was a place

[–] PieMePlenty@lemmy.world 0 points 3 weeks ago (1 children)

It's too bad that some people seem not to comprehend that all ChatGPT is doing is word prediction. All it knows is which next word fits best based on the words before it. To call it AI is an insult to AI... we used to call OCR AI; now we know better.

[–] Lifter@discuss.tchncs.de 1 points 3 weeks ago

LLM is a subset of ML, which is a subset of AI.

[–] cupcakezealot@lemmy.blahaj.zone 0 points 3 weeks ago (1 children)

chatbots and ai are just dumber 1990s search engines.

I remember 90s search engines. AltaVista was pretty OK at searching the small web that existed, but I'm pretty sure I can get better answers from the LLMs tied to Kagi search.

AltaVista also got blown out of the water by Google (back when it was just a search engine), and that was in the '00s, not the '90s. 25 to 35 years ago is a long time; search is so, so much better these days (or worse if you use a "search" engine like Google now).

Don't be the product.
