this post was submitted on 19 Jul 2025
432 points (94.1% liked)

Technology


..without informed consent.

top 50 comments
[–] PlutoniumAcid@lemmy.world 45 points 2 days ago

For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can't rely on proof-of-thought anymore.

This is what makes AI so insidious. It's like email spam: it puts the burden on the reader to sort the ham from the spam.

[–] brsrklf@jlai.lu 164 points 3 days ago (12 children)

Every now and then I see a guy barging into a topic with nothing more than "I asked [some AI service] and here's what it said", followed by three paragraphs of AI-generated gibberish. And then, when it's not well received, they just don't seem to understand why.

It's baffling to me. Anyone can ask an AI. A lot of people specifically don't, because they don't want to battle with its output for an hour, trying to sort out where it got its information, whether it represented that information well, or whether it just hallucinated half of it.

And those guys come in posting a wall of text they may or may not have read themselves, and then they have the gall to go "What's the problem, is any of that wrong?"... Dude, the problem is that you have no fucking idea whether it's wrong yourself, you have nothing to back it up, and you've brought nothing but automated noise to the conversation.

[–] expr@programming.dev 35 points 2 days ago

I was trying to help onboard a new lead engineer, working through debugging his Caddy config on Slack. I'm clearly putting in effort to help him diagnose his issue, and he posts "I asked ChatGPT and it said these two lines need to be reversed", which was completely false (Caddy has a system for reordering directives) and honestly just straight-up insulting. Fucking pissed me off. People need to stop bringing AI slop into conversations. It isn't welcome and can fuck right off.

The actual issue? He forgot to restart his development server. 😡
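
For context on the Caddy detail: a Caddyfile isn't applied top-to-bottom; Caddy sorts directives into its own predefined order at load time, which is why "reverse these two lines" is usually a no-op. A minimal sketch (the domain, port, and paths are made-up placeholders):

```
# Hypothetical Caddyfile. Caddy applies its predefined directive
# order (redir runs before reverse_proxy no matter how they are
# written here), so swapping these two lines changes nothing.
example.com {
    reverse_proxy localhost:8080
    redir /old-path /new-path
}

# When you genuinely need a different ordering, Caddy provides
# the `order` global option instead:
# {
#     order respond before rewrite
# }
```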

[–] rob_t_firefly@lemmy.world 17 points 2 days ago

I am not sure the kind of people who think using the thieving bullshit slop machine is a fine thing to do can be trusted to have appropriate ideas about rudeness and etiquette.

[–] chihuamaranian@lemmy.ca 15 points 2 days ago (1 children)

Personally, I don't mind the "I asked AI and it said..." Because I can choose to ignore anything that follows.

Yes, I can judge the sender. But consent is still in my hands.

Otherwise, I largely agree with the article on its points, and also appreciate it raising the overall topic of etiquette given a new technology.

Like the shift to smart phones, this changes the social landscape.

[–] Auth@lemmy.world 5 points 2 days ago (1 children)

I really don't like "I asked AI and it said X", but then I realise that many people, including myself, will search Google and then relay random shit that seems useful, and I don't see how AI is much different. Maybe both are bad; I don't do either anymore. But I guess both are just a person trying to be helpful, and at the end of the day that's a good thing.

[–] JohnEdwa@sopuli.xyz 4 points 1 day ago* (last edited 1 day ago)

And now googling will just result in "I asked AI and it said X", since the first thing you get is the AI summary shit. A friend of mine does this constantly: we'll be in a Discord call, somebody will ask a question, and he'll google it and repeat the AI slop back as fact.

Half the time it's wrong.

[–] Pamasich@kbin.earth 4 points 2 days ago (1 children)

Here's a question regarding the informed consent part.

The article gives the example of asking whether the recipient wants the AI's answer shared.

"I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want."

Do you (I mean generally people reading this thread, not OP specifically) think Lemmy's spoiler formatting would count as informed consent if properly labeled as containing AI text? I mean, the user has to put in the effort to open the spoiler manually.

[–] erlend_sh@lemmy.world 2 points 1 day ago

Good question; that would qualify for me, yeh!

[–] Evotech@lemmy.world 33 points 3 days ago (1 children)

The worst is being in a technical role and having project managers and marketing people telling me how it is, based on some ChatGPT output.

Like shut the fuck up please, you literally don't know what you're talking about

[–] squaresinger@lemmy.world 14 points 3 days ago (1 children)

Sadly we had that problem before AI too... "Some dude I know told me this is super easy to do"

[–] lemmyknow@lemmy.today 8 points 2 days ago (1 children)
[–] Klear@lemmy.world 3 points 2 days ago

We all lose. Fatality!

[–] chronicledmonocle@lemmy.world 17 points 2 days ago

I work in a Technical Assistance Center for a networking company. Last night, while working, I got a ticket where the person kept sending troubleshooting summaries they asked ChatGPT to write.

Speedrun me not reading your ticket any%.

[–] OriginalUsername7@lemmy.world 76 points 3 days ago* (last edited 3 days ago) (4 children)

This is exactly something that has annoyed me in a sports community I follow back on Reddit. Posts with titles along the lines of “I asked ChatGPT what it thinks will happen in the game this weekend and here is what it said”.

Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

We’ve also learned nothing about the OPs opinion on the matter, other than maybe that they don’t have one. And even more to the point, it’s so intellectually lazy that it just feels like karma farming. “Ya I have nothing to add but I do love me them updoots”.

I would rather someone posted saying they know shit-all about the sport but are interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It's always the most milquetoast response possible, ironically adding less to the conversation than the question it's responding to.

But that argument always just feels overly combative for what is otherwise a pretty relaxed sports community. It’s just not worth having that fight there.

[–] Auth@lemmy.world 3 points 2 days ago

Old reddit would have annihilated that post.

[–] WhyJiffie@sh.itjust.works 12 points 3 days ago

Why? What does ChatGPT add to the conversation here? Asking the question directly in the subreddit would have encouraged the same discussion.

I guess it has some tabloid-like value, which, if that counts as value, says a lot about the other party.

[–] Cethin@lemmy.zip 6 points 3 days ago (2 children)

I would rather someone posted saying they know shit-all about the sport but are interested, than someone feigning knowledge by using ChatGPT as some sort of novel point of view, which it never is. It's always the most milquetoast response possible, ironically adding less to the conversation than the question it's responding to.

That's literally the point of them. They're supposed to generate what the most likely result would be. They aren't supposed to be creative or anything like that. They're supposed to be generic.
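
That "most likely result" is literal, too. Under greedy decoding, a language model emits the single highest-probability token at every step, which is exactly how you get the most generic continuation available. A minimal sketch of the idea (GPT-2 via Hugging Face transformers is used purely as a stand-in; any causal LM behaves the same way):

```python
# Greedy decoding: always take the single most probable next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("Who wins the game this weekend?", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(30):
        logits = model(ids).logits          # (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()    # the "most likely result"
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
```

Sampling with a temperature adds surface variety, but the distribution being sampled is still centered on the most typical continuation, so the output stays milquetoast.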

[–] kshade@lemmy.world 1 points 2 days ago

Treating an LLM like a novelty oracle seems okay-ish to me; it's a bit like predicting who will win the game by seeing which bowl a duck eats from. Except minus the cute duck, of course. At least nobody will take it too seriously, and those that do will probably see why they shouldn't.

Still annoying though.

[–] Allero@lemmy.today 15 points 2 days ago (1 children)

I like the premise behind this.

But how do we differentiate? Unless it's explicitly mentioned, it can be hard to tell the difference between AI output and a genuine human message.

All it takes is for the other side not to mention that the message is AI-generated, and we can be fooled for quite a while.

[–] rottingleaf@lemmy.world 4 points 2 days ago

You differentiate by only reading what your acknowledged peers post, and what their acknowledged peers post.

That works for communities of many people, but it requires a globally transparent ID for every user. The moment you're just interacting with some service on the web, it stops being good enough.

I actually like that, because it might mean that today's web, in its entirety, is no longer good enough.

The old web of services yielding linked hypertext could still work: a personal webpage is a person, and it's possible to devise common ways of verifying that. With many services, some good and some not, there's a way to technically separate them too.

An alternative to Usenet with global IDs for users and posts would work as well.

But a single platform-website for a whole category of interaction, serving generic executable dynamic content, is morally obsolete.

If that happens, I'm going to donate to OpenAI and whoever else makes it happen. Well, maybe not much, but I will.
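
For what it's worth, the "global transparent ID" part doesn't require anything exotic; public-key signatures are the standard building block. A toy sketch (Python with the `cryptography` package; the post structure is invented for illustration):

```python
# Toy illustration: an author's identity is their public key, and a
# post is only trusted if its signature verifies against that key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

author_key = Ed25519PrivateKey.generate()   # held privately by the author
author_id = author_key.public_key()         # published as the global ID

body = b"A human wrote this post."
signature = author_key.sign(body)           # attached to the post

try:
    author_id.verify(signature, body)       # peers check before trusting
    print("post verifiably comes from this ID")
except InvalidSignature:
    print("forged or tampered; discard")
```

Of course, this proves who posted, not whether a human or an LLM wrote the words; that part still comes down to which peers you choose to acknowledge.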

[–] audaxdreik@pawb.social 57 points 3 days ago (1 children)

Blindsight mentioned!

The only explanation is that something has coded nonsense in a way that poses as a useful message; only after wasting time and effort does the deception become apparent. The signal functions to consume the resources of a recipient for zero payoff and reduced fitness. The signal is a virus.

This has been my biggest problem with it. It places a cognitive load on me that wasn't there before, having to cut through the noise.

[–] Auth@lemmy.world 3 points 2 days ago (1 children)

Is Blindsight worth a read? It seemed interesting from the brief description.

[–] audaxdreik@pawb.social 3 points 2 days ago

Oh yes, I think Peter Watts is a great author. He's very good at tackling high concept ideas while also keeping it fun and interesting. Blindsight has a vampire in it in case there wasn't already enough going on for you 😁

Unrelated to the topic at hand, I also highly recommend Starfish by him. It was the first novel of his I read. A dark, psychological thriller about a bunch of misfits working a deep sea geothermal power plant and how they cope (or don't) with the situation at hand.

[–] Patch@feddit.uk 44 points 3 days ago* (last edited 3 days ago) (1 children)

If only the biggest problem was messages starting "I asked ChatGPT and this is what it said:"

A far bigger problem is people using AI to draft text and then posting it as their own. On social media like this, I can't count the number of times I've been midway through an otherwise normal discussion thread and only clocked two paragraphs into a comment that I was reading a chatbot's response. I feel like the deception stole time and brain cells from me for the moments spent reading it and trying to derive meaning from it.

And just this week I received an application from someone wanting work in my office which was very clearly AI generated. Obviously that person will not be offered any work. If you can't be bothered to write your own "why I want to work here" cover letter, then I can't be bothered to work with you.

[–] jj4211@lemmy.world 18 points 3 days ago* (last edited 3 days ago) (1 children)

I've seen emails at work that were AI-generated with no disclaimer. Then someone points out how wildly incorrect the email was, and the sender just says "oh whoops, not my fault, I just asked an LLM". They set things up to take credit if people liked it, and used "LLMs are just stupid" as an excuse when it didn't fly.

[–] nomy@lemmy.zip 6 points 3 days ago

In every business I've worked in, any email longer than a paragraph better have a summary and action items at the end or nobody is going to read it.

In business time is money, email should be short and to the point.

[–] Feyd@programming.dev 21 points 3 days ago

Yes. I am getting so sick and tired of people asking me for help and then proceeding to rain unhelpful suggestions from their LLM down on me while I'm trying to think through their problem. You wouldn't be asking for help if that stuff was helping you!

[–] LesserAbe@lemmy.world 14 points 3 days ago (2 children)

This is a good post.

Thinking about it some more, I don't necessarily mind if someone says "I googled it and..." and then provides a self-generated summary of what they found that's relevant to the discussion.

I wouldn't mind if someone did the same with an LLM response. But just like I don't want to read a copy-paste of ChatGPT results, I don't want to read someone copy-pasting search results with no human analysis.

[–] belit_deg@lemmy.world 7 points 3 days ago

I have a few colleagues (40 to 50 year olds) who are very skilled and likeable people, but who have horrible digital etiquette.

Expecting people to read regurgitated GPT summaries is the most obvious offence.

But another one that bugs me just as much is sharing links with no annotation. It could be a small article or a long-ass report or white paper with 140 pages. Like, you expect me to bother reading it, but you can't be bothered to say what's relevant about it?

I genuinely think it's well intentioned for the most part. They're just clueless about what makes for good digital etiquette.

[–] Almacca@aussie.zone 6 points 3 days ago

If you're going to use an LLM, at least follow the links it provides to the sources of its output. You really need to check its work.

[–] finitebanjo@lemmy.world 8 points 3 days ago

You're damn right, if somebody puts slop in my face I get visibly aggressive.

[–] jjjalljs@ttrpg.network 17 points 3 days ago

Sometimes people at my old job post AI stuff and I just tell them "stop using the lie machine"

[–] zapzap@lemmings.world 8 points 3 days ago (1 children)

I think sometimes when we ask people something we're not just seeking information. We're also engaging with other humans. We're connecting, signaling something, communicating something with the question, and so on. I use LLMs when I literally just want to know something, but I also try to remember the value of talking to other human beings as well.

[–] finitebanjo@lemmy.world 5 points 3 days ago (1 children)

You should pretty much assume that everything a chatbot says could be false, to a much higher degree than human-written content, which makes it effectively useless for your stated purpose.

[–] zapzap@lemmings.world 0 points 1 day ago (1 children)

That has not been my experience.

[–] finitebanjo@lemmy.world 1 points 1 day ago (1 children)

I gave advice; advice rarely matches what the recipient has already experienced, or people wouldn't feel the need to give it.

[–] zapzap@lemmings.world 0 points 10 hours ago
[–] Glitchvid@lemmy.world 4 points 3 days ago* (last edited 1 day ago)

What a coincidence: I was just reading sections of Blindsight again for an assignment (not directly related to its contents) and had a similar thought when re-parsing a section near the one in the OP. It's scary how closely the novel depicted something analogous to contemporary LLM output.
