this post was submitted on 14 Mar 2026
343 points (97.5% liked)

Technology

[–] YetAnotherNerd@sopuli.xyz 130 points 2 weeks ago (2 children)

I’m getting that more and more. “I asked ChatGPT and it said”. Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.

Make sure they know they just lost input rights the next time. No, I don't ask Harry; he just quoted GPT last time, and I'd already asked it this time, so there was no reason to involve him. Nothing is worse for a lead than people not wanting them to lead because they've abdicated the job to spicy autocorrect.

[–] Zos_Kia@jlai.lu 44 points 2 weeks ago (1 children)

Dude, we work for the same company and I could have typed that in, and maybe I did. I wanted your experience with it, that’s why I asked you.

To me it's like sending the "let me google that for you" link to answer a question. It's just bad form. I don't want your whole reasoning trace, man, I just want to know what you understand of it, and maybe you'll catch some detail I'm missing or whatever. It's simple: I won't read LLM output. My colleagues know it and I get shit for it, but no, I am not digesting this material for you. Give me a three-bullet-point version in your own words. The point is not just the data exchange; it's also to make sure you are aware of the answer and we share a common truth.

Or failing that, just give me the fucking prompt and at least I'll know if you understand the question.

[–] ulterno@programming.dev 14 points 2 weeks ago (1 children)

Or failing that, just give me the fucking prompt and at least I’ll know if you understand the question.

This one's really nice. I should make this my go-to response to anyone doing that.

[–] Zos_Kia@jlai.lu 8 points 2 weeks ago

I'd love to take the credit, but I actually stole it from that link that made the rounds on Hacker News.

[–] AliasAKA@lemmy.world 20 points 2 weeks ago (1 children)

I think this is the way. After enough instances of "[coworker] wasn't asked because they only respond with LLMs, so I just ask the LLMs directly. I'm not sure what [coworker]'s expertise is anymore, so I just don't consult them," I suspect the coworker may in fact stop responding with LLMs.

[–] YetAnotherNerd@sopuli.xyz 5 points 2 weeks ago (1 children)

Maybe. But they may just paste it without GPT attribution, so we’ll see.

[–] AliasAKA@lemmy.world 4 points 2 weeks ago

In my experience it is obvious. Calling people on it also makes them feel embarrassed usually. I put something like “I can just ask an LLM myself if I wanted this output. Please provide your own commentary.” If I were a manager and I had an employee just copy pasting that kind of output, I’d probably wonder if that employee actually contributes anything.

[–] neclimdul@lemmy.world 53 points 2 weeks ago (4 children)

A lot of the time I feel like it's more than lazy; it's rude.

Either it's something I'm supposed to know, and you think I'm dumber than ChatGPT or too dumb to look it up myself.

Or it's something you're supposed to know, and you don't think I'm worth the time to give me your opinion.

Either way, feels like a fuck you.

[–] Zeddex@sh.itjust.works 11 points 2 weeks ago* (last edited 2 weeks ago)

Yep. Someone on another team at my work does this constantly.

Them: I'm having a problem with x

Me: Ok, do this

Them: But Copilot said...

Then why are you even asking me? Stop bothering me and wasting my time.

[–] brotato@slrpnk.net 9 points 2 weeks ago

I 100% agree. To me it sort of feels like that old “Let me Google that for you” website. Like I wouldn’t have asked you something if I wanted you to just prompt ChatGPT. I want your informed opinion. But I guess informed opinions are hard to come by these days.

[–] UntitledQuitting@reddthat.com 5 points 2 weeks ago

That's the one set in Japan, right?

[–] d00ery@lemmy.world 46 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Someone literally copy and pasted a whole ChatGPT comment in an email reply to some questions I'd asked them. I was somewhat insulted.

[–] NekoKoneko@lemmy.world 33 points 2 weeks ago

You're right to feel insulted. LLMs are verbose and unreliable often enough that you have to check any work that comes out (or be negligent).

So what's usually happening is that someone is saving their time by spending yours. They saved the time normally needed to write a thoughtful reply by shifting the time and cognitive cost of reading and verifying onto you, with AI as an excuse (often not without condescension, a kind of "virtue signaling" driven by C-suite AI boosting). The slop output looks like "work product," but it is neither: it took no work, and it is only a facade of a "product" because it's unverified.

They are being selfish, and it is objectively an insulting act.

[–] Armok_the_bunny@lemmy.world 4 points 2 weeks ago

Put them on a list where any and every email they send you gets fed into GPT and replied to without you ever reading it. Then, to make sure they know, explain what's happening in your signature.

[–] NottaLottaOcelot@lemmy.ca 26 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I’m flabbergasted that they admit that ChatGPT said it, rather than copy-pasting it and pretending it’s their own work and hoping you don’t read it closely.

Even plagiarism has become lazy these days. At least do me the respect of concocting a lie.

[–] Eranziel@lemmy.world 24 points 2 weeks ago (2 children)

Some people seem to use it as an appeal to authority. This only works if you think ChatGPT is an authority on anything, though.

[–] NottaLottaOcelot@lemmy.ca 9 points 2 weeks ago (1 children)

I suppose you’re right, which is odd to me as the phrase “ChatGPT says…” automatically makes me question the validity of the information

[–] ashar@infosec.pub 7 points 2 weeks ago

It makes me doubt the validity of the person who wrote "ChatGPT said"

[–] k0e3@lemmy.ca 4 points 2 weeks ago

I find some of my friends and family say it as sort of a caveat. It's like saying, "here's the bare minimum 'research' I did. Take it with a huge grain of salt..." At least, that's how I interpret it from their tone of voice since they sound like they feel bad for admitting it.

[–] HereIAm@lemmy.world 11 points 2 weeks ago

I have a work colleague who does the copy-pasting. He asks me how I can tell when he's using AI to write git commit messages, when there's a sudden spike in capitalised words, correct grammar, emojis, and bullet points (and, on top of that, the message sometimes has nothing to do with what's in the changes). It's infuriating when he uses it in a discussion. I thought his lack of skill at making himself understood was bad, but arguing with what is essentially a chatbot is so much worse.

[–] RegularJoe@lemmy.world 21 points 2 weeks ago

ChatGPT isn’t on the team.

Except that when someone pastes "ChatGPT thinks that {wall of AI-generated text}",

that person has put ChatGPT on the team. And if there was no human input, the competition is free to use it and mock it word for word. Use fear, uncertainty, and doubt to convince your team: once it's published, anyone can use that output, including your competition.

The U.S. Copyright Office’s January 2025 report on AI and copyrightability reaffirms the longstanding principle that copyright protection is reserved for works of human authorship. Outputs created entirely by generative artificial intelligence (AI), with no human creative input, are not eligible for copyright protection.

https://natlawreview.com/article/copyright-offices-latest-guidance-ai-and-copyrightability

[–] bold_atlas@lemmy.world 20 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

When you let AI do your talking for you, you are voluntarily making yourself redundant.

BTW, your chatbot is no Cyrano de Bergerac. It does not fool others nearly as much as you think it does. And the more you use it, the more "smell blind" you become to it, just like someone who has no idea they reek because their brain filtered it out long ago. Your use of AI becomes more and more obvious and cringe.

[–] Jakeroxs@sh.itjust.works 1 points 2 weeks ago

I find the exact opposite: as I use AI more, I can more easily tell when others use it and try to hide it. People at work do it frequently.

[–] lechekaflan@lemmy.world 17 points 2 weeks ago (3 children)

It is lazy, and infuriating as it becomes mainstream.

[–] leriotdelac@lemmy.zip 10 points 2 weeks ago (3 children)

It's the same as "Google said this". Before AI, Google couldn't say anything; it's a search engine. Same with GPT: it's a tool for accessing information from different sources.

Just having information out on the Internet / in a search index / accessible to an LLM doesn't make it relevant or credible...

And what buffles me: it's pretty easy to set GPT up to cite its sources and provide links, filtering to sources the user trusts. Why do none of my friends do it? Why is "GPT said" even an argument in a discussion?...

[–] Dozzi92@lemmy.world 7 points 2 weeks ago

Except people just straight-up copy-paste GPT output. At the very least, people used to say "I googled and got this result and that result." We've taken what was minimal work and made it minimaler.

[–] thedeadwalking4242@lemmy.world 5 points 2 weeks ago

Except information from Google was human-made, at least to some degree. With LLMs there is no guarantee.

[–] nullroot@lemmy.world 1 points 2 weeks ago

.... Buffalo buffalo buffles

[–] EncryptKeeper@lemmy.world 9 points 2 weeks ago (1 children)

I got this response from a 70+ Catholic Priest. Quite literally nothing in this world is sacred or real anymore.

[–] ulterno@programming.dev 8 points 2 weeks ago

Considering that, despite going over level 70, he decided to stay a Catholic Priest instead of Saint, Warlock, or Archmage, that should already make you question his decision-making ability.

[–] ab4kus@feddit.org 7 points 2 weeks ago (1 children)

"According to leading language models..."

[–] SethTaylor@lemmy.world 6 points 2 weeks ago

"This idea was tested in a state-of-the-art simulation" - Jerry Smith

[–] INHALE_VEGETABLES@aussie.zone 4 points 2 weeks ago

I'd generate an AI response to this but CBF.

[–] GaMEChld@lemmy.world 2 points 2 weeks ago

A simulation is only as accurate as the person's ability to rationalize. It should only be used by people who can already out-think it, because you need to be able to challenge and correct it.

[–] ItsMeForRealNow@lemmy.world 1 points 1 week ago

Counterpoint: I say it to mean "don't trust this shit, but since we're out of ideas, we can check this out."
