this post was submitted on 18 Feb 2026
921 points (99.4% liked)

Technology

81451 readers
4451 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
[–] tabular@lemmy.world 227 points 1 day ago* (last edited 1 day ago) (6 children)

Before hitting submit I'd worry I've made a silly mistake which would make me look a fool and waste their time.

Do they think the AI written code Just Works (TM)? Do they feel so detached from that code that they don't feel embarrassment when it's shit? It's like calling yourself a fiction writer and putting "written by (your name)" on the cover when you didn't write it, and it's nonsense.

[–] Pamasich@kbin.earth 34 points 1 day ago

Nowadays people use OpenClaw agents which don't really involve human input beyond the initial "fix this bug" prompt. They independently write the code, submit the PR, argue in the comments, and might even write a hit piece on you for refusing to merge their code.

[–] kadu@scribe.disroot.org 163 points 1 day ago (1 children)

I'd worry I've made a silly mistake which would make me look a fool and waste their time.

AI bros have zero self-awareness and shame, which is why I keep saying the best tool for fighting it is making it socially shameful.

Somebody comes along saying "Oh look at the image is just genera..." and you cut them with "looks like absolute garbage right? Yeah, I know, AI always sucks, imagine seriously enjoying that hahah, so anyway, what were you saying?"

[–] MITM0@lemmy.world 25 points 1 day ago (2 children)

Not good enough, you need to poison the data

[–] leftzero@lemmy.dbzer0.com 20 points 1 day ago (2 children)

I don't want my data poisoned, I'd rather just poison the AI bros.

[–] k0e3@lemmy.ca 13 points 1 day ago

Yeah but then their Facebook accounts will keep producing slop even after they're gone.

[–] MITM0@lemmy.world 4 points 1 day ago

Tempting, but even that is not good enough as another reply pointed out

[–] Tyrq@lemmy.dbzer0.com 7 points 1 day ago

the data eventually poisons itself when it can do nothing but refer to its own output from however many generations of hallucinated data

[–] atomicbocks@sh.itjust.works 74 points 1 day ago (3 children)

From what I have seen Anthropic, OpenAI, etc. seem to be running bots that are going around and submitting updates to open source repos with little to no human input.

[–] notso@feddit.org 48 points 1 day ago (1 children)

You guys, it's almost as if AI companies try to kill FOSS projects intentionally by burying them in garbage code. Sounds like they took something from Steve Bannon's playbook by flooding the zone with slop.

[–] sqw@lemmy.sdf.org 0 points 6 hours ago

at least with foss the horseshit is being done in public.

[–] wonderingwanderer@sopuli.xyz 2 points 16 hours ago

Doesn't someone have to review those submissions before they're published?

[–] Resonosity@lemmy.dbzer0.com 2 points 22 hours ago

Can Cloudflare help prevent this?

[–] Feyd@programming.dev 106 points 1 day ago (1 children)

LLM code generation is the ultimate Dunning-Kruger enhancer. They think they're 10x ninja wizards because they can generate unmaintainable demos.

[–] YetAnotherNerd@sopuli.xyz 55 points 1 day ago (1 children)

They’re not going to maintain it - they’ll just throw it back to the LLM and say “enhance”.

Sigh, now in CSI when they enhance a grainy image, the AI will make up a fake face and send them searching for someone who doesn't exist, or it'll use the face of someone in the training set and they'll go after the wrong person.

Either way, I have a feeling there'll be some ENHANCE failure episode due to AI.

[–] SkaveRat@discuss.tchncs.de 63 points 1 day ago (1 children)

Do they think the AI written code Just Works

yes.

literally yes.

It's insane

[–] turboSnail@piefed.europe.pub 8 points 1 day ago (2 children)

That's how you know who never even tried to run the code.

[–] bjoern_tantau@swg-empire.de 4 points 18 hours ago (1 children)

Reminds me of one job where, shortly after I started, my boss asked if our entry test was too hard. They had gotten several submissions from candidates whose code wouldn't even run.

I imagine those types of people are vibe coding now.

[–] turboSnail@piefed.europe.pub 4 points 17 hours ago

Super lazy job applications… can’t even bother to put two minutes into vibing.

[–] SkaveRat@discuss.tchncs.de 7 points 22 hours ago (1 children)

that's the annoying part.

LLM code can range from "doesn't even compile" to "it actually works as requested".

The problem is, depending on what exactly was done, the model will move mountains to actually get it running as requested. And it will absolutely trash anything in its way, from "let's abstract this with 5 new layers" to "I'm going to refactor that whole class of objects to get this simple method in there".

The requested feature might actually work. 100%.

It's just very possible that it either broke other stuff, or made the codebase less maintainable.

That's why it's important that people actually know the codebase and know what they/the model are doing. Just going "works for me, glhf" is not a good way to keep a maintainable codebase.

[–] turboSnail@piefed.europe.pub 6 points 17 hours ago

LOL. So true.
On top of that, an LLM can also take you on a wild goose chase. When it gives you trash, you tell it to find a way to fix it. It introduces new layers of complication and installs new libraries without ever really approaching a solution. It’s up to the programmer to notice a wild goose chase like that and pull the plug early on.

That’s a fun little mini-game that comes with vibe coding.

[–] JustEnoughDucks@feddit.nl 7 points 1 day ago

I would think they'll have to combat AI code with an AI-code recognizer tool that auto-flags a PR or issue as AI, so maintainers can simply run through and triage them. If the contributor doesn't come back to explain the code and show test results proving it works, the PR gets auto-closed after a week or so.
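The triage rule described above could be sketched roughly like this (all names here are hypothetical, not an existing bot or API — just the decision logic):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Grace period before a flagged, unanswered PR is closed.
GRACE_PERIOD = timedelta(days=7)

@dataclass
class TriagedPR:
    flagged_as_ai: bool      # set by the (hypothetical) AI-code detector
    flagged_at: datetime     # when the flag was applied
    author_responded: bool   # contributor explained the code / posted test results

def should_auto_close(pr: TriagedPR, now: datetime) -> bool:
    """Close flagged PRs whose author stayed silent past the grace period."""
    if not pr.flagged_as_ai or pr.author_responded:
        return False
    return now - pr.flagged_at >= GRACE_PERIOD
```

A real bot would wire this up to the forge's API (e.g. GitHub's REST endpoints for labels and PR comments), but the core is just the flag, the clock, and whether the author showed up.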