this post was submitted on 30 Sep 2025
952 points (98.6% liked)

"No Duh," say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite, requiring developers to go in and find where the problems are - resulting in a net slowdown of development rather than a productivity gain.

[–] simplejack@lemmy.world 33 points 23 hours ago (1 children)

Might be there someday, but right now it’s basically a substitute for me googling some shit.

If I let it go ham and code everything, it mutates into insanity in a very short period of time.

[–] degen@midwest.social 29 points 23 hours ago (3 children)

I'm honestly doubting it will get there someday, at least with the current use of LLMs. There just isn't true comprehension in them, no space for consideration in any novel dimension. If it takes incredible resources for companies to achieve sometimes-kinda-not-dogshit, I think we might need a new paradigm.

[–] Jason2357@lemmy.ca 1 points 3 hours ago

They are statistical prediction machines. The more they output, the more of their "context window" (statistical prior) consists of the very output they generated. It's a fundamental property of the current LLM design that the snake will eventually eat enough of its tail to puke garbage code.
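The compounding effect described above can be sketched with a toy model (my own illustration, not from the thread): if each generated token has some small independent chance of introducing a mistake, and later tokens condition on earlier output, the odds of a long run staying clean decay exponentially.

```python
# Toy model of compounding generation error (hypothetical numbers).
# Assumes each self-conditioned token has an independent per-token error
# probability; real LLM errors are correlated, so this is only a sketch.
def p_clean(per_token_error: float, n_tokens: int) -> float:
    """Probability that n self-conditioned tokens are all mistake-free."""
    return (1.0 - per_token_error) ** n_tokens

# Even a 1% per-token slip rate collapses over a long generation.
for n in (50, 500, 5000):
    print(f"{n} tokens: {p_clean(0.01, n):.4g} chance of a clean run")
```

With these assumed numbers, a short completion usually survives, but a long "let it go ham" session almost never does, which matches the experience described above.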

[–] Windex007@lemmy.world 15 points 21 hours ago (1 children)

A crazy number of devs weren't even using EXISTING code assistant tooling.

Enterprise-grade IDEs already had tons of tooling to generate classes and perform refactoring in a sane, algorithmic way. In a way that was deterministic.

So many use cases people have tried to sell me on (boilerplate handling) and I'm like "you have that now and don't even use it!"

I think there is probably a way to use LLMs to extract intention and then call real, dependable tools to actually perform the actions. This cult of purity where the LLM must actually generate the tokens itself... why?

I'm all for coding tools. I love them. They have to actually work, though. The paradigm is completely wrong right now. I don't need it to "appear" good, I need it to BE good.

[–] degen@midwest.social 7 points 20 hours ago (1 children)

Exactly. We're already bootstrapping, re-tooling, and improving the entire process of development to the best of our collective ability. Constantly. All through good, old fashioned, classical system design.

Like you said, a lot of people don't even put that to use, and they remain very effective. Yet a tiny speck of AI tech and its marketing is convincing people we're about to either become gods or be usurped.

It's like we took decades of technical knowledge and abstraction from our Computing Canon and said "What if we didn't use that anymore?"

[–] Jason2357@lemmy.ca 1 points 3 hours ago

This is the smoking gun. If the AI hype boys really were getting that "10x engineer" out of AI agents, then regular developers would not be able to even come close to competing. Where are these 10x engineers? What have they made? They should be able to spin up whole new companies, with whole new major software products. Where are they?

[–] Glitchvid@lemmy.world 1 points 12 hours ago

I think we've tapped most of the mileage we can get from the current science. The AI bros conveniently forget there have been multiple AI winters; I suspect we'll see at least one more before "AGI" (if we ever get there).