this post was submitted on 30 Sep 2025
910 points (98.5% liked)


"No Duh," say senior developers everywhere.

The article explains that vibe-coded output is often close to functional, but not quite, requiring developers to go in and find where the problems are, resulting in a net slowdown of development rather than productivity gains.

(page 2) 50 comments
[–] MrScottyTay@sh.itjust.works 14 points 10 hours ago

I use AI as an entryway to learning, or for finding the name of a technique that I'm thinking of but can't remember, so that I can then look elsewhere for proper documentation. I would never have it just blindly write code.

Sadly, search engines getting shittier has sort of forced me to use it as a replacement for them.

Then it's also good to quickly parse an error for anything obviously wrong.

[–] skulkbane@lemmy.world 2 points 6 hours ago

I'm not super surprised, but AI has been really useful when it comes to learning, or giving me a direction to look into something more directly.

I'm not really an advocate for AI, but there are some really nice things it can do. And I like to test the code quality of the models I have access to.

I always ask for an FTP server and a DNS server to check what it can do, and they work surprisingly well most of the time.

[–] vrighter@discuss.tchncs.de 23 points 13 hours ago (1 children)

"It's slowing you down. The solution to that is to use it in even more places!"

Wtf was up with that conclusion?

[–] poopkins@lemmy.world 2 points 8 hours ago

I don't think it's meant to be a conclusion. The article serves as a recap of several reports and studies about the effectiveness of LLMs for coding, and the final quote from Bain & Company was a counterpoint to the previous ones, which assert that productivity gains are minimal at best, but also that measuring productivity is a grey area.

[–] RagingRobot@lemmy.world 17 points 13 hours ago (2 children)

I have been vibe coding a whole game in JavaScript to try it out. So far I have gotten a pretty OK game out of it. It's just a simple match-three bubble-pop type of thing, so nothing crazy, but I made a design and I am trying to implement it using mostly vibe coding.

That being said the code is awful. So many bad choices and spaghetti code. It also took longer than if I had written it myself.

So now I have a game that's kind of hard to modify, haha. I may try to set up some unit tests and have it refactor using those.
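That refactor-against-tests idea can be sketched in a few lines (the function name and matching rules here are hypothetical, not from the commenter's actual game): pin the behavior down with assertions first, then let the model restructure the code without changing what the tests check.

```javascript
// Hypothetical match-three helper: find runs of 3 or more equal colors in a row.
function findMatches(row) {
  const matches = [];
  let start = 0;
  for (let i = 1; i <= row.length; i++) {
    // A run ends at the end of the row or when the color changes.
    if (i === row.length || row[i] !== row[start]) {
      if (i - start >= 3) {
        matches.push({ color: row[start], start, length: i - start });
      }
      start = i;
    }
  }
  return matches;
}

// Tests like these act as a safety net for an LLM-driven refactor:
// if the model rewrites findMatches, these must still pass.
if (findMatches(['r', 'r', 'r', 'b']).length !== 1) throw new Error('missed a run');
if (findMatches(['r', 'r', 'r', 'b'])[0].length !== 3) throw new Error('wrong run length');
if (findMatches(['r', 'b', 'r', 'b']).length !== 0) throw new Error('false positive');
```

With a suite like this in place, "refactor but keep the tests green" is a much safer prompt than "clean this up."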

[–] mcv@lemmy.zip 5 points 10 hours ago (1 children)

Sounds like vibecoders will have to relearn the lessons of the past 40 years of software engineering.

[–] CheeseNoodle@lemmy.world 2 points 8 hours ago* (last edited 8 hours ago)

As with every profession, every generation... only this time on their own, because every company forgot what employee training is and expects everyone to be born with 5 years of experience.

[–] jaykrown@lemmy.world 2 points 8 hours ago (1 children)

Wait, are you blaming AI for this, or yourself?

[–] kadaverin0@lemmy.dbzer0.com 34 points 15 hours ago

Imagine if we did "vibe city infrastructure". Just throw up a fucking suspension bridge and we'll hire some temps to come in later to find the bad welds and missing cables.

[–] z3rOR0ne@lemmy.ml 158 points 19 hours ago* (last edited 19 hours ago) (1 children)

Even though this shit was apparent from day fucking 1, at least the Tech Billionaires were able to cause mass layoffs, destroy an entire generation of new programmers' careers, introduce an endless amount of tech debt and security vulnerabilities, all while grifting investors/businesses and making billions off of all of it.

Sad excuses for sacks of shit, all of them.

[–] Prove_your_argument@piefed.social 21 points 16 hours ago

Look on the bright side: in a couple of years they will come crawling back to us, desperate for new things to be built so their profit machines keep profiting.

Current ML techniques literally cannot replace developers for anything but the most rudimentary of tasks.

I wish we had true apprenticeships out there for development and other tech roles.

[–] favoredponcho@lemmy.zip 25 points 16 hours ago (1 children)

Glad someone paid a bunch of worthless McKinsey consultants to say what I could've told you myself.

[–] StefanT@lemmy.world 13 points 11 hours ago (1 children)

It is not worthless. My understanding is that management only trusts sources that are expensive.

[–] Goldholz@lemmy.blahaj.zone 7 points 13 hours ago

No shit sherlock!

[–] peoplebeproblems@midwest.social 66 points 20 hours ago (7 children)

“No Duh,” say senior developers everywhere.

I'm so glad this was your first line in the post

[–] dylanmorgan@slrpnk.net 36 points 18 hours ago (1 children)

The most immediately understandable example I heard of this was from a senior developer who pointed out that LLM generated code will build a different code block every time it has to do the same thing. So if that function fails, you have to look at multiple incarnations of the same function, rather than saying “oh, let’s fix that function in the library we built.”
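A contrived illustration of that failure mode (hypothetical code, not from the article): generated code tends to re-derive the same logic inline each time, with subtle variations, where a shared helper gives you one place to fix a bug.

```javascript
// What LLM output often looks like: the same clamping logic re-derived
// inline twice, with a subtle difference (> vs >=) hiding between copies.
//   const volume = v < 0 ? 0 : (v > 100 ? 100 : v);
//   const brightness = b < 0 ? 0 : (b >= 100 ? 100 : b);

// The "library we built" approach: one function, one fix point.
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

const volume = clamp(150, 0, 100);     // 100
const brightness = clamp(-5, 0, 100);  // 0
if (volume !== 100 || brightness !== 0) throw new Error('clamp is broken');
```

When the inline variants fail, you debug each incarnation separately; with the helper, you fix `clamp` once.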

[–] kescusay@lemmy.world 14 points 15 hours ago (1 children)

Yeah, code bloat with LLMs is fucking monstrous. If you use them, get used to immediately scouring your code for duplications.

[–] jj4211@lemmy.world 1 points 7 hours ago (1 children)

Yeah, if I use it and it generates more than 5 lines of code, I now just immediately cancel it, because I know it's not worth even reading. It's so prone to repeating itself and failing to reasonably break things down into logical pieces.

With that, I only have to read some of its suggestions; I still throw out probably 80% entirely, fix up another 15%, and actually use 5% without modification.

[–] kescusay@lemmy.world 1 points 7 hours ago

There are tricks to getting better output from it, especially if you're using Copilot in VS Code and your employer is paying for access to models, but it's still asking for trouble if you're not extremely careful, extremely detailed, and extremely precise with your prompts.

And even then it absolutely will fuck up. If it actually succeeds at building something that technically works, you'll spend considerable time afterwards going through its output and removing unnecessary crap it added, fixing duplications, securing insecure garbage, removing mocks (God... So many fucking mocks), and so on.

I think about what my employer is spending on it a lot. It can't possibly be worth it.

[–] Sibshops@lemmy.myserv.one 131 points 22 hours ago (5 children)

I mean... at best it's a Stack Overflow/Google replacement.

[–] Steve@startrek.website 18 points 17 hours ago (1 children)

I found that it only does well if the task is already well covered by the usual sources. Ask for anything novel and it shits the bed.

[–] snooggums@piefed.world 8 points 14 hours ago* (last edited 14 hours ago)

That's because it doesn't understand anything and is just vomiting forth output based on the code that was fed into it.

[–] timbuck2themoon@sh.itjust.works 13 points 16 hours ago (2 children)

At absolute best.

My experience is that it's like the bottom Stack Overflow answers: making up bullshit and nonexistent commands, etc.

[–] mcv@lemmy.zip 1 points 9 hours ago

If you know what you want, its automatic code completion can save you some typing in those cases where it gets it right (for repetitive or trivial code that doesn't require much thought). It's useful if you use it sparingly and can see through its bullshit.

For junior coders, though, it could be absolute poison.

[–] Warl0k3@lemmy.world 85 points 22 hours ago* (last edited 22 hours ago) (16 children)

There's some real perks to using AI to code - it helps a ton with templatable or repetitive code, and setting up tedious tasks. I hate doing that stuff by hand so being able to pass it off to copilot is great. But we already had tools that gave us 90% of the functionality copilot adds there, so it's not super novel, and I've never had it handle anything properly complicated at all successfully (asking GPT-5 to do your dynamic SQL calls is inviting disaster, for example. Requires hours of reworking just to get close.)
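For contrast, a sketch of the kind of query handling you'd want written by hand rather than generated (illustrative functions only; the `$1` placeholder style follows drivers like node-postgres, and no real database client is used here):

```javascript
// What generated "dynamic SQL" often looks like: injectable string concatenation.
function unsafeQuery(name) {
  return `SELECT * FROM users WHERE name = '${name}'`;
}

// The hand-written alternative: a parameterized query object, where the
// driver handles escaping deterministically and the SQL text never changes.
function safeQuery(name) {
  return { text: 'SELECT * FROM users WHERE name = $1', values: [name] };
}

const hostile = "a'; DROP TABLE users;--";
if (!unsafeQuery(hostile).includes('DROP TABLE')) throw new Error('unexpected');
if (safeQuery(hostile).text !== 'SELECT * FROM users WHERE name = $1') {
  throw new Error('query text should be constant');
}
```

The unsafe version changes shape with every input; the parameterized one is the sort of boring, fixed structure that's easy to review and hard for a model to subtly break.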

[–] Feyd@programming.dev 84 points 22 hours ago (1 children)

But we already had tools that gave us 90%

More reliable ones.

[–] MaggiWuerze@feddit.org 19 points 12 hours ago

Deterministic ones

[–] aesthelete@lemmy.world 12 points 17 hours ago

It turns every prototyping exercise into a debugging exercise. Even talented coders often suck ass at debugging.

[–] DarkDarkHouse@lemmy.sdf.org 11 points 17 hours ago (1 children)

The biggest value I get from AI in this space is when I get handed a pile of spaghetti and ask for an initial overview.

[–] jj4211@lemmy.world 1 points 7 hours ago

I thought that as well, then got some code from someone who left the company and asked it to comment it.

It did the obvious "x = 5 // assign 5 to x" crap comments, and then it got to the actually confusing part and just skipped that mess entirely...

[–] Dojan@pawb.social 17 points 18 hours ago

I miss the days when machine learning was fun. Poking together useless RNN models with a small dataset to make a digital Trump that talked about banging his daughter, and endless nipples flowing into America. Exploring the latent space between concepts.

[–] popekingjoe@lemmy.world 7 points 15 hours ago

Oh wow. No shit. Anyway!

[–] sp3ctr4l@lemmy.dbzer0.com 32 points 21 hours ago* (last edited 21 hours ago)

Almost like it's a desperate bid to blow another stock/asset bubble to keep 'the economy' going, from the C-suite, who all knew the housing bubble was going to pop when this all started, and now it is.

Funniest thing in the world to me is high and mid level execs and managers who believe their own internal and external marketing.

The smarter people in the room realize their propaganda is in fact propaganda, and are rolling their eyes internally that their henchmen are so stupid as to be true believers.

[–] Somecall_metim@lemmy.dbzer0.com 10 points 17 hours ago

I am Jack's complete lack of surprise.

[–] simplejack@lemmy.world 33 points 22 hours ago (1 children)

Might be there someday, but right now it’s basically a substitute for me googling some shit.

If I let it go ham, and code everything, it mutates into insanity in a very short period of time.

[–] degen@midwest.social 29 points 21 hours ago (3 children)

I'm honestly doubting it will get there someday, at least with the current use of LLMs. There just isn't true comprehension in them, no space for consideration in any novel dimension. If it takes incredible resources for companies to achieve sometimes-kinda-not-dogshit, I think we might need a new paradigm.

[–] Windex007@lemmy.world 15 points 19 hours ago (2 children)

A crazy number of devs weren't even using EXISTING code assistant tooling.

Enterprise grade IDEs already had tons of tooling to generate classes and perform refactoring in a sane and algorithmic way. In a way that was deterministic.

So many use cases people have tried to sell me on (boilerplate handling), and I'm like "you have that now and don't even use it!"

I think there is probably a way to use LLMs to try to extract intention and then call real, dependable tools to actually perform the actions. This cult of purity where the LLM must actually generate the tokens itself... why?

I'm all for coding tools. I love them. They have to actually work though. Paradigm is completely wrong right now. I don't need it to "appear" good, i need it to BE good.
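That intent-extraction split can be sketched in a few lines (the tool names and intent shape are made up for illustration; a real version would call an IDE's refactoring engine): the LLM's only output is a structured intent, and deterministic tooling performs the actual edit.

```javascript
// Hypothetical registry of deterministic refactoring tools. In practice
// these would delegate to the IDE's engine; here they return a description.
const tools = {
  rename_symbol: (args) => `ide.rename(${args.from} -> ${args.to})`,
  extract_method: (args) => `ide.extractMethod(${args.range})`,
};

// Dispatch a structured intent to the matching tool; the LLM never
// generates the edited code itself, only this intent object.
function dispatch(intent) {
  const tool = tools[intent.tool];
  if (!tool) throw new Error(`unknown tool: ${intent.tool}`);
  return tool(intent.args);
}

// Example intent, as the LLM might emit it:
const intent = { tool: 'rename_symbol', args: { from: 'getDat', to: 'getData' } };
if (dispatch(intent) !== 'ide.rename(getDat -> getData)') throw new Error('dispatch failed');
```

The rename itself stays deterministic and reviewable; only the fuzzy "what did the user mean" step is delegated to the model.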
