this post was submitted on 30 Sep 2025
910 points (98.5% liked)

Technology


"No Duh," say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite, requiring developers to go in and find where the problems are, resulting in a net slowdown of development rather than productivity gains.

top 50 comments
[–] badgermurphy@lemmy.world 5 points 45 minutes ago (1 children)

I work adjacent to software developers, and I have been hearing a lot of the same sentiments. What I don't understand, though, is the magnitude of this bubble then.

Typically, bubbles seem to form around some new market phenomenon or technology that threatens to upset the old paradigm and usher in a new boom. Those market phenomena then eventually take their place in the world based on their real value, which is nowhere near the level of the hype, but still substantial.

In this case, I am struggling to find examples of the real benefits of a lot of these AI assistant technologies. I know that there are a lot of successes in the AI realm, but not a single one I know of involves an LLM.

So, I guess my question is, "What specific LLM tools are generating profits or productivity at a substantial level, well exceeding their operating costs?" If there really are none, or if the gains are only incremental, then my question becomes an incredulous, "Is this biggest-in-history tech bubble really composed entirely of unfounded hype?"

[–] TipsyMcGee@lemmy.dbzer0.com 1 points 12 minutes ago

When the AI bubble bursts, even janitors and nurses will lose their jobs. Financial institutions will go bust.

[–] JackbyDev@programming.dev 5 points 1 hour ago (1 children)

The people talking about AI coding the most at my job are architects and it drives me insane.

[–] ceiphas@feddit.org 1 points 1 hour ago

I am a software architect, and mainly use it to refactor my own old code... But I am maybe not a typical architect...

[–] MIXEDUNIVERS@discuss.tchncs.de 2 points 34 minutes ago

I use it for programming Arduinos for my smart home. It's pretty nice but also aggravating.

[–] OmegaMan@lemmings.world 2 points 57 minutes ago (2 children)

Writing apps with AI seems pretty cooked. But I've had some great successes using GitHub copilot for some annoying scripting work.

[–] NikkiDimes@lemmy.world 2 points 39 minutes ago

I think it's useful for writing mundane snippets I've written a million times or helping me with languages I'm less familiar with, but anything more complex becomes pretty spaghetti pretty quick.

[–] Canconda@lemmy.ca 1 points 48 minutes ago (1 children)

AI works well for mindless tasks: data formatting, rough drafts, etc.

Once a task requires context and abstract thinking, AI can't handle it.

[–] OmegaMan@lemmings.world 1 points 9 minutes ago

Eh, I don't know. As long as you can break it down into smaller sub-tasks, AI can do some complex stuff. Just have to figure out where the line is. I've nudged it along into reading multiple LENGTHY API documentation pages and writing some fairly complex scripting logic.

[–] arc99@lemmy.world 3 points 59 minutes ago* (last edited 58 minutes ago) (1 children)

I have never seen AI-generated code that is correct. Not once. I've certainly seen it broadly correct and used it for the gist of something. But normally it fucks something up: imports, dependencies, logic, API calls, or a combination of all of them.

I sure as hell wouldn't use it without reviewing it thoroughly. And anyone stupid enough to use it blindly through "vibe" programming deserves everything they get. Most likely that will be a massive bill and code that is horribly broken in some serious and subtle way.

[–] hietsu@sopuli.xyz 1 points 2 minutes ago

How is it not correct if the code successfully does the very thing that was prompted?

For example, in my company we don't have any real programmers but have built a handful of useful tools (approx. 400-1600 LOC, mainly Python) to do some data analysis, regex stuff to clean up some output files, index some files and analyze/check their contents for certain mistakes, dashboards to display certain data, etc.

Of course the apps may not have been perfect after the very first prompt, or may not even have compiled, but after iterating on an error or two, and explaining an edge case or two, they've started to perform flawlessly, saving tons of work hours per week. So how is this not useful? If the code creates results that are correct, doesn't that make the app itself technically "correct" too, albeit likely not nearly as optimized as the equivalent human code would be?
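A toy example of the sort of cleanup script meant (the rules here are hypothetical, just to show the shape):

```python
import re
import sys

# Toy cleanup pass over an output file: strip trailing whitespace
# and collapse runs of blank lines. Reads stdin, writes stdout.
text = sys.stdin.read()
text = re.sub(r"[ \t]+$", "", text, flags=re.MULTILINE)
text = re.sub(r"\n{3,}", "\n\n", text)
sys.stdout.write(text)
```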

[–] andros_rex@lemmy.world 7 points 2 hours ago (2 children)

So when the AI bubble bursts, will there be coding jobs available to clean up the mess?

[–] Alaknar@sopuli.xyz 2 points 18 minutes ago

There already are. People all over LinkedIn are changing their titles to "AI Code Cleanup Specialist".

[–] aidan@lemmy.world 3 points 1 hour ago

I mean largely for most of us, I hope. But I feel like the tech sector was oversaturated because of all the hype of it being an easy get-rich-quick job. Which for some people it was.

[–] Deflated0ne@lemmy.world 4 points 2 hours ago (1 children)

According to Deutsche Bank, the AI bubble is a pillar of our economy now.

So when it pops, I guess that's kinda apocalyptic.

[–] hroderic@lemmy.world 5 points 1 hour ago

Only for taxpayers ☝️

[–] bitjunkie@lemmy.world 3 points 1 hour ago

I'd much rather write my own bugs to have to waste hours fixing, thanks.

[–] drmoose@lemmy.world 18 points 6 hours ago* (last edited 6 hours ago) (3 children)

I code with LLMs every day as a senior developer, but agents are mostly a big lie. LLMs are great as an information index and for rubber-duck chats, which is already an incredible feature of the century, but agents are fundamentally bad. Even for Python they are intern-level bad. I was just trying the new Claude, and instead of using Python's pathlib.Path it reinvented its own file system path utils, and pathlib is not even some new Python feature: it has been the de facto way to manage paths for at least 3 years now.

That being said, when prompted in great detail with exact instructions, agents can be useful, but that's not what's being sold here.

After so many iterations, it seems like agents still need a fundamental breakthrough in AI tech, as diminishing returns are hitting hard now.
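Roughly the kind of reinvention in question (the hand-rolled helper below is a hypothetical sketch, not Claude's actual output):

```python
from pathlib import Path

# What the agent tends to produce: a hand-rolled string-munging helper.
def get_extension(path: str) -> str:
    return path.rsplit(".", 1)[-1] if "." in path else ""

# The idiomatic stdlib way, in Python since 3.4:
p = Path("data/report.final.csv")
print(p.suffix)           # ".csv"
print(p.stem)             # "report.final"
print(p.with_suffix(""))  # "data/report.final"
```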

[–] jj4211@lemmy.world 2 points 2 hours ago (1 children)

I will concur with the whole "LLM keeps suggesting to reinvent the wheel" thing.

And poorly. Not only did it not use a pretty basic standard library to do something, its implementation is generally crap. For example, it offered up a solution that was hard-coded to IPv4 when the context was very IPv6-heavy.
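A sketch of the difference (the helper below is illustrative, not the code from that review):

```python
import socket

# Address-family-agnostic connect: getaddrinfo yields IPv6 and IPv4
# candidates, instead of hard-coding AF_INET like the generated code did.
def connect(host: str, port: int) -> socket.socket:
    for family, kind, proto, _, addr in socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    ):
        try:
            sock = socket.socket(family, kind, proto)
            sock.connect(addr)
            return sock
        except OSError:
            continue
    raise OSError(f"could not connect to {host}:{port}")
```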

[–] JackbyDev@programming.dev 2 points 1 hour ago (1 children)

I have a theory that it's partly because a bunch of older StackOverflow answers have more votes than newer ones using new features. ("More" here referring to it not using relatively new features as much as it should.)

[–] korazail@lemmy.myserv.one 2 points 52 minutes ago

I'd wager that the votes are irrelevant. Stack Overflow is, generously, <50% good code, and is mostly people saying "this code doesn't work -- why?", and that is the corpus these models were trained on.

I've yet to see something like a vibe coding livestream where something got done. I can only find a lot of 'tutorials' that tell how to set up tools. Anyone want to provide one?

I could... possibly... imagine a place where someone took quality code from a variety of sources and generated a model specific to a single language, and that model was able to generate good code, but I don't think we have that.

Vibe coders: Even if your code works and seems to be a success, do you know why it works, how it works? Does it handle edge cases you didn't include in your prompt? Does it expose the database to someone smarter than the LLM? Does it grant an attacker access to the computer it's running on, if they are smarter than the LLM? Have you asked your LLM how many 'r's are in strawberry?

At the very least, we will have a cyber-security crisis due to vibe coding; especially since there seems to be a high likelihood of HR and Finance vibe coders who think they can do the traditional IT/Dev work without understanding what they are doing and how to do it safely.

[–] Jason2357@lemmy.ca 3 points 2 hours ago

If it weren't for all the hype about AI doing everyone's job, LLMs would be widely considered an amazing advancement in computer-human interaction and human assistance. They are so much better than using a search engine to parse web forums and Stack Overflow, but that's not going to pay back investing hundreds of billions into building them out. My experience is like yours: I use AI chat as a huge information index mainly, and a helpful sounding board occasionally, but it isn't much good beyond that.

[–] umbraroze@slrpnk.net 4 points 6 hours ago (2 children)

Oh yes. The Great pathlib. The Blessed pathlib. Hallowed be it and all it does.

I'm a Ruby girl. A couple of years ago I was super worried about my decision to finally start learning Python seriously. But once I ran into pathlib, I knew for sure that everything would be fine. Take an everyday headache problem. Solve it forever. Boom. This is how standard libraries should be designed.

Pathlib is very nice indeed, but I can understand why a lot of languages don't do similar things. There are major challenges in implementing something like that. Cross-platform functionality is a big one, for example. File permissions between Unix systems and Windows do not map perfectly from one system to another, which can be a maintenance burden.

But I do agree. As a user, it feels great to have. And yes, also in general, the things Python does with its standard library are definitely the way things should be done, from a user's point of view at least.
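A minimal illustration of the kind of everyday headache pathlib makes cross-platform (the paths here are made up):

```python
from pathlib import Path

# The same code works on Windows and Unix: "/" joins with the right
# separator, and the home directory is resolved per platform.
cfg = Path.home() / ".myapp" / "config.toml"
cfg.parent.mkdir(parents=True, exist_ok=True)
cfg.write_text("retries = 3\n")
print(cfg.read_text())
```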

[–] HugeNerd@lemmy.ca 5 points 5 hours ago

I disagree. Take a routine problem and invent a new language for it. Then split it into various incompatible dialects, and make sure in all cases it requires computing power that no one really has.

[–] HugeNerd@lemmy.ca 5 points 6 hours ago

I would say absolutely, in the general sense most people, and the salesmen, frame them in.

When I was invited to assist with the GDC development, I got a chance to partner with a few AI developers and see the development process firsthand, try my hand at it myself, and get my hands on a few low-parameter models for my own personal use. It's really interesting just how capable some models are in their specific use-cases. However, even high-parameter models easily become useless at the drop of a hat.

I found the best case, one that's rarely done mind you, is to integrate the model into a program that has the ability to call a known database. With a properly trained model to format output in both natural language and use a given database for context calls and concrete information, the qualitative performance leaps ahead by bounds. Problem is, that requires so much customization it pretty much ends up being something a capable hobbyist would do; it's just not economically sound for a business to adopt.
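A toy sketch of that database-grounding pattern (assuming a local SQLite table of facts; the retrieval and prompt shape are hypothetical):

```python
import sqlite3

# Retrieval-augmented prompting in miniature: pull concrete rows from a
# known database, then hand them to the model as context so its answers
# come from data rather than from the model's guesses.
def build_prompt(question: str, db_path: str) -> str:
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute("SELECT fact FROM facts").fetchall()
    context = "\n".join(fact for (fact,) in rows)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```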
[–] donalonzo@lemmy.world 12 points 8 hours ago (2 children)

LLMs work great for asking about tons of documentation and learning more about high-level concepts. They make a good search engine.

The code they produce has basically always disappointed me.

[–] nightlily@leminal.space 7 points 7 hours ago (2 children)

On proprietary products, they are awful. So many hallucinations that waste hours. A manager used one on a code review of mine and only admitted it after I spent the afternoon chasing it.

[–] Jason2357@lemmy.ca 3 points 2 hours ago

Those happen so often. I've stopped calling them hallucinations (that's anthropomorphising and over-selling what LLMs do, imho). They are statistical prediction machines, and either they hit their practical limits of predicting useful output, or we just call it broken.

I think the next 10 years are going to be all about learning what LLMs are actually good for, and what they are fundamentally limited at no matter how much GPU RAM we throw at them.

[–] zaphod@sopuli.xyz 2 points 3 hours ago

Not even proprietary, just niche things. In other words anything that's rarely used in open source code, because there's nothing to train the models on.

[–] Fyrnyx@kbin.melroy.org 9 points 8 hours ago

But will something be done about it?

NOooOoOoOoOoo. As long as it is still the new shiny toy for techbros and executive-bros to tinker with, it'll continue.

[–] elbiter@lemmy.world 31 points 11 hours ago (1 children)

AI coding is the stupidest thing I've seen since someone decided it was a good idea to measure code by the number of lines written.

[–] ellohir@lemmy.world 10 points 8 hours ago

More code is better, obviously! Why else would a website for viewing a restaurant menu be 80 MB? It's all that good, excellent code.
