this post was submitted on 30 Sep 2025
952 points (98.6% liked)

Technology


"No Duh," say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite, requiring developers to go in and find where the problems are - resulting in a net slowdown of development rather than productivity gains.

top 50 comments
[–] badgermurphy@lemmy.world 5 points 2 hours ago (2 children)

I work adjacent to software developers, and I have been hearing a lot of the same sentiments. What I don't understand, though, is the magnitude of this bubble then.

Typically, bubbles seem to form around some new market phenomenon or technology that threatens to upset the old paradigm and usher in a new boom. Those market phenomena then eventually take their place in the world based on their real value, which is nowhere near the level of the hype, but still substantial.

In this case, I am struggling to find examples of the real benefits of a lot of these AI assistant technologies. I know that there are a lot of successes in the AI realm, but not a single one I know of involves an LLM.

So, I guess my question is, "What specific LLM tools are generating profits or productivity at a level well exceeding their operating costs?" If there really are none, or if the gains are only incremental, then my question becomes an incredulous, "Is this biggest-in-history tech bubble really composed entirely of unfounded hype?"

[–] SparroHawc@lemmy.zip 1 points 1 hour ago

From what I've seen and heard, there are a few factors to this.

One is that the tech industry right now is built on venture capital. In order to survive, they need to act like they're at the forefront of the Next Big Thing in order to keep bringing investment money in.

Another is that LLMs are uniquely suited to extending the honeymoon period.

The initial impression you get from an LLM chatbot is significant. This is a chatbot that actually talks like a person. A VC mogul sitting down to have a conversation with ChatGPT, when it was new, was a mind-blowing experience. This is a computer program that, at first blush, appears to be able to do most things humans can do, as long as those things primarily consist of reading things and typing things out - which a VC, and mid/upper management, does a lot of. This gives the impression that AI is capable of automating a lot of things that previously needed a live, thinking person - which means a lot of savings for companies who can shed expensive knowledge workers.

The problem is that the limits of LLMs are STILL poorly understood by most people. Despite constructing huge data centers and gobbling up vast amounts of electricity, LLMs still are bad at actually being reliable. This makes LLMs worse at practically any knowledge work than the lowest, greenest intern - because at least the intern can be taught to say they don't know something instead of feeding you BS.

It was also assumed that bigger, hungrier LLMs would provide better results. Although they do, the gains are getting harder and harder to reach. There needs to be an efficiency breakthrough (and a training breakthrough) before the wonderful world of AI can actually come to pass because as it stands, prompts are still getting more expensive to run for higher-quality results. It took a while to make that discovery, so the hype train was able to continue to build steam for the last couple years.

Now, tech companies are doing their level best to hide these shortcomings from their customers (and possibly even themselves). The longer they keep the wool over everyone's eyes, the more money continues to roll in. So, the bubble keeps building.

[–] TipsyMcGee@lemmy.dbzer0.com 2 points 1 hour ago

When the AI bubble bursts, even janitors and nurses will lose their jobs. Financial institutions will go bust.

[–] Blackmist@feddit.uk 1 points 1 hour ago

Of course. Shareholders want results, and not just results for nVidia's bottom line.

[–] arc99@lemmy.world 5 points 2 hours ago* (last edited 2 hours ago) (2 children)

I have never seen AI-generated code that was correct. Not once. I've certainly seen it be broadly correct and used it for the gist of something. But normally it fucks something up - imports, dependencies, logic, API calls, or a combination of all of them.

I sure as hell wouldn't trust it without reviewing it thoroughly. And anyone stupid enough to use it blindly through "vibe" programming deserves everything they get. Most likely that will be a massive bill and code that is horribly broken in some serious and subtle way.

[–] ikirin@feddit.org 1 points 16 minutes ago* (last edited 14 minutes ago)

I've seen and used AI for snippets of code and it's pretty decent at that.

With my colleagues I always compare it to a battery powered drill. It's very powerful and can make shit a lot easier. But you'd not try to build furniture from scratch with only a battery powered drill.

You need the knowledge to use it - and also saws, screws, the proper bits for those screws and so on and so forth.

[–] hietsu@sopuli.xyz 0 points 1 hour ago (2 children)

How is it not correct if the code successfully does the very thing that was prompted?

F.ex. in my company we don't have any real programmers, but we have built a handful of useful tools (approx. 400-1600 LOC, mainly Python) to do some data analysis, regex stuff to clean up some output files, index some files and analyze/check their contents for certain mistakes, dashboards to display certain data, etc.

Of course the apps may not have been perfect after the very first prompt, or may not even have compiled, but after iterating on an error or two, and explaining an edge case or two, they've started to perform flawlessly, saving tons of work hours per week. So how is this not useful? If the code creates results that are correct, doesn't that make the app itself technically "correct" too, albeit likely not nearly as optimized as the equivalent human-written code would be?
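For a sense of scale, the kind of regex cleanup tool described here can be quite small. This is a hypothetical sketch, not code from the comment - the record-ID format, file naming, and function name are all invented for illustration:

```python
import re
from pathlib import Path

# Hypothetical record-ID format: three capital letters, a dash, four digits.
RECORD_ID = re.compile(r"^[A-Z]{3}-\d{4}\b")

def clean_output_file(path: Path) -> list[str]:
    """Normalize whitespace and trailing separators in an output file,
    write a cleaned copy, and return lines with a malformed record ID."""
    bad = []
    cleaned = []
    for line in path.read_text().splitlines():
        # Collapse runs of whitespace, drop trailing spaces/semicolons/commas.
        line = re.sub(r"\s+", " ", line).strip().rstrip(" ;,")
        cleaned.append(line)
        if line and not RECORD_ID.match(line):
            bad.append(line)
    path.with_suffix(".clean.txt").write_text("\n".join(cleaned))
    return bad
```

A few hundred lines of this sort of thing, iterated against real output files, matches the 400-1600 LOC tools described.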

[–] arc99@lemmy.world 1 points 39 minutes ago

If the code doesn't compile, or is badly mangled, or uses the wrong APIs / imports, or forgets something really important, then it's broken. I can use AI to inform my opinion and sometimes make use of what it outputs, but critically, I know how to program and I know how to spot good and bad code.

I can't speak for how you use it, but if you don't have any real programmers and you're iterating until something works, then you could be producing junk and not know it. Maybe it doesn't matter in your case if it's a bunch of throwaway scripts and helpers, but if you have actual code in production where money, lives, reputation, safety or security are at risk, then it absolutely does.

[–] LaMouette@jlai.lu 2 points 1 hour ago

It's not bad for your use case, but going beyond that without issues - and without actual developers to fix the vibe code - is not yet possible for LLMs.

[–] JackbyDev@programming.dev 5 points 2 hours ago (1 children)

The people talking about AI coding the most at my job are architects and it drives me insane.

[–] ceiphas@feddit.org 1 points 2 hours ago

I am a software architect, and I mainly use it to refactor my own old code... But I am maybe not a typical architect...

I use it for programming Arduinos for my smart home. It's pretty nice but also aggravating.

[–] OmegaMan@lemmings.world 2 points 2 hours ago (2 children)

Writing apps with AI seems pretty cooked. But I've had some great successes using GitHub copilot for some annoying scripting work.

[–] NikkiDimes@lemmy.world 2 points 2 hours ago

I think it's useful for writing mundane snippets I've written a million times or helping me with languages I'm less familiar with, but anything more complex becomes pretty spaghetti pretty quickly.

[–] Canconda@lemmy.ca 1 points 2 hours ago (1 children)

AI works well for mindless tasks: data formatting, rough drafts, etc.

Once a task requires context and abstract thinking, AI can't handle it.

[–] OmegaMan@lemmings.world 2 points 1 hour ago

Eh, I don't know. As long as you can break it down into smaller sub-tasks, AI can do some complex stuff. You just have to figure out where the line is. I've nudged it along into reading multiple LENGTHY API documentation pages and writing some fairly complex scripting logic.

[–] andros_rex@lemmy.world 7 points 3 hours ago (2 children)

So when the AI bubble burst, will there be coding jobs available to clean up the mess?

[–] Alaknar@sopuli.xyz 2 points 1 hour ago

There already are. People all over LinkedIn are changing their titles to "AI Code Cleanup Specialist".

[–] aidan@lemmy.world 3 points 3 hours ago

I mean largely for most of us, I hope. But I feel like the tech sector was oversaturated because of all the hype about it being an easy get-rich-quick job. Which for some people it was.

[–] Deflated0ne@lemmy.world 5 points 3 hours ago (1 children)

According to Deutsche Bank the AI bubble is a pillar of our economy now.

So when it pops, I guess that's kinda apocalyptic.

[–] hroderic@lemmy.world 6 points 2 hours ago

Only for taxpayers ☝️

[–] bitjunkie@lemmy.world 3 points 3 hours ago

I'd much rather write my own bugs to have to waste hours fixing, thanks.

[–] drmoose@lemmy.world 17 points 8 hours ago* (last edited 8 hours ago) (5 children)

I code with LLMs every day as a senior developer, but agents are mostly a big lie. LLMs are great as an information index and for rubber-duck chats - which is already an incredible feature - but agents are fundamentally bad. Even for Python they are intern-level bad. I was just trying the new Claude, and instead of using Python's pathlib.Path it reinvented its own file system path utils - and pathlib is not even some new Python feature; it has been the de facto way to manage paths for years.
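The pathlib complaint is easy to illustrate. A minimal sketch (the function names are invented for the example) of hand-rolled path handling next to the stdlib equivalent:

```python
from pathlib import Path

# Hand-rolled path handling of the kind an agent tends to regenerate.
# It breaks as soon as a directory name contains a dot:
def change_ext_manual(path: str, new_ext: str) -> str:
    return path.rsplit(".", 1)[0] + new_ext

# The stdlib equivalent; pathlib has shipped with Python since 3.4:
def change_ext(path: str, new_ext: str) -> str:
    return str(Path(path).with_suffix(new_ext))
```

`change_ext_manual("v1.2/file", ".csv")` returns `"v1.csv"` because the rsplit hits the dot in the directory name, while the pathlib version correctly returns `"v1.2/file.csv"`.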

That being said, when prompted in great detail with exact instructions, agents can be useful - but that's not what's being sold here.

After so many iterations, it seems like a fundamental breakthrough in AI tech is still needed for agents, as diminishing returns are hitting hard now.

[–] jj4211@lemmy.world 2 points 3 hours ago (1 children)

I will concur with the whole 'llm keeps suggesting to reinvent the wheel'

And poorly. Not only did it not use a pretty basic standard library to do something, its implementation was generally crap. For example, it offered up a solution that was hard-coded to IPv4 when the context was very IPv6-heavy.
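A sketch of that difference (hypothetical helper names, not the commenter's actual code): the first function is pinned to AF_INET, while the second lets getaddrinfo offer IPv6 and IPv4 candidates and tries each in turn:

```python
import socket

# IPv4-only, the kind of hard-coding described above:
def connect_v4(host: str, port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    return s

# Address-family-agnostic: getaddrinfo returns IPv6 and/or IPv4
# candidates for the host, and we try each until one connects.
def connect_any(host: str, port: int) -> socket.socket:
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    ):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses found")
```

On an IPv6-only host, `connect_v4` fails outright; `connect_any` simply uses whichever family getaddrinfo resolves.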

[–] JackbyDev@programming.dev 2 points 2 hours ago (1 children)

I have a theory that it's partly because a bunch of older StackOverflow answers have more votes than newer ones that use new features - "reinventing" here mostly meaning not using relatively new features as much as it should.

[–] korazail@lemmy.myserv.one 2 points 2 hours ago

I'd wager that the votes are irrelevant. Stack Overflow is generously <50% good code, and is mostly people saying "this code doesn't work -- why?" - and that is the corpus these models were trained on.

I've yet to see something like a vibe coding livestream where something got done. I can only find a lot of 'tutorials' that tell how to set up tools. Anyone want to provide one?

I could.. possibly.. imagine a place where someone took quality code from a variety of sources and generate a model that was specific to a single language, and that model was able to generate good code, but I don't think we have that.

Vibe coders: Even if your code works and seems to be a success, do you know why it works, how it works? Does it handle edge cases you didn't include in your prompt? Does it expose the database to someone smarter than the LLM? Does it grant an attacker access to the computer it's running on, if they are smarter than the LLM? Have you asked your LLM how many 'r's are in strawberry?

At the very least, we will have a cyber-security crisis due to vibe coding; especially since there seems to be a high likelihood of HR and Finance vibe coders who think they can do the traditional IT/Dev work without understanding what they are doing and how to do it safely.
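One concrete instance of the security worry above: an interpolated SQL query of the kind vibe-coded tools often contain, next to the parameterized version. The table and function names here are invented for illustration:

```python
import sqlite3

# Vibe-coded lookup: string interpolation invites SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

# Parameterized query: the driver handles escaping the value.
def find_user_safe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Passing `' OR '1'='1` as the name makes the first function return every row in the table; the parameterized version matches nothing, because the payload is treated as a literal string.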

[–] Jason2357@lemmy.ca 3 points 3 hours ago

If it wasn't for all the AI hype that it's going to do everyone's job, LLMs would be widely considered an amazing advancement in computer-human interaction and human assistance. They are so much better than using a search engine to parse web forums and stack overflow, but that's not going to pay for investing hundreds of billions into building them out. My experience is like yours - I use AI chat as a huge information index mainly, and helpful sounding board occasionally, but it isn't much good beyond that.

[–] HugeNerd@lemmy.ca 5 points 7 hours ago

I would say absolutely, in the general sense most people, and the salesmen, frame them in.

When I was invited to assist with the GDC development, I got a chance to partner with a few AI developers and see the development process firsthand, try my hand at it myself, and get my hands on a few low-parameter models for my own personal use. It's really interesting just how capable some models are in their specific use cases. However, even high-parameter models easily become useless at the drop of a hat.

I found the best case - one that's rarely done, mind you - is to integrate the model into a program that can call a known database. With a model properly trained to format output in both natural language and database context calls, backed by concrete information, the qualitative performance leaps ahead by bounds. Problem is, that requires so much customization that it pretty much ends up being something a capable hobbyist would do; it's just not economically sound for a business to adopt.
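A minimal sketch of that pattern, assuming a SQLite facts table and with `ask_model` standing in for whatever local model is wired up (all names here are invented for illustration):

```python
import sqlite3

def lookup_facts(conn: sqlite3.Connection, topic: str) -> list[str]:
    """Pull concrete facts for a topic from the known database."""
    rows = conn.execute(
        "SELECT fact FROM facts WHERE topic = ?", (topic,)
    ).fetchall()
    return [r[0] for r in rows]

def answer(conn: sqlite3.Connection, question: str, topic: str, ask_model) -> str:
    """Prepend database facts as context, then hand the prompt to the model.
    `ask_model` is a placeholder callable for the model integration."""
    facts = lookup_facts(conn, topic)
    context = "\n".join(f"- {f}" for f in facts)
    prompt = f"Using only these facts:\n{context}\n\nQuestion: {question}"
    return ask_model(prompt)
```

The point of the pattern is that the model formats natural-language answers while the database supplies the concrete information, instead of the model hallucinating it.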
