this post was submitted on 07 Dec 2025
1081 points (98.0% liked)

Technology


Just want to clarify, this is not my Substack, I'm just sharing this because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

top 50 comments
[–] edgemaster72@lemmy.world 207 points 6 days ago (9 children)

Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive.

And all they'll hear is "not failure, metrics great, ship faster, productive", and they'll go against your advice, because who cares about three months later? That's next quarter; the line must go up now. I also found this bit funny:

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me... I was proud of what I’d created.

Well, you didn't create it; you said so yourself. Not sure why you'd be proud; it's almost like the conclusion should've been blindingly obvious right there.

[–] AutistoMephisto@lemmy.world 103 points 6 days ago (4 children)

The top comment on the article points that out.

It's an example of a far older phenomenon: once you automate something, the corresponding skill set and experience atrophy. It's a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired. I'll have to find it, but there's a story about a modern fighter jet pilot not being able to handle a WWII-era Lancaster bomber. They don't know how to do the stuff that modern warplanes do automatically.

[–] logicbomb@lemmy.world 52 points 6 days ago (2 children)

It's more like the ancient phenomenon of spaghetti code. You can throw enough code at something until it works, but the moment you need to make a non-trivial change, you're doomed. You might as well throw away the entire code base and start over.

And if you want an exact parallel, I've said this from the beginning, but LLM coding at this point is the same as offshore coding was 20 years ago. You make a request, get a product that seems to work, but maintaining it, even by the same people who created it in the first place, is almost impossible.

[–] ctrl_alt_esc@lemmy.ml 29 points 6 days ago (1 children)

I agree with you, though proponents will tell you that's by design. Supposedly, it's like with high-level languages: you don't need to know the actual assembly instructions anymore to write a program with them. I think the difference is that high-level language instructions are still (mostly) deterministic, while an LLM prompt certainly isn't.

[–] dejected_warp_core@lemmy.world 47 points 5 days ago (1 children)

To quote your quote:

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

I think the author just independently rediscovered "middle management". Indeed, when you delegate the gruntwork under your responsibility, those same people are the ones you go to when addressing bugs and new requirements. It's not on you to effect repairs: it's on your team. I am Jack's complete lack of surprise. The idea that you can rely on AI to do nuanced work like this and arrive at exactly the correct answer is naive at best. I'd be sweating too.

[–] fuck_u_spez_in_particular@lemmy.world 11 points 5 days ago (1 children)

The problem, though (with AI compared to humans): a human team learns, i.e. at some point they probably know what the mistake was and avoid making it again. With AI instead of humans: well, maybe the next or a different model will fix it, maybe...

And what is very clear to me after trying to use these models: the larger the code-base, the worse the AI gets, to the point of not helping at all or even being destructive. The exception is dissecting small, isolatable pieces of independent code (i.e. keeping the context small for the AI).

Humans likely get slower with a larger code-base, but they (usually) don't arrive at a point where they can't progress any further.

[–] ignirtoq@feddit.online 134 points 6 days ago (7 children)

We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.

Except we are talking about that, and the tech bro response is "in 10 years we'll have AGI and it will do all these things all the time permanently." In their roadmap, there won't be a next generation of software developers, product managers, or mid-level leaders, because AGI will do all those things faster and better than humans. There will just be CEOs, the capital they control, and AI.

What's most absurd is that, if that were all true, that would lead to a crisis much larger than just a generational knowledge problem in a specific industry. It would cut regular workers entirely out of the economy, and regular workers form the foundation of the economy, so the entire economy would collapse.

"Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders."

[–] grue@lemmy.world 66 points 6 days ago

That's why they're all-in on authoritarianism.

[–] UnspecificGravity@piefed.social 31 points 6 days ago

Yep, and now you know why all the tech companies suddenly became VERY politically active. This future isn't compatible with democracy. Once these companies no longer provide employment their benefit to society becomes a big fat question mark.

[–] Nalivai@lemmy.world 32 points 5 days ago (1 children)

They never actually say what "product" they make; it's always "shipped product", like they're a fucking Amazon warehouse. I suspect it's because it's some trivial webpage that would take a student an afternoon to whip up, which they spent three days arguing with an autocomplete to shit out.

[–] phed@lemmy.ml 25 points 5 days ago (2 children)

I do a lot with AI, but it is not good enough to replace humans, not even close. It repeats the same mistakes after you tell it no; it doesn't remember things from 3 messages ago when it should. You have to keep re-explaining the goal to it. It's wholly incompetent. And yeah, when you have it do stuff you aren't familiar with or didn't create yourself, definitely. I have it write commentary, or I take the time right then to ask it what x or y does, then I add a comment.

[–] kahnclusions@lemmy.ca 16 points 5 days ago* (last edited 5 days ago) (5 children)

Even worse, the ones I’ve evaluated (like Claude) constantly fail to even compile because, for example, they mix usages of different SDK versions. When instructed to use version 3 of some package, it will add the right version as a dependency but then still code with missing or deprecated APIs from the previous version that are obviously unavailable.
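(A purely hypothetical sketch of the kind of mismatch described above; the package "acmesdk" and its APIs are invented for illustration, and the snippet is meant to fail, which is the point.)

    # requirements.txt, pinned correctly as instructed:
    #     acmesdk==3.0.0

    # Generated application code, still written against the old v2 API:
    from acmesdk import LegacyClient   # class removed in 3.0 -> ImportError at runtime

    client = LegacyClient(api_key="...")   # v2-style constructor
    items = client.fetch_items(page=1)     # v2 method; renamed in v3, so it never existed here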

More time (and money, and electricity) is wasted trying to prompt it toward correct code than it would take to simply write it yourself, and at the end of the day you have a smoking turd that no one even understands.

LLMs are a dead end.

[–] echodot@feddit.uk 10 points 5 days ago (3 children)

There's no point telling it not to do x, because as soon as you mention x it goes into its context window.

It has no filter. It's as if you had no choice in your actions and had to act on every thought that came into your head: if you were told not to do a thing, you would immediately start thinking about doing it.

[–] Agent641@lemmy.world 48 points 6 days ago (6 children)

I cannot understand and debug code written by AI. But I also cannot understand and debug code written by me.

Let's just call it even.

[–] pdxfed@lemmy.world 61 points 6 days ago (5 children)

Great article, brave and correct. Good luck getting through to the same leaders who blindly believe in a magical trend for this or next quarter's numbers; they don't care about things a year away, let alone 10.

I work in HR and was struck by the parallel with management jobs being gutted at major corps starting in the 80s and 90s during "downsizing", when they either never replaced those roles or offshored them. They had the Big 4 telling them it was the future of business. Know who is now providing consultation to them on why they have poor ops, poor processes, high turnover, etc.? Take $ on the way in, and on the way out. AI is just the next in a long line of smart people pretending they know your business while you abdicate knowing your business or your employees.

Hope leaders can be a bit braver and wiser this go 'round so we don't get to a cliff's edge in software.

[–] raspberriesareyummy@lemmy.world 66 points 6 days ago (27 children)

So there's actual developers who could tell you from the start that LLMs are useless for coding, and then there's this moron & similar people who first have to fuck up an ecosystem before believing the obvious. Thanks fuckhead for driving RAM prices through the ceiling... And for wasting energy and water.

[–] psycotica0@lemmy.ca 107 points 6 days ago (4 children)

I can at least kinda appreciate this guy's approach. If we assume that AI is a magic bullet, then it's not crazy to assume we, the existing programmers, would resist it just to save our own jobs. Or we'd complain because it doesn't do things our way, but we're the old way and this is the new way. So maybe we're just being whiny and can be ignored.

So he tested it to see for himself, and what he found was that he agreed with us, that it's not worth it.

Ignoring experts is annoying, but doing some of your own science and getting first-hand experience isn't always a bad idea.

[–] 5too@lemmy.world 52 points 6 days ago (1 children)

And not only did he see for himself, he wrote up and published his results.

[–] bassomitron@lemmy.world 41 points 6 days ago (4 children)

100% this. The guy was literally a consultant and a developer. It'd just be bad business for him to outright dismiss AI without having actual hands-on experience with said product. Clients want that type of experience and knowledge when paying a business to give them advice and develop a product for them.

[–] khepri@lemmy.world 25 points 6 days ago (2 children)

They are useful for doing the kind of boring boilerplate stuff that any good dev should have largely optimized and automated already. If it's 1) dead simple and 2) extremely common, then yeah, an LLM can code for you, but ask yourself why you don't already have a time-saving solution for those common tasks in place. As with anything LLM, it's decent at replicating how humans in general have responded to a given problem, if the problem is not too complex and not too rare, and not much else.

[–] lambdabeta@lemmy.ca 22 points 6 days ago

That's exactly what I so often find myself saying when people show off some neat thing that a code bot "wrote" for them in x minutes after only y minutes of "prompt engineering". I'll say, yeah, I could also do that in y minutes of (bash scripting / vim macroing / system architecting / whatever), but the difference is that afterwards I have a reusable solution that I understand, that is automated and robust, and that didn't consume a ton of resources. And as a bonus I got marginally better as a developer.
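(To make that concrete: a minimal sketch of the kind of small, reusable tool being described. The task here, flattening a folder of JSON records into one CSV, is invented purely for illustration; the point is only that the script is short, understood, and kept around instead of being re-prompted from scratch each time.)

    import csv
    import json
    import sys
    from pathlib import Path

    def json_dir_to_csv(src_dir: str, out_csv: str) -> None:
        """Collect every top-level key across *.json files in src_dir and write one CSV."""
        records = [json.loads(p.read_text()) for p in sorted(Path(src_dir).glob("*.json"))]
        fields = sorted({key for rec in records for key in rec})
        with open(out_csv, "w", newline="") as fh:
            writer = csv.DictWriter(fh, fieldnames=fields)  # missing keys become empty cells
            writer.writeheader()
            writer.writerows(records)

    if __name__ == "__main__":
        json_dir_to_csv(sys.argv[1], sys.argv[2])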

It's funny that if you stuck them in an RPG and gave them an ability to "kill any level 1-x enemy instantly, but gain no XP for it", they'd all see it as the trap it is, but they can't see that that's what AI so often is.

[–] vpol@feddit.uk 61 points 6 days ago (13 children)

The developers can’t debug code they didn’t write.

This is a bit of a stretch.

[–] Xyphius@lemmy.ca 47 points 6 days ago

agreed. 50% of my job is debugging code I didn't write.

[–] Unlearned9545@lemmy.world 52 points 6 days ago (4 children)

Fractional CTO: some small companies benefit from the senior experience of this kind of executive but don't have the money or the need to hire one full-time. So these people spend a fraction of their time serving as C-suite for each of several companies.

[–] DupaCycki@lemmy.world 9 points 4 days ago

Personally, I tried using LLMs for reading error logs and summarizing what's going on. I can say that even with somewhat complex errors, they were almost always right and very helpful. So basically the general consensus: use them as assistants within a narrow scope.

Though it should also be noted that I only did this at work. While it seems to work well, I think I'd still limit such use in personal projects, since I want to keep learning more, and private projects are generally much more enjoyable to work on.

Another interesting use case I can highlight is using a chatbot as documentation when the actual documentation is horrible. However, this only works within the same ecosystem, so for instance Copilot with MS software. Microsoft definitely trained Copilot on its own stuff and it's often considerably more helpful than the docs.

[–] Suffa@lemmy.wtf 32 points 6 days ago (45 children)

AI is really great for small apps. I've saved so many hours over weekends that would otherwise be spent coding some small thing I only need a few times, whereas now I can get an AI to spit it out for me.

But anything big and it's fucking stupid, it cannot track large projects at all.

[–] Evotech@lemmy.world 24 points 6 days ago (2 children)

Just ask the ai to make the change?

[–] theneverfox@pawb.social 21 points 5 days ago (16 children)

AI isn't good at changing code, or really even understanding it... It's good at writing it, ideally 50-250 lines at a time

[–] BarneyPiccolo@lemmy.today 11 points 5 days ago (8 children)

I don't know shit about anything, but it seems to me that the AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work. But I'd try it anyway, because what do you have to lose?

Unless it gets pissed off at being questioned, and destroys the world. I've seen more than a few movies about that.

[–] CarbonatedPastaSauce@lemmy.world 61 points 6 days ago* (last edited 6 days ago) (14 children)

Something any (real, trained, educated) developer who has even touched AI in their career could have told you. Without a 3-month study.

[–] AutistoMephisto@lemmy.world 74 points 6 days ago* (last edited 6 days ago) (4 children)

What's funny is this guy has 25 years of experience as a software developer. But three months was all it took to make that worthless. He also said it was harder than if he'd just written the code himself. Claude would make a mistake; he would correct it. Claude would make the same mistake again, having learned nothing, and he'd fix it again. Constant firefighting, he called it.

[–] rimu@piefed.social 45 points 6 days ago* (last edited 6 days ago) (5 children)

FYI, this article was written with an LLM.

[image: screenshot of an LLM-detector result for the article]

Don't believe a story just because it confirms your view!

[–] AmbiguousProps@lemmy.today 40 points 6 days ago (8 children)

I've heard that these tools aren't 100% accurate, but your last point is valid.

[–] LiveLM@lemmy.zip 32 points 6 days ago (4 children)

Aren't these LLM detectors super inaccurate?

[–] dsilverz@calckey.world 42 points 6 days ago (9 children)

@AutistoMephisto@lemmy.world @technology@lemmy.world

I've dealt with programming since I was 9 y.o., with my professional career in DevOps starting several years later, in 2013. I've dealt with lots of others' code, legacy code, very shitty code (especially code written by my "managers" who cosplayed as programmers), and tons of technical debt.

Even though I'm quite an LLM power-user (because I'm a person devoid of other humans in my daily existence), I never relied on LLMs to "create" my code: rather, what I did a lot was tinker with different LLMs to "analyze" code that I wrote myself, both to probe their limits (e.g. I wrote a lot of cryptic, code-golf one-liners and fed them to the LLMs to test their ability to "connect the dots" on whatever was happening behind the cryptic syntax) and to try to use them as a pair of external eyes beyond mine (due to that same ability to "connect the dots", by which I mean their ability, as fancy Markov chains, to relate tokens to other tokens with similar semantic proximity).

I did test them (especially Claude/Sonnet) for their "ability" to output code, not intending to use it because I'm better off writing my own thing, but you likely know the maxim: one can't criticize what they don't know. And I tried to know them so I could criticize them. To me, the code is... pretty readable. Definitely awful code, but readable nonetheless.

So, when the person says...

The developers can’t debug code they didn’t write.

...even though they argue they have more than 25 years of experience, it feels to me like they don't.

One thing is saying "developers find it pretty annoying to debug code they didn't write", a statement I'd totally agree with! It's awful to try to debug others' (human or otherwise) code, because you need to try to put yourself in their shoes without knowing what their shoes are like... But it's doable, especially by people who have dealt with programming logic since their childhood.

Saying "developers can't debug code they didn't write", to me, seems like a layperson who doesn't belong to the field of Computer Science, doesn't like programming, and/or only pursued a "software engineer" career purely because of money/capitalistic mindset. Either way, if a developer can't debug other's code, sorry to say, but they're not developers!

Don't take me wrong: I'm not intending to be prideful or pretending to be awesome; this is beyond my person; I'm nothing, I'm no one. I abandoned my career because I hate the way technology is growing more and more enshittified. Working as a programmer for capitalistic purposes ended up depleting the joy I used to have back when I coded on a daily basis. I'm not on the "job market" anymore, so what I'm saying is based on more than 10 years of former professional experience. And my experience says: a developer who won't at least try to understand the worst code out there can't call themselves a developer, full stop.

[–] deathbird@mander.xyz 27 points 6 days ago (1 children)

I think this kinda points to why AI is pretty decent for short videos, photos, and text: it produces outputs that people apply meaning to, and humans are meaning-making animals. A computer can't overlook or rationalize a coding error the same way.

[–] flamingo_pinyata@sopuli.xyz 42 points 6 days ago* (last edited 6 days ago) (1 children)

“fractional CTO”(no clue what that means, don’t ask me)

For those who were also wondering what this means: a consultant and advisor in a part-time role, paid to make decisions that would usually fall under the scope of a CTO, but for smaller companies that can't afford a full-time, experienced CTO.

[–] zerofk@lemmy.zip 29 points 6 days ago (3 children)

That sounds awful. You get someone who doesn’t really know the company or product, they take a bunch of decisions that fundamentally affect how you work, and then they’re gone.

… actually, that sounds exactly like any other company.

[–] HugeNerd@lemmy.ca 30 points 6 days ago (4 children)

Computers are too powerful and too cheap. Bring back COBOL, painfully expensive CPU time, and some sort of basic knowledge of what's actually going on.

Pain for everyone!

[–] lepinkainen@lemmy.world 13 points 5 days ago* (last edited 3 days ago) (1 children)

Same thing would happen if they were a non-coder project manager or designer for a team of actual human programmers.

Stuff done, shipped and working.

“But I can’t understand the code 😭”, yes. You were the project manager; why should you?

[–] JcbAzPx@lemmy.world 35 points 5 days ago (10 children)

I think the point is that someone should understand the code. In this case, no one does.
