this post was submitted on 14 May 2026
389 points (96.0% liked)

Technology

top 50 comments
[–] Kaligalis@lemmy.world 9 points 1 hour ago (1 children)

Nah, AI isn't that good. When you don't properly review every single line twice, you get the most absurd bullshit you've ever seen.
I use Claude Code Opus daily btw.

[–] Nalivai@lemmy.world 2 points 14 minutes ago

That's the funnest part. You lose your ability to code, and you do it by using a thing that isn't even that good, so you don't get anything out of it. Isn't that great?

[–] NocturnalMorning@lemmy.world 7 points 3 hours ago

I weep for the environment and our future water and electricity availability.

[–] BenevolentOne@infosec.pub 7 points 7 hours ago

Being able to tell a middle manager that if these tools are really so great, he can just open the PR himself, is pretty awesome though.

[–] normalentrance@lemmy.zip 29 points 12 hours ago* (last edited 8 hours ago) (1 children)

It feels like relying on GPS while driving around. If you know the roads well and just want some help with live traffic or somewhere you haven't been before, it's a decent tool.

If you rely on it because you don't want to think and just want to press the easy button, you're going to have a bad time sooner or later.

Back to software: I think there are a lot of people introducing concepts they don't understand or can't maintain (either from poor-quality slop or because it's just too advanced for their current level of understanding). You can only do a few turns like this before you're stuck burning tokens in a loop without moving forward in any meaningful way.

I try to avoid taking the easy route myself unless I've burnt too much time stuck on some small detail. Ultimately I feel it is super important to understand what you are delivering. Whether it is writing it yourself, copying a stack overflow post, or using an LLM. Once you commit and push to prod you've got to deal with that crap.

[–] themaninblack@lemmy.world 1 points 1 hour ago

Agree completely but I wanted to add: you can also get into an incomprehensible mess without vibing. Just follow the serverless flask tutorial, start writing raw SQL, and away you go!

I asked Claude today about why a coworker was getting errors and it almost exploded.

[–] aesthelete@lemmy.world 19 points 13 hours ago

Hot take: they had no ability to code in the first place.

[–] dejected_warp_core@lemmy.world 32 points 17 hours ago (2 children)

(X) Doubt

As a Sr. Engineer, I completely get that my situation may be wildly different from what's cited in the article.

Right now, I'm using AI "in the loop" rather than "as the loop". That's a big difference. And I'm getting my ass kicked routinely on review for dumb-ass things that I'm letting slide from AI-generated output. And rightly so. Plus, models routinely lead me down sub-optimal blind alleys while dreaming up really stupid ways to fix problems. The level of (re)prompting I have to provide to get decent-quality results converges on what you'd expect from a post-grad with encyclopedic knowledge of software engineering as it exists online, but zero real-world experience. It's both impressive and dangerous as a replacement for software engineering.

In the mode I describe above, I'm not losing the ability to do anything. I can see how one could surrender some coding chops or familiarity with a whole language or stack, in favor of automation. But all you have to do is not do that.

I will say that as a rapid-prototyping technology, It's nothing short of miraculous. I've watched junior engineers knock together medium-weight applications, complete with browser UI/UX and decent workflow, in less than a week. This is great for showing value or putting something semi-functional in front of management and/or customers. But pivoting those prototypes into something maintainable is an utter nightmare. Depending on how beholden to AI and forever prompt-looping with "skills" and MCPs you want to be, I suppose it's possible to just keep mashing the AI button. But at some point, you're going to need to get inside there to fix security problems or bugs that elude this workflow. What then?

[–] Nalivai@lemmy.world 1 points 7 minutes ago

And I’m getting my ass kicked routinely on review for dumb-ass things that I’m letting slide from AI generated output.

Now imagine if you aren't that experienced and the reviewers aren't that thorough, or, and this is the most depressing part, the review process doesn't exist. You get people, even senior engineers, who push that sub-optimal, barely working code, and because their project isn't that complicated, it somehow works, so they continue with it. After some iterations they end up with code that nobody wrote, nobody knows how to maintain, and nobody reads. But because a lot of modern frameworks are built so that a monkey sitting on a keyboard can make them barely work, a lot of these projects haven't collapsed in on themselves yet.
And that's how you get a generation of programmers who lost the ability to program.

[–] tinfoilhat@lemmy.ml 15 points 16 hours ago (1 children)

I joined a project that was forced to use some vibe coded solution that an intern cooked up -- marketed as "solution for data pipelining".

There are no tests, every semantic query recalculates its embeddings every time, and the whole thing is held together with so much bubble gum and "glue code" that nobody feels confident about any of the data we're showing our customer.

It's great for rapid prototyping, and then straight to the trash.

[–] northface@lemmy.ml 9 points 14 hours ago* (last edited 14 hours ago)

Thing is, as we all know, prototypes rarely make it to the trash bin if managers and product owners have a stake in the project. Which becomes an even bigger problem now that so few humans are involved in producing said prototypes.

I had a meeting with a customer who proudly proclaimed they do "full-on agentic coding" at their startup, and one of their developers mentioned their entire codebase had been rewritten three times in the week before the meeting took place. I do not have high hopes for their project ever being refactored by humans involved in anything other than light UAT before customer demo time.

[–] vogi@piefed.social 40 points 20 hours ago (2 children)

It's a silver lining of AI that you can easily tell who's a big baby idiot and who's actually worth engaging with.

[–] very_well_lost@lemmy.world 39 points 18 hours ago (6 children)

Preach.

The AI "revolution" is the thing that finally killed my imposter syndrome as a software engineer. Not because I can write better code than AI (that's a very low bar), but from listening to all these breathless idiots talk about how they're "10x-ing my productivity!" or how "AI has replaced search for me!" or how "In 6 months no one will have to manually write code anymore!"

[–] Zagorath@quokk.au 10 points 9 hours ago (1 children)

In 6 months no one will have to manually write code anymore

For the last 18 months

[–] Cypher@aussie.zone 6 points 7 hours ago

Same timeline as Tesla FSD

[–] schema@lemmy.world 11 points 17 hours ago* (last edited 17 hours ago)

Similar for me. What I find ironic is that AI has already run into a brick wall. Its inherent statelessness by design means that AI is unlikely to be suited for anything more than isolated, well-defined tasks in the near future. Still usable as a tool, but without someone who is actually experienced, it will result in disaster.

And even in smaller tasks it can fuck up, especially if the person prompting it is incapable of writing the code themselves, since they don't know how to properly design it and don't spot the issues. Like everything with AI, it looks impressive at first glance until you look at it for more than 10 seconds and spot the metaphorical sixth finger.

What we see currently with AI getting "better" at coding is more or less duct tape to make it work. Basically, they create agents to bolt on the state: more layers between user and model, iterative processes to make the answers better, and "memory", which in essence is just an ever-growing prompt managed by the agent. But in the end, this won't fix the inherent problem, so it will only do so much, and it's already hitting another ceiling: it introduces state decay. With the agent method it's not really possible to "take away" memory, so if you gave it multiple versions of the same code (as you would if you work with AI), the AI never really forgets the old code. It can suppress it through agent instructions (more duct tape), but the more there is, the more it bleeds through, which can make the AI reintroduce old code or base assumptions on outdated things.
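The "ever-growing prompt" point can be sketched in a few lines. This is a toy model, not any real framework's API; all names here are made up for illustration:

```python
# A toy sketch (hypothetical, not a real agent framework) of agent "memory"
# as an ever-growing prompt: turns are only ever appended, never dropped,
# so stale code versions stay in context and can bleed into later answers.

class AgentMemory:
    def __init__(self, system_prompt: str):
        self.turns: list[str] = [system_prompt]

    def add(self, role: str, text: str) -> None:
        # Nothing is ever forgotten; it is only appended.
        self.turns.append(f"{role}: {text}")

    def build_prompt(self) -> str:
        # The full history, stale code included, is resent on every call.
        return "\n".join(self.turns)

mem = AgentMemory("You are a coding assistant.")
mem.add("user", "def f(x): return x + 1  # version 1")
mem.add("user", "def f(x): return x * 2  # version 2, replaces v1")

prompt = mem.build_prompt()
# Both versions are still in context, so the model can "reintroduce" v1.
assert "x + 1" in prompt and "x * 2" in prompt
```

Real agents layer summarization and instructions on top of this, but the underlying mechanism is still an append-mostly context, which is where the decay described above comes from.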

There is no fix without changing the inherent way models work, which would introduce complexity beyond what is currently feasible in computing (and current AI is already gobbling up all computing resources as is).

load more comments (4 replies)
[–] TotalCourage007@lemmy.world 4 points 14 hours ago

Honestly yeah, it's like wearing a huge red AI flag. Can't imagine being stupid enough to fall in love with a not-secure CHATBOT.

[–] collapse_already@lemmy.ml 44 points 21 hours ago (8 children)

We have been interviewing for entry-level positions, and the new grads know less than ever before. I don't really care what they know; I am looking for evidence that they can think. But I usually ease them into thinking scenarios by asking easy foundational questions, like how many bits are in a byte. You would think I was asking them to explain the Schrödinger wave equation... One candidate was wavering between 13 and 17...

[–] Kaligalis@lemmy.world 1 points 1 hour ago

It's called entry-level for a reason. Back in my day, you could start such a position without any formal education as long as you were willing to acquire the required skills and knowledge without needing a nanny. We had to go to the library or actually buy the books for knowledge. Now they can just use the internet.
The actual requirement for doing the job never changed. And it's not knowledge.

[–] andallthat@lemmy.world 24 points 18 hours ago (3 children)
[–] HaraldvonBlauzahn@feddit.org 3 points 14 hours ago

It all depends on whether the CPU's kibibyte flag is set!

[–] collapse_already@lemmy.ml 8 points 18 hours ago (2 children)

Two nibbles is an acceptable answer.
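For anyone who did have to look it up, the arithmetic checks out in a couple of lines of Python:

```python
# Sanity-checking the interview answer: a byte is 8 bits, i.e. two 4-bit
# nibbles (on any platform you're likely to be interviewed about).
BITS_PER_NIBBLE = 4
NIBBLES_PER_BYTE = 2
bits_per_byte = BITS_PER_NIBBLE * NIBBLES_PER_BYTE
assert bits_per_byte == 8

# Splitting a byte into its high and low nibbles with shifts and masks:
value = 0xA7
high, low = value >> 4, value & 0x0F
assert (high, low) == (0xA, 0x7)
```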

[–] rob_t_firefly@lemmy.world 7 points 17 hours ago (1 children)

I have two nibbles. My cat had six.

load more comments (1 replies)
[–] dejected_warp_core@lemmy.world 5 points 17 hours ago

You say that, but that is (or at least was) a real problem: https://en.wikipedia.org/wiki/Word_(computer_architecture)

[–] MajorasTerribleFate@lemmy.zip 15 points 19 hours ago (1 children)

Computers famously love prime numbers greater than 2 as a foundation for structure and logic.

load more comments (1 replies)
[–] Feathercrown@lemmy.world 7 points 18 hours ago (1 children)

Knowing this is my competition makes me feel much better about myself

load more comments (1 replies)
[–] foodandart@lemmy.zip 5 points 17 hours ago (8 children)

..easy foundational questions like how many bits in a byte..

GTFO.

I mean, yeah.. perhaps it's to be expected. https://www.theverge.com/22684730/students-file-folder-directory-structure-education-gen-z - if this is true, it's because the methods of using computers and various devices have been infantilized and made too easy.

Yeah.. let's obscure the inner working of computing and make the process as opaque to the user as possible. It'll be fine.. no negative consequences at all.

Colleges do not matriculate anymore (that's in the British sense of the word, where one has to show actual knowledge in the degree field one is seeking before enrolling - and TBH, they haven't done so for a very long time), so this is what we get.

Higher ed in the US is just about da moneys..

[–] mnemonicmonkeys@sh.itjust.works 1 points 9 hours ago (1 children)

I can't wrap my head around how the people in the article get anything done on the computer.

Sure, I could have File Explorer search for a file in theory, but it's ridiculously slow and often fails to find the files I actually want. It's way faster to just have things organized on a day-to-day basis

[–] foodandart@lemmy.zip 1 points 7 hours ago

Oddly enough I've always sorted current working files by date.

Then when backup time comes I'll look at the last dated file in the archive, then go to that date in my current work folder and everything newer goes into the backup. Once it's in the main backup folder, I then sort the files into year and project.
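That workflow can be sketched roughly as follows. The paths, filenames, and function name are made up for illustration; the idea is just "copy everything newer than the newest file already archived":

```python
# A sketch (hypothetical names) of the date-based backup flow described
# above: copy every file in the working folder that is newer than the
# newest file already in the archive.
import os
import shutil
import tempfile
from pathlib import Path

def incremental_backup(work_dir: Path, archive_dir: Path) -> list[str]:
    """Copy files from work_dir that are newer than anything in archive_dir."""
    cutoff = max(
        (p.stat().st_mtime for p in archive_dir.rglob("*") if p.is_file()),
        default=0.0,  # empty archive: everything counts as new
    )
    copied = []
    for src in work_dir.iterdir():
        if src.is_file() and src.stat().st_mtime > cutoff:
            shutil.copy2(src, archive_dir / src.name)  # copy2 keeps timestamps
            copied.append(src.name)
    return sorted(copied)

# Tiny demonstration in temporary directories.
work, arch = Path(tempfile.mkdtemp()), Path(tempfile.mkdtemp())
(arch / "archived.txt").write_text("old")
os.utime(arch / "archived.txt", (1_000, 1_000))  # backed up long ago
(work / "fresh.txt").write_text("new")           # modified just now
(work / "stale.txt").write_text("old")
os.utime(work / "stale.txt", (500, 500))         # older than the archive

assert incremental_backup(work, arch) == ["fresh.txt"]
```

Sorting into year/project folders afterwards would be a second pass over the archive; this only covers the "what's newer than the last backup" step.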

Still, on my system (a MacPro from the Olden Times when Steve Jobs was still kicking) I have 4 drives, so it's crucial to know what is where.

load more comments (7 replies)
load more comments (3 replies)
[–] ImgurRefugee114@reddthat.com 139 points 1 day ago (2 children)

Lol! Losers. I've been programming for almost two decades and extensive use of AI hasn't compromised my skills AT ALL! These slop machines can't hope to compete with the quantity and magnitude of subtle bugs I write. My code was terrible long before I made bots have mental breakdowns trying to work with it.

[–] Goodeye8@piefed.social 21 points 21 hours ago (1 children)

AI also gives you the benefits of a middle manager. If everything works as intended you take the credit but if something breaks that's not your fault, AI made the mistake. If they try to put the blame on you just say you have 6 agents working on 6 different domains all cross-reviewing their commits and you can't be expected to review every single line of code yourself. Time to play corporate like a damned fiddle!

[–] Valmond@lemmy.dbzer0.com 1 points 1 hour ago* (last edited 1 hour ago)

It really is like having your own personal trainee.

If it only could make coffee.

load more comments (1 replies)
[–] jj4211@lemmy.world 35 points 22 hours ago (2 children)

I just don't get it. Even the purportedly best models screw things up so much that I can't just leave them to the job without reviewing and fixing the mess they made... And I'm also drowning in pull requests that turn out to be broken despite proudly bearing "co-authored by Claude"... They manage to pass their test case, but they're so messed up that they either explicitly cause problems or include a bunch of random, unrelated changes.

I feel like I'm being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.

Closest I got was a chore with a perfect criterion: "address all warnings from the build". Then let it go and iterate. After 50 rounds, each one ending with "ok, should be done now, everything is taken care of, just need to do a final check", it had burned through most of my monthly quota before succeeding. Then I looked at the proposed change... and it had just added directives to the top of every file telling the tools to disable all the warnings... This was the best Opus 4.6 could do...
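A reviewer-side countermeasure for that failure mode might look something like this sketch: scan a diff for added lines that merely silence a warning rather than fix it. The sample diff and the helper name are illustrative; the directives themselves (`# noqa`, `# type: ignore`, `#pragma warning(disable ...)`) are real suppression mechanisms:

```python
import re

# Flag added lines that suppress warnings instead of addressing them.
SUPPRESSION_PATTERNS = [
    r"#\s*noqa\b",                        # flake8/ruff blanket ignore
    r"#\s*type:\s*ignore\b",              # mypy ignore
    r"#pragma\s+warning\s*\(\s*disable",  # MSVC C/C++ warning disable
]

def added_suppressions(diff_lines):
    """Return added diff lines that silence a warning rather than fix it."""
    hits = []
    for line in diff_lines:
        if line.startswith("+") and any(
            re.search(p, line) for p in SUPPRESSION_PATTERNS
        ):
            hits.append(line)
    return hits

diff = [
    "+import os  # noqa",
    "+result = compute()",
    "+#pragma warning(disable: 4996)",
]
assert len(added_suppressions(diff)) == 2
```

It's a crude heuristic (suppressions are sometimes legitimate), but it would have caught the "fix" described above on the first round instead of the fiftieth.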

Now sure, I can have it tear through some short boilerplate, and it notices a pattern I'm following and lets me tab through it. But I haven't seen this "vibe" approach working at all...

[–] kescusay@lemmy.world 26 points 22 hours ago (7 children)

I feel like I'm being gaslit as I keep reading that there are developers that feel they successfully offloaded the task of coding.

That's because you are being gaslit.

The people making those claims are either a) not developers in the first place, with no awareness of just how shit the "products" they're pushing are, b) paid astroturfers trying to prop up AI, or c) former actual developers who've become addicted to the speed that's possible with AI who are downplaying how crappy their own code quality has become because they have no familiarity with their codebase anymore and have forgotten how to do so much as a for loop.

All these people claiming 10x or 100x gains, and everything they're making is garbage no one should or would touch with a ten-foot pole.

load more comments (7 replies)
load more comments (1 replies)
[–] circuitfarmer@lemmy.world 26 points 22 hours ago (1 children)

When you start relying on something else, it's quite natural and expected to no longer be good at the thing now being done for you.

But in this context, it's a net negative. While you can certainly write more code while using the tool, you're almost always writing worse code. And you still get the atrophy, so the result overall: now you're not good at the thing, and neither is the tool you're using.

And remember, AI models need constant retraining as systems and approaches are updated, languages change, etc. Where is that training data going to come from? From the people now worse at coding than they were before.

[–] foodandart@lemmy.zip 9 points 17 hours ago

The atrophy scares the hell out of me.

Years ago, I would often have long conversations with my dad about how manual skill sets in the trades (my training) and in engineering in the field (which was his bailiwick) were being lost to the pivot towards college degrees for every student, including the ones that preferred to work with their hands.

Three decades on, I witnessed the full turn when construction firms had to - and still have to - mass import workers from Central and South America (legally and illegally) just to get things built. NGL, there are some scary good builders that have been brought in, and those people work insanely hard.

Yes, it's slowly pivoting back as more boys and men opt for the trades and become journeymen and apprentices, but to get the skill sets needed to reach a master's level, you're looking at at least 20k hours. We're still a decade out - at best - before we get enough kids through the system and into steady work that they can step up, strike out on their own, and make crazy bank. Skilled craftsmen and women can easily earn 100 bucks an hour in the right markets, and the rich folks will be glad to pay.

Goddamn, it's gonna be scary until that sorts itself out in another decade or so (and that does pin itself on the hope the financially feckless idiot in the White House doesn't torpedo the economy..)

[–] thericofactor@sh.itjust.works 67 points 1 day ago (7 children)

I notice I'm getting lazier. Even adding a .gitignore file, I ask Claude now. It takes longer than typing it myself and probably costs more. But I don't have to do anything but wait a few seconds.
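For context, the file in question is usually only a handful of lines; a minimal sketch for a typical Python project (entries illustrative, adjust to your stack) might be:

```
# .gitignore - common Python artifacts
__pycache__/
*.pyc
.venv/
dist/
build/
.env
```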

load more comments (7 replies)
[–] raspberriesareyummy@lemmy.world 1 points 12 hours ago

Muhaha. Those morons were never software engineers in the first place. A software engineer would neither benefit from an LLM any more than from a deterministic assistant (templates), nor would they be stupid enough to label a stochastic slop generator as "AI".

(Yes, this is a "no true scotsman" kind of argument, yet I stand by it. People who call this bullshit AI, as well as people who claim it is better than coding stuff yourself, should not be let anywhere near any kind of software more relevant than a mobile game, and probably not even those)

load more comments
view more: next ›