this post was submitted on 18 Feb 2026
729 points (99.3% liked)
Technology
I've been writing a lot of code with AI: for every half hour the AI needs to write the code, I need a full week to revise it into good code. If you don't do that hard work, the AI is going to overwhelm the reviewers with garbage.
With proper prompting you can let it do a lot of annoying work, like refactors, reasonably well. With a very strict linter you can avoid the most stupid mistakes and shortcuts. If I work on a more complex PR, it can take me a couple of days to plan it correctly, and the actual implementation of the correct plan takes no time at all.
I think it works for small bug fixes on a maintainable codebase, and it works for writing plans and then implementing them. But I honestly don't know if it's any faster than just writing the code myself; it's just different.
Hmm, not in my experience. If you don't care about code quality you can quickly prototype slop and see if it generally works, but maintainable code? I always fall back to manual coding, and my code is often around 30% of the length of what AI generates, more readable, more efficient, etc.
If you constrain it a lot, it might work reasonably, but then I often think that instead of writing a multi-paragraph prompt, just writing the code might have been more effective (long-term, that is).
That's why I don't think AI really helps that much: you still have to think and understand the problem (at least if you value your product/code), and that's what takes the most time, not the typing.
Yeah, it makes you dumber, because you're tempted not to think through the problem, and reviewing code is less effective than writing it for understanding what's going on (in my experience, although I think being able to review quickly and effectively is an especially valuable skill nowadays).
Eh, I don't disagree with you. It's just the reality for me that I'm now expected to work on much more stuff at the same time because of AI. It's exhausting, but in my job I have no choice, so I try to arrange myself with the situation.
I've certainly lost a lot of understanding of the details of the codebase, but I do read every line of code these LLMs spit out and manually review all PRs for obvious bullshit. I also think code quality got worse despite me doing everything I can to keep it decent.
So, what you're saying is, you're not writing code.
I'm writing code where it's faster than explaining to the AI how to do it, which is often. I'm spending this month seeing what AI can do; it ranges from saving me a lot of tedious effort to making a large mess to clean up.
I've had better success when using AI agents in repeated but small and narrow doses.
It's been kind of helpful in brainstorming interfaces (and I always have to append "... in the most maintainable way possible" to the end of every statement).
It's been really helpful in writing unit tests (I follow Test Driven Development), and sometimes it picks up edge cases I would have overlooked.
I wouldn't blindly trust any of it, as all too often it's happy to just disregard any sort of error handling (unless explicitly mentioned, after the fact). It's basically like being paired with an over-eager, under-qualified junior developer.
But, yeah, you're gonna have a bad time if you prompt it to "write me a Unix operating system in web assembly".
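To make the unit-test point above concrete, here's a minimal TDD-style sketch. The `parse_version()` helper and all the test cases are purely illustrative (nothing here comes from the thread); the last two checks are the kind of edge cases an assistant sometimes flags before a human would.

```python
# Hypothetical example: tests written first (TDD-style) for a tiny
# version-string parser. The whitespace and empty-input cases are the
# easy-to-overlook edges a test-generating assistant might surface.

def parse_version(s: str) -> tuple:
    """Parse a dotted version string like '1.2.3' into a tuple of ints."""
    if not s or not s.strip():
        raise ValueError("empty version string")
    return tuple(int(part) for part in s.strip().split("."))


def test_parse_version():
    # Happy path
    assert parse_version("1.2.3") == (1, 2, 3)
    # Edge case: surrounding whitespace should be tolerated
    assert parse_version(" 1.0 ") == (1, 0)
    # Edge case: empty input must fail loudly, not silently return ()
    try:
        parse_version("")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty input")


test_parse_version()
```

Writing the assertions first forces you to decide what "correct" means before any generated code shows up, which is exactly where the assistant's error-handling blind spot bites otherwise.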
I totally get it. I've been critical of using AI for code at work and have pleaded for us to stop using it (management is forcing it; less experienced folks want it). So one of the proponents challenged me to use a very specific tool, supposedly one of the best AI slop generators out there.
So I spent a lot of time thoroughly writing specs for a task in a way the tool should be able to handle. It failed miserably and didn't even produce a usable result. So I asked the guy who challenged me to help me refine the specs, tweak the tool, make everything perfect. The thing still failed hard. I was told it was because I was forcing the tool into decisions it couldn't handle, and that I should give it more freedom. So we did that; it made up the rules itself and subsequently didn't follow those rules. Another failure.

So we split the task into smaller pieces, and it still couldn't handle it. So we split it up even further, to a ridiculous level, at which point it would definitely be faster to just write the code manually. It's also no longer realistic, as we pretty much have the end result all worked out and are just coaching the tool to get there. And even then it makes mistakes, has to be corrected all the time, doesn't follow the specs, and doesn't follow code guidelines or best practices.

Another really annoying thing is that it keeps changing code it shouldn't touch; since we've made the steps so small, it keeps messing up work it did previously. And the comments it creates are crazy: either just about every line has a comment attached and every function gets a whole story, or there are zero comments. As soon as you ask it to limit the comments to where they are useful, it just deletes all the comments, even the ones it put in before or that we put in manually.
I'm ready to give up on the thing and have the use of AI tools for coding limited, if not stopped entirely. But I know how that discussion will go: "Oh, you used tool A? No, you should be using tool B, it's much better. Maybe the tools aren't there now, but they're getting better all the time, so we'll benefit any day now."
When I hear even experienced devs be enthusiastic about AI tools, I really feel like I'm going crazy. They suck a lot and aren't all that useful (on top of the thousand other issues with AI), so why do people like them? And why have we bet the entire economy on them?
I've started using it as an interactive rubber duck. When I've got a problem, I explain it to the AI, after which it gives a response that I ignore, because after explaining it I've already figured it out myself.
AI has been very helpful for finding my way around Azure deploy problems, though, and other complex configuration issues (I was missing a certificate to use az login). I fixed problems I probably couldn't have solved without it. But I've lost a lot of time trying to get it to solve complex coding problems. It makes a heroic effort trying to combine aspects of known patterns and algorithms into something resembling a solution, and it can "reason" about how it should work, but it doesn't really understand what it's doing.
I use colleagues or people on Discord for this. I get the solution immediately after asking, AND everyone who saw or heard me ask now thinks I'm an idiot. It's my neurodivergent kink!
Which is strange, because Azure's documentation is complete dogshit.
We were trying to solve something at work (sending SMTP messages using OAuth authentication, not rocket science), and Azure's own chatbot kept making up non-existent server commands, REST endpoints that don't exist, and phantom permissions that needed to be added to the account.
Seriously: fuck Azure, fuck Copilot. It made a task that should have taken hours take weeks.
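For what it's worth, the real mechanism the chatbot kept hallucinating around isn't exotic: SMTP OAuth is the SASL XOAUTH2 mechanism, which is just a base64-encoded string handed to AUTH. A minimal sketch follows; the host, port, and `send_with_oauth` wrapper are assumptions for illustration, and acquiring the access token (plus enabling SMTP AUTH on the mailbox) is deployment-specific and left out.

```python
import base64
import smtplib


def xoauth2_string(user: str, access_token: str) -> str:
    """Build the SASL XOAUTH2 initial client response used by SMTP AUTH:
    'user=<user>\\x01auth=Bearer <token>\\x01\\x01', base64-encoded."""
    raw = f"user={user}\x01auth=Bearer {access_token}\x01\x01"
    return base64.b64encode(raw.encode("ascii")).decode("ascii")


def send_with_oauth(host: str, user: str, token: str, msg) -> None:
    # Sketch only: assumes STARTTLS on port 587 and that SMTP AUTH is
    # actually enabled for the mailbox (easy to miss on Microsoft 365).
    with smtplib.SMTP(host, 587) as smtp:
        smtp.starttls()
        smtp.ehlo()  # re-identify after the TLS upgrade
        smtp.docmd("AUTH", "XOAUTH2 " + xoauth2_string(user, token))
        smtp.send_message(msg)
```

No made-up server commands required: the only non-standard part is the token string itself, which is why a hallucinated permission or endpoint sends you down a weeks-long rabbit hole.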
Nah bro U just prompting wrong trust me bro just one more tool.
/S
Let's get tool B to fix the code from tool A bro, it'll work bro trust me! /s
You'll need more than a month to figure out what it's good for and what it's not, and to learn how to effectively use it as a tool.
If I can properly state a problem, outline the approach I want, and break it down into testable stages, it can be an accelerator. If not, the output is often slop.
The most valuable time is spent up front on design and planning, and on learning how to express it. Next up is the ability to quickly make judgement calls and to backtrack without getting bogged down.
Maybe it's work and it's required 🤷‍♂️
That is the question I'm trying to answer. Until I know what AI can do, I can't have a valid opinion.
We know what “AI” can do.
The chance that it saves you five minutes of coding, in exchange for several hours of debugging (either by you or by whoever is burdened with your horrible slop), is not worth being an active contributor to all that monstrous harm to humanity and the world.
Half of the world's workplaces are forcing employees to use AI and show proof it was used.
So... leave those already doomed workplaces?!?
Yeah, you're right, they should all starve to death instead.
If your country lets unemployed people starve, it's ripe for revolution.
Hard to revolt when you’re starving. So people work, and they eat.
Starvation is THE motivator for giving the ruling class the French treatment. Look up what started most revolutions: it's starvation.
Yeah, but that was when the only options were starving or revolting. Today there's a third option: keep working.
Oh, then it's simply treason against your class...
Sounds like that couple that kept rescuing cats that were promptly eaten by coyotes.
Not sure why you're getting downvotes; AI is a good tool when used properly.
It's not. It's an abomination that should be wiped off the face of this earth, and its shills should be shunned.