this post was submitted on 11 Mar 2026
I followed AI developments in the beginning, but it felt like really effective use cases were always just out of reach.
The last time I used AI was before "agentic" AI was a thing (it was just around the corner).
Can anyone clue me in: is AI still making forward progress? I feel like a massive change or breakthrough would be HUGE news, but I also imagine slow incremental progress could eventually build up to a breakthrough.
I understand that it is still way too prone to errors and hallucinations to be trusted with serious tasks, but have there been any noteworthy improvements?
LLM-based coding agents have become useful to the point that people are building large software projects without humans writing or reviewing code directly. The naive approach to that will result in disaster if used in a production environment, but practices to improve reliability are evolving.
Popular opinion seems to be that Claude Opus 4.5 was the tipping point for this.
I like AI. I think it's great for quick references or a starting point, but I've already seen projects scrapped and restarted because a bunch of junior devs used AI with no understanding and management gave up on them after a year where the number of significant bugs never decreased. Take one down, feed it to the AI, two more bugs in the tracker.
It's changing rapidly, but handing automation tools to people who don't understand the underlying concepts just gets you a bigger mess. There are no well-established best practices for how to use it safely and effectively because it's too new and changing too fast.
It will settle down eventually, but a lot of people will do a lot of dumb things first.
Best practices are something I've rarely seen applied at corporations. If I'm lucky, I'm only trying to explain to management why we need source control; if I'm unlucky, the tech team needs to be educated and forced to use it.
I really can't see AI assistance going smoothly when it lets people think even less about what they're doing.
There are definitely companies that can take advantage of it and use it properly, but I think they are going to be a minority.
Thanks for the update, glad it's improving. I hope we reach a tipping point on the hardware side too, because the way things are going sucks for the average person.