this post was submitted on 10 Mar 2026
462 points (99.4% liked)


Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”

[–] pageflight@piefed.social 36 points 13 hours ago (2 children)

We may start to see people realize that "have the AI generate slop, humans will catch the mistakes" actually is different from "have humans generate robust code."

[–] daychilde@lemmy.world 22 points 11 hours ago (3 children)

Not only that, but writing code is so much easier than understanding code you didn't write. It seems like either you need to be able to trust the AI's code, or you're probably better off writing it yourself. Maybe there's some simple yet tedious stuff it can handle, but it has to be simple enough to understand and verify faster than you could write it. Or maybe run code through AI to check for bugs and investigate any bugs it finds…

I definitely have trusted AI to write miniature pointless little projects - like a little PHP page that loaded music from the current directory and showed a simple JS player in a webpage so I could share Christmas music with my family and friends. No database, no file uploading or anything. It worked decently, although not perfectly, and that's all it needed to do.
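That kind of throwaway page is small enough to sketch. The commenter's version was PHP; this is a rough Python equivalent, and the file extensions and markup are assumptions, not their actual code:

```python
# Rough sketch of a throwaway "share music from this directory" page:
# scan the current directory for audio files and emit a single HTML
# page with a basic <audio> player per track. Extensions are guesses.
import html
import pathlib

AUDIO_EXTENSIONS = {".mp3", ".ogg", ".m4a"}

def build_player_page(directory: str = ".") -> str:
    """Return an HTML page with one <audio> element per track found."""
    tracks = sorted(
        p.name for p in pathlib.Path(directory).iterdir()
        if p.suffix.lower() in AUDIO_EXTENSIONS
    )
    items = "\n".join(
        f'<li>{html.escape(t)}<br>'
        f'<audio controls src="{html.escape(t)}"></audio></li>'
        for t in tracks
    )
    return f"<!DOCTYPE html><html><body><ul>\n{items}\n</ul></body></html>"

if __name__ == "__main__":
    # Drop the output next to the audio files and serve the folder,
    # e.g. with `python -m http.server`.
    print(build_player_page())
```

No database, no uploads - the page is regenerated from whatever files happen to be in the directory, which is about the level of scope where trusting generated code is low-risk.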

[–] Hupf@feddit.org 5 points 6 hours ago

Yeah, initially writing the code never was the time sink.

[–] slaacaa@lemmy.world 8 points 7 hours ago

This is true not just of code, but of many types of complex output. Going through and fixing somebody's horrible Excel model is much worse than building a good one yourself. And if the quality is really bad, it's also just easier to do it yourself.

[–] MirrorGiraffe@piefed.social 4 points 7 hours ago

I've been writing a slightly larger project with a frontend, BFF, and backend, and I need to take it in small batches so that I can catch when it misunderstands something or outright does a piss job of implementing it. I've been focusing a lot on getting all the unit tests I need in place, which makes me feel a bunch better.

The bigger and more complex the project gets, the harder it is for the LLM to keep everything in context, which means I'll have to get better at chunking out smaller scoped implementations, or start writing code myself, I think.
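The unit-test safety net is concrete enough to illustrate. A hedged Python example of the kind of test that catches a misimplemented function (the function and its spec here are hypothetical, not from the commenter's project):

```python
# Hypothetical spec: remove duplicates while keeping first-seen order.
# Edge-case assertions like these are what catch an LLM quietly
# swapping in a set()-based version that loses the ordering.
def dedupe_preserving_order(items):
    """Remove duplicates from items, keeping the first occurrence of each."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def test_dedupe_preserving_order():
    assert dedupe_preserving_order([]) == []
    # Order of first occurrence must survive deduplication.
    assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert dedupe_preserving_order(["b", "a", "b"]) == ["b", "a"]

test_dedupe_preserving_order()
```

Tests pinned to behavior like this stay valid even when the generated implementation is regenerated or refactored in a later batch.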

All in all, I feel pretty safe with my project and pleased with the agent's work, but I need to increase testing further before bringing anything live.