this post was submitted on 21 Apr 2026
134 points (97.9% liked)

Technology
[–] audaxdreik@pawb.social 32 points 3 hours ago (3 children)

Your point is well-taken, but this is also exactly why AI reliance is dangerous. Anyone who sees this should realize the precarity of relying on products that can just be locked away from you.

[–] Jrockwar@feddit.uk 1 point 29 minutes ago

Like Gmail? Google Drive? Slack?

I'm not defending AI, but I can come up with >10 products that would absolutely cripple the company I work at if the provider suddenly says "Soz, terms of service violation".

Vendor reliance is dangerous. That doesn't just apply to AI. If the company in OP's message had had both Claude and Gemini, they'd have been okay, so the problem isn't with AI specifically - the problem is reliance on services that are critical to workflows, with providers able to change their minds at a moment's notice.

In any case, leaving aside where the problem is, the idea that 60 employees can't use Natural Intelligence to do their jobs means there's something really wrong with that company...

[–] plyth@feddit.org 16 points 3 hours ago

Windows 11, OneDrive, Intel Management Engine, Google accounts, ...

[–] Shizzymcjizzles@lemmy.dbzer0.com 4 points 3 hours ago (1 children)

It's not that they can't be productive. Right now, at least, what AI does is amplify how much work you can do. One of my friends codes for a big company that uses state-of-the-art Claude models, and he says the system does 80-90% of the coding grunt work; his job is now more that of an editor, making sure everything is correctly annotated so that humans can understand what's happening in the code in the future. This means work that might have taken months he can complete in a week or two.

[–] RedstoneValley@sh.itjust.works 13 points 2 hours ago (2 children)

This approach to coding is exactly what creates the problem. They will find out the hard way whether they can stay productive when something breaks and the AI is unavailable for whatever reason. Does anyone know how to fix it? Is the documentation sufficient to understand what the AI did?

[–] BlameTheAntifa@lemmy.world 1 point 1 hour ago

This is how the Adeptus Mechanicus is born.

[–] Shizzymcjizzles@lemmy.dbzer0.com 0 points 2 hours ago (1 children)

My friend said early AI iterations were really opaque, and that even now, if you're having it design the core architecture, you're going to hit the problems you mentioned. But his job has basically changed to being focused mostly on being that architect. To use the metaphor of constructing a building: he used to have to do a lot of the manual labor too, not just be the architect. Now he just has to tell the AI system what to build AND how, but the majority of the actual "construction" work is done by the AI system.

[–] ramble81@lemmy.zip 5 points 1 hour ago (1 children)

To continue with the analogy, though: how many architects create designs that an engineer takes one look at and laughs at because they're structurally impossible? (Hint: a lot.) Knowing the deep parts of the code and how it works becomes even more valuable; otherwise you risk Chinese building practices (quick, looks good, falls apart quickly).

[–] benjirenji@slrpnk.net 0 points 1 hour ago

At least in my experience, these models are pretty good now at writing code based on best practices. If you ask for impractical things, they'll start doing ugly shortcuts or workarounds. A good eye catches these, and then you either rerun with a refined prompt, fix your own design, or just keep telling it how you want it fixed.

You still gotta know what good code looks like to write it, but the models can help a lot.