this post was submitted on 02 Feb 2026
358 points (96.9% liked)

[–] WanderingThoughts@europe.pub 196 points 3 weeks ago* (last edited 3 weeks ago) (28 children)

Only until the AI investor money dries up and vibe coding gets very expensive, very quickly. Kinda like how Uber isn't way cheaper than a taxi anymore.

[–] percent@infosec.pub 6 points 3 weeks ago (18 children)

I wouldn't be surprised if that's only a temporary problem - if it becomes one at all. People are quickly discovering ways to use LLMs more effectively, and open source models are starting to become competitive with commercial models. If we can continue finding ways to get more out of smaller, open-source models, then maybe we'll be able to run them on consumer or prosumer-grade hardware.

GPUs and TPUs have also been improving their energy efficiency. There seems to be a big commercial focus on that too, as energy availability is quickly becoming a bottleneck.

[–] WanderingThoughts@europe.pub 20 points 3 weeks ago (9 children)

So far, there's a serious cognitive step needed to become productive that LLMs just can't take. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their context window. Debugging something vague doesn't work. Fact-checking isn't something they do well.

[–] VibeSurgeon@piefed.social 5 points 3 weeks ago (2 children)

So far, there's a serious cognitive step needed to become productive that LLMs just can't take. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their context window.

There's a remarkably effective solution for this, one that helps humans and models alike: write documentation.

It's actually kind of funny how the LLM wave has sparked a renaissance of high-quality documentation. Who would have thought?
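As a concrete sketch of what that kind of documentation looks like (the function, numbers, and retry policy here are all made up for illustration): prose that records the *why* and the invariants, not just the signature, is what keeps both a new teammate and an LLM from guessing.

```python
def backoff_seconds(attempt: int, base: float = 2.0, cap: float = 300.0) -> float:
    """Delay before retrying a failed webhook delivery.

    Why exponential: downstream outages usually last minutes, so hammering
    the endpoint every few seconds just extends the outage.
    Why the cap: ops wants a worst-case retry interval of 5 minutes so a
    stuck delivery is noticed within one on-call shift check-in.
    Invariant: `attempt` is 1-based; the first retry waits `base` seconds.
    """
    return min(cap, base ** attempt)
```

The docstring is the part an LLM can't invent on its own: the code says *what*, the comment says *why*, and that's the context a model (or a human) needs to change it safely.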

[–] WanderingThoughts@europe.pub 9 points 3 weeks ago (2 children)

High-quality documentation assumes there's someone with experience working on this. That's not the vibe coding they're selling.

[–] VibeSurgeon@piefed.social 2 points 3 weeks ago

Complete hands-off, no-review, no-technical-experience vibe coding is obviously snake oil, yeah.

This is a pretty large problem when it comes to learning about LLM-based tooling: lots of noise, very little signal.

[–] Zos_Kia@lemmynsfw.com 2 points 3 weeks ago

I'm not aware of what they're selling, but every vibe coder I know produces obsessive amounts of documentation. It's kind of baked into the tooling (if you use Claude Code, at least); it will just naturally produce a lot of documentation.

[–] kaljakoripallomaha@sopuli.xyz 2 points 3 weeks ago

Funnily enough, AI itself is a great tool for creating that high-quality documentation fairly efficiently, though obviously not autonomously.

Even complex systems can be documented to a level that makes it easy and much less laborious for the subject experts and architects to comb through for the final version.
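One hedged sketch of that human-in-the-loop workflow, using only the standard library: find the functions that lack docstrings, then build a prompt for whatever model you use, with the output explicitly destined for expert review rather than an auto-commit. (The prompt wording and function names are illustrative assumptions, not any particular tool's behavior.)

```python
import ast


def undocumented_functions(source: str) -> list[str]:
    """Return names of functions in `source` that have no docstring."""
    tree = ast.parse(source)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing


def doc_prompt(source: str) -> str:
    """Build an LLM prompt asking for draft docstrings.

    The drafts are a starting point for the subject experts to correct,
    not something to merge as-is.
    """
    names = undocumented_functions(source)
    return (
        "Draft docstrings for these functions; a maintainer will review "
        "and correct them before anything is committed: " + ", ".join(names)
    )
```

The model only ever sees a scoped request, and a human still owns the final version, which is the "not autonomously" part.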
