this post was submitted on 30 Oct 2025

Technology

[–] Jhex@lemmy.world 8 points 2 days ago (2 children)

And even though NVIDIA is better placed, as they do produce something, the something in play has little value outside the AI bubble.

NVIDIA could be left holding the bag on massively increased capacity to produce something that nobody wants anymore (or at least nowhere near the levels we have now), so they are still very much exposed.

[–] enumerator4829@sh.itjust.works 7 points 2 days ago (2 children)

I want cheap GPUs at home please!

[–] Jhex@lemmy.world 2 points 2 days ago (1 children)

Me too, but the GPUs used for AI are not the same as what we would use at home.

Maybe the factories can produce both kinds and they would get cheaper, but that is speculation at this point.

It’s literally the same chip designers, production facilities and software. Every product using <5nm silicon competes for the same manufacturing capacity (fab time at TSMC in Taiwan), and all Nvidia GPUs share lots of commonalities in their software stack.

The silicon fab producing the latest Blackwell AI chips is the same fab producing the latest consumer silicon for AMD, Apple, Intel and Nvidia alike. (Let’s ignore the fabs making memory for now.) Internally at Nvidia, I assume lots and lots of resources have been shuffled over from the consumer-oriented parts of the company to the B2B-oriented parts, severely reducing consumer focus.

And then there’s intentional price inflation and market segmentation. Cheap consumer GPUs that are a bit too efficient at LLM inference would compete with Nvidia’s DC offerings. The amount of consumer-grade silicon used for AI inference is already staggering, and Nvidia is actively holding back that market segment.

[–] Dojan@pawb.social 1 points 2 days ago

I'd love this, but not Nvidia.

[–] kadu@scribe.disroot.org 2 points 2 days ago* (last edited 2 days ago) (2 children)

but the something in play has little value out of the AI bubble.

You're delusional if you think GPUs are of little value. LLMs and fancy image generation are a bubble.

The gargantuan computational cost of running the machine learning processing that is now required for protein folding and molecular docking is not.

[–] ayyy@sh.itjust.works 8 points 2 days ago (2 children)

Sure, but the scientists doing those kinds of workflows don’t have anywhere near the money to burn on GPUs. Even before they had all of their funding cut off for being too gay or brown or whatever crap the Nazis have come up with.

[–] bookmeat@lemmynsfw.com 1 points 2 days ago

This is just a small part of the perpetual cycle of growth and contraction. Growth comes from breakthroughs and innovations. Contraction comes from misallocation of resources and the need to extract efficiency from the breakthrough and innovation.

So right now everything is booming and growing. This will slow down, and if the technology becomes efficient enough it will remain useful and accessible. If not, it will be discarded and another breakthrough will take its place.

[–] kadu@scribe.disroot.org 0 points 2 days ago

Sure, but the scientists doing those kinds of workflows don’t have anywhere near the money to burn on GPUs

I'm working in a lab that is purchasing a cluster with a price tag you wouldn't believe even if I could share it, which I can't. We are publicly funded. Scientists are buying this hardware, for this price, because the speed up we get is tremendous.

[–] Jhex@lemmy.world 4 points 2 days ago

The gargantuan computational cost of running the machine learning processing that is now required for protein folding and molecular docking is not.

Sure, but do you need the absolutely gargantuan capacity being built right now for that? If so, for how long, and at what cost?

The point is not that GPUs per se are of little value... the point is: what would you do with 10,000 rocket ships if you only have 1,000 projects that might be able to use them? And what can those projects actually pay? Can they cover the cost of the 10,000 rockets you built?