[–] Aceticon@lemmy.world 3 points 5 months ago

Also, those things are highly parallelizable and mainly deal with vector and matrix data. The same "lots of really simple but fast processing units, optimized for vector and matrix operations, all working in parallel" approach that works fine for modern 3D graphics (for example, each point of a frame displayed on screen can be calculated in parallel with all the other points, in what's called a fragment shader, and most 3D data is made of 3D vectors whilst the transforms are 3x3 matrices) turns out to also work fine for things like neural networks, where the neurons in each layer are quite simple and can all be processed in parallel (if the architecture weren't layered like that, GPUs would be far less effective for it).
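To make that concrete, here's a minimal sketch (plain NumPy on the CPU, with made-up layer sizes, not actual GPU code) of why a neural-network layer maps so well onto that kind of hardware: each neuron's output is just an independent dot product of the previous layer's activations with that neuron's weights, so the whole layer collapses into one matrix-vector multiply that the GPU's many simple cores can chew through in parallel.

```python
import numpy as np

# Hypothetical sizes: 4 activations coming in, a layer of 3 neurons.
inputs = np.array([0.5, -1.2, 0.3, 0.9])   # activations from the previous layer
weights = np.random.rand(3, 4)             # one row of weights per neuron
biases = np.zeros(3)

# Each neuron's pre-activation is an independent dot product, so the
# whole layer is a single matrix-vector multiply - exactly the kind of
# work that spreads trivially across lots of simple parallel units.
layer_output = np.maximum(0, weights @ inputs + biases)   # ReLU activation
print(layer_output)
```

On a GPU the same multiply is just split across thousands of those simple units at once, which is why layered architectures in particular benefit so much.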

To a large extent Nvidia got lucky that the stuff which became fashionable now works by doing lots of simple, highly parallelizable computations; otherwise it would've been the makers of CPUs that gained from the rise of this computing-power-hungry tech.