this post was submitted on 12 Mar 2024
71 points (97.3% liked)

Games

16806 readers
897 users here now

Video game news oriented community. No NanoUFO is not a bot :)

Posts.

  1. News oriented content (general reviews, previews or retrospectives allowed).
  2. Broad discussion posts (preferably not only about a specific game).
  3. No humor/memes etc..
  4. No affiliate links
  5. No advertising.
  6. No clickbait, editorialized, sensational titles. State the game in question in the title. No all caps.
  7. No self promotion.
  8. No duplicate posts, newer post will be deleted unless there is more discussion in one of the posts.
  9. No politics.

Comments.

  1. No personal attacks.
  2. Obey instance rules.
  3. No low effort comments(one or two words, emoji etc..)
  4. Please use spoiler tags for spoilers.

My goal is just to have a community where people can go and see what new game news is out for the day and comment on it.

Other communities:

Beehaw.org gaming

Lemmy.ml gaming

lemmy.ca pcgaming

founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] KeenFlame@feddit.nu 2 points 8 months ago (1 children)

Buncha dry students here giving you shit. It is not a stupid question.

Some day we might not need a CPU. The biggest hurdle probably isn't even the chip architecture itself, but that the software would need to be remade, and that's not something you do in a day exactly

[–] Socsa@sh.itjust.works 3 points 8 months ago* (last edited 8 months ago)

Right, GPGPU is a thing. You can do branch logic on a GPU and you can do SIMD on a CPU. But in general, logic and compute have somewhat orthogonal requirements, which means you end up with divergent designs if you start optimizing in either direction.

This is a software-architecture and conceptual problem as well. You simply can't do conditional SIMD. You can compute both graphs in parallel and "branch" when the tasks join (which is a form of speculative execution), but that's rarely more efficient than defining and dispatching compute tasks on demand once you get to the edges of the performance curve.