[–] BilboBargains@lemmy.world 24 points 6 months ago (3 children)

It's kind of lame that they need to junk the entire apparatus after only a decade. I get that processor technology moves on apace, but we already know that it does, so why doesn't a universal architecture exist where nodes can be added at will?

[–] Almrond@lemmy.world 26 points 6 months ago

It's more of an operating-cost issue. The hardware is almost a decade old. It was efficient in its day, but compared to new hardware it costs so much to run that you'd be better served investing in something with modern efficiency. It won't be junked, it will be parted out. If you're someone who wants a cheap homelab with InfiniBand and shitloads of memory, you could pick up a blade for a fraction of what it would otherwise cost. I fully expect it to turn into thousands of reasonably powerful servers for the prosumer and nerd markets instead of running as a monolithic cluster.
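For a rough sense of why the power bill alone can justify replacement, here's a back-of-envelope sketch. Every number in it is invented for illustration (real rates, draws, and prices vary widely), so treat it as a shape, not a quote:

```python
# Back-of-envelope payback math for replacing an aging cluster.
# Every number below is hypothetical; swap in real figures to make it meaningful.

POWER_PRICE = 0.10            # $/kWh, hypothetical rate
HOURS_PER_YEAR = 24 * 365

old_power_kw = 6_000          # hypothetical draw of the decade-old system
new_power_kw = 1_500          # hypothetical draw of a modern replacement
new_capex = 20_000_000        # hypothetical purchase price of the new system

old_opex = old_power_kw * HOURS_PER_YEAR * POWER_PRICE
new_opex = new_power_kw * HOURS_PER_YEAR * POWER_PRICE
savings = old_opex - new_opex

print(f"Old system power bill: ${old_opex:,.0f}/year")
print(f"New system power bill: ${new_opex:,.0f}/year")
print(f"Replacement pays for itself in {new_capex / savings:.1f} years")
```

With these made-up numbers the new machine pays for itself in about five years, which is roughly the refresh cycle you see in practice.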

[–] ryathal@sh.itjust.works 23 points 6 months ago (1 children)

A decade is a lifetime in technology. Moore's law had just ended when this was put together.

[–] afraid_of_zombies@lemmy.world -1 points 6 months ago

That's one of the reasons I work in industrial controls. A good day is me sneaking in tech that came out after the year 2000. Employment for life, and I get to branch out to related stuff. My employer is paying me to take ME and chem-e classes now.

I don't know why anyone would spend their life chasing the newest fad tech when you can pick a slow-moving one, master it, and master the ones around it. I'd much rather be the person who knows how the entire system works than the one who knows the last 8 programming languages/frameworks, only 1 of which is still relevant.

But hey, I'm glad there are people who choose that lifestyle; I like having a better cellphone every year.

[–] trolololol@lemmy.world 12 points 6 months ago* (last edited 6 months ago) (1 children)

If you have too many "slow" nodes in a supercomputer, you'll hit a performance ceiling where everything is bottlenecked by the speed of things that are not the CPU: memory, disk for swap, and the network for sending partial results across nodes for further partial computing.
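Here's a toy model of that ceiling: an Amdahl-style speedup with a communication term that grows with node count. The constants are made up; only the shape of the curve is the point:

```python
# Toy strong-scaling model: a fixed serial fraction plus a communication
# cost that grows with node count. All constants here are invented.

def speedup(nodes, serial_frac=0.05, comm_cost=0.0005):
    """Amdahl-style speedup with a per-node communication penalty."""
    t_parallel = (1 - serial_frac) / nodes   # compute time shrinks with more nodes
    t_comm = comm_cost * nodes               # exchanging partial results grows
    return 1 / (serial_frac + t_parallel + t_comm)

for n in (1, 10, 100, 1000, 10000):
    print(f"{n:>6} nodes -> {speedup(n):7.2f}x")
```

Past the sweet spot, adding nodes actually makes the run slower, because shipping partial results around dominates the runtime.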

Source: I've hung around too many people doing PhD theses on these kinds of problems.

[–] BilboBargains@lemmy.world 1 points 6 months ago (1 children)

I would imagine it's very difficult to make a universal architecture, but if I have learnt anything about computers, it's that the manufacturers of software and hardware deliberately create opaque and monolithic systems, e.g. phones. They cynically insert barriers to reuse and redeployment. There's no profit motive for corporations to make infinitely scalable computers. Short-sighted greed is a much more plausible explanation.

[–] trolololol@lemmy.world 1 points 6 months ago

When you get to write and benchmark your own code, you'll see that technology has limits and how they impact you.

You can have as many Raspberry Pis as you want, but you'll accomplish faster computation if you spend the same budget on Xeons with dozens of MB of cache, hundreds of GB of RAM, and gigabit network cards.
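To make that concrete, here's a hypothetical same-budget comparison. The prices and FLOPS figures are made up, so swap in real specs before drawing conclusions:

```python
# Same budget, two builds. All prices and throughput numbers are hypothetical,
# chosen only to show the trade-off: raw throughput vs. node count.

BUDGET = 100_000  # dollars

pi_price, pi_gflops = 80, 30              # hypothetical small-board figures
xeon_price, xeon_gflops = 10_000, 3_000   # hypothetical big-server figures

n_pis = BUDGET // pi_price
n_xeons = BUDGET // xeon_price

print(f"{n_pis} small boards: {n_pis * pi_gflops / 1000:.1f} TFLOPS, "
      f"{n_pis} nodes to keep in sync")
print(f"{n_xeons} servers: {n_xeons * xeon_gflops / 1000:.1f} TFLOPS, "
      f"{n_xeons} nodes to keep in sync")
```

Even when the raw FLOPS come out similar, the small-board build leaves you with over a hundred times more nodes to keep in sync, which is exactly where the bottleneck I described above bites.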

10 years from now, these Xeons will look like RPis compared to the best your money can buy.

All of those things have to fit in a building, not on a desk. The best supercomputers look like Google's data centers, but their specific needs dictate several tweaks done by very smart people. Supercomputers are supposed to solve 1 problem with 1 set of data at a time, not 100 problems with 1,000,000 data sets/people profiles at a time, which are much easier to partition and assign to only a 1,000th of your data center at a time.