this post was submitted on 01 May 2024
293 points (97.4% liked)

[–] mikyopii@programming.dev 24 points 6 months ago* (last edited 6 months ago) (2 children)

I bet manpower costs are significant as well. How many people are needed to run this thing? You probably need engineers with an esoteric set of skills to put it back together and manage it, which would not be cheap.

Edit: I looked it up, it is running SUSE Enterprise Linux, so maybe management isn't as specialized as I expected.

[–] gregorum@lemm.ee 8 points 6 months ago (2 children)

It may be running SLED, but just imagine all the specialized, tweaked af code running on top. They didn’t just pop in a LiveCD and click “Install”.

[–] w2tpmf@lemmy.world 9 points 6 months ago (2 children)

No, they probably had to pop the live CD into each node individually and click "Install". Then run a script on each one to join it to the cluster.
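
Something along these lines, purely as a sketch of the "script on each node" idea; the node names, script path, join command, and head node are all made up for illustration:

```python
import subprocess

# Hypothetical names only; real cluster tooling differs.
NODES = [f"node{i:04d}" for i in range(1, 9)]       # stand-in for thousands of nodes
JOIN_SCRIPT = "/opt/cluster/join_cluster.sh"        # hypothetical join script

for node in NODES:
    # SSH into each freshly installed node and run the join script,
    # pointing it at a hypothetical head node.
    result = subprocess.run(
        ["ssh", node, JOIN_SCRIPT, "--head-node", "mgmt01"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"{node}: join failed: {result.stderr.strip()}")
```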

[–] Almrond@lemmy.world 3 points 6 months ago* (last edited 6 months ago) (1 children)

Kind of. You would use a deployment node to manage the individual blades; they run really specialized software that is basically useless without the management nodes. It wouldn't be difficult to spin it up (Terascale would have it ready to batch out jobs within a few hours), but you are going to need to engineer your building around it to even get that far. Your foundation needs to support multiple tons of weight, be perfectly level, deliver megawatts of power, and remove megawatts of heat (it is water cooled, so you need the infrastructure and cooling towers to handle that), and you need to be able to get it into the building to begin with.

I have worked on this system a few times; just moving it would literally cost upwards of 7 figures. The computer is pretty easy to use; it's all of the supporting infrastructure that will need a literal team of engineers. I could (and have, kind of) spin the machine up to start crunching data within a day on my own. Fuck moving it, and double fuck re-cabling it. Literal miles of fiber in those racks.

You do literally pop in a pre-configured image and it deploys to everything at once. That's probably the easiest part of the whole setup.
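
As a toy illustration of that deployment step (the image path, blade names, and `deploy-image` command are all invented here; the real stack uses vendor provisioning tools, not a script like this):

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

IMAGE = "/srv/images/compute-node.squashfs"          # hypothetical golden image
NODES = [f"blade{i:04d}" for i in range(1, 9)]       # stand-in for the real blade list

def deploy(node: str) -> tuple[str, bool]:
    # Push the same pre-configured image to a blade; 'deploy-image' is an
    # invented command standing in for the vendor's provisioning tooling.
    cmd = ["ssh", node, "deploy-image", IMAGE]
    return node, subprocess.run(cmd).returncode == 0

# Fan the one image out to every blade at once from the management node.
with ThreadPoolExecutor(max_workers=64) as pool:
    for node, ok in pool.map(deploy, NODES):
        if not ok:
            print(f"{node}: deploy failed")
```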

[–] w2tpmf@lemmy.world 1 points 6 months ago

I tried hard to oversimplify. Thanks for spoiling it.

[–] gregorum@lemm.ee 0 points 6 months ago

Of course. I was obviously referring to what it takes to operate it after that. Not to mention how complicated setting that whole mess up is.

[–] billiam0202@lemmy.world 6 points 6 months ago

> They didn’t just pop in a LiveCD and click “Install”.

Obviously not. In 2017, they would have used a live USB thumbdrive instead of a CD.

Yup, most of these are just a lot of relatively normal hardware put together into one system.