this post was submitted on 31 May 2024
Technology

[–] cyberpunk007@lemmy.ca 24 points 5 months ago (2 children)
[–] qprimed@lemmy.ml 24 points 5 months ago (2 children)

well, I mean... anything can leak memory. but yeah, enterprise/carrier grade devices are designed to be in continuous use for years and they generally do that pretty well.

[–] sugar_in_your_tea@sh.itjust.works 14 points 5 months ago (3 children)

Even then, some places will reboot on a schedule when nobody should be using it.

I have some entry-level "enterprise" hardware (a Mikrotik router and a Ubiquiti access point) and I auto-reboot mine weekly. In addition to maintaining performance and some minor security wins, it also helps ensure everything can survive a reboot (e.g. all configuration has persisted to disk).
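On a generic Linux-based box, that weekly reboot could be a single cron entry like the sketch below; the day and time are arbitrary examples, and a MikroTik device would use its own `/system scheduler` rather than cron:

```shell
# Hypothetical crontab entry for a weekly reboot on a generic Linux device.
# (MikroTik RouterOS would use its own /system scheduler for the same job.)
# min hour dom mon dow   command
0 4 * * 1 /sbin/reboot   # every Monday at 04:00, outside peak hours
```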

It's good practice. Some people brag about continuous uptime; I see it as a liability.

[–] locuester@lemmy.zip 6 points 5 months ago

Absolutely. Nothing scarier than rebooting the computer or router that’s been running for 10 years.

I also like exercising a blue/green rotation weekly. Even if no code has changed, have it roll to the alternate infra on an automated schedule. It's a great habit to get into and helps any engineer sleep better. It also gives you very accurate downtime-recovery numbers, not estimates.
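One common way to wire that up is a live-target symlink that a reverse proxy reads, flipped by a scheduled job. The sketch below is a minimal illustration of that idea; every path and name in it is a hypothetical placeholder (it uses a `/tmp` demo directory so it can run anywhere):

```shell
#!/bin/sh
# Minimal blue/green flip sketch; all names and paths are hypothetical.
# Idea: a reverse proxy serves whichever backend the "live" symlink points
# at, and a scheduled job runs this flip even when no code has changed.
set -eu

APP_DIR="/tmp/myapp-demo"               # demo location for the sketch
rm -rf "$APP_DIR"                       # start the demo from a clean state
mkdir -p "$APP_DIR/blue" "$APP_DIR/green"
ln -s "$APP_DIR/blue" "$APP_DIR/live"   # blue starts as the live side

CURRENT=$(readlink "$APP_DIR/live")
case "$CURRENT" in
    */blue) NEXT="$APP_DIR/green" ;;    # whatever is live now...
    *)      NEXT="$APP_DIR/blue"  ;;    # ...rotate to the other side
esac

ln -sfn "$NEXT" "$APP_DIR/live"         # swap the live target in one step
echo "rotated traffic to $NEXT"
```

A weekly cron entry pointed at a script like this gives you the "roll even with no code changes" habit for free.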

[–] cyberpunk007@lemmy.ca 6 points 5 months ago (1 children)

It's good practice for patching purposes. You should always be maintaining stable OS versions, and a memory leak or the like is fairly uncommon. I think I've seen it once in my career, on a particular Check Point OS version.

Yeah, I'm more worried about keeping up on patches and ensuring things will start back up properly than memory leaks. But minor security and performance wins are nice too.

[–] dustyData@lemmy.world 4 points 5 months ago

That's why all master systems have a backup. At least that's how we did it in datacenters 10 years ago. We could run a patch, system update, data backup, system restart, or whatever was required on almost any piece of kit on the racks without losing continuity of service. Just do the backup first, then the same operation on the master; if either fails, the whole architecture is designed to pick up the tasks and continue as if nothing were wrong. It was expensive, but it was mission-critical banking infrastructure. The thing only went down for account balancing, and that was at 3am, when it was unlikely anyone would need it, and even then there was no loss of service for the user. Transactions still went through, just with a couple of hours of delay while the whole ordeal synced up.
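The ordering there (standby first, active last, verify between steps) can be sketched roughly as below; the host names and the steps' commands are hypothetical placeholders, not the actual banking setup:

```shell
#!/bin/sh
# Rough sketch of the backup-first maintenance order; host names and the
# commands behind each step are hypothetical placeholders.
set -eu

ORDER=""
maintain() {
    HOST="$1"
    # In a real run: back up $HOST, patch it, restart it, then health-check
    # it before touching anything else; peers carry the load meanwhile.
    ORDER="$ORDER $HOST"
}

maintain backup01    # standby first: a failure here never hits live traffic
maintain master01    # active node only after the standby has proved healthy

echo "maintenance order:$ORDER" > /tmp/maintain-order.log
```

The point of the ordering is that the only node you ever risk breaking is one that isn't currently serving traffic.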

[–] tacosplease@lemmy.world 3 points 5 months ago

I leak memory all the time

[–] tal@lemmy.today 7 points 5 months ago

If my router rebooted once a week, it would be in the trash can.