this post was submitted on 21 Jul 2024
191 points (76.5% liked)

This is an unpopular opinion, and I get why – people crave a scapegoat. CrowdStrike undeniably pushed a faulty update demanding a low-level fix (booting into recovery). However, this incident lays bare the fragility of corporate IT, particularly for companies entrusted with vast amounts of sensitive personal information.

Robust disaster recovery plans, including automated processes to remotely reboot and remediate thousands of machines, aren't revolutionary. They're basic hygiene, especially when considering the potential consequences of a breach. Yet, this incident highlights a systemic failure across many organizations. While CrowdStrike erred, the real culprit is a culture of shortcuts and misplaced priorities within corporate IT.

Too often, companies throw millions at vendor contracts, lured by flashy promises and neglecting the due diligence necessary to ensure those solutions truly fit their needs. This is exacerbated by a corporate culture where CEOs, vice presidents, and managers are often more easily swayed by vendor kickbacks, gifts, and lavish trips than by investing in innovative ideas with measurable outcomes.

This misguided approach not only results in bloated IT budgets but also leaves companies vulnerable to precisely the kind of disruptions caused by the CrowdStrike incident. When decision-makers prioritize personal gain over the long-term health and security of their IT infrastructure, it's ultimately the customers and their data that suffer.

[–] Brkdncr@lemmy.world 4 points 4 months ago (2 children)

Because your imaging environment would also be down. And you’re still touching each machine and bringing users into the office.

Or your imaging process over the WAN takes 3 hours since it’s dynamically installing apps and updates rather than deploying a static “gold” image. Imaging then gets even slower because your source disk is only an SSD, and throughput drops once you have 10+ machines imaging at once.

I’m being rude because I see a lot of armchair sysadmins who don’t seem to understand the scale of the CrowdStrike outage, what CrowdStrike even is beyond antivirus, or the workflow needed to recover from it.

[–] LrdThndr@lemmy.world 6 points 4 months ago (1 children)

FOG ran on Linux. It wouldn’t have been down. But that’s beside the point.

I never said it was a good answer to CrowdStrike. It was just a story about how I did things 10 years ago, and an option for remotely fixing nonbooting machines. That’s it.

I get you’ve been overworked and stressed as fuck these last few days. I’ve been out of corporate IT for 10 years and I do not envy the shit you guys are going through right now. I wish I could buy you a cup of coffee or a beer or something.

[–] Brkdncr@lemmy.world 3 points 4 months ago

Last time I used FOG it was only doing static image deployment, which has been out of style for a while. I don’t know if there are any serious deployment products for Windows enterprise that don’t run on Windows.

I’m personally not dealing with this because I didn’t like how CrowdStrike had answered a number of questions in their sales call.

Avoiding telling me that their vuln scan doesn’t probe all hosts after claiming it could replace a real vuln scanner, claiming they are somehow better than others at malware detection without bringing up third-party tests, and claiming their product was novel when others have been doing the same thing for 7+ years.

My fave was them telling me how much easier it is to manage, when no one on the call had ever worked as a sysadmin or even seen how their competition works.

Shitshow. I’m so glad this happened so I can block their sales team.

[–] timewarp@lemmy.world -2 points 4 months ago (1 children)

Imaging environment down? If a sysadmin can’t figure out how to boot a machine into recovery to remove the bad update file, then they have bigger problems. The fix in this instance wasn’t even re-imaging machines; it was merely removing a file. An ideal DR scenario would have a recovery image already on the system that can be booted into remotely, so there is minimal strain on the network. Furthermore, we don’t live in the dial-up age anymore.
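For reference, CrowdStrike’s published workaround boiled down to deleting the faulty channel file from the affected volume and rebooting. A minimal sketch of that step, assuming the system drive is reachable as C:\ from a recovery environment and that a Python runtime is even available there (in practice this was mostly done by hand or with a small script in Safe Mode/WinRE):

```python
from pathlib import Path

# Path to CrowdStrike's channel files on the mounted system volume.
# The drive letter is an assumption; adjust for how the volume is mounted.
crowdstrike_dir = Path(r"C:\Windows\System32\drivers\CrowdStrike")

# The faulty update shipped as channel files matching this pattern;
# the documented remediation was simply to delete them and reboot.
for bad_file in crowdstrike_dir.glob("C-00000291*.sys"):
    print(f"removing {bad_file}")
    bad_file.unlink()
```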

[–] Brkdncr@lemmy.world 2 points 4 months ago (1 children)

Imaging environment would be BitLocker’d with its key stuck in AD, which is also BitLocker’d.

[–] catloaf@lemm.ee 1 points 4 months ago (1 children)

Only if you're not practicing 3-2-1 with your backups.

[–] Brkdncr@lemmy.world 1 points 4 months ago (1 children)

Backup environment is also BitLocker’d.

[–] catloaf@lemm.ee 1 points 4 months ago

Then you didn't 3-2-1, because you should be able to restore from your alternate format, e.g. tape, without your existing infrastructure. Ideally your second and offsite copies are also offline, so even if you ignored the separate-media rule, they wouldn't have been affected by the CrowdStrike update.
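To spell the rule out: “3-2-1” means at least three copies of the data, on at least two different types of media, with at least one copy offsite (and ideally offline). A toy sketch of that check, using illustrative names rather than any backup product’s actual API:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool   # stored outside the primary site?
    offline: bool   # unreachable from the production network?

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    # 3 copies, on at least 2 media types, with at least 1 offsite.
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

copies = [
    BackupCopy("disk", offsite=False, offline=False),            # local backup
    BackupCopy("tape", offsite=True, offline=True),              # offline, offsite
    BackupCopy("object-storage", offsite=True, offline=False),
]
print(satisfies_3_2_1(copies))  # True
```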

Ultimately, nobody should have to tell you not to lock your keys in the car.