The Falcon Friday Fiasco: A Global IT Wake-Up Call

On Friday, a routine update to CrowdStrike's Falcon sensor turned into a nightmare, triggering the Blue Screen of Death (BSOD) on computers across the globe. This incident exposed a glaring vulnerability in our IT infrastructure: our over-reliance on a single vendor. It wasn't just a coding error or a human mistake; the real issue is much bigger. Many of the world's leading companies, including most of the Fortune 500, depend on the same security vendor. That creates a single point of failure with catastrophic consequences, as we saw with the Falcon update. The fact that one company has kernel-level access to so many critical systems should raise alarm bells. This level of dependency on a single vendor is a recipe for disaster. We need to rethink our approach to IT security and infrastructure and stop putting all our eggs in one basket. It's time for companies to diversify their IT solutions and vendors to mitigate the risks of such concentrated power. The Falcon Friday incident should serve as a wake-up call for the entire industry to prioritize resilience and redundancy in its IT strategies. 🚫 Reminds me of the Kaspersky ban in the U.S. #crowdstrike #bsod
Ram Gupta’s Post
-
Just to surf the current #crowdstrike wave, here are my 2 cents on the whole situation (2/3): Cent 2 - discussion culture

Since Friday, all I can see on LinkedIn are posts on this topic. That's normal, and I wouldn't have a problem with it per se. But the way some of these posts are written is "unter aller Kanone" (beneath all standards), as we would say in Germany… ;) It starts with comments like "it is stupid that no customer tests their updates before deploying them" (see the issue in my first post: https://lnkd.in/eGBCD4tD) and it doesn't end with discussions about whether this is a security issue or not ("the CIA triad is trash" or "CrowdStrike is wrong to call it a software issue"). Guys, please keep it civil! These discussions can be had on a technical level and don't need emotional outbursts to prove a point!
-
Just to surf the current #crowdstrike wave, here are my 2 cents on the whole situation (1/3): Cent 1 - technical

I don't need to repeat what happened; I just want to focus on some details I've rarely seen in any comment on the subject. First, according to this article by CrowdStrike (https://lnkd.in/e8Ku7abi), the time between pushing the update and stopping it was roughly an hour and a half, showing how quickly the issue surfaced (no surprise given the devastating impact). This supports my first thought: this update was obviously never fully tested at CrowdStrike. My second thought was "why did nobody test this update before installing it in their company?!". But here is where it gets tricky: the update in question wasn't a new version of the software itself. It was a sensor configuration update, similar to the good old virus signature updates. These updates usually occur multiple times a day and get pushed directly to the clients, which is understandable given their sheer volume and the urgency of keeping those signatures/configurations current against the newest threats.

In my opinion the blame is mostly on CrowdStrike's side for pushing updates to customers (many of them in critical infrastructure) that haven't been properly tested. An outage like this could probably have been avoided by pushing the update to a test system and letting it run for an hour before making it publicly available. BUT I also understand that it isn't that easy, since every minute such a configuration is delayed adds to the risk for customers, because they're not yet protected against a known threat.
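As a rough illustration of the "push it to a test system and let it run for an hour" idea above, here is a minimal sketch of a canary soak gate. The host names, health check and deployment calls are placeholders of my own, not anything CrowdStrike actually exposes.

```python
import time

SOAK_SECONDS = 60 * 60                      # let the update soak on canaries for an hour
CANARY_HOSTS = ["canary-01", "canary-02"]   # hypothetical internal test machines


def deploy_to(host: str, update_id: str) -> None:
    """Placeholder: push the content update to a single host."""
    print(f"deploying {update_id} to {host}")


def host_is_healthy(host: str) -> bool:
    """Placeholder: check the host still boots, responds, and reports no crashes."""
    return True


def release_canary_gated(update_id: str) -> bool:
    """Deploy to canary hosts, soak for a while, and only then approve a broad rollout."""
    for host in CANARY_HOSTS:
        deploy_to(host, update_id)

    time.sleep(SOAK_SECONDS)  # soak period before anything reaches production

    if all(host_is_healthy(h) for h in CANARY_HOSTS):
        print(f"{update_id} passed the canary soak, safe to release broadly")
        return True

    print(f"{update_id} failed the canary soak, rollout blocked")
    return False


if __name__ == "__main__":
    release_canary_gated("channel-update-2024-07-19")
```

The trade-off the post describes is visible in SOAK_SECONDS: every minute of soak time is a minute in which the fleet is not protected against the threat the update addresses.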
Technical Details: Falcon Update for Windows Hosts | CrowdStrike
crowdstrike.com
-
Just to surf the current #crowdstrike wave, here are my 2 cents on the whole situation (3/3): Bonus cent 3 - security issue or not?!

To be honest, I was a little surprised to see this discussion. As far as I can tell, it was likely triggered by a statement from CrowdStrike stressing that this is NOT a security incident but a software error. While I believe this was mainly said to keep customers as calm as possible by taking away the fear of a supply chain attack, it prompted some security experts to object, stating that "availability is part of the CIA triad and therefore this is a security issue". That in turn triggered others to post about how "the CIA triad is trash" (see my second post: https://lnkd.in/eEUTEgJJ) and that this was just a software issue. As always, I don't think it's that easy…

From CrowdStrike's perspective, this was most likely not a security incident, assuming it wasn't the outcome of a cyber attack against them. They delivered a piece of software that didn't work as intended. That is daily business for every software supplier, at least at a smaller scale. So definitely a software issue.

From the customers' perspective, things look a little different:
- a security product went berserk and broke the OS,
- prompting a huge loss of availability (here comes the CIA triad: cyber security should preserve confidentiality, integrity and availability),
- and in the aftermath of the incident, a lot of scammers surfaced trying to take advantage of the situation by offering fixes, deploying fake websites and so on.

So yes, on the customers' side this is definitely a security incident, at least in the sense that the security team is deeply involved in preventing further damage and in supporting management on how such events could be avoided and mitigated in the future (this leans towards information security: procedures, policies, business continuity and disaster recovery plans).
-
Security firm CrowdStrike has posted a preliminary post-incident report about the botched update to its Falcon security software that caused as many as 8.5 million Windows PCs to crash over the weekend, delaying flights, disrupting emergency response systems, and generally wreaking havoc. The detailed post explains exactly what happened: At just after midnight Eastern time, CrowdStrike deployed "a content configuration update" to allow its software to "gather telemetry on possible novel threat techniques." CrowdStrike says that these Rapid Response Content updates are tested before being deployed, and one of the steps involves checking updates using something called the Content Validator. In this case, "a bug in the Content Validator" failed to detect "problematic content data" in the update responsible for the crashing systems. #crowdstrike https://lnkd.in/gCY9AhrD
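The report's mention of a Content Validator invites a thought experiment: what might a pre-release validation gate for binary content files look like? The sketch below is a generic illustration only, not CrowdStrike's actual Content Validator; the magic bytes, size limit and failure conditions are invented for the example.

```python
from pathlib import Path


class ValidationError(Exception):
    """Raised when a content file fails pre-release validation."""


def validate_channel_file(path: Path, expected_magic: bytes = b"CSCF") -> None:
    """Hypothetical structural checks for a binary content/configuration file.

    Generic illustration of a validation gate; the header bytes and minimum
    size are made-up values, not a real file format.
    """
    data = path.read_bytes()

    if len(data) < 16:
        raise ValidationError(f"{path.name}: file truncated ({len(data)} bytes)")

    if not data.startswith(expected_magic):
        raise ValidationError(f"{path.name}: missing expected header")

    if set(data) == {0}:
        raise ValidationError(f"{path.name}: file contains only zero bytes")


def gate_release(paths: list[Path]) -> bool:
    """Block the release if any content file fails validation."""
    release_ok = True
    for p in paths:
        try:
            validate_channel_file(p)
        except ValidationError as err:
            print(f"BLOCKED: {err}")
            release_ok = False
    return release_ok
```

The point of such a gate is simply that "problematic content data", whatever form it takes, should stop the pipeline before deployment rather than after.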
CrowdStrike blames testing bugs for security update that took down 8.5M Windows PCs
arstechnica.com
-
It's going to be a long day... According to CrowdStrike's CEO, "CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts. Mac and Linux hosts are not impacted. This is not a security incident or cyberattack." This type of issue is hard to predict and mitigate against, but Thoropass's CISO Jay Trinckes has this advice: "Organizations could use phased approaches or 'test' systems to ensure updates don't impact operations. Organizations can ensure their business recovery/disaster recovery processes are up-to-date and effective. Operating system vendors can communicate and coordinate better with third-party developers to ensure changes to the systems don't impact integrated software." In the meantime, some have reported success with the following workaround:
👉 Boot Windows into Safe Mode or the Windows Recovery Environment
👉 Navigate to the C:\Windows\System32\drivers\CrowdStrike directory
👉 Locate the file matching "C-00000291*.sys" and delete it
👉 Boot the host normally
Will provide updates as they're uncovered in the comments below. https://lnkd.in/gz84Qa_m
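The documented workaround is manual, performed from Safe Mode or the Windows Recovery Environment. Purely as an illustration of the "locate and delete" step, here is a hedged Python sketch; it assumes you can run Python with administrative rights against the affected volume (which a machine stuck in a boot loop usually will not allow), and the drive letter may differ under WinRE.

```python
import glob
import os

# The affected volume may not be C: when booted into the Windows Recovery
# Environment; adjust the drive letter accordingly.
DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
PATTERN = os.path.join(DRIVER_DIR, "C-00000291*.sys")


def remove_faulty_channel_files(dry_run: bool = True) -> list[str]:
    """Find the channel files named in the workaround and optionally delete them."""
    matches = glob.glob(PATTERN)
    for path in matches:
        if dry_run:
            print(f"would delete: {path}")
        else:
            os.remove(path)
            print(f"deleted: {path}")
    return matches


if __name__ == "__main__":
    # Defaults to a dry run; only pass dry_run=False if you are certain
    # this matches CrowdStrike's current guidance for your environment.
    remove_faulty_channel_files(dry_run=True)
```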
Faulty CrowdStrike Update Crashes Windows Systems, Impacting Businesses Worldwide
thehackernews.com
-
Interesting article by Tom Warren about John Cable's blog post. VBS and enclaves are interesting and definitely worth using to be tamper resistant. But for a security endpoint product there are still significant drawbacks:
- the lack of, and the dependency on, telemetry APIs
- the performance delta, as I covered in my previous blog post: https://lnkd.in/euGgszjw
That said, if you are a real-life application that manages sensitive data, enclaves/VBS are great for achieving "security by design" and could potentially keep you safe from infostealers. But this would have to be implemented on a per-application basis, which means it's probably not going to happen anytime soon. https://lnkd.in/egNTjZ2n
Microsoft calls for Windows changes and resilience after CrowdStrike outage
theverge.com
-
Key Takeaways from a recent cybersecurity incident involving CrowdStrike:
📚 Learning from the CrowdStrike Outage

On July 19, 2024, a configuration update for CrowdStrike's Falcon platform triggered a significant disruption, causing system crashes and blue screens on Windows hosts. This event underscores a critical lesson for IT operations: the importance of cautious update deployment. Technical details: https://lnkd.in/emUBmQ6t

Preventing Similar Situations:
🔹 Test Updates on Non-Critical Systems: Before rolling out updates across the entire network, conduct thorough testing on a limited number of non-critical workstations and servers. This helps identify potential issues without risking widespread impact.
🔹 Staggered Deployment: Gradually deploy updates in phases, monitoring performance and system stability closely at each stage. This approach allows for swift rollback if any issues arise (a minimal sketch follows below).
🔹 Robust Backup Plans: Maintain up-to-date backups and have a clear recovery plan in place. This ensures quick restoration of services in case of unexpected failures.
🔹 Continuous Monitoring and Quick Response: Employ real-time monitoring tools to detect anomalies immediately. A swift response can mitigate the effects of an issue before it escalates.

CrowdStrike's proactive remediation efforts and transparency about the root cause analysis are commendable. By learning from such incidents, we can strengthen our IT infrastructures and enhance system resilience. Let's prioritize rigorous testing and cautious deployment to safeguard our operations and maintain business continuity. 🛡️ #CrowdStrike #Cybersecurity #BusinessContinuity
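Here is the minimal staged-rollout sketch referenced in the list above. The ring names, fleet fractions and callbacks are hypothetical; real endpoint-management tooling would supply its own equivalents.

```python
from typing import Callable, Sequence

# Hypothetical deployment rings, from least to most critical.
RINGS: Sequence[tuple[str, float]] = [
    ("internal-test", 0.01),   # ~1% of the fleet
    ("early-adopters", 0.10),  # ~10%
    ("broad", 0.50),           # ~50%
    ("everyone", 1.00),
]


def staged_rollout(
    update_id: str,
    deploy: Callable[[str, float], None],
    healthy: Callable[[str], bool],
    rollback: Callable[[str], None],
) -> bool:
    """Deploy ring by ring, stopping and rolling back at the first sign of trouble."""
    for ring_name, fraction in RINGS:
        deploy(update_id, fraction)
        if not healthy(ring_name):
            print(f"{update_id}: ring '{ring_name}' unhealthy, rolling back")
            rollback(update_id)
            return False
        print(f"{update_id}: ring '{ring_name}' ({fraction:.0%}) looks healthy")
    return True


if __name__ == "__main__":
    # Stub callbacks so the sketch runs end to end.
    staged_rollout(
        "falcon-content-update",
        deploy=lambda uid, frac: print(f"deploying {uid} to {frac:.0%} of hosts"),
        healthy=lambda ring: True,
        rollback=lambda uid: print(f"rolling back {uid}"),
    )
```

The key design choice is that each ring acts as a gate: no later ring receives the update until the previous one has stayed healthy.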
Technical Details: Falcon Update for Windows Hosts | CrowdStrike
crowdstrike.com
-
Hi network! 🌐 Have you been affected by the recent CrowdStrike outage? This event has highlighted the critical importance of cautious update deployment in IT operations. Check out our latest post on ensuring robust IT operations and share your experiences or thoughts. Let's work together to strengthen our IT infrastructures and enhance system resilience! https://lnkd.in/dEx__WBE #CyberSecurity #ITManagement #SystemStability #BusinessContinuity #CrowdStrike
-
Am I overthinking this? Windows allowed a named pipe to be intercepted, and not just read: OS-critical pipes could be tampered with, resulting in a BSOD. It makes me wonder whether everything in Windows that is built on the named pipe mechanism, including encryption-related plumbing, is actually safe. According to Microsoft, a named pipe can carry a security descriptor and ACL; however, passing NULL falls back to a default descriptor that still grants fairly broad access (for example, read access for Everyone). If Microsoft can get this wrong and let a third party break it, I doubt others are doing much better. In my whole career I have seen penetration testers and auditors challenge folder and file access rights, but not a single case where named pipe security was challenged. CrowdStrike: https://lnkd.in/ggySSaM2 Microsoft Named Pipe: https://lnkd.in/gGQ3A52i #Microsoft #Security #CrowdStrike #SecurityArchitecture #namedpipe
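To make the descriptor discussion concrete, here is a minimal, hypothetical sketch (Windows-only, requires the pywin32 package) of creating a named pipe with an explicit DACL instead of relying on the default security descriptor. The pipe name and the accounts granted access are arbitrary examples, not anything CrowdStrike or Microsoft ships.

```python
# Windows-only sketch using pywin32 (pip install pywin32). It creates a named
# pipe with an explicit DACL rather than the default descriptor that applies
# when NULL security attributes are passed.
import ntsecuritycon
import pywintypes
import win32pipe
import win32security

PIPE_NAME = r"\\.\pipe\example_restricted_pipe"  # arbitrary example name


def make_restrictive_security_attributes() -> pywintypes.SECURITY_ATTRIBUTES:
    """Build SECURITY_ATTRIBUTES whose DACL only grants SYSTEM and Administrators."""
    dacl = win32security.ACL()
    for account in ("SYSTEM", "Administrators"):
        sid, _domain, _type = win32security.LookupAccountName(None, account)
        dacl.AddAccessAllowedAce(
            win32security.ACL_REVISION,
            ntsecuritycon.FILE_GENERIC_READ | ntsecuritycon.FILE_GENERIC_WRITE,
            sid,
        )

    sd = pywintypes.SECURITY_DESCRIPTOR()
    sd.SetSecurityDescriptorDacl(1, dacl, 0)  # DACL present, not defaulted

    sa = pywintypes.SECURITY_ATTRIBUTES()
    sa.SECURITY_DESCRIPTOR = sd
    return sa


def create_pipe():
    """Create the pipe with the explicit DACL; passing None as the last argument
    would fall back to the default security descriptor discussed in the post."""
    return win32pipe.CreateNamedPipe(
        PIPE_NAME,
        win32pipe.PIPE_ACCESS_DUPLEX,
        win32pipe.PIPE_TYPE_MESSAGE | win32pipe.PIPE_READMODE_MESSAGE | win32pipe.PIPE_WAIT,
        1,      # max instances
        65536,  # out buffer size
        65536,  # in buffer size
        0,      # default timeout
        make_restrictive_security_attributes(),
    )
```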
Technical Details on July 19, 2024 Outage | CrowdStrike
crowdstrike.com