IT administrators are struggling with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it's like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”

He isn’t alone. An administrator on Reddit said 40 percent of servers were affected, along with 70 percent of client computers - approximately 1,000 endpoints - stuck in a boot loop.

Sadly, for our administrator, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

  • catloaf@lemm.ee

    > We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

    Someone never tested their DR plans, if they even have them. Generally, locking your keys inside the car is not a good idea.

    • Zron@lemmy.world

      I remember, a few career changes ago, I was a back-room kid working for an MSP.

      One day I get an email to build a computer for the company, cheap as hell. Basically just enough to boot Windows 7.

      I was to build it, put it online just long enough to get all of the drivers installed, and then set it up in the server room, as physically far away from any network ports as possible. IIRC I was even given an I/O shield that physically covered the network port, to be fitted once it had updated.

      It was our air-gapped encryption key backup.

      I feel like that shitty company was somehow better prepared for this than some of these companies today. In fact, I wonder if that computer is still running somewhere and just saved someone’s ass.

    • jet@hackertalks.com

      The good news is: this was a shakeout test, and they’re going to update those playbooks.

      • Justin@lemmy.jlh.name

        Sysadmins are lucky it wasn’t malware this time. Next time could be a lot worse than just a kernel driver with a crash bug.

        Third-party companies really shouldn’t be able to ship kernel drivers to millions of computers like this.

      • ɔiƚoxɘup@infosec.pub

        I wish you were right. I really do, but I don’t think you are. I’m not trying to be a contrarian, but I don’t think this is the case for a large number of organizations.

        For what it’s worth, I truly hope I’m 100 percent wrong and everybody learns from this bullshit, but that may not be the case.

      • Evotech@lemmy.world

        The bad news is that the next incident will be something else they haven’t thought about.

    • JasonDJ@lemmy.zip

      I get storing BitLocker keys in AD, but as a net admin and not a server admin… what do you do with the DCs’ keys? USB storage in a sealed envelope in a safe (or at worst, a locked file cabinet drawer in the IT manager’s office)?

      Or do people forgo running BitLocker on servers, since encryption of data at rest can be compensated for by physical security in the data center?

      Or do DCs run on SEDs (self-encrypting drives)?

      • catloaf@lemm.ee

        When I set it up at one company, the recovery keys were printed out and kept separately.

        • modeler@lemmy.world

          You need at least two copies in two different places - places that will not burn down/explode/flood/collapse/be locked down by the police at the same time.

          An enterprise is going to be commissioning new computers or reformatting existing ones at least once a day. This means the BitLocker key list would need fresh printouts at least daily, in both places.

          Given the above, it’s easy to see that this process will fail from time to time, in ways like accidentally leaking a document with all these keys.

          • JasonDJ@lemmy.zip

            I think the idea is to store most of the keys in AD. Then you just have to worry about restoring your DCs.

            • modeler@lemmy.world

              I think that’s a better plan than physically printing keys. I’d also want to save the keys in another format - perhaps using a small script to export them into a safe store in the cloud, or onto a box I control somewhere.
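
              A minimal sketch of what that export script could look like, assuming Python with the ldap3 library and the usual AD escrow layout (recovery keys stored as msFVE-RecoveryInformation objects under each computer account); the DC host, base DN, and service account below are placeholders:

              ```python
              # Sketch: dump every BitLocker recovery password escrowed in AD so it
              # can be copied into a second, independent store.
              from ldap3 import Connection, Server, SUBTREE

              DC_HOST = "dc01.corp.example.com"      # placeholder domain controller
              BASE_DN = "DC=corp,DC=example,DC=com"  # placeholder domain base DN

              def export_bitlocker_keys(user: str, password: str) -> list[tuple[str, str]]:
                  """Return (object DN, recovery password) pairs from AD."""
                  conn = Connection(Server(DC_HOST, use_ssl=True),
                                    user=user, password=password, auto_bind=True)
                  # Recovery keys live as msFVE-RecoveryInformation child objects
                  # beneath each computer account in the directory.
                  conn.search(search_base=BASE_DN,
                              search_filter="(objectClass=msFVE-RecoveryInformation)",
                              search_scope=SUBTREE,
                              attributes=["msFVE-RecoveryPassword"])
                  return [(entry.entry_dn, str(entry["msFVE-RecoveryPassword"]))
                          for entry in conn.entries]

              if __name__ == "__main__":
                  import getpass
                  for dn, key in export_bitlocker_keys("CORP\\svc-backup", getpass.getpass()):
                      print(dn, key)
              ```

              Wherever the list ends up, the point is that the second copy lives somewhere that survives AD being down, and that the export is scripted rather than hand-copied.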

    • ripcord@lemmy.world

      They also don’t seem to have a process for testing updates like these…?

      This seems to expose some really shitty testing practices at a ton of IT departments.

        • ripcord@lemmy.world

          I’ve heard differently. But if it’s true, that should have been a non-starter for the product, for exactly this kind of reason. This is basic stuff.

          • Entropywins@lemmy.world

            Companies use CrowdStrike so they don’t need internal cybersecurity. Not having automatic updates for new cyber threats sorta defeats the purpose of outsourcing cybersecurity.

            • hangonasecond@lemmy.world

              Automatic updates should still have risk mitigation in place, and the outage didn’t only affect small businesses with no cyber security capability. Outsourcing does not mean closing your eyes and letting the third party do whatever they want.

              • kent_eh@lemmy.ca

                > Outsourcing does not mean closing your eyes and letting the third party do whatever they want.

                It shouldn’t, but when the decisions are made by bean counters and not by people with security knowledge, things like this can easily (and frequently) happen.

            • ripcord@lemmy.world

              Not bothering to do basic, minimal testing - and other mitigation processes - before rolling out updates is absolutely terrible policy.

      • catloaf@lemm.ee

        Unfortunately, the pace of attack development doesn’t really give much time for testing.

        • ripcord@lemmy.world

          More than the zero time that companies appear to have invested here.

          • TonyOstrich@lemmy.world

            I was just thinking about something similar. I can understand wanting to get a security update out as quickly as possible, but it still seems like some kind of rolling update could have mitigated something like this. When I say rolling, I mean, for example, splitting all of your customers into 24 groups and pushing the update to another group once an hour. If it causes a massive fuck-up, it hits only some or most of them, but not all.
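
            A minimal sketch of that scheme, assuming Python; the 24 hourly groups are the hypothetical split described above, and a real rollout would also halt if earlier groups started reporting crashes:

            ```python
            # Sketch: deterministic staged rollout - hash each customer into one of
            # 24 groups and release to one additional group per hour.
            import hashlib

            NUM_GROUPS = 24  # one group per hour, per the idea above

            def rollout_group(customer_id: str) -> int:
                """Map a customer to a stable group in [0, NUM_GROUPS)."""
                digest = hashlib.sha256(customer_id.encode()).hexdigest()
                return int(digest, 16) % NUM_GROUPS

            def eligible(customer_id: str, hours_since_release: int) -> bool:
                """A customer receives the update once its group's hour arrives."""
                return rollout_group(customer_id) <= hours_since_release

            # Two hours after release only groups 0-2 are live, so a bad update is
            # caught after hitting a fraction of the fleet rather than all of it.
            for cid in ("acme-corp", "globex", "initech"):
                print(cid, rollout_group(cid), eligible(cid, hours_since_release=2))
            ```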