This is part four in a series of pieces on the FERC summary of the results of their 2018 CIP audits. The earlier pieces are here:
- The introductory piece about why that document was worthy of your attention
- Automated inventory and why it’s the cornerstone of a strong security assurance program
- Remote Configuration Capture and Control for Serial Assets
> 9. Consider incorporating file verification methods, such as hashing, during manual patching processes and procedures, where appropriate. - from the FERC report
>
> 1.2.5. Verification of software integrity and authenticity of all software and patches provided by the vendor for use in the BES Cyber System - from CIP-013
A very long time ago, I was the chair of an obscure NERC working group dedicated to addressing something that was called the Boreas vulnerability. That’s been on my mind lately, because it was during the time that the recently-departed Mike Assante was the NERC CISO, and it was the best chance that I had to work with him over the years. Boreas was the software-based follow-up to the Aurora vulnerability (note that there was never a “C” in the sequence; that’s significant) and had plenty of similarities:
- It was announced with minimal industry input by a government agency.
- It was very broad and would require a whole set of practices to address properly.
- It was blindingly obvious to those who actually worked with the equipment in question.
The idea behind Boreas, in its entirety, was that if you have a device which is run by firmware, and someone manages to replace that firmware, then they can subvert the function of that device. This is true and obvious and not necessarily more important than lots of other things, but it did need to be addressed.
Remember, though, that this was going on in the aftermath of what was at that time about three years of pain from the Aurora saga – the industry had been subjected to heavy-handed Congressional oversight and had been required by FERC to produce mountains of paperwork to address a problem that, for almost everyone, boiled down to, "Oh, yeah, we should check those settings and make sure we lock out the ability to change them." Because Mike's skills included a heavy dose of the political, we quickly formed a task force and let everyone know it, then we spent a few months working up a legitimate set of materials designed to address the described vulnerability. By that point, we realized that Boreas had failed to catch the imagination of the regulatory class (probably because no one had bothered to take video of something catching fire as a result of a failure), so we quietly stuck those materials in a drawer in case anyone ever asked. They never did.
None of those materials were anything earth-shaking, but they did represent a reasonable approach to the overall problem of prevention of software tampering – change management, change monitoring, and pre-installation software verification – and software tampering is still a legitimate concern. None of these approaches are particularly specific to OT or to firmware; they’re all common to all forms of software. One thing that is perhaps different in the OT world is that the software publishers, being primarily hardware vendors at heart, are often a bit more cavalier than their IT cousins are when it comes to formal software security practices, which makes some of the recommended practices for patch verification harder to pull off. It’s the last of the three – pre-installation software verification – that FERC staff are talking about here. Despite the fact that it’s always been a logical, if small, part of a comprehensive vulnerability management program, it’s still inconsistently applied. I suspect that that’s because it relates to supply chain protection, and that still feels like something out of their control to most folks. As we’re working on closing the last chinks in the CIP armor, though, the time has come to fill this one in.
Unlike the other lessons learned from the report, this one is actually related to a current piece of an as-yet-unimplemented standard – the section from CIP-013 mentioned above. The standard actually goes a bit beyond the lesson learned from the report, since it applies whether patching is manual or automated, but that’s OK, since the automated checking isn’t necessarily all that hard to build in once you know you want it.
The things that you can check to verify a patch (yes, a new revision of firmware is a patch, even if the nomenclature is a bit different) are typically the same whether you're doing it manually or having software do it for you. The first of these, if the vendor is following best practices, is to check the digital signature on the software. This, for example, is where Verve can help to streamline and automate the storage, distribution, and installation of vendor-signed software. The same process could be performed manually as part of a patch management process, but the ability to procure and secure authentic updates from a vendor and 'drop' them into your automated inventory, instantly showing which systems are in scope, is a significant increase in accuracy and a huge time savings as well. It's also good practice for vendors to digitally sign their code, so widespread implementation of this practice should reverberate back up the supply chain to improve practices there, especially if it's done in support of CIP-013 compliance.
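To make the signature check concrete, here's a minimal sketch, assuming the vendor publishes a PEM-encoded RSA public key and a detached PKCS#1 v1.5 signature over the patch file. Real vendors use a variety of signing schemes (Authenticode, GPG, and others), and this example leans on the third-party `cryptography` package; the key generation in the demo below stands in for the vendor's side of the process, which you would never see.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def signature_is_valid(patch: bytes, signature: bytes, vendor_pubkey_pem: bytes) -> bool:
    """Check a detached signature over the patch bytes against the vendor's public key."""
    public_key = serialization.load_pem_public_key(vendor_pubkey_pem)
    try:
        public_key.verify(signature, patch, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # Stand-in for the vendor's signing step -- in practice you hold only
    # the public half of this key.
    from cryptography.hazmat.primitives.asymmetric import rsa
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    patch = b"firmware image v2.1"
    sig = key.sign(patch, padding.PKCS1v15(), hashes.SHA256())
    pem = key.public_key().public_bytes(
        serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
    print(signature_is_valid(patch, sig, pem))         # genuine patch -> True
    print(signature_is_valid(patch + b"!", sig, pem))  # tampered patch -> False
```

Whether this runs inside a tool or as a manual pre-installation step, the important part is that the public key was obtained from the vendor out-of-band, not alongside the patch itself.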
The second method is tougher to implement, but only in the sense that it's more computationally expensive and requires action for each patch rather than for each vendor. Many vendors provide hash values for each file that they release, and these hashes can be used as a last-minute verification, just before installation, that the proper software is being installed. The mechanics, whether for automated or manual verification, are the same as for the digital signature approach; this approach is more labor-intensive, since it requires a database entry per patch rather than per vendor, but it can provide a slightly stronger signal that the software is legitimate, since signature spoofing is theoretically possible.
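The hash check itself is the simplest piece of the whole program. A sketch using only the standard library, with the file path and published digest as placeholders for whatever the vendor actually distributes:

```python
import hashlib

def patch_hash_matches(path: str, published_sha256: str) -> bool:
    """Compare a file's SHA-256 digest against the value the vendor published for it."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large firmware images don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest().lower() == published_sha256.strip().lower()
```

In an automated pipeline this check would run immediately before installation, against the exact copy about to be installed; verifying a copy on a staging server and then installing a different one defeats the purpose.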
There’s one final potential control that could be implemented as a form of verification. If you use some form of automated change monitoring involving file signatures across the board, you could do a periodic comparison of the contents of each system against a trusted-good installation. This would not prevent installation of a bad image, but, just as an IDS can’t trigger until bad traffic is already present on the network yet still provides detection value, there’s value in knowing that a rogue software version hasn’t been installed by circumventing your change management controls.
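That periodic compare can be sketched as follows — hypothetical helper names, assuming you've stored a snapshot of a known-good installation to serve as the baseline:

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map each file under `root` (by relative path) to its SHA-256 digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as f:
                digests[rel] = hashlib.sha256(f.read()).hexdigest()
    return digests

def compare_to_baseline(baseline: dict, current: dict):
    """Report drift from the trusted-good snapshot: changed, added, and removed files."""
    changed = sorted(p for p in baseline.keys() & current.keys() if baseline[p] != current[p])
    added = sorted(current.keys() - baseline.keys())
    removed = sorted(baseline.keys() - current.keys())
    return changed, added, removed
```

Run on a schedule, anything in the changed or added buckets becomes a candidate for investigation — detection after the fact, as the IDS analogy suggests, rather than prevention.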