This is the third in a series of pieces on the recently released FERC summary of the results of their 2018 CIP audits. The earlier pieces are here:
- The introductory piece about why that document was worthy of your attention
- Automated inventory and why it’s the cornerstone of a strong security assurance program
"Consider the remote configuration of applicable Cyber Assets via a TCP/IP-toRS232 Bridge during vulnerability assessments."
Very few topics in the history of CIP standard development have generated more fire and fury than the question of which portions of the standards should apply to assets that are serially connected to devices which are themselves fully IP-connected and addressable from at least some portion of an operational network.
These serially-connected assets are a subset of the larger group of assets without external routable connectivity; assets with no connectivity at all present a different risk profile.
At the core of this conflict are two basic truths:
- Many of the controls described in the CIP corpus make no sense in the context of serially-connected assets, even those which are addressable through tactics such as port mapping on the devices to which they’re connected.
- These assets wouldn’t have serial connections at all if you didn’t need to communicate with them, or at least hear from them, in some way; and in the most general sense, communication paths always represent weak spots in your overall security and should be managed accordingly.
These truths are addressed by the compromises present in the scoping for non-ERC assets in version 5 and later of the standards, and there doesn’t appear to be any strong will on either side to adjust the current boundaries. However, those boundaries create an odd situation: many utilities avoid performing configuration control on their serially-connected assets because they’re afraid that the communication paths needed to update configurations will introduce ERC and exacerbate their compliance risk. This lack of configuration management is what FERC appears to be attempting to remedy with the new lessons-learned guidance.
For assets without direct remote addressability, the most common way to reach them is to open a session to the terminal server (or other device serving as the IP boundary on the path to the serial device), then use local functionality on that gateway to manually open a session to the end serial device and perform configuration tasks. While this is a valid way to configure a single asset, knowing which assets require updates, how many of them you have, and where they are located, and then tracking execution across the fleet, is a significant challenge in most OT environments.
One truism of most communication devices is that fully manual actions can be captured and re-created by session simulation software, given sufficient expertise. If, before executing such an exercise, we had granular details about our asset fleet (including operational criticality to drive prioritization and any special handling required), we could be extremely efficient and targeted in our efforts.
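The capture-and-replay idea above can be sketched in a few lines: a manual gateway session reduces to a list of expect/send pairs, and a small driver replays them over any transport. Everything here (the class names, prompts, and commands) is an illustrative assumption, not a specific vendor's interface; the fake transport stands in for a real TCP connection to a terminal server.

```python
from dataclasses import dataclass

@dataclass
class Step:
    expect: str   # text to wait for from the device
    send: str     # text to send once that marker appears

# A captured manual session, reduced to expect/send pairs
# (hypothetical prompts and commands for illustration).
CAPTURED_SESSION = [
    Step(expect="login:", send="fieldtech"),
    Step(expect="Password:", send="********"),
    Step(expect=">", send="connect line 3"),   # jump to the serial port
    Step(expect="#", send="show config"),
]

def replay(session, transport):
    """Drive any transport with read_until/write methods through the steps."""
    output = []
    for step in session:
        output.append(transport.read_until(step.expect))
        transport.write(step.send + "\n")
    return output

class FakeTransport:
    """Stand-in for a real TCP connection, so the sketch runs anywhere."""
    def __init__(self, responses):
        self._responses = iter(responses)
        self.sent = []
    def read_until(self, marker):
        return next(self._responses)
    def write(self, data):
        self.sent.append(data)
```

In practice the transport would be a socket or SSH channel to the gateway; the point is that once the session is data rather than keystrokes, it can be repeated across the whole fleet and its results logged.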
A major task performed during installation and setup of the VSC is identifying both endpoint-specific data (firmware, serial number, OS, etc.) and operational or contextual data (asset owner, asset criticality, location, redundancy, etc.). This allows for an empirical measurement of scope and effort, and for strategic planning (by criticality, location, unit, etc.) to best suit the organization and its risk tolerance. The real-time update of system parameters means tracking progress is near-instantaneous and is verified by the assets themselves.
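The kind of record described above can be made concrete with a short sketch: endpoint data plus operational context, used to measure how much work remains and to order it by risk. The field names and the priority scheme here are assumptions for illustration, not the actual inventory schema.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    firmware: str
    serial_number: str
    location: str
    criticality: int      # 1 = most critical (assumed convention)
    redundant: bool

def plan_rollout(assets, target_firmware):
    """Return assets still behind the target firmware, most critical first;
    at equal criticality, non-redundant assets come before redundant ones."""
    behind = [a for a in assets if a.firmware != target_firmware]
    return sorted(behind, key=lambda a: (a.criticality, a.redundant))
```

With data like this in hand, "how many assets, where, and in what order" stops being a guess and becomes a query, which is exactly the empirical scoping the article describes.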
While FERC staff don’t address this extended topic in the report, this programmed communication ability also opens the possibility of remote configuration monitoring for serially-connected assets, which can provide a big win in reducing risk exposure at the point where the hardware is most closely tied to actual operational components.
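Once configurations can be pulled programmatically, monitoring reduces to comparing each pull against a stored baseline. The hashing-with-normalization approach below is one illustrative way to do that, not a prescribed method; the asset names and config syntax are invented.

```python
import hashlib

def fingerprint(config_text: str) -> str:
    """Hash a config after normalizing whitespace, so purely cosmetic
    differences don't raise alarms."""
    normalized = "\n".join(
        line.strip() for line in config_text.splitlines() if line.strip()
    )
    return hashlib.sha256(normalized.encode()).hexdigest()

def detect_drift(baselines: dict, current: dict) -> list:
    """Return names of assets whose freshly pulled config no longer
    matches the stored baseline fingerprint."""
    return [
        name for name, cfg in current.items()
        if name in baselines and baselines[name] != fingerprint(cfg)
    ]
```

An unexpected entry in the drift list is exactly the "impossible condition" signal discussed below: a serially-connected asset whose configuration changed outside your managed process.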
One reason given for opposing further extension of the standards to these serially-connected assets is that many classes of attacks on those assets were deemed impossible. That argument rests on assumptions about the capabilities of the devices and the capabilities of the attackers, which is extremely dangerous.
Assuming you are smarter than your adversaries is reassuring, but it leaves you exposed. In the end, especially outside the scope of compliance reach, it just makes sense to monitor for conditions that you believe are impossible, because if one of them ever occurs, you really want to know about it in a hurry.
Once you’ve established these bidirectional capabilities to push and pull configurations without extending your ERC boundaries, it’s simply a matter of integration to incorporate these efforts into your enterprise systems (for whatever operational scope you’ve defined for the enterprise) for event monitoring, patch maintenance and installation, and incident response.
In the end, whatever the compliance scope, reaching the point where your OT assets are managed in well-understood ways is a good way to boost confidence, even if you know there are some differences under the hood in how it gets done.