Before exploring hardware-related vulnerabilities, which represent a set of specific cyber security risks and potential vectors for threat actors to exploit, I want to introduce you to some basics of embedded systems engineering, their complications, the concept of errata, cyber vulnerabilities, and a few examples of attacks leveraging hardware, such as Row Hammer.
In the Personal Computing (PC) world, most people recognize there is at a minimum a Central Processing Unit (CPU), Random Access Memory (RAM), a hard drive in some form, whether a traditional rotating design or a Solid State Drive (SSD), and a Network Interface Card (NIC), which provides connectivity such as Ethernet, Wi-Fi, Bluetooth, LTE, etc.
Those familiar with mobile devices such as cellphones or tablets may recognize that these devices have similar components, but some of them use different technology than that of PC (x86 or x86-64) architectures. There could be smaller amounts of RAM, different CPU vendors and architectures (e.g., ARM), different storage, and even additional features such as components that enable Near Field Communication (NFC).
All the above share functional similarities and could even use the same components, but effectively, they are all tied together in a system of systems, as illustrated in the next section.
System of systems
Continuing, let’s look at an Internet of Things (IoT) embedded device – a smart thermostat:
As illustrated, an electronic device is made up of several components, and each of these can be drilled down into another layer of products and components. These are often modular in design (e.g., an Integrated Circuit (IC), or even a System on a Chip (SoC)) and pre-packaged to shorten development time, isolate functionality, and reduce compliance effort and apparent system complexity (e.g., these chips may be “black boxes” that simply “speak” via interfaces known and documented by their Original Equipment Manufacturers (OEMs)).
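To make the “documented interface” idea concrete, interacting with such a black-box chip usually means framing bytes exactly as the OEM’s datasheet dictates. The sketch below is a minimal, hypothetical illustration in Python: the command-byte layout and register addresses are invented for illustration and are not taken from any real part.

```python
# Hypothetical SPI "black box" chip: its (invented) datasheet says a
# transaction is one command byte followed by a data byte, and the top
# bit of the command byte selects read vs. write.

READ_CMD = 0x80   # hypothetical: top bit set means "read"
WRITE_CMD = 0x00  # hypothetical: top bit clear means "write"

def frame_read(register: int) -> bytes:
    """Build the bytes a host would clock out to read one register."""
    if not 0 <= register <= 0x7F:
        raise ValueError("7-bit register address expected")
    # second byte is a dummy 0x00 to clock the chip's response back in
    return bytes([READ_CMD | register, 0x00])

def frame_write(register: int, value: int) -> bytes:
    """Build the bytes a host would clock out to write one register."""
    if not 0 <= register <= 0x7F:
        raise ValueError("7-bit register address expected")
    return bytes([WRITE_CMD | register, value & 0xFF])

# A real driver would hand these frames to an SPI controller
# (e.g., /dev/spidevX.Y on Linux); here we only show the framing.
print(frame_read(0x12).hex())         # 9200
print(frame_write(0x12, 0xAB).hex())  # 12ab
```

The point is that the host never sees inside the chip; it only sees bytes shaped by the published interface, which is exactly what makes these parts convenient to integrate and opaque to audit.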
Just as networked computers talk TCP/IP, plus a plethora of network protocols depending on the environment, application, or task at hand, hardware does the same. Components communicate over several different mechanisms on a Printed Circuit Board (PCB) via specialized connectors and carefully designed traces, which connect the components based on requirements such as latency, Signal-to-Noise Ratio (SNR), bandwidth, power consumption, and synchronous vs. asynchronous protocols.
There are entire schools of thought, degrees, and fields of study devoted to the profession and art of designing electronic circuits, as well as signal analysis, but the key point to understand is that “hardware is quite complicated,” and any number of security “bypasses” can exist given the number of integrated components and the test points often used for testing and manufacturing.
Unfortunately, those same complexities, test points, and components provide malicious parties (though not exclusively them) opportunities to compromise the system and bypass the technologies that enable the system’s cyber security. There is a ton of research to support this, whether cryptographic bypasses or JTAG exploitation. Worse yet, many of those components in IoT widgets are exploitable because they are systems with their own resources (especially the 802.11/Bluetooth IC/SoC)!
With the media and many security organizations sensationalizing recent research findings such as “all Intel processors can leak your secrets in X years,” you might be losing sleep with thoughts keeping you up at night such as: Is my computer vulnerable? What about Row Hammer? Will my data be secure in the cloud?
Your front door’s traditional tumbler-style deadbolt is also vulnerable to a capable locksmith or a person with lock-picking skills. However, it’s equally vulnerable to a pry bar or a SWAT team door ram. And I suspect there are a lot of valuables in your house, too!
But the good news is hardware exploits are dependent on many factors related to time, resources, expertise, and specific revisions and designs of a component.
Exploiting a perceived or theoretical hardware vulnerability is usually difficult to accomplish; here are a few things to note:
- It often requires physical access and specialized equipment
- It takes tons of knowledge on the specific application, deployment, and countermeasures such as address space layout randomization (ASLR)
- It is far more complicated to weaponize reliably at scale (e.g., being able to consistently affect multiple systems of multiple types)
A key aspect that is often ignored is specific chip version, revision, and batch. Allow me to explain with an example:
Vulnerabilities inherent to embedded hardware
Below is a screenshot for an errata document published by ARM for their reference Cortex-A7:
As we can see, it is classified as CAT B (which has its own implications under ARM’s nomenclature/classification language), and the document states that erratum 844169 applies to all revisions of the hardware rather than to specific revisions only.
The concept of hardware-specific vulnerabilities, flaws, or errata is not new. If these potential vulnerabilities were included in vulnerability databases and given CVSS scores, those databases would be much larger, and more useful. Many of these vulnerabilities are far scarier and more prevalent than some of those seen in the media (e.g., Spectre, Row Hammer, etc.).
It takes a lot of work to exploit a hardware vulnerability, and the presence of a vulnerability does not mean exploitability; that is only realized under specific conditions, or through the implementation and/or configuration of the component. As an example, I’ve seen an embedded Physical Layer (PHY) IC, which receives and transmits network packets on the wire and forwards them to an Operating System (OS) over a Serial Peripheral Interface (SPI) bus, starve a hardware watchdog on a CPU, resulting in an unavoidable reboot.
This type of bug is not visible when looking at the stand-alone components, but only end-to-end, when examining the complete solution (hence comprehensive Quality Assurance (QA) and compliance testing are required). Such a bug could have disastrous consequences where continual monitoring of an industrial process is required.
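To make the watchdog-starvation failure mode concrete, here is a minimal simulation in Python. All names and timings are illustrative, not taken from any real system: a watchdog must be “fed” before its deadline, and a long blocking operation, like servicing a flood of packets from a chatty PHY, prevents the feed and would trigger a hardware reset.

```python
# Minimal simulation of hardware watchdog starvation (illustrative only).
# A real watchdog is a hardware countdown timer that resets the CPU when
# it expires; here we model it with monotonic timestamps.

import time

class Watchdog:
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_fed = time.monotonic()

    def feed(self):
        """Reset the countdown, as the main loop is meant to do regularly."""
        self.last_fed = time.monotonic()

    def expired(self) -> bool:
        # True once the feed deadline has been missed -> hardware reset
        return time.monotonic() - self.last_fed > self.timeout_s

wdt = Watchdog(timeout_s=0.05)  # 50 ms deadline (illustrative)

# Normal operation: the main loop does small units of work and feeds in time.
for _ in range(5):
    time.sleep(0.01)   # a little work
    wdt.feed()
print("expired?", wdt.expired())  # expired? False

# Fault: a PHY flooding the SPI bus keeps the CPU busy past the deadline.
time.sleep(0.1)        # blocking "packet storm" service routine
print("expired?", wdt.expired())  # expired? True -> unavoidable reboot
```

Note how neither the PHY nor the watchdog is individually “buggy”; the failure only emerges from their interaction, which is exactly why end-to-end testing is needed.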
Should I worry about hardware-specific vulnerabilities?
The answer to this question is: it depends. Hardware vulnerabilities, and the likelihood of their being realized by a threat, depend on multiple factors:
- Is physical access to these systems easily obtainable? (where exploits need physical access)
- Are my systems configured/implemented in a way where they are vulnerable?
- Where are these vulnerable systems located? Do I have enough detailed asset information to make an informed risk assessment?
- What are these vulnerable systems being used for? Are there implications on Safety-Reliability-Productivity (SRP)? Or for Confidentiality-Integrity-Availability (CIA)?
- Am I, or is my organization, likely to be targeted for such a tactical attack by resourceful & dedicated adversaries?
- What mitigating measures do I have in place (or lack thereof) should these vulnerabilities be exploited as delivery vectors to compromise my organization, my systems, my data, or my OT processes?
- What processes and procedures do I have to record and monitor the vulnerabilities within my assets?
- What processes, procedures, people, and skills do I have to manage, end-to-end, an incident that leverages these types of vulnerabilities (among others)?
- And what level of trust do I have towards my suppliers, the product OEM vendors, and their suppliers?
On the latter point, it is a question of risk transference and assurance, but at the end of the day, I do have to place some degree of implicit trust in my suppliers, otherwise the entire system would collapse. Hardware vulnerabilities should also be part of those same trust-but-verify discussions with vendors and with the various independent certification processes and organizations.
What can I do about hardware-specific vulnerabilities?
You could go to your nearest vendor or solution provider and purchase the best new shiny solution that has no known hardware vulnerabilities, but how likely is that? Not very, in most situations. Row Hammer (and related attacks, e.g., Drammer) have not been publicly reported in the wild and are generally seen as impractical, except under limited conditions.
However, Fortinet did find malware samples exploiting the Meltdown and Spectre vulnerabilities, which affected all sorts of CPUs from ARM, Intel, and even AMD, by leveraging a technique intended to gain performance through speculative execution and memory caching. In this case, software patches and configuration changes are a possible workaround.
More recently, there have been other vulnerabilities in Intel CPUs that can bypass a variety of security measures buried in their silicon. Your choice of mitigations: update the firmware, if that is possible and your Intel product is not end-of-life (EoL), or live with the risk that an attacker could theoretically circumvent other controls while compromising your system. Not very helpful, right?
Realistically, not much can be done as a one-time fix (unlike taking your car to the dealership for a recall), so get a good night’s sleep and manage these vulnerabilities with a clear head and an objective vulnerability management program.
If you, the asset owner, are notified or aware of potential vulnerabilities in hardware that you may possess, then here are some strategies to implement to reduce your risks:
- Gather and aggregate all the detailed information possible for your deployed assets and those that may be in storage inventory. Cross-reference this information to purchase records and, potentially, to specific documents that relate to an asset’s part number(s).
- Record all risks and affected systems in a risk register for continuous monitoring, as part of the organization’s risk management program and records.
- Define a Cyber Security Management System (CSMS) that is appropriate for your organization. In the realm of Operational Technology, the ISA/IEC 62443 standards outline a great starting point to examine, classify, and explore Systems under Consideration (SuC) and their cyber risks/threats. Another example could be using an approach such as a Cyber Process Hazards Analysis (Cyber-PHA) as a general direction for OT cyber security.
- Define a Vulnerability Management (VM) program that is in alignment with your organization’s equivalent of a CSMS, has verified end-to-end processes, and appropriate resource training.
- Define all organizational policies, procedures, standards, and guidelines as they relate to VM, the CSMS, and risk management functions.
- Implement or develop automated vulnerability information gathering that also includes aspects such as determining when products are EoL, newer versions are released, etc.
- Replace systems that are at a higher risk exposure level, and where that is not possible, implement barriers/compensating controls to monitor and protect systems from direct access (network or physical), so that these types of vulnerabilities cannot be successfully exploited by malicious parties.
- Monitor these systems for non-typical behavior (either from themselves or from adjacent systems), and even for physical access.
- Continually review and update all of the above.
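The inventory and monitoring steps above can be sketched as a simple cross-reference between an asset list and an errata/vulnerability feed. Everything below, the part numbers, revisions, and erratum IDs, is invented for illustration, and a real implementation would pull from your asset database and vendor notices.

```python
# Illustrative cross-reference of deployed assets against errata notices.
# All part numbers, revisions, and erratum IDs here are made up.

assets = [
    {"asset_id": "PLC-001", "part": "CORTEX-A7", "revision": "r0p3"},
    {"asset_id": "PLC-002", "part": "CORTEX-A7", "revision": "r0p5"},
    {"asset_id": "HMI-001", "part": "OTHER-SOC", "revision": "r1p0"},
]

errata = [
    # affected_revisions None means "all revisions", as some errata state
    {"id": "844169", "part": "CORTEX-A7", "affected_revisions": None},
    {"id": "999001", "part": "OTHER-SOC", "affected_revisions": {"r2p0"}},
]

def affected_assets(assets, errata):
    """Return (asset_id, erratum_id) pairs to feed into the risk register."""
    hits = []
    for e in errata:
        for a in assets:
            if a["part"] != e["part"]:
                continue
            revs = e["affected_revisions"]
            if revs is None or a["revision"] in revs:
                hits.append((a["asset_id"], e["id"]))
    return hits

print(affected_assets(assets, errata))
# [('PLC-001', '844169'), ('PLC-002', '844169')]
```

This is exactly why recording revision and batch detail matters: erratum 999001 above affects only one revision, so HMI-001 drops out of scope, while an “all revisions” erratum sweeps in every matching part.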
Cyber security is never deterministic, and hardware, once built, is even harder to fix than software, especially in environments with extended deployment lifecycles (e.g., OT or critical infrastructure). But there is hope.
Manage vulnerabilities with continued investment, just as you would keep up your vaccinations or maintain your vehicle’s tires. Hardware vulnerabilities are not the end of the world, and everything needs context in order to be properly assessed.