
Secure Boot

Microcontrollers and electronic controls govern our lives. From nuclear power plants to factories and commuter trains, they are everywhere. Less than a decade ago, most control systems were innocuous little boxes with proprietary hardware and software, completely isolated from the wider world. When one stopped working, a service technician had to travel to the device in person. Time and cost constraints have pushed more and more control systems online, where service technicians can handle multiple incidents remotely from the comfort of their workstations. This new convenience also brings a new threat: cyber-physical attacks.

Motivations for Attacks

Integrity: Why would anyone manipulate machine controls? Is this the territory of secret services and terrorist organizations? It can be, as Stuxnet has shown the world. Saboteurs? It might sound unlikely, but what is hacking unprotected control systems if not sabotage? When control systems operate offline, the saboteur needs to be physically present to cause any damage. He needs to gain access and might be caught in the act. A system operating online minimizes that risk: the attacker can work remotely, tap into entire pools of knowledge, and even collaborate anonymously with many like-minded attackers. The motivation is irrelevant, be it a political message, attempted extortion, or simply a hacker showing off his skills.

The facility’s operators could also try to “soup up” their machines and plants. However, operating manufacturing machinery outside of its intended parameters has many risks, with more wear and tear being the least worrying scenario. The machines’ original producers want ways to stop or at least prove such manipulation for warranty and liability reasons.

Confidentiality: Industrial espionage remains a risk that is too often overlooked. Yet the operating parameters or control concepts of manufacturing facilities are very interesting prey for competitors. Remote connections again make data theft easier. Cinema might have us believe that one can always see who is accessing which data at what time, but real-life systems often record little more than log-ins, in logs that are all too easily manipulated. Data theft often goes unnoticed, and the thief can analyze his “loot” leisurely offline.

How can I protect myself?

Many modern control systems use standard hardware, such as industry-grade PCs with standard operating systems like VxWorks, QNX, Windows, or Embedded Linux. Run-time environments in control systems also often follow a shared standard (such as CODESYS). Any remote network should be protected by VPNs and firewalls, but these alone do not sufficiently protect the control systems themselves. Once past these hurdles, an attacker is free to do as he likes in the network, and many service technicians store the passwords or access keys for their clients’ VPNs on unprotected laptops. No chain is stronger than its weakest link, so a single absent-minded technician or a single weak password can undermine the security of the entire system.

Firewalls and VPNs can have loopholes and backdoors. Encryption keys are often too short, especially with RSA. Recent events have shown that the security promised by such systems must not be taken as the final word. The downside of the media revelations is that potential attackers now know about new weak points to exploit.

Physical separation is no protection either. In any business, many different people can access the control systems. Service technicians access devices right on site with their laptops. Backups of the control software and its process parameters are stored in yet other locations.

Protection therefore needs to start on the target system, that is, the controls themselves. The control system must run only code, and use only configurations and parameters, that have been cleared by an authorized party.

Most control systems are field-upgradeable: new features can be added and errors remedied. Such updating capabilities are, however, a chink in the system’s armor, which a malevolent attacker can use to inject his own manipulated code remotely or even right at the device. To prevent this, the system needs to boot and run in a secure environment, with all of its components, from the bootloader up, cryptographically authenticated as trustworthy. This is called secure boot.

Why so complicated? Why not simply use a hash? 

Asymmetric cryptography such as ECC relies on a pair of keys, one private and one public. The mathematics makes this a one-way street: the private key cannot be recovered from the public key.

The private key is kept safe, ideally on a CodeMeter Dongle. As the name implies, the public key is available to everyone.
So why use two keys? The private key is used to create a signature, which only the key holder can do. The public key then verifies the validity of the signature, but it cannot be used to create a valid signature by itself.
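
To make the difference tangible, here is a minimal sketch of signing and verifying a firmware image with ECDSA in Python, using the third-party cryptography package. The key handling and the firmware bytes are purely illustrative and do not represent the CodeMeter API.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The private key stays with the producer (ideally inside a dongle or HSM);
# only the public key is shipped with the device.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

firmware = b"... firmware image bytes ..."

# Signing is only possible with the private key.
signature = private_key.sign(firmware, ec.ECDSA(hashes.SHA256()))

# Verification needs only the public key and cannot be used to forge signatures.
try:
    public_key.verify(signature, firmware, ec.ECDSA(hashes.SHA256()))
    print("firmware signature is valid")
except InvalidSignature:
    print("firmware has been tampered with")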

A hash, by contrast, whether salted or not, can be recomputed by anyone: whoever is able to verify the hash value can also create a valid one for manipulated data, and even a keyed hash requires the verifier to hold the same secret used to create it. Signatures should therefore never be substituted by hashes; the end result is only a deceptive sense of security.
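
The following sketch, using only Python’s standard library, shows why: the device that verifies a keyed hash holds the same secret as the producer and could therefore “authenticate” manipulated firmware itself. All names are illustrative.

import hashlib
import hmac

shared_key = b"the same secret on producer and device"
firmware = b"... firmware image bytes ..."

# The producer computes a keyed hash (MAC) over the firmware ...
tag = hmac.new(shared_key, firmware, hashlib.sha256).digest()

# ... and the device verifies it with the very same key.
is_valid = hmac.compare_digest(
    tag, hmac.new(shared_key, firmware, hashlib.sha256).digest()
)

# The weakness: anyone who can verify also holds shared_key and can
# produce a "valid" tag for manipulated firmware, which a signature
# scheme with a private/public key pair rules out.
forged_tag = hmac.new(shared_key, b"manipulated firmware", hashlib.sha256).digest()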

How does Secure Boot work?

The individual components of the control system are signed digitally by the producer or plant engineer. But who checks which components, and when and where do these checks happen? A first approach is to have each layer verify whether the next layer may be started: the bootloader checks the operating system, the operating system checks the run-time environment, the run-time environment checks the application, and so on. For this chain to function, the public key in the first layer must never change, i.e. it needs to remain authentic. That means the first layer itself must be permanent and unchangeable: it is the secure anchor of the chain. The optimum in security is a pre-bootloader physically built into the chip (system-on-chip, SoC). A cheaper alternative is a two-stage bootloader whose first stage cannot be updated, which offers at least adequate protection against remote attacks.
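
A minimal sketch of this forward chain of trust, again in Python with the cryptography package: each already-trusted stage verifies the signature of the next one against a key it vouches for, starting from a root public key anchored in the unchangeable first stage. Function and variable names are hypothetical and do not describe the CodeMeter implementation.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_stage(trusted_public_key, image: bytes, signature: bytes) -> bool:
    """Return True if the image carries a valid signature under the trusted key."""
    try:
        trusted_public_key.verify(signature, image, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

def execute(image: bytes) -> None:
    # Placeholder: on a real device this would hand control to the verified image.
    pass

def boot_chain(root_public_key, stages) -> None:
    """stages: list of (image, signature, public_key_of_next_stage).
    root_public_key is the anchor stored in the immutable first layer."""
    trusted_key = root_public_key
    for image, signature, next_key in stages:
        if not verify_stage(trusted_key, image, signature):
            raise RuntimeError("boot halted: component failed authentication")
        execute(image)          # only verified code is ever started
        trusted_key = next_key  # the verified stage vouches for the next key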

Is my environment safe?

Stricter security requirements demand that each layer also check whether the previous layer has been processed correctly. CodeMeter offers both means, forward and backward checks. The backward check is handled by a state engine on the CmDongle and encryption coupled with that state engine: the next layer can only be decrypted once the previous layer has been processed correctly and the right state is recorded on the CmDongle. This prevents individual parts of the software from being simulated in an attacker’s lab, and it precludes any analysis of the software in such an environment. Espionage becomes impossible, as does the search for possible exploits or implementation errors. The device benefits from this additional shield.
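
Conceptually, the backward check works like a key store that only releases the decryption key for a layer once the expected predecessor has reported success. The following Python sketch illustrates that idea with an in-memory state machine; it is a thought model only, not the CmDongle state engine or its API, and all names are invented.

import secrets

class StateGatedKeyStore:
    """Toy stand-in for a dongle-side state engine: the key for a layer is
    only released when all previous layers completed in the expected order."""

    def __init__(self, keys_by_layer: dict, expected_order: list):
        self._keys = keys_by_layer      # layer name -> symmetric key
        self._order = expected_order    # e.g. ["bootloader", "os", "runtime", "app"]
        self._state = 0                 # number of layers completed correctly

    def key_for(self, layer: str) -> bytes:
        # Release the key only if this layer is exactly the next one expected.
        if self._order.index(layer) != self._state:
            raise RuntimeError("previous layer not verified yet")
        return self._keys[layer]

    def report_completed(self, layer: str) -> None:
        # Advance the state only on a correct, in-order completion report.
        if self._state >= len(self._order) or layer != self._order[self._state]:
            raise RuntimeError("unexpected state transition")
        self._state += 1

# Usage sketch: the OS image can only be decrypted after the bootloader
# has run and reported the correct state.
store = StateGatedKeyStore(
    {"bootloader": secrets.token_bytes(32), "os": secrets.token_bytes(32)},
    ["bootloader", "os"],
)
store.report_completed("bootloader")
os_key = store.key_for("os")   # released only now; used to decrypt the OS image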

Conclusions

Secure boot and integrity protection that use signatures and encryption are a mainstay of all secure controls. They make physical attacks substantially more difficult and prevent virtually all cyber-physical attacks.

CodeMeter offers a solution that is engineered to burrow deep into the device and requires all components to verify each other. Permissions can be defined in fine detail to match the relevant use case. CodeMeter protects – against espionage, manipulation, and sabotage.

 

KEYnote 26 – Edition Fall 2013
