Security in Embedded Systems

This article provides an overview of security in embedded systems and insight into the challenges it poses for software development. This knowledge will help you ensure the security and integrity of your products in a rapidly evolving threat landscape, starting at the planning stage.

In a controlled environment, preventing accidental misuse and hardware failure is sufficient to achieve safe behaviour. If an unrecoverable condition is detected, the system can transition to a state with limited or no functionality and still be considered safe.

In an uncontrolled environment, various forms of sabotage can compromise the security or safety of the system. Preventing this is only possible if security is considered at every step of the lifecycle:

  1. Threat Modeling ‑ During design, the product owner team must identify the security requirements.

  2. Secure Components ‑ The software developers must implement the security requirements correctly. In addition, they must implement all other requirements in such a way that they do not add any vulnerabilities to the system.

  3. Secure Deployment ‑ Whenever software is transferred (e.g. from a supplier to a manufacturer), the supplier must provide mechanisms for checking the integrity and authenticity of the software.

  4. Active Maintenance ‑ During the lifetime of the product, the manufacturer must correct any vulnerabilities discovered in the components it uses.

  5. Secure Update and Boot ‑ When updating the system, the product manufacturer must ensure the integrity and authenticity of the new software.

Threat Modeling

The first step in creating a secure system is to identify the assets that a system provides. The product manufacturer uses threat modelling to analyse how an attacker can threaten those assets.

Threat modelling must be integrated into the design process and performed at all levels of abstraction. When the software architects define the high-level requirements, they identify further, more abstract threats, which in turn lead to additional security requirements. This analysis is repeated in each subsequent development step.

Assets that many embedded systems must protect:

  • System Safety: Most types of attacks can impact the safety of a system.

  • System Availability: If attackers can shut down the system, the system becomes useless to the customer.

  • Business Secrets: If attackers have access to the firmware, they can extract included trade secrets.

  • Legal Compliance: If attackers can make the system violate laws (send spam emails, spy on people), this has legal consequences for the product manufacturer.

  • Company Reputation: A system malfunction could cast a bad light on the manufacturer.

Once the system assets have been identified, the product manufacturer must estimate the costs and resources that an attacker is willing to invest in an attack on these resources. With this information, the development team can define the required security level that the product must achieve. Based on this security level, the developers can derive security requirements to minimise risk.

IEC 62443 Security Levels

To classify the level of security a component achieves, IEC 62443 defines five Security Levels (SL):

  • Security Level 0: No special protection is required.

  • Security Level 1: Protection against unintentional or accidental misuse.

  • Security Level 2: Protection against intentional misuse by simple means with few resources, general skills, and low motivation.

  • Security Level 3: Protection against intentional misuse by sophisticated means with moderate resources, system‑specific knowledge, and moderate motivation.

  • Security Level 4: Protection against intentional misuse using sophisticated means with extensive resources, system‑specific knowledge, and high motivation.

When developing a system according to Functional Safety standards, security level SL‑1 is covered without further activities. The terms “few”, “moderate” and “high” are not precisely defined for the higher security levels. However, a common understanding is:

  • SL‑2 protects against a hobbyist or a disgruntled former employee who consults publicly available information about security and about the system under attack.

  • SL‑3 protects against professional hackers who intend to make money by blackmail or by selling the exploit or the information they can extract.

  • SL‑4 protects against professional hacker groups that receive extensive funding from companies or governments.

IEC 62443 Security Requirements

IEC 62443 lists functional requirements that a component must implement to meet a security level. For example, “A human that interacts with the system …”:

  • SL‑1: ...must be identified and authenticated.

  • SL‑2: ...must be uniquely identified and authenticated (no shared admin account).

  • SL‑3: ...must be uniquely identified and authenticated via multi‑factor authentication if accessing from an untrusted network.

  • SL‑4: ...must be uniquely identified and authenticated via multi‑factor authentication on all networks.

Security Zones

Barriers that improve security (walls, doors, security guards, firewalls, virtualization technology, etc.) divide a system into zones, and each zone achieves the lowest security level of any of its components. While IEC 62443 addresses operational technology security in automation and control systems, the defined security levels are a valuable tool for any discussion about security.

Security is not an attribute that a system either has or doesn’t have. Thinking of security as the effort an attacker has to invest makes it comparable with the value that is to be protected.

Secure Components

The second step in achieving a secure system is to ensure that each component is designed and implemented securely and provides a secure interface to other components. A component developer must follow the principle of "secure by design" by taking into account the requirements of the applicable security standard and incorporating the results of threat modelling into each step of the design process.

Interface Contracts

Besides the correct implementation of the functional requirements of a component, the security of an implementation depends on the exact adherence to the contract of all other components used. In this case, contract means the requirements that a component places on its callers.

For example:

  • Each memory block that is taken from a pool shall be returned to the same pool (the pool implementation does not track a block’s size or which pool it came from).

  • The argument to a function shall not be zero (because the argument may be used in a division without being checked).

  • A function shall be called from within a critical section (because it operates on a shared data structure without using synchronization).

There are many reasons to impose such restrictions on the caller (performance, flexibility, portability, etc.), and unfortunately these are not always highlighted in the documentation. Nevertheless, the security of the system depends on these requirements being met.
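Such a contract can be made explicit at the interface itself. The following sketch uses a hypothetical fixed-size block pool (all names and the layout are invented for illustration) and adds cheap plausibility checks for the most common contract violations:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fixed-size block pool; names and layout are
 * illustrative, not taken from a real allocator. */
typedef struct {
    uint8_t *base;        /* start of the pool's memory region */
    size_t   block_size;  /* size of every block in this pool  */
    size_t   block_count; /* number of blocks in the pool      */
} pool_t;

/* Contract: 'block' MUST have been obtained from THIS pool.
 * The pool does not record which pool a block came from, so the
 * callee can only perform cheap plausibility checks here, not a
 * full verification. */
size_t pool_block_index(const pool_t *pool, const void *block)
{
    const uint8_t *p = (const uint8_t *)block;

    /* Detect the most common contract violations: a pointer
     * outside the pool, or one not aligned to a block boundary. */
    assert(p >= pool->base);
    assert(p < pool->base + pool->block_size * pool->block_count);
    assert((size_t)(p - pool->base) % pool->block_size == 0);

    return (size_t)(p - pool->base) / pool->block_size;
}
```

A real `pool_free` would perform the same checks before linking the block back into the free list; in release builds, the documented contract remains the caller's responsibility.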

Enforce Expectations

When a new component is implemented, developers should increase security by verifying compliance with the contracts as much as possible. Since software cannot verify all contracts at runtime, the component author must describe the remaining unchecked expectations very clearly in the corresponding documentation. Component users can follow these contracts and demonstrate compliance with the contracts using various software development techniques such as reviews, static analysis, testing, etc.
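For instance, the division contract from the earlier list can be enforced during development with an assertion. The function name and signature below are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Documented contract: 'divisor' must not be zero.
 * The assertion turns a silent contract violation (undefined
 * behaviour) into a deterministic failure during development;
 * in release builds the documented contract still binds the caller. */
uint32_t scale(uint32_t value, uint32_t divisor)
{
    assert(divisor != 0 && "contract: divisor must not be zero");
    return value / divisor;
}
```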

Communication Protocol Verification

Components implementing communication endpoints must ensure that all communication messages comply with the agreed‑upon and documented protocol.

Specifically, this means verifying the type, value range, size, and encoding of every field of every message, but also metadata such as the message rate, the sender address, and the expected order of messages. The software must check enumerations against an allow list instead of excluding items from a deny list. The reaction to protocol violations must itself be designed so that it cannot be abused.
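As an illustration, assume a hypothetical wire format with a one-byte type, a one-byte payload length, and a payload; the message types are invented for this sketch. A validation routine that checks size, the length field, an allow list of types, and value ranges could look like this:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical wire format: buf[0] = type, buf[1] = payload length,
 * buf[2..] = payload. Types and ranges are invented for this sketch. */
enum { MSG_PING = 0x01, MSG_SET_SPEED = 0x02 };

bool msg_is_valid(const uint8_t *buf, size_t len)
{
    if (len < 2)
        return false;                        /* size check             */
    if (len != (size_t)buf[1] + 2)
        return false;                        /* length field check     */

    switch (buf[0]) {                        /* allow list, not deny list */
    case MSG_PING:
        return buf[1] == 0;                  /* no payload expected    */
    case MSG_SET_SPEED:
        return buf[1] == 1 && buf[2] <= 100; /* value range: 0..100 %  */
    default:
        return false;                        /* unknown type: reject   */
    }
}
```

Message rate and ordering require per-connection state and would be checked in the same place.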

Isolate Components

Sometimes the development team wants to use a component that doesn't match the targeted security level of the product. In such a case, it is possible to run that component in an isolated environment with restricted privileges. One approach is to restrict memory access with a memory protection unit and restrict the processing runtime with a pre-emptive scheduler. The damage caused by exploiting a vulnerability in the component is now confined to the isolated environment and does not affect the rest of the system.

Security Checklist

The first step to making a component secure is to ensure that it has a relatively simple API, which guides users on how to use it correctly. Such a component requires detailed, up-to-date documentation, including a security manual: a checklist of all the steps that the user must take to ensure the secure use of the component (e.g. “run this validation”, “compile with these options” or “insert your public key in this constant”).

Cryptographic Algorithms

The security of a system is most likely based on some cryptographic operation, either for encryption and decryption or for generation and validation of signatures and checksums. These cryptographic operations only provide security if:

  • well analyzed, standardized, and state of the art algorithms are used, which are

  • expertly implemented and

  • can be replaced by something more secure in the future.

Very few people have the knowledge and mindset to design good cryptographic algorithms. New algorithms are published and deemed secure (for the time being, at least) after cryptanalysis experts have spent years trying and failing to crack them.

Implementing an algorithm securely is almost as tricky. There are countless hurdles to overcome, from choosing the appropriate padding scheme to protecting against timing attacks or using the cryptographic primitives in the correct mode.

It is a widely accepted best practice to rely on a cryptographic library from a reputable third party.

System Configuration

If an embedded device has configuration parameters that affect its security, it should have secure defaults. For example, a password-protected interface should either have a unique and strong password, or force the user to change the password before the device is operational.

Shipping a shared default password and merely asking the user to change it is never sufficient. Similarly, the system design must enable encryption by default, rather than advising the user to enable it later.
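A minimal sketch of secure defaults, with invented field names: encryption starts enabled, and the management interface stays locked until the factory password has been replaced.

```c
#include <stdbool.h>
#include <string.h>

/* Illustrative device configuration; field names are assumptions. */
typedef struct {
    bool password_changed;    /* set once the user chose a password */
    bool encryption_enabled;  /* secure default: enabled            */
} device_config_t;

void config_init(device_config_t *cfg)
{
    memset(cfg, 0, sizeof *cfg);
    cfg->encryption_enabled = true;  /* secure by default */
}

/* The interface refuses service until provisioning is complete. */
bool interface_operational(const device_config_t *cfg)
{
    return cfg->password_changed;
}
```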

System Logging

Logging security-related events is essential for security breach auditing and analysis, but difficult to implement. The integrity and confidentiality of the log is a security asset that the product manufacturer must analyze in threat modeling.

For security breach analysis, it is desirable to include as much information and detail as possible in the log. However, the limited resources of embedded devices constrain this decision. If the confidentiality of the log cannot be ensured, a lot of valuable information must be left out. Furthermore, laws such as the GDPR restrict the logging of personal data or require its deletion after a short time.

Another aspect that the development team must consider is log flooding. If attackers do something that generates many log records, less relevant information could overwrite critical information.
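One common mitigation is to rate-limit log writes, for example with a token bucket. The sketch below is illustrative, not a complete logging subsystem; a real system would also count dropped records, since the drops themselves are security-relevant.

```c
#include <stdbool.h>
#include <stdint.h>

/* Token bucket: at most 'burst' records at once, refilled with
 * 'rate_per_tick' tokens on every timer tick. */
typedef struct {
    uint32_t tokens;
    uint32_t burst;
    uint32_t rate_per_tick;
} log_limiter_t;

void limiter_tick(log_limiter_t *l)
{
    l->tokens += l->rate_per_tick;
    if (l->tokens > l->burst)
        l->tokens = l->burst;   /* cap at the burst size */
}

/* Returns true if the next log record may be written. */
bool limiter_allow(log_limiter_t *l)
{
    if (l->tokens == 0)
        return false;           /* flooded: drop the record */
    l->tokens--;
    return true;
}
```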

When designing a secure logging scheme, consider that logs often serve as evidence in liability cases between the device manufacturer and the device owner, where the owner may have an interest in manipulating the logs.

Secure Deployment

The third step in achieving a secure system is to ensure that software, both compiled and in source code, and all documentation are stored only on secure media and transferred over a secure channel. Rather than attempting to exploit a weakness in a system, it may be easier for an attacker to create such a weakness by modifying portions of the design, code, or binary before they are transferred to the embedded system.

IT Environment

An attacker could achieve this by accessing or manipulating a developer’s machine or the server hosting the source code management system, or by manipulating the software during transmission. Measures to protect against the manipulation of developer machines and servers include:

  • Proper IT rights management

  • Regularly updating all used software

  • Limiting physical access

  • Security policies (e.g., employees must lock their computers when leaving them unattended)

Software Transmission

There are also various measures to prevent the software from being manipulated while it is transmitted:

  • Establish a secure communication channel using PGP or X.509 certificates to sign (and encrypt) all communication.

  • When sending a delivery by email, communicate a cryptographically secure fingerprint over a separate channel (phone, letter, encrypted chat, etc.)

  • Use a data transfer portal secured via TLS, X.509 certificates, authentication, and authorization.

Active Maintenance

The fourth step in achieving a secure system is to ensure that all components remain secure during operation.

Significant vulnerabilities may be discovered at any time in research on protocols, cryptography, and libraries, and the end users of the product must then update their security systems. Software component vendors must establish a system to notify all their customers of discovered vulnerabilities. Users of these libraries must receive this information in order to create and deploy updates to the software in their systems.

Secure Update and Secure Boot

The fifth step in achieving a secure system is to ensure the integrity and authenticity of all software executed on the system. This can be achieved in principle in one of three ways:

  • Only the manufacturer can install software

  • The update process is secure, or

  • The boot process is secure.

If the end user cannot update the software, a system with a known vulnerability must be disabled and replaced with an improved device. This might be a viable solution in some cases, but most embedded systems need a way to install new firmware in the field.

If an attacker can gain physical access to the embedded system and manipulate the contents of the stored firmware, the boot process must be secured. Otherwise, a secure boot process is unnecessary and a secure update process is sufficient. A secure update process has no impact on start-up times and is suitable for most embedded systems. Both mechanisms must ensure the following for the application:

  • Firmware Authenticity: A trusted party created the firmware

  • Firmware Integrity: No other party modified the firmware

  • Firmware Age: The update must provide a newer firmware than the currently installed one (preventing rollback to a more vulnerable version)

The product manufacturer should use a digital signature scheme to verify the authenticity of the firmware. Message Authentication Codes (MACs) can provide similar protection, but they require a shared secret key between the software provider and the embedded device, and anyone who extracts that key from a device can forge firmware. Multiple devices must therefore not share the same secret key.

The product requires a root of trust for the authentication scheme. This root of trust is a certificate or public key that the embedded system can use to verify the signature of the firmware.

The authentication scheme also typically verifies the integrity of the firmware.

Finally, the boot code must check the age of the software. The boot code stores the firmware version number in a secure location where it cannot be tampered with. The boot code will then not install or boot firmware with a lower version number.
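The version check itself is simple; the difficulty lies in keeping the stored version tamper-resistant (monotonic counter, OTP fuses, etc.). A sketch with that storage abstracted into a plain struct for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* In a real device, 'installed_version' lives in tamper-resistant
 * storage; here it is an ordinary field for illustration. */
typedef struct {
    uint32_t installed_version;
} rollback_state_t;

/* Accept only firmware that is strictly newer than the installed
 * version; equal or lower versions are treated as rollback attempts. */
bool try_install(rollback_state_t *s, uint32_t candidate_version)
{
    if (candidate_version <= s->installed_version)
        return false;
    s->installed_version = candidate_version;  /* advance the floor */
    return true;
}
```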

Secure Update

In a secure update scheme, the firmware verifies the software once after receiving it and stores it in trusted storage (internal flash). The device uses the updated and verified software on all subsequent boots. Challenges of the update process include the fact that the complete firmware often does not fit into the temporary buffer used to receive it, and that an attacker could cut power mid-update to prevent the security checks from completing.

Secure Boot

In a secure boot scheme, the boot code loads and verifies the software at each reboot. This allows us to store the software on an insecure medium (external flash memory, SD card, network server, etc.), which simplifies updates and makes the hardware cheaper to manufacture. However, more memory is required and each reboot takes longer.

The boot code still requires a trusted memory for the firmware loader and the root of trust (the cryptographic key for verifying the firmware).

After reset, the boot code loads the firmware into RAM, verifies the authenticity, integrity, and version, decrypts it, and executes it from RAM.
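The sequence can be sketched as follows. The XOR checksum and XOR “cipher” below are deliberately trivial stand-ins for a real signature scheme and cipher; only the order of the steps is the point.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FW_KEY 0x5A  /* stand-in decryption key (NOT secure) */

/* Stand-in for signature verification: XOR checksum vs. 'tag'. */
static bool fw_image_ok(const uint8_t *img, size_t len, uint8_t tag)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum ^= img[i];
    return sum == tag;
}

/* Secure boot sequence: load, verify, check version, decrypt, run. */
bool boot_firmware(const uint8_t *flash_img, size_t len, uint8_t tag,
                   uint32_t img_version, uint32_t min_version,
                   uint8_t *ram, size_t ram_len)
{
    if (len > ram_len)
        return false;
    memcpy(ram, flash_img, len);         /* 1. load into RAM        */
    if (!fw_image_ok(ram, len, tag))     /* 2. verify authenticity  */
        return false;                    /*    and integrity        */
    if (img_version < min_version)       /* 3. reject rollback      */
        return false;
    for (size_t i = 0; i < len; i++)
        ram[i] ^= FW_KEY;                /* 4. decrypt in place     */
    /* 5. jump to the entry point in RAM (omitted here) */
    return true;
}
```

Note that the copy in RAM is verified, not the flash contents, which avoids a time-of-check/time-of-use gap between verification and execution.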

If your application has a security vulnerability that allows an attacker to modify the boot code, the attacker can take complete control of the device. There are several forms of hardware support that can protect against such an attack:

  • One Time Programmable (OTP) boot records pin the checksum of the boot code or trust anchor.

  • Hardware Security Modules (HSMs) are dedicated coprocessors with elevated rights and tamper resistant storage.

  • Cryptographic coprocessors with key stores allow using a key (for decryption or verification) while preventing any system part from reading or changing it.

To use this hardware support, the boot process is usually split into three stages:

  1. The hardware verifies the Boot Manager and Root of Trust.

  2. The Boot Manager verifies and executes the Boot Loader.

  3. The Boot Loader loads, verifies, decrypts and executes the application.

The boot loader and boot manager are kept separate so that the boot manager stays as simple as possible: it is unlikely to contain errors and rarely needs to be updated. The boot loader contains all the complexity of device drivers and network protocols, so it is more likely to contain errors; since the hardware is not involved in verifying it, updating the boot loader is easier.

Conclusion

We started our journey into the world of embedded system security by asking: “What is security and do we need security in embedded systems?” Moving on, we provided a rough overview of the embedded system development process by highlighting five aspects that improve the security of the resulting product:

  1. Threat Modeling

  2. Secure Components

  3. Secure Deployment

  4. Active Maintenance

  5. Secure Update and Boot

Since each step could fill entire books, this article focuses on providing an overview and highlighting some of the challenges you may face when developing components for your next secure product.
