Embedded Device Security Certifications

For those who have not attended previously, Hardwear.io is a technical conference focused on hardware security. While the conference is only in its 4th year, both the training and the speakers have been world-class since its inception, and its success is obvious: it has expanded to a twice-a-year format and has even begun holding sessions in North America. Having frequently enjoyed attending the conference myself, I was delighted to be invited to join the CXO panel discussion in The Hague. The topic of discussion was ‘Security Certifications for IoT Devices’, and in preparation I made an effort to collect my thoughts on the subject.

NCC Group’s clients often ask about security certifications for hardware devices. These questions typically come from one of two perspectives: 

  1. “What certifications can I get for my products that will help me improve sales?”
  2. “I have a customer demanding that I certify my product under <SCHEME>, how can I most easily satisfy this need?”

There is a great degree of variability in security certifications. Some are wide-reaching and can have a measurable effect on product security, while others have such a narrow focus that they are almost worthless from a technical security perspective. In either case, adding security to a system merely to satisfy a certification program after the product is complete can cause major upheaval in both product design and development processes. Needless to say, this last-minute “bolted on” security can be both costly and ineffective, so we always recommend starting with the right product security requirements during the initial design phase.

Certification programs can be divided into various categories and there is no single correct way to divide them. Here I have chosen several groupings based on the purpose of the certification program.

Code Quality Frameworks

Code quality standards and certifications like MISRA and CERT C can measurably improve the security of your codebase. They achieve this primarily by restricting the developer to a subset of the C language that is considered less error-prone. While not security-focused per se, an improvement in overall software quality will often bring a reduction in security vulnerabilities as well. That said, adoption by itself can be a pure compliance effort, and developer incentives to add exceptions without proper security review can negate the positive effects. There used to be a similar scheme with an even stronger focus on security: the Coverity Secure Code Certification. Coverity was a static analysis tool and company (since acquired by Synopsys) whose certification program (unfortunately now defunct) helped ensure that there were no detectable defects in the code. Practically, this was a high bar to achieve because it was a moving target as the tools improved over time, and eliminating false positives can be a tedious task on large legacy codebases, especially those that have not seen much prior security hygiene effort. Still, these tools find real vulnerabilities, and a measurable improvement in code quality can be had by their proper use. They can be integrated into the development process to catch issues at the earliest possible point: the first compile or change submission.
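To make this concrete, here is a minimal C sketch of the kind of defect that rules such as CERT C’s INT30-C (“Ensure that unsigned integer operations do not wrap”) and the static analysis tools built around these standards are designed to catch. The function names, buffer size, and packet layout are hypothetical, not drawn from any particular codebase.

    /*
     * Illustrative sketch only: a classic integer-wrap defect of the kind
     * CERT C (INT30-C) and static analysis tools are meant to catch.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MSG_BUF_SIZE 64u

    /* Non-compliant: hdr_len + payload_len can wrap to a small value,
     * defeating the bounds check, after which the second memcpy
     * overflows buf. */
    int parse_packet_unsafe(const uint8_t *pkt, uint32_t hdr_len, uint32_t payload_len)
    {
        uint8_t buf[MSG_BUF_SIZE];
        if (hdr_len + payload_len > sizeof(buf)) {   /* sum may wrap */
            return -1;
        }
        memcpy(buf, pkt, hdr_len);
        memcpy(buf + hdr_len, pkt + hdr_len, payload_len);  /* overflow */
        return 0;
    }

    /* Compliant with the spirit of INT30-C: reject inputs whose sum
     * would wrap before performing the bounds check. */
    int parse_packet_safe(const uint8_t *pkt, uint32_t hdr_len, uint32_t payload_len)
    {
        uint8_t buf[MSG_BUF_SIZE];
        if (payload_len > UINT32_MAX - hdr_len) {    /* addition would wrap */
            return -1;
        }
        if (hdr_len + payload_len > sizeof(buf)) {
            return -1;
        }
        memcpy(buf, pkt, hdr_len);
        memcpy(buf + hdr_len, pkt + hdr_len, payload_len);
        return 0;
    }

    int main(void)
    {
        uint8_t pkt[MSG_BUF_SIZE] = {0};
        uint32_t hdr_len = 16u;
        uint32_t payload_len = UINT32_MAX - 8u;  /* sum wraps to 7 */

        /* The wrapped sum slips past the naive bounds check... */
        printf("naive check passed: %s\n",
               hdr_len + payload_len <= MSG_BUF_SIZE ? "yes" : "no");
        /* ...while the compliant parser rejects the same input. */
        printf("safe parser result: %d\n",
               parse_packet_safe(pkt, hdr_len, payload_len));
        return 0;
    }

A static analysis or MISRA/CERT checking tool wired into the build would flag the first function at the first compile or change submission, which is exactly where these frameworks provide the most value.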

OEM Ecosystem Frameworks

Some large ecosystem providers have begun rolling out security programs of their own.

Alexa Voice Service (AVS) is one such example: in order for an OEM to make use of AVS technology in their product, they must adhere to a set of minimum security best practices established by Amazon. Similarly, for Android devices, the Android Compatibility Test Suite (CTS) contains many security-related tests to help ensure OEMs are implementing security correctly.

Retrofitting a security requirements framework onto an existing OEM ecosystem can be a challenge. Convincing the entire OEM community to adopt new security requirements at their own cost can be economically burdensome. In more established ecosystems, with many years of products already in the market, some OEMs may have gone without proper security hygiene for quite some time. This leaves them much further behind than desired, amplifying both the cost of improvements and the need for external security guidance. For these reasons, it may be less disruptive to start small and manageable, and to implement a regular cadence of improvements to the security requirements, beginning with the lowest-effort, highest-value improvements first. Allowing OEMs to adapt to the pace of security improvements with each new product generation can help alleviate the stress of implementing them.

The introduction of these ecosystem-specific programs is a relatively new and growing trend. While the great improvements they promise have not yet been delivered, NCC Group firmly believes this is a viable long-term strategy.

Digital Rights Management

Entrusting valuable audio or video content to a user’s device is a risk if the device’s security cannot be attested. For this reason, some content providers and DRM schemes, such as Netflix and PlayReady, provide a set of security robustness rules against which OEM devices must be tested before being granted licenses to process the media content.

Many cellular phone carriers have implemented security requirements testing within their device acceptance processes: carrier-specific testing programs that OEMs must submit mobile devices to before those devices are allowed to operate on the carrier’s network. For carriers that sell subsidized devices, these programs enforce tough device security requirements to support the subsidy lock feature that is vital to this business model. A security failure of the platform would cause a failure of this DRM mechanism and lead to large financial losses, so such carriers will have very robust security requirements for the devices they subsidize.

All effective DRM functionality requires that certain security guarantees be provided by the device platform. Because of contractual requirements for content licensing and other business needs, the security requirements of DRM schemes can be quite stringent and demand very robust foundational product security. These can be some of the toughest device security requirements to meet, because the threat model specifically includes the owner/user as a threat actor, which escalates the importance of defending against local and physical attacks.

Industry Specific Frameworks

Many industries specify their own device security requirements. These may be important for your product: failure to comply may prevent your product from ever reaching the market, particularly in regulated industries. Some common examples are listed below; however, you should research whether your industry has published guidance that applies to you.

  • Within the US, the Food and Drug Administration (FDA) is responsible for regulating medical device security. It publishes a set of guidelines covering the security risks of connected medical devices of all sorts; these guidelines generally follow the NIST Cybersecurity Framework.
  • Smart meters have been rolled out across the UK over the past few years. Because of safety and national security concerns, all smart meter designs are required to undergo a Commercial Product Assurance (CPA) assessment to help identify and remediate security risks. These requirements were developed in partnership between industry and the NCSC, and cover many security issues common to devices today.
  • The automotive industry has adopted several security standards, largely tailored to fit existing safety frameworks, which are already well understood. SAE J3061 “Cybersecurity Guidebook for Cyber-Physical Vehicle Systems” (paywalled) is one of the primary sources of security requirements in the industry and helps automotive ECU manufacturers and integrators throughout the product development lifecycle.

Legislated Security Requirements

Until recently, holding device manufacturers to account for their insecure devices has been rare, but it does happen. In the US, the Federal Trade Commission (FTC) has been slowly but steadily making examples of some of the most egregious offenders, winning cases against ASUS in 2016, BLU in 2018, and D-Link in 2019. All of them knowingly and repeatedly made insecure devices. But the OEMs will always outnumber the FTC, so it remains an uphill battle.

More proactively, some jurisdictions have begun introducing legislation to provide security guidance and protect the public from the plague of insecure consumer electronics. Recent examples are California’s SB-327 and Oregon’s House Bill 2395, both of which take effect on January 1, 2020. Nearly identical, these laws essentially reduce to a single requirement: no shared default passwords. They have been widely criticised within the security community, in part because they do not go nearly far enough, and as a missed opportunity to bring about real, meaningful change. Other laws introduced elsewhere, including the US Internet of Things (IoT) Cybersecurity Improvement Act of 2019 and the UK’s Secure by Design effort, go only slightly further and require regular vulnerability patching. Many more jurisdictions are considering similar laws.
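To illustrate what “no shared default passwords” can mean in practice, the hypothetical sketch below shows one possible first-boot gate that refuses to expose network-facing services until a unique, per-device credential exists. Every function and storage detail here is an illustrative placeholder, not anything mandated by the laws’ text.

    /*
     * Hypothetical sketch of a first-boot gate for a "no shared default
     * passwords" requirement: remote services stay disabled until a
     * unique, per-device credential has been provisioned.
     */
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for a flag kept in secure storage on a real device. */
    static bool credential_provisioned = false;

    static bool credential_store_is_provisioned(void)
    {
        return credential_provisioned;
    }

    /* On a real device this would force the user (or the factory line) to
     * set a unique credential; here it simply simulates a completed setup. */
    static bool run_local_setup_flow(void)
    {
        credential_provisioned = true;
        return true;
    }

    static void start_remote_management_service(void)
    {
        printf("remote services enabled\n");
    }

    int main(void)
    {
        /* Never fall back to a password shared across the product line:
         * keep network-facing services off until a credential is set. */
        if (!credential_store_is_provisioned() && !run_local_setup_flow()) {
            printf("setup incomplete; network-facing services remain disabled\n");
            return 1;
        }
        start_remote_management_service();
        return 0;
    }

This mirrors the two options SB-327 itself allows: a preprogrammed password unique to each manufactured device, or requiring the user to generate new authentication credentials before access is granted for the first time.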

The criticisms are hard to disagree with, as these laws don’t even come close to enforcing a minimum, table-stakes security posture for devices hitting the market today. But these legislated schemes are interesting not because of what they say vendors must or should do, but because of what they say vendors must NOT do: “below this line, you are not just negligent, but criminally negligent”. If given the choice, most device manufacturers will hopefully not settle for merely “not criminally insecure” and will set the security bar a little higher than that. This is a powerful tool.

But there is also an implicit acknowledgement of economic factors in all of these laws: security engineering is not free, and trying to improve security all at once is not a viable strategy. These laws are therefore a soft introduction to security, one that cuts off the long tail of cheap, insecure devices in a way that still allows short-term profits and innovation.

I fully expect these programs to ramp up their requirements over time, and I am hopeful that this is the start of a long-term conversation about consumer data protections. By eliminating the lowest of the low from the market, these laws are a useful beginning.

Other Programs

There are many other certification schemes, and this article cannot pretend to be exhaustive. Some of the more common ones that you may encounter include:

  • Common Criteria (ISO/IEC 15408) is an internationally recognized security certification program. The Evaluation Assurance Level (EAL) achieved by any given product is proportional to the cost of the validation effort, and only the most mission-critical devices will attain a high EAL. This standard is intended to apply to all IT systems, not just embedded devices, but it can still be used in some cases.
  • FIPS 140-2: This scheme focuses narrowly on validation of cryptographic implementations; if your cryptography resides in hardware, this may be a validation you seek. Because the implementer largely decides the boundary of the assessed portion of the system, the process can be as rigorous or as light as you wish. Consequently, we often see devices bearing stickers whose sole purpose is to define the FIPS 140-2 boundary, with their removal voiding the security validation. Lists of validated products are maintained by NIST.
  • NIST is a great source of security best practices, including FIPS 140-2 (above), SP 800-63B Digital Identity Guidelines, and the DFARS cybersecurity requirements for manufacturers.
  • Underwriters Laboratories (UL) has long certified electrical devices for safety within the US. In 2015 it created the UL Cybersecurity Assurance Program (UL CAP) to certify systems against its UL 2900 series of standards, which helps demonstrate that a product or system is secure to modern standards. NCC Group has seen increasing attention paid to this standard, and it has been adopted by the FDA and other organizations as a foundation of their own security programs.
  • Consumer Reports is an independent consumer testing agency in the US with a long history of testing product quality and publishing trusted and impartial product reviews. Continuing to adapt to the times, they have begun security testing of many connected products under their Digital Standard initiative, which focuses on consumer privacy.
  • EMVCo is a consortium of payment card vendors. They have a certification program which, among other things, is used to assure the baseline security of payment terminals.
  • GlobalPlatform certifies trusted execution environment (TEE) and secure element based systems against a Common Criteria protection profile.
  • ENISA is a European cybersecurity agency with a mandate (via the EU Cybersecurity Act) to develop schemes for certifying devices against security standards.
  • The ARM Platform Security Architecture (PSA) provides security certification of ARM-based devices and is aimed primarily at chip vendors.
  • ISO/IEC 27030 provides some guidance for IoT security; however, it is still in draft and no certification program is yet available.

Final Thoughts

Fundamentally, if you are trying to decide whether a product is secure, looking at its certifications is probably not the best method anyway. Certifications are a point-in-time measurement and, as we know, attackers evolve rapidly. It is far better to look at the actions and track record of the OEM for positive security traits. Things like:

  • Having a clear and open vulnerability reporting system that is responsive to externally reported issues.
  • Providing regular proactive software updates that include security patches.
  • Seeking impartial third-party reviews of their products.
  • Having no obvious signs of gross security negligence such as ongoing FTC lawsuits or repeated product recalls.

Security certifications rarely prescribe what you must do to be secure; more commonly they detail what you must avoid doing in order to not be insecure. In this light, they almost universally set a minimum bar of security, below which your product will definitely be considered insecure, and in some legislated cases, criminally so. The requirements of such a certification program or regulatory framework should ideally not be a burden: if you have done everything right, the bar you set for your product security will be much higher than the bare minimum required by the program, and documenting your existing security posture in a format compatible with the program or framework should be a mere paperwork exercise.

Given the variability in the effectiveness and practical value of certifications, one might ask whether we can even tell if a product is secure by looking at its security certifications. The previous paragraph gives us a clue: if a product has only one or two security certifications, these may have been obtained for purely compliance reasons rather than because the vendor has made the requisite security improvements. If it has achieved many certifications over a short period of time, there is a strong possibility that it was already in a very good security position to do so.
