
Software partitioning has a role in making industrial infrastructure secure, writes Alexander Damisch, director for industrial markets at Wind River

Security issues in industrial markets have been receiving much attention in the media. The vast majority of devices that power these infrastructures are built on aging technology and are not well prepared for the latest cyber security threats.

Taking the energy grid as an example: one does not have to be a scientist to find the holes when approximately 70% of the infrastructure is more than 30 years old.


Devices that were never designed for a connected world in the first place are wide open to attack. Utility providers and the dominant players that power this market are under intense pressure; critical infrastructure is supposed to be stable, robust and often certified for functional safety requirements.

While safety systems are left untouched after certification for risk, complexity and cost reasons, a secure system is only secure if it can withstand the latest threats. The contradiction between the lifecycles of safety and security is a very expensive challenge today. However, the good news is that embedded virtualization can alleviate these security challenges.


To provide a solution that can be retrofitted to an existing infrastructure, new security devices are often integrated with existing devices. Firewalls, intrusion prevention systems (IPS), intrusion detection systems (IDS) and other boxes add to CAPEX, but also increase the complexity of supply chain management for installations that may have a lifecycle of 25 years or more.

New systems are designed with both safety and security in mind. Functions that would be in separate boxes in the existing infrastructure can be consolidated to reduce the CAPEX burden and avoid even greater costs in supply chain management.

This idea is not new: consolidating workloads onto more intelligent systems by leveraging improved hardware architectures that support virtualization creates a significant opportunity to meet today’s architectural challenges.

One proven approach to security is to keep devices that need to be secure away from general access: for example, physically or virtually separated from networks such as the Internet.

The implication of this approach is that physically separate devices and networks need to be built for secure versus insecure devices. In general, this is impractical because of the expense and redundancy involved. A more cost-effective solution is to leverage embedded virtualization.

In embedded systems, virtualization at the processor and board level is implemented by a hypervisor. A hypervisor allows several virtual systems to run efficiently on a single piece of hardware. Hypervisors can be used to consolidate several systems into one, saving material costs; reducing size, weight and power; and reducing supply chain costs and complexity. Virtualization with a hypervisor also allows developers to partition a system for functional, security and safety reasons.
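As an illustration of what such partitioning typically looks like, the sketch below shows a hypothetical static configuration table for a two-partition system. The structure names, fields and values are invented for this example and do not correspond to any particular hypervisor's configuration format.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical description of one guest partition: the fields are
 * illustrative only, not the configuration format of any real hypervisor. */
struct partition_cfg {
    const char *name;           /* human-readable partition name          */
    uint64_t    ram_base;       /* start of the physical RAM window       */
    uint64_t    ram_size;       /* size of the RAM window in bytes        */
    unsigned    cpu_budget_pct; /* share of CPU time guaranteed per frame */
    int         owns_network;   /* 1 = partition may own the network port */
};

/* A two-partition layout: a certified RTOS partition for control code and
 * a general-purpose partition for the user interface and connectivity.   */
static const struct partition_cfg system_layout[] = {
    { "rtos_control", 0x80000000ULL,  64ULL * 1024 * 1024, 60, 0 },
    { "linux_ui",     0x84000000ULL, 448ULL * 1024 * 1024, 40, 1 },
};

int main(void)
{
    for (size_t i = 0; i < sizeof(system_layout) / sizeof(system_layout[0]); i++) {
        const struct partition_cfg *p = &system_layout[i];
        printf("%-12s RAM 0x%llx..0x%llx  CPU %u%%  network: %s\n",
               p->name,
               (unsigned long long)p->ram_base,
               (unsigned long long)(p->ram_base + p->ram_size - 1),
               p->cpu_budget_pct,
               p->owns_network ? "yes" : "no");
    }
    return 0;
}
```

The point of a static table like this is that memory, CPU time and device ownership are fixed at integration time, so one partition cannot encroach on another at run time.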

Virtualization technology can also provide an OS-agnostic, safe and secure partitioning layer. This addresses a key concern of the market today: ensuring that different services on a device do not impact each other, for both security and safety.

This ability to securely combine different partitions not only reduces the development costs, but also the operating and capital costs. Using fewer chips and boards reduces the capital cost of the product.

OPEX is also reduced with less inventory and spares and a simpler process for upgrading hardware and software. Now, any new patches or updates to parts of the system software will not affect the real-time operation of the system, nor require lengthy testing and re-certification.

The move to virtualization extends the lifecycle of embedded products. Existing code can run in its own secure partition under an RTOS, while new features can be added to a non-real-time partition running an OS such as Linux or Microsoft Windows for the user interface.
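One common way for a real-time partition to hand data to the non-real-time user-interface partition is a shared-memory ring buffer mapped into both guests by the hypervisor. The sketch below simulates that pattern in a single process as a minimal illustration; the buffer layout and function names are assumptions made for this example, not a specific product's interface.

```c
#include <stdint.h>
#include <stdio.h>

#define RING_SLOTS 8  /* small fixed size keeps the example simple */

/* A single-producer/single-consumer ring that, in a real system, would live
 * in a memory region the hypervisor maps into both partitions. */
struct sample_ring {
    volatile uint32_t head;            /* written only by the RTOS side */
    volatile uint32_t tail;            /* written only by the UI side   */
    int32_t           samples[RING_SLOTS];
};

/* Real-time side: drop the sample if the ring is full rather than block. */
static int rt_publish(struct sample_ring *r, int32_t value)
{
    uint32_t next = (r->head + 1) % RING_SLOTS;
    if (next == r->tail)
        return -1;                      /* full: never stall the control loop */
    r->samples[r->head] = value;
    r->head = next;
    return 0;
}

/* Non-real-time side: consume whatever has accumulated since the last poll. */
static int ui_consume(struct sample_ring *r, int32_t *out)
{
    if (r->tail == r->head)
        return -1;                      /* empty */
    *out = r->samples[r->tail];
    r->tail = (r->tail + 1) % RING_SLOTS;
    return 0;
}

int main(void)
{
    struct sample_ring ring = { 0, 0, { 0 } };
    int32_t v;

    for (int i = 0; i < 5; i++)         /* pretend the RTOS produced 5 samples */
        rt_publish(&ring, i * 100);

    while (ui_consume(&ring, &v) == 0)  /* UI partition drains them later */
        printf("UI partition read sample %d\n", v);

    return 0;
}
```

The design choice worth noting is that the producer never blocks: the real-time partition keeps its timing guarantees even if the user-interface partition falls behind.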

To implement this efficiently, virtualization uses hardware enhancements specific to each CPU architecture, delivering these advantages with minimal impact on performance and latency, particularly through hardware-assisted isolation between partitions.
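By way of example, on x86 processors the presence of these virtualization extensions (Intel VT-x and AMD-V) can be read from the CPUID instruction. The short probe below uses GCC's <cpuid.h> helper; it is only a sanity check of the hardware capability, not part of any hypervisor's code.

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the x86 CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1, ECX bit 5 reports Intel VT-x (VMX). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        printf("Intel VT-x available\n");

    /* Extended leaf 0x80000001, ECX bit 2 reports AMD-V (SVM). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        printf("AMD-V available\n");

    return 0;
}
```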

This strategy greatly extends the life of an embedded product without the expense of having to rewrite real-time embedded code, add and re-certify drivers or redesign hardware. This is a particular issue for systems that combine real-time capability and user interface in one operating system – when there are patches or updates to the OS, the whole design has to be re-tested and possibly re-certified to ensure there is no impact on the real-time operation.

The influence of machine-to-machine (M2M) networks is growing and many devices now need additional gateways, firewalls and other communication functions. Virtualization is an excellent way of adding these to the system through the non-real-time operating system without having to change and re-certify the real-time elements of the software or change the hardware.
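To make the point concrete, a security function added in the non-real-time partition might be as simple as an allow-list filter placed in front of the legacy control traffic. The sketch below is a deliberately minimal illustration of that idea; the rule values are invented for this example and it bears no relation to any particular firewall product.

```c
#include <stdint.h>
#include <stdio.h>

/* One allow-list rule: traffic from this source network to this destination
 * port is permitted; everything else is dropped. Values are examples only. */
struct allow_rule {
    uint32_t src_net;    /* network address, host byte order */
    uint32_t src_mask;   /* netmask for the source network   */
    uint16_t dst_port;   /* permitted destination TCP port   */
};

static const struct allow_rule rules[] = {
    { 0xC0A80100u, 0xFFFFFF00u, 502 },   /* 192.168.1.0/24 -> Modbus/TCP      */
    { 0xC0A80200u, 0xFFFFFF00u, 102 },   /* 192.168.2.0/24 -> IEC 61850 (MMS) */
};

/* Return 1 if the (source address, destination port) pair matches a rule. */
static int packet_allowed(uint32_t src_addr, uint16_t dst_port)
{
    for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++) {
        if ((src_addr & rules[i].src_mask) == rules[i].src_net &&
            dst_port == rules[i].dst_port)
            return 1;
    }
    return 0;
}

int main(void)
{
    /* 192.168.1.10 asking for Modbus/TCP is allowed ... */
    printf("%d\n", packet_allowed(0xC0A8010Au, 502));
    /* ... while the same host probing port 22 is not. */
    printf("%d\n", packet_allowed(0xC0A8010Au, 22));
    return 0;
}
```

Because this logic lives in the non-real-time partition, the rule set can be patched as threats evolve without touching the certified control code next to it.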

One proposed architecture that is fast gaining ground is to provide more localized and connected processing power close to where it is needed, often as a gateway to the wider Internet. Whether the installation is a train, a manufacturing floor or a power plant, local traffic can be processed and acted on quickly while the data remains available to wider systems across the Internet. This approach makes it possible to consolidate a range of functions, from communications to data processing, that are costly and complex to implement in separate boxes. Consolidating those functions reliably and securely into a single intelligent unit is more cost-effective and becoming increasingly popular.

This trend has implications for security. Consolidating workloads in a single device means communications are linked to real-time operations and the flow of data. This means there is a need to keep certain functions highly separated.

Safety-critical code must be protected and unchanged to retain its certification, and yet the security that protects the system has to be updated regularly to defend against ever-changing attacks. At the same time, there are communications protocols and data capture in the system that need real-time performance alongside human interfaces that can run at slower speeds.

All of this provides a potentially highly complex environment. The traditional approach has been to have separate devices for each of these functions, such as the communications and real-time elements.

However, security needs to be deeply embedded within the system to provide maximum protection; and physical separation leads to a number of architectural challenges that can be expensive to solve.

Virtualization has already opened up a wide range of new applications in IT, but the ability to provide true real-time performance alongside a mainstream OS opens up yet more embedded opportunities in new and existing markets.

Smart-grid networks, manufacturing systems and transportation are all set to benefit from the consolidation of workloads, with communication and security functions kept in separate partitions on a single device. This allows cost-effective development of secure, reliable and future-proof embedded systems. Running the same operating systems on both single-core and multi-core embedded devices opens up a platform of equipment that can scale from one core to many, all with the same software base.

Consolidation of workloads also has a significant effect on the capital and operational expenditures. Building a single unit with a single board rather than multiple units with multiple boards reduces the upfront costs. Millions of M2M devices are being rolled out, connected to hundreds of thousands of gateway units, so this is a significant saving in the upfront cost.

Decoupling the software lifecycle of different elements and still being able to use a single device can reduce expenses. All of this can provide dramatic savings in development time and equipment cost, allowing more processing performance to sit closer to where it is needed in the network and support lower cost sensors and terminals in the home or on the factory floor.

While industrial markets are undergoing a revolution, safety and security are the driving forces behind new processes and standards. Increased regulation is subjecting more embedded devices to rigorous and expensive certification processes to ensure standards compliance. Wind River’s safe and secure partitioning solutions for industrial and automotive applications further demonstrate this shift.

Wind River’s safe and secure partitioning capability is designed and implemented for safety certification and decoupling the lifecycle of certified and non-certified applications. This provides the option for increased innovation of the non-certified applications and reduces ongoing system certification costs while enabling the benefits of consolidation.



 
