Resilient Networked Control Systems

Final Report Summary - RESILIENTNETCONTROL (Resilient Networked Control Systems)

The compounding complexity of digital devices, the expansion of networks in size and diversity, and the ever-increasing dependency of business and government sectors alike on networked infrastructures have undoubtedly resulted in a pressing need for advanced design/analysis tools and for effective monitoring and control strategies. More critically, however, it has become urgently necessary to obtain scalable and effective methodologies for diagnosing faults, assessing and estimating system properties of interest, and operating these complex systems in uncertain environments, possibly in the presence of communication constraints, faults or adversaries. Depending on the underlying application, the causes of these adversarial conditions may range from design inconsistencies, component malfunctions and communication delays/lapses to variability in interconnection topologies and actions by intruders or misbehaving users/operators. The implications of faults and adversarial behavior can be far-ranging, including user dissatisfaction and nuisance, large economic costs, and even loss of life.

This research project aimed to directly address these needs by focusing primarily on networked control systems (initially within the context of interacting discrete event systems and eventually expanding to switched linear systems and to more general hybrid systems). The project concentrated on the following two objectives:
(i) Establishment of techniques for monitoring and diagnosing faults or, more generally, abnormal behavior and functional changes in dynamic systems and networks, under limited and possibly corrupted information. The aim was to explore a variety of techniques and models, which include both deterministic and probabilistic settings. In particular, considering probabilistic settings, the project studied error bounds using optimal classification rules in hidden Markov models (WP2), and developed extensions of probabilistic model-based diagnosis approaches to distributed settings by combining ideas from distributed fault diagnosis (in deterministic settings) and belief propagation techniques (WP3). Considering deterministic models, in an effort to handle complexity issues that arise in large-scale systems, this part of the project developed distributed synchronisation schemes for fault diagnosis in distributed systems (WP4).
(ii) Development of resiliency- and privacy-ensuring control strategies for networked control systems. In particular, one objective here was to study supervisory control strategies for preserving opacity in discrete event systems (WP1), and another was to develop a game-theoretic framework for preserving opacity in settings where multiple systems interact (WP5).

The research was completed successfully and significant progress was made in both of the objectives mentioned above. This progress can ultimately enable the automated operation of detection and control mechanisms, which will naturally lead to resilient and safe operation of these complex systems despite the presence of malicious or non-malicious disruptions. Though some of these challenges have been addressed using centralised algorithms (e.g. monolithic diagnosers and controllers for supervisory control), the scientific challenge in the case of the large-scale networked control systems that emerge as a result of the proliferation of networking and digital technology is to fully extend these techniques to distributed/decentralised settings, understand the costs and performance tradeoffs involved, and (if necessary) develop new algorithms that can provide suboptimal but adequate performance at reasonable costs. Some steps towards this direction have already been taken within the context of this project.

Within this reporting period (May 21, 2010-May 20, 2012), progress was made in many of the above-mentioned directions. For example, the work 'Maximum likelihood failure diagnosis in finite state machines under unreliable observations' (by Eleftheria Athanasopoulou, Lingxi Li, and Christoforos N. Hadjicostis, which appeared in IEEE Transactions on Automatic Control, March 2010) developed a probabilistic methodology for failure diagnosis in finite state machines based on a sequence of unreliable (possibly corrupted) observations. The core problem considered was to choose from a pool of known, deterministic finite state machines (FSMs) the one that most likely matches the given sequence of observations, despite sensor failures that may corrupt the output sequence by inserting, deleting, and transposing output observations. This approach has been extended by obtaining bounds on the probability of misclassification (see, for example, C. Keroglou and C. N. Hadjicostis, 'Bounds on the probability of misclassification among hidden Markov models', proceedings of CDC/ECC 2011).
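The maximum-likelihood idea can be illustrated with a minimal sketch: each candidate FSM together with its unreliable sensor is abstracted as a hidden Markov model, and the classifier selects the model under which the observed (possibly corrupted) output sequence is most likely, computed via the standard forward algorithm. The two models and all probabilities below are hypothetical numbers chosen purely for illustration, not values from the cited work.

```python
# Maximum-likelihood classification among candidate models (illustrative sketch).
# Each candidate FSM plus its unreliable sensor is abstracted as an HMM; the
# states, transition probabilities and emission probabilities are hypothetical.

def forward_likelihood(init, trans, emit, obs):
    """Compute P(obs | model) with the forward algorithm."""
    alpha = {s: p * emit[s].get(obs[0], 0.0) for s, p in init.items()}
    for o in obs[1:]:
        alpha = {
            s2: sum(alpha[s1] * trans[s1].get(s2, 0.0) for s1 in alpha)
                * emit[s2].get(o, 0.0)
            for s2 in init
        }
    return sum(alpha.values())

# Two hypothetical 2-state models over output symbols 'a'/'b'.
# Model A tends to alternate states; model B tends to stay put.
model_A = dict(
    init={0: 1.0, 1: 0.0},
    trans={0: {0: 0.2, 1: 0.8}, 1: {0: 0.8, 1: 0.2}},
    emit={0: {'a': 0.9, 'b': 0.1}, 1: {'a': 0.1, 'b': 0.9}},
)
model_B = dict(
    init={0: 1.0, 1: 0.0},
    trans={0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}},
    emit={0: {'a': 0.9, 'b': 0.1}, 1: {'a': 0.1, 'b': 0.9}},
)

obs = ['a', 'b', 'a', 'b']  # possibly corrupted sensor readings
scores = {name: forward_likelihood(**m, obs=obs)
          for name, m in [('A', model_A), ('B', model_B)]}
best = max(scores, key=scores.get)  # model A: alternating obs fit it better
```

The alternating observation sequence is far more likely under model A, so the classifier returns 'A'; the misclassification-probability bounds in the Keroglou-Hadjicostis work quantify how often such a rule picks the wrong model.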

Similarly, the work 'Opacity enforcing supervisory strategies via state estimator constructions' (by A. Saboori and C. N. Hadjicostis, which appeared in IEEE Transactions on Automatic Control, May 2012) developed ways to enforce state-based notions of opacity via supervisory control. In particular, this work considered initial-state opacity and infinite-step opacity in systems that are modeled as partially observed nondeterministic finite automata. A system is initial-state opaque if the membership of its initial state to a set of secret states never becomes certain to an external observer of the system behavior; similarly, a system is infinite-step opaque if the membership of its state, at any point in time (not simply at initialisation), to the set of secret states never becomes certain to an external observer. Ways to verify such state-based notions of opacity have been developed by the researcher and his collaborators. The work in 'Opacity enforcing supervisory strategies via state estimator constructions' tackled the problem of constructing a minimally restrictive opacity-enforcing supervisor in order to limit the system's behavior within some prespecified legal behavior while enforcing initial-state opacity or infinite-step opacity requirements. The result is a supervisor that achieves conformance to the pre-specified legal behavior while enforcing initial-state opacity by disabling, at any given time, a subset of the controllable system events, in a way that minimally restricts the range of allowable system behavior.
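To make the initial-state estimator idea concrete, the following sketch checks initial-state opacity for a small automaton by tracking, for every observation sequence, the set of (initial state, current state) hypotheses that remain consistent with what has been observed; opacity is violated if some reachable estimate pins the initial state inside the secret set. The automaton, event set and secret states are hypothetical, every state is assumed to be a possible initial state, and all events are taken to be observable, so this is only a toy instance of the construction, not the paper's general algorithm.

```python
# Initial-state opacity check via an initial-state estimator (illustrative sketch).
# Hypothetical automaton: states 0,1,2; from 0 event 'b' leads only to 2, so
# observing 'b' first reveals that the run started at the secret state 0.

TRANS = {  # state -> event -> set of successor states (nondeterministic)
    0: {'a': {1}, 'b': {2}},
    1: {'a': {1}},
    2: {'a': {2}},
}
STATES = {0, 1, 2}
OBSERVABLE = {'a', 'b'}   # assumption: all events observable in this toy model
SECRET_INITIAL = {0}      # membership here must never become certain

def step(pairs, event):
    """Advance every (initial, current) hypothesis on one observed event."""
    return frozenset(
        (i, n) for (i, c) in pairs for n in TRANS.get(c, {}).get(event, set())
    )

def initial_state_opaque():
    # Assumption: every state is a candidate initial state.
    start = frozenset((s, s) for s in STATES)
    frontier, seen = [start], {start}
    while frontier:                       # explore all reachable estimates
        pairs = frontier.pop()
        inits = {i for (i, _) in pairs}
        if inits and inits <= SECRET_INITIAL:
            return False  # observer is certain the initial state was secret
        for e in OBSERVABLE:
            nxt = step(pairs, e)
            if nxt and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True
```

Here observing the single event 'b' shrinks the initial-state estimate to exactly the secret set {0}, so the check reports that the system is not initial-state opaque; an opacity-enforcing supervisor of the kind developed in the paper would disable 'b' (if controllable) to prevent this leak.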

Finally, the work in 'Performance analysis of belief propagation algorithms for multiple fault diagnosis applications' (by Tung Le, S. Tatikonda, and C. N. Hadjicostis, Proceedings of CDC 2010) studied the application of sum-product algorithms (SPAs) to multiple fault diagnosis (MFD) problems in order to diagnose the most likely state of each component given the status of alarms. SPAs are heuristic algorithms of polynomial complexity that are known to converge to the exact marginals in settings where the underlying interconnection graph has a tree structure. To determine SPA performance on more general MFD graphs (with cycles), one can use properties of the dynamic range measure for SPA beliefs and take advantage of the bipartite nature of MFD graphs. Following this approach, the work in 'Performance analysis of belief propagation algorithms for multiple fault diagnosis applications' led to the establishment of bounds on the true marginal of each component with respect to the beliefs provided by the SPAs.
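The baseline fact that SPA beliefs equal the exact marginals on tree-structured graphs can be demonstrated with a minimal sketch: on a 3-variable chain of binary variables with hypothetical unary and pairwise potentials, the sum-product messages into the middle variable yield a belief that matches brute-force enumeration. The potentials below are illustrative numbers only, and the example does not touch the harder loopy case that the cited work bounds.

```python
# Sum-product on a tree-structured factor graph (illustrative sketch).
# Chain x1 -- f12 -- x2 -- f23 -- x3 over binary variables; all potential
# values are hypothetical. On a tree the SPA belief equals the exact marginal.

VALS = (0, 1)
phi1 = {0: 0.6, 1: 0.4}   # unary potentials (hypothetical)
phi2 = {0: 0.5, 1: 0.5}
phi3 = {0: 0.3, 1: 0.7}
f12 = {(a, b): 0.9 if a == b else 0.1 for a in VALS for b in VALS}
f23 = {(b, c): 0.8 if b == c else 0.2 for b in VALS for c in VALS}

def normalize(m):
    z = sum(m.values())
    return {k: v / z for k, v in m.items()}

# Sum-product messages into x2 from the two ends of the chain.
msg_f12_to_x2 = {b: sum(phi1[a] * f12[(a, b)] for a in VALS) for b in VALS}
msg_f23_to_x2 = {b: sum(phi3[c] * f23[(b, c)] for c in VALS) for b in VALS}
belief_x2 = normalize({b: phi2[b] * msg_f12_to_x2[b] * msg_f23_to_x2[b]
                       for b in VALS})

# Brute-force marginal of x2 for comparison.
raw = {b: 0.0 for b in VALS}
for a in VALS:
    for b in VALS:
        for c in VALS:
            raw[b] += phi1[a] * phi2[b] * phi3[c] * f12[(a, b)] * f23[(b, c)]
marginal_x2 = normalize(raw)
```

On graphs with cycles (as in realistic MFD instances, where components and alarms form a loopy bipartite graph) the two quantities no longer coincide in general, which is precisely why bounds on the true marginal in terms of the SPA beliefs are valuable.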

The project website can be found at , where one can find some details on the project as well as representative publications. A complete list of publications by the researcher and his group can be found at .