CORDIS - EU research results

Testing the Untestable: Model Testing of Complex Software-Intensive Systems

Periodic Reporting for period 4 - TUNE (Testing the Untestable: Model Testing of Complex Software-Intensive Systems)

Reporting period: 2021-03-01 to 2022-02-28

Software-intensive systems pervade modern society and industry. These systems often play critical roles from an economic, safety or security standpoint, thus making their dependability a crucial matter. Developing technologies to verify and validate that complex systems are reliable, safe, and secure is therefore an essential societal and economic objective.
One key aspect is that the verification and validation (V&V) of software must be automated to scale up to real, complex systems and services. Such automation is highly challenging, as it must be both effective at finding critical faults and economically viable.
This research applies the latest Artificial Intelligence developments (e.g. Machine Learning, Evolutionary Computing, Natural Language Processing) to enable cost-effective V&V automation. This endeavor covers all aspects of V&V, from early system requirements analysis to design verification, automated software testing, and run-time monitoring. It also addresses all aspects of dependability including reliability, safety, security, and compliance with regulations.
The research outcomes advance state-of-the-art V&V technologies by introducing novel AI-enabled techniques across all V&V aspects of the development of complex software-intensive systems. Specifically, in collaboration with industry partners in the automotive, satellite, and financial domains, the project proposed scalable, automated V&V techniques for identifying high-risk test scenarios, localizing faults, selecting high-risk test suites, and analyzing change impact. The developed V&V techniques enable practitioners to develop highly dependable, complex software-intensive systems with minimal operational risk and reduced development costs.
Overall, the project developed innovative V&V techniques for software-intensive systems in collaboration with industry partners in the automotive, satellite, and financial domains. Industrial case studies were used to validate our solutions. Most of the proposed solutions involve the application of machine learning, evolutionary computing, natural language processing, and model-driven engineering. The project outcomes were disseminated through various means, such as publications and presentations. Among them, 73 peer-reviewed publications acknowledge the ERC grant, 31 of them in journals (including the leading journals in the field) and the rest in reputable conferences, which in computer science sub-fields are typically selective and prestigious. In addition, the knowledge and technologies produced by the project were transferred to the industry partners, which are in the process of adopting our V&V solutions. Specific topics addressed during the project period are described below.
● Requirements Quality Assurance
We focused on automating several complex and laborious requirements quality assurance (RQA) tasks. Our focus throughout was on requirements stated in natural (human) language, motivated by their prevalent use in industry.
● Model-Based Testing of Software-Based Systems and Services
We developed automated testing solutions that leverage the artifacts commonly produced during software analysis and design: requirements specifications in natural language, domain models, and timed automata capturing the timing requirements of the system. In addition, we proposed scalable and efficient automated testing solutions that combine (1) a methodology for modeling the system's inputs, outputs, and their relationships with (2) a set of techniques for the automated generation of optimized test suites using model-based data mutation, meta-heuristic search, and constraint solving. Furthermore, we devised a technology to support the optimization of hardware-in-the-loop testing, which is usually the last stage before deployment and typically a very time-consuming and expensive activity.
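As a minimal, hypothetical sketch of how search-based test generation can be combined with model-based data mutation, the following Python fragment evolves a small test suite by mutating candidate inputs drawn from an assumed input model and keeping mutants that improve a fitness function. The input model, parameter names, and fitness function are illustrative placeholders, not the project's actual artifacts.

    import random

    # Hypothetical input model: each test input is a set of parameters
    # with ranges derived from a domain model of the system under test.
    INPUT_MODEL = {"speed": (0, 200), "distance": (0, 500), "brake_force": (0, 100)}

    def random_input():
        """Sample a test input that conforms to the input model."""
        return {name: random.uniform(lo, hi) for name, (lo, hi) in INPUT_MODEL.items()}

    def mutate(test_input, rate=0.3):
        """Model-based data mutation: perturb a subset of parameters within their ranges."""
        mutant = dict(test_input)
        for name, (lo, hi) in INPUT_MODEL.items():
            if random.random() < rate:
                mutant[name] = min(hi, max(lo, mutant[name] + random.gauss(0, (hi - lo) * 0.1)))
        return mutant

    def fitness(test_input):
        """Placeholder fitness: in practice this would measure coverage of critical
        behavior or proximity to a requirement violation observed in simulation."""
        return abs(test_input["speed"] - test_input["brake_force"]) / (test_input["distance"] + 1)

    def generate_suite(size=10, generations=50):
        """Simple (1+1)-style evolutionary loop producing an optimized test suite."""
        suite = [random_input() for _ in range(size)]
        for _ in range(generations):
            for i, candidate in enumerate(suite):
                mutant = mutate(candidate)
                if fitness(mutant) > fitness(candidate):
                    suite[i] = mutant
        return sorted(suite, key=fitness, reverse=True)

    if __name__ == "__main__":
        for test in generate_suite():
            print({k: round(v, 1) for k, v in test.items()}, round(fitness(test), 3))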
● Testing and Analysis of Product Lines
We developed and validated a technique for the automated classification and prioritization of test cases in the context of product lines and requirements-driven testing. The technique relies on change impact analysis to identify obsolete and reusable test cases. To automatically prioritize test cases, the technique relies on a prediction model that computes a prioritization score based on multiple risk factors such as fault-proneness of requirements and requirements volatility.
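The prediction model itself is not reproduced here; purely as an illustration, the sketch below combines hypothetical risk factors (fault-proneness of the linked requirement, requirements volatility, and past failure rate) into a prioritization score with a weighted logistic function and orders reusable test cases accordingly. In the actual technique the weights would be learned from project data, not fixed by hand.

    import math
    from dataclasses import dataclass

    @dataclass
    class TestCase:
        name: str
        fault_proneness: float   # estimated fault-proneness of the linked requirement (0..1)
        volatility: float        # how often the linked requirement has changed (0..1)
        past_failures: float     # historical failure rate of the test case (0..1)

    # Hypothetical, hand-picked weights standing in for a trained prediction model.
    WEIGHTS = {"fault_proneness": 2.0, "volatility": 1.2, "past_failures": 1.5}
    BIAS = -2.0

    def prioritization_score(tc: TestCase) -> float:
        """Logistic combination of risk factors into a score in (0, 1)."""
        z = BIAS + (WEIGHTS["fault_proneness"] * tc.fault_proneness
                    + WEIGHTS["volatility"] * tc.volatility
                    + WEIGHTS["past_failures"] * tc.past_failures)
        return 1.0 / (1.0 + math.exp(-z))

    def prioritize(test_cases):
        """Return test cases ordered from highest to lowest risk."""
        return sorted(test_cases, key=prioritization_score, reverse=True)

    if __name__ == "__main__":
        suite = [
            TestCase("TC_brake_01", 0.8, 0.6, 0.4),
            TestCase("TC_lights_02", 0.2, 0.1, 0.0),
            TestCase("TC_comm_03", 0.5, 0.9, 0.3),
        ]
        for tc in prioritize(suite):
            print(f"{tc.name}: {prioritization_score(tc):.2f}")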
● Security Testing
The work on security testing led to the development of automated, black-box solutions to identify the most frequent security risks according to OWASP, e.g. SQL injection and XML injection vulnerabilities. Our approach is, however, generalizable to most types of vulnerabilities.
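To give a flavor of black-box injection testing, the following minimal sketch sends a few classic SQL injection probes to a hypothetical endpoint and flags responses containing error signatures suggesting that the input reached the SQL layer unsanitized. The URL, parameter name, payload list, and signatures are illustrative assumptions; the project's actual solutions generate and evolve such inputs automatically rather than using a fixed list.

    import requests

    # Hypothetical target endpoint and parameter; replace with the system under test.
    TARGET_URL = "http://example.com/search"
    PARAM = "q"

    # A few classic SQL injection probes (illustrative only).
    PAYLOADS = ["' OR '1'='1", "'; DROP TABLE users; --", '" OR ""="']

    # Error signatures whose presence in a response suggests a vulnerability
    # (a heuristic, black-box oracle).
    ERROR_SIGNATURES = ["sql syntax", "sqlite error", "odbc", "unclosed quotation mark"]

    def probe(url: str, param: str, payload: str) -> bool:
        """Send one black-box probe and report whether the response looks vulnerable."""
        response = requests.get(url, params={param: payload}, timeout=10)
        body = response.text.lower()
        return any(signature in body for signature in ERROR_SIGNATURES)

    if __name__ == "__main__":
        for payload in PAYLOADS:
            if probe(TARGET_URL, PARAM, payload):
                print(f"Potential SQL injection with payload: {payload!r}")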
● Model Testing
We developed an environment for the co-simulation of software models (in UML) and function models in Simulink, which is a necessary platform for early design verification. In addition, we developed a framework to perform trace checking of simulation results in order to verify the types of properties that are typically checked on input and output signals in cyber-physical systems.
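As a minimal sketch of the kind of property checked over simulation traces, the function below verifies a bounded-response property over sampled signals: whenever a trigger condition holds, a response condition must hold within a given delay. The trace format, signal names, and thresholds are hypothetical stand-ins for signals exported from a Simulink simulation.

    def check_bounded_response(trace, trigger, response, max_delay):
        """Check that whenever `trigger(values)` holds at time t, some sample
        within `max_delay` time units after t satisfies `response(values)`.
        `trace` is a list of (time, values) pairs."""
        violations = []
        for i, (t, values) in enumerate(trace):
            if trigger(values):
                satisfied = any(
                    response(v) for (t2, v) in trace[i:] if t2 - t <= max_delay
                )
                if not satisfied:
                    violations.append(t)
        return violations

    if __name__ == "__main__":
        # Hypothetical trace: time stamps with 'speed' and 'brake_pressure' signals.
        trace = [(0.1 * k, {"speed": 80 + k, "brake_pressure": 5 * k}) for k in range(50)]
        violations = check_bounded_response(
            trace,
            trigger=lambda v: v["speed"] > 100,           # property antecedent
            response=lambda v: v["brake_pressure"] > 60,  # expected reaction
            max_delay=1.0,                                # within one time unit
        )
        print("Property satisfied" if not violations else f"Violated at times {violations}")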
We developed a comprehensive model testing framework for CPS function and design models. Our model testing framework enables automated specification of test oracles for continuous CPS behaviors, analyzes models with uncertain and unknown behaviors, and identifies high-risk CPS behaviors.
We developed techniques for effective testing and safety analysis of AI-based components used in self-driving systems. In particular, we developed automated testing techniques for DNNs based on different model testing strategies for CPS and provided techniques to help explain and interpret DNN behaviors.
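One common ingredient of DNN testing is checking prediction stability under label-preserving input perturbations. The sketch below illustrates this idea with a stand-in classifier and a simple brightness shift; the model, perturbation, and thresholds are assumptions for illustration, not the project's actual testing strategies.

    import numpy as np

    def dummy_model(image: np.ndarray) -> int:
        """Stand-in for a DNN classifier (e.g. a perception model); in practice
        this would wrap a trained network's prediction call."""
        return int(image.mean() > 0.5)

    def brightness_shift(image: np.ndarray, delta: float) -> np.ndarray:
        """Label-preserving perturbation used as a metamorphic relation."""
        return np.clip(image + delta, 0.0, 1.0)

    def metamorphic_test(model, images, deltas=(0.05, -0.05, 0.1)):
        """Flag inputs whose prediction is not stable under small perturbations."""
        suspicious = []
        for idx, image in enumerate(images):
            original = model(image)
            if any(model(brightness_shift(image, d)) != original for d in deltas):
                suspicious.append(idx)
        return suspicious

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        test_images = [rng.random((32, 32)) for _ in range(20)]
        print("Suspicious inputs:", metamorphic_test(dummy_model, test_images))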
Leveraging our modelling foundation for CPS and our suite of meta-heuristic search algorithms for CPS testing, we developed a simulation framework for Internet of Things (IoT)-based systems and proposed automated techniques for online self-adaptation of such systems to improve their resilience and reliability.
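The general shape of such online self-adaptation is a monitor-analyze-plan-execute feedback loop. The toy sketch below adapts a hypothetical IoT node's retransmission setting when its observed delivery rate drops below a target; the node model, parameter, and target value are assumptions made only for illustration.

    import random

    class IoTNode:
        """Hypothetical IoT node whose message delivery degrades with interference."""
        def __init__(self):
            self.retransmissions = 1

        def delivery_rate(self):
            # Simulated observation: more retransmissions improve delivery,
            # with diminishing returns and random interference.
            base = 1.0 - 0.5 / self.retransmissions
            return max(0.0, min(1.0, base - random.uniform(0.0, 0.2)))

    def adaptation_loop(node, target=0.85, steps=10):
        """Simple monitor-analyze-plan-execute loop improving resilience at run time."""
        for step in range(steps):
            observed = node.delivery_rate()    # Monitor
            if observed < target:              # Analyze
                node.retransmissions += 1      # Plan + Execute: adapt the configuration
            print(f"step {step}: delivery={observed:.2f}, retransmissions={node.retransmissions}")

    if __name__ == "__main__":
        adaptation_loop(IoTNode())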
We developed automated strategies for deriving acceptance criteria from requirements and ensuring that the derived criteria are feasible, up-to-date, and accurately targeted at the most important system scenarios.
We developed an innovative run-time verification approach that lifts verification to the model level, leading to run-time model verification. Our framework addresses the challenge of dealing with incomplete and evolving models and specifications in the run-time verification process.
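To illustrate the challenge of incomplete models, the sketch below monitors an event trace against a partial state machine and returns a three-valued verdict: behavior covered by the model is reported as consistent, while behavior the model does not yet specify yields an "unknown" verdict rather than a violation. The state machine and events are hypothetical and serve only to illustrate the idea, not the project's actual framework.

    # Partial behavioral model: known transitions of a hypothetical door controller.
    # Missing entries represent behavior the incomplete, evolving model does not
    # yet specify.
    TRANSITIONS = {
        ("closed", "open_cmd"): "opening",
        ("opening", "opened"): "open",
        ("open", "close_cmd"): "closing",
        ("closing", "closed_evt"): "closed",
    }

    def monitor(trace, initial_state="closed"):
        """Three-valued run-time verification verdict over an event trace."""
        state = initial_state
        for event in trace:
            next_state = TRANSITIONS.get((state, event))
            if next_state is None:
                # The model says nothing about this event in this state.
                return "unknown", state, event
            state = next_state
        return "consistent", state, None

    if __name__ == "__main__":
        print(monitor(["open_cmd", "opened", "close_cmd", "closed_evt"]))
        print(monitor(["open_cmd", "emergency_stop"]))  # event outside the partial model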