
Automated Generation and Execution of test suites for DIstributed component based Software

Deliverables

As the staff training and process implementation effort required to integrate model-based testing into a company's software development department is expected to be high, a company may be interested in benefiting from AGEDIS without having to make this investment itself. For such companies, specialized service providers may offer model-based testing services, i.e. consultancy in process integration and AML modelling, as well as the practical work itself, i.e. the set-up and execution of model-based tests as an outsourced software testing service. These test services have to be well defined by task descriptions, interfaces, and service level agreements that clearly show an interested customer the advantages and return on investment of such services. From the perspective of the consortium members, offering services based on AGEDIS to external or company-internal customers may prove the best way to exploit AGEDIS in the near future, since this remains possible even if the other results (i.e. the tool chain and the training material) have not yet reached the production level that would allow the consortium to place them on the public market.
The Intermediate Format (IF) is used to model the behaviour of a variety of systems, e.g. asynchronous communicating real-time systems and distributed software systems. Beyond that, the IF language is suitable for modelling and analysing systems that rely on dynamic process creation and dynamic data types. The Test Directives (TD) notation is an extension to the IF format that supports model-based test generation by adding the ability to express test goals/purposes and coverage criteria. The formal basis for the language is a dynamic version of extended automata. It focuses on systems composed of several components, called processes, running in parallel and interacting through point-to-point message passing. The number of processes may change over time: they may be created and destroyed dynamically, using specific actions, during system execution. Each process is an extended timed automaton. IF serves as the interface language between the AGEDIS model compiler and the AGEDIS test generator, and it can be used to interface any kind of modelling language to the AGEDIS test generator.
The AGEDIS Modelling Language (AML) is based on a subset of the UML 1.4 standard. Where the UML standard lacks exact definitions of semantics and of elements needed for instantiation, such as data types and operations, AML steps in to meet these requirements. In spite of these extensions, any standards-compliant UML tool may be used to create the AML model. The widespread use of UML in software development, together with the need for only rudimentary programming skills, makes graphical modelling tools a natural choice. A top-down approach is possible by first modelling the application graphically and then filling in the details. The models used by AGEDIS do not involve the entire spectrum of UML concepts; AML is based only on class diagrams, object diagrams, and state diagrams.

The commitment to standards and the use of UML as a foundation for the AGEDIS Modelling Language frees the AGEDIS user from any commitment to a particular graphical modelling tool. Any UML-conformant tool, commercial or free, that supports the UML extension mechanisms can, in principle, be used for modelling. Many UML tools contain a model consistency checker, which helps the test developer avoid inconsistency errors. The AGEDIS requirements on a modelling tool include:
- Support for the basic diagram types used by AGEDIS:
-- Class diagrams
-- Object diagrams
-- State diagrams
- Ability to save diagrams in XMI (the XML-based stream representation of UML) format.
- Support for the UML extension mechanisms "stereotype" and "tagged value".
The AGEDIS consortium monitors the status of different graphical tools.

In many cases, the state space has huge dimensions, and "state explosion" is a problem that an automatic test generation tool must be able to handle. AGEDIS provides a means to restrict the examined test space using the test generation directives. By defining the test generation directives, the tester has a degree of control over the state automaton derived from the model. There are essentially three distinct types of test generation directives: test purposes, test constraints, and coverage criteria.
- Test Purpose: A test purpose is a description of a pattern of system behaviour provided by the tester in the form of an additional, small state model. In this model, the tester may mark particular system states as desired for inclusion in the test case. The transitions in the test purpose model are fired upon signals from the model simulation of the system under test. The generator explores both the model and the test purpose in parallel and generates a test case that matches the test purpose. The generator takes into account all possible responses from the system that are included in the model, which means that test cases can be generated for applications with non-deterministic behaviour. If more than one input to the system under test leads to the satisfaction of the test purpose, only one of them is chosen for the test case.
- Test Constraint: Test constraints describe additional restrictions that steer the test generator during the selection of relevant execution sequences. These restrictions are represented by object diagrams, which specify states of the software model that must or must not be visited during test execution, in addition to the start state and the envisaged finish state. The start state marks the end of any test preamble and the beginning of the actual test case.
- Coverage Criteria: Coverage criteria describe requirements placed on the generated test suite. These requirements are mainly used to induce tests that explore every value of a particular expression (e.g. of a certain class attribute of the model). This results in a set of test cases in which, for each required value, at least one test case is generated.
The three kinds of test generation directives are not mutually independent: the same test goals can be reached with different combinations of the three constructs. AGEDIS thus provides complete flexibility to set the test focus on either state-oriented or data-oriented test case generation.
After the concrete execution of the test suite on the system under test, the results are ready for the user to interpret. AGEDIS assists the user in interpreting the results by providing appropriate tools for:
- Visualization of the Abstract Test Suite as well as the Suite Execution Trace; the Visualizer also enables the user to trace errors back to the related objects in the model.
- Manual creation of test cases; the ATS editor allows the user to visualize ATS test cases in graphical form, to edit existing test cases, and to add new (manually specified) test cases to the test suite.
- Defect analysis; this tool allows the user to track down the outcomes of the test runs and to cluster and classify the failures detected.
- Coverage analysis; the coverage analyser processes the suite execution trace, provides statistical feedback on the test suite and its execution, and gives recommendations for further test generation directives to improve the coverage of the model.
Software testing environments are applications that assist testers in executing tests against an application under test and in collecting the results. The AGEDIS Test Execution Engine is an infrastructure to support automated testing of distributed software running on different platforms and coded in different programming languages. It provides solutions for testing distributed software running on Windows or Linux and coded in Java, C, or C++. Tests are described in the Abstract Test Suite Language, a special XML profile. The Abstract Test Suite (ATS) is a set of test instructions to be executed against the application. The set contains the test cases, each a sequence of stimulations and observations of the system under test in an abstract formulation; it is therefore independent of the specific test execution engine used. The result of the test execution is recorded in a set of files, the Suite Event Trace (SET), which can be viewed in the Test Suite Browser. The test execution engine supports a graphical user interface and a command line interface.

In addition to the ATS, information has to be provided on how to execute the ATS on the system under test. This information is contained in the Test Execution Directives (TED). The data within the TED are organized in XML format. They include, but are not limited to, the following:
- Description of the system under test (i.e., host machines, processes, objects, and classes).
- Information on distribution, synchronization, and multiplication (i.e., how many clones of processes are running and where they are running).
- Actions to perform before starting (or after ending) the test suite and each test case.
- Mappings of ATS objects, controls, observable responses, and constants to their counterparts on the system under test.
A schematic TED fragment is sketched after the overview of the engine components below.

The AGEDIS execution engine bridges the gap between the Abstract Test Suite and the system under test, using the information in the TED, by mapping the abstract stimuli to actual points of control (i.e., method invocations and signal injections) and the abstract observations to actual points of observation, such as object states or interface outputs. The engine has three main components:
- Test Suite Driver – The Test Suite Driver is the brain of the system and controls all other components. It executes the AGEDIS Abstract Test Suite (ATS) on the System Under Test (SUT) by consulting the Test Execution Directives (TED) and writes the results of the execution to the AGEDIS Suite Event Trace (SET).
- Host Manager – The Host Manager is the representative of the Test Suite Driver on each host machine in the execution environment. It also provides a real-time view of the host and of the status of each Process Controller running on the host.
- Process Manager – The Process Manager maintains the SUT objects.
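To make the role of the TED more concrete, the following is a rough, hypothetical fragment showing the kind of information listed above. The element and attribute names, and the example class com.example.Account, are illustrative assumptions only; they do not reproduce the actual AGEDIS TED schema.

  <!-- Hypothetical TED fragment: element names, attributes, and the example
       class com.example.Account are illustrative, not the AGEDIS schema. -->
  <testExecutionDirectives>
    <sut>
      <host name="hostA" platform="Linux"/>
      <!-- distribution and multiplication: two clones of the server process -->
      <process name="server" host="hostA" clones="2"/>
      <object id="account1" class="com.example.Account" process="server"/>
    </sut>
    <actions>
      <beforeSuite run="startServer"/>
      <afterTestCase run="resetState"/>
    </actions>
    <mappings>
      <!-- abstract stimulus mapped to a concrete point of control -->
      <control abstract="deposit" method="com.example.Account.deposit(int)"/>
      <!-- abstract observation mapped to a concrete point of observation -->
      <observation abstract="balance" attribute="com.example.Account.balance"/>
      <constant abstract="START_AMOUNT" value="100"/>
    </mappings>
  </testExecutionDirectives>

In a sketch of this kind, the mappings section carries the bridging information described above, while the process and host entries describe where and how many SUT processes run.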
Within the AGEDIS tool chain, tests are described in the Abstract Test Suite Language, the XML profile already introduced above; the ATS is independent of the specific test execution engine used. The ATS implements the UML testing profile plus some additional functionality. Among other things, the ATS contains elements to express:
- Asynchronous events (waitFor and timeout);
- SUT-to-environment interactions, e.g. Callback;
- SUT-to-SUT interactions (white-box testing);
- Creation and destruction of objects;
- Simple, complex, and alternative values, as well as referenced objects;
- Exceptions;
- Workflow steering by nextPass, nextFail, and links to other test cases (global default behaviour);
- Parameterisation of tests with combinatorial input sources.
The result of the test execution is recorded in the Suite Event Trace (SET), which can be viewed in the Test Suite Browser. The SET format maps directly onto the ATS format. For both formats, parser libraries are available as public domain software, making it possible to interface existing test tools through an independent XML-based data format.
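To illustrate how an abstract test case and its execution trace might relate, here is a rough, hypothetical ATS-style fragment together with a matching SET-style fragment. The construct names waitFor, timeout, nextPass, and nextFail are taken from the list above; all XML element names and the overall structure are illustrative assumptions, not the actual AGEDIS profiles.

  <!-- Hypothetical ATS-style test case; element names and structure are
       illustrative, not the actual AGEDIS Abstract Test Suite profile. -->
  <testSuite name="AccountSuite">
    <testCase name="tc_deposit">
      <!-- workflow steering via nextPass/nextFail -->
      <step id="1" nextPass="2" nextFail="fail">
        <stimulate object="account1" control="deposit">
          <param value="START_AMOUNT"/>
        </stimulate>
        <observe object="account1" response="balance" expected="100"/>
      </step>
      <step id="2" nextPass="pass" nextFail="fail">
        <!-- asynchronous event handling via waitFor and timeout -->
        <waitFor event="callback.confirmation" timeout="5000"/>
      </step>
    </testCase>
  </testSuite>

  <!-- Hypothetical SET-style trace mirroring the ATS structure, with the
       observed outcomes recorded for each executed step. -->
  <suiteEventTrace suite="AccountSuite">
    <testCase name="tc_deposit" verdict="fail">
      <step id="1" verdict="pass" observed="100"/>
      <step id="2" verdict="fail" reason="timeout expired"/>
    </testCase>
  </suiteEventTrace>

Because the trace mirrors the test suite structure in this way, tools such as the Test Suite Browser and the coverage analyser can relate each recorded event back to the abstract test step that produced it.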
