
Component based open source architecture for distributed telecom applications

Deliverables

The COACH project was very active in contributing to standards, mainly within the OMG. Contributions have been made to the following activities:
- UML Profile for CCM. Status: Draft Adopted specification, FTF running. The standard contains a graphical notation for the CCM concepts in the form of a UML 1.x profile based on the CCM metamodel.
- QoS for CCM. Status: revised submission available. The standard will contain a platform-specific QoS model for the CCM, including a negotiation mechanism.
- Deployment and Configuration of Distributed Applications. Status: FTF running, preliminary report available.
- MOF2IDL mapping. Status: Adoption vote passed, FTF will be chartered in April 2004. The standard contains the first MOF 2.0 technology mapping to CORBA IDL. Although this has not been the primary focus of COACH, this standard is important for providing an open CCM development tool chain.
More information on the COACH project can be found at: http://coach.objectweb.org/
The Parlay Platform is a so-called service platform, offering functionality for managing services, service providers, retailers, and customers. It also offers an API for service integration, where third-party service providers register their services with the framework. These services finally become available to the user. Administration and security features are handled by the platform, allowing the service implementor to focus on the application logic of the service.

The CORBA Component Model offers many features useful for the development of large applications. Building a system of components requires a clear separation of functionality throughout the system. Since the components can be built independently of each other, components of the system can be replaced without recompiling the entire system. The CCM platform offers many services useful for the Parlay platform, e.g. security services and deployment functionality. The deployment functionality of the CCM is well suited for installing the client components for the services, thus ensuring that the latest versions are used. The component structure of the platform allowed a clear separation of functionalities, leading to a less complex implementation. It is expected that this clear assignment of functions will simplify the security architecture of the platform.

Parlay offers a layered session concept. During the initial session, the user is authenticated and opens the access session. The access session is used for service retrieval, subscription, and finally opening the service session. The user remains authenticated as long as he does not leave the access session. This single sign-on capability facilitates the usage of services by the customer.

In the task producing this result we showed that the Parlay platform was in fact relatively easy to implement based on components, although the component structure compromised compatibility with the Parlay standard.
However, aside from the security implementation, the component-based Parlay platform can be made standard compliant with minor effort. The second goal was to show that new services can easily be implemented and used by means of component inheritance. The Parlay implementation will be available at http://www.mico.org, where you will also find the security implementation.
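The layered session concept described above can be sketched in a few lines. The following Python toy is purely illustrative (class names, method names, and the hard-coded user credentials are invented, not the Parlay API): authentication opens an access session, which then serves as the single entry point for service retrieval and service sessions.

```python
# Toy sketch of Parlay's layered sessions; all names are hypothetical.

class ServiceSession:
    """Opened from an access session; carries the service interaction."""
    def __init__(self, service_name):
        self.service_name = service_name

class AccessSession:
    """Held while the user stays authenticated (single sign-on)."""
    def __init__(self, user, services):
        self.user = user
        self._services = services

    def list_services(self):
        return sorted(self._services)

    def open_service_session(self, name):
        if name not in self._services:
            raise KeyError("service not registered: " + name)
        return ServiceSession(name)

class Framework:
    """Stands in for the framework: registration and authentication."""
    def __init__(self):
        self._users = {"alice": "secret"}   # invented demo credentials
        self._services = set()

    def register_service(self, name):       # third-party registration
        self._services.add(name)

    def authenticate(self, user, password):
        if self._users.get(user) != password:
            raise PermissionError("authentication failed")
        return AccessSession(user, self._services)
```

Once the access session is obtained, no further authentication is needed for individual services, which is the single sign-on property mentioned above.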
The open source project Qedo provides a platform implementation of the CORBA Component Model that serves as the runtime for components written in C++. The Qedo runtime currently supports extended components of the categories service and session and integrates new features and extensions like data stream communication and an enhanced deployment infrastructure. Since component-based development in general, and CCM in particular, is still the subject of ongoing research, the intention is for Qedo to be used by arbitrary partners for projects in this area. Because it is licensed under the LGPL, it can be used without restrictions. The intention is to enhance CCM and to promote and facilitate its application in real software development. As a long-term goal, this technology shall be established as an advanced technology for software development, leading to efficient and fast development cycles. The Qedo runtime is tightly coupled with the Qedo tool chain required for the development of components. It is used for the deployment of applications and by the component implementations at runtime. Currently, only component implementations developed with the Qedo tool chain can be used. Every such implementation has to be packaged according to CCM. Afterwards, they can be used for the deployment of component-based applications. The Qedo runtime runs on multiple platforms, currently Windows and Linux.
The Extensible Container Model (ECM) is a specification and a reference implementation (RI), which aims at enhancing the OMG's CORBA Component Model specification to provide a systematic and automated process for the definition of domain specific container models and the generation of the corresponding container runtimes. With the ECM, application architects may define their specific container models by simply assembling a set of packaged services. It is intended that such services are highly reusable software components, potentially bought from third-party vendors and certainly exploited in many products. It is noteworthy that the ECM is (like CORBA) language and platform independent. Therefore, a service package contains implementations for many languages (C++, Java, etc.) and many platforms (Linux, Windows, etc.). The generated container runtime can then be adapted to the constraints of the target platform. The ECM is a free and open source project licensed under GNU LGPL and hosted by the ObjectWeb Consortium. The latest release of the specification is 1.0, the latest release of the RI is 0.2.0.
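The assembly idea behind the ECM can be illustrated with a small sketch (this is not the ECM API; `ContainerModel` and the two service functions are invented): a container model is a stack of packaged services, and "generating" the container runtime amounts to wrapping them around the functional code.

```python
# Illustrative sketch of assembling a container from service packages.

class ContainerModel:
    def __init__(self, services):
        self.services = list(services)     # packaged services to assemble

    def generate_container(self, component):
        """'Generate' a container runtime by stacking the services
        around the component's business call, outermost first."""
        call = component
        for service in reversed(self.services):
            call = service(call)
        return call

calls = []   # observations made by the tracing service

def tracing_service(next_call):            # one packaged service
    def wrapped(x):
        calls.append(("trace", x))
        return next_call(x)
    return wrapped

def validation_service(next_call):         # another packaged service
    def wrapped(x):
        if x < 0:
            raise ValueError("negative input rejected by container")
        return next_call(x)
    return wrapped

def component(x):                          # functional code, unaware of services
    return x * 2

container = ContainerModel([tracing_service, validation_service])
runtime = container.generate_container(component)
```

The component itself stays free of non-functional concerns; swapping, adding, or removing services only changes the container model, which mirrors the ECM's goal of reusable, third-party service packages.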
Overview of the generic concepts:
- The generic concepts describe the QoS meta-model and a UML profile for QoS, allowing the easy design of QoS-aware applications. Additionally, the provided framework and the common base functionalities are elaborated. The generic concept describes the following aspects: QoS, objectives, Model Driven Architecture, the QoS meta-model (including contract type, dimension, and binding), the UML profile for QoS, the framework, and common base functionalities.
Integration into the CCM meta-model:
- The QoS meta-model is bound to the CCM meta-model. In particular, this is realized by integrating the QoS model elements, the CCM meta-model, and the QoS-CCM binding.
Specification of interfaces for non-functional services:
- The specification of interfaces for non-functional services serves as an implementation reference. This comprises the architecture (container, component identity, interceptor dispatcher, container portable interceptors, and QoS provider), run-time support (negotiation, QoS call-back interface, QoSUsage interface, and generic QoS context), deployment of the QoS provider, extension entry points (in Java and in C++), as well as specifications related to the QoSProviderFactory interface, the QoSProvider interface, bootstrapping, monitoring, and supervision.
Key innovative features of the result are:
- System-independent modelling of QoS features;
- System-dependent generation of code in order to produce a framework for rapid implementation of QoS features based on the Qedo CCM implementation;
- Depending on the future business development of CCM, the tool chain for the CCM design, implementation, and deployment process might turn out to be extremely time- and money-saving.
The implementation of the QoS specification is at the stage of an experimental development (laboratory prototype, pre-product). The use prospects are described in detail in the exploitation plan.
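The meta-model vocabulary above (contract type, dimension, binding) and the run-time negotiation can be sketched with a toy example; the classes and the ">= / <=" negotiation rule below are invented for illustration and are far simpler than the actual QoS meta-model.

```python
# Toy QoS negotiation over typed dimensions; names are illustrative.

class Dimension:
    """One measurable QoS characteristic of a contract type."""
    def __init__(self, name, higher_is_better):
        self.name = name
        self.higher_is_better = higher_is_better

class ContractType:
    """A set of dimensions over which contracts are negotiated."""
    def __init__(self, dimensions):
        self.dimensions = {d.name: d for d in dimensions}

    def negotiate(self, required, offered):
        """Return the agreed values, or None if any required value
        cannot be satisfied by the offer."""
        agreed = {}
        for name, req in required.items():
            dim = self.dimensions[name]
            off = offered[name]
            ok = off >= req if dim.higher_is_better else off <= req
            if not ok:
                return None
            agreed[name] = req
        return agreed

throughput = Dimension("throughput_kbps", higher_is_better=True)
latency = Dimension("latency_ms", higher_is_better=False)
ct = ContractType([throughput, latency])
```

In the framework described above, such a negotiation outcome would then be bound to component ports via the QoS-CCM binding and enforced by the container's QoS provider.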
The following Deutsche Telekom Group internal documents were derived:
- Thomas Unterschütz, Konzeptplanung Workshop Open Source (Arbeitstitel), T-Systems Nova, Technologiezentrum, V01, 5.5.03.
- Thomas Unterschütz, Michael Geipl, Bericht zum MWQoS II Meilenstein 1: Anforderungen der Telekommunikationsdomäne an Komponentenarchitekturen, T-Systems Technologiezentrum, V1.0, 08.11.2002.
- Thomas Unterschütz, Michael Geipl, Bericht zum MWQoS II Meilenstein 2: Spezifikation nicht-funktionaler Komponenten-Schnittstellen/Telekommunikationskomponenten, T-Systems Technologiezentrum, V1.0, 08.07.2003.
- Marc Born, Olaf Grunow, Fraunhofer FOKUS, CC Platin, Middleware Quality of Service MS3 (Installation und Benutzung der MS3 Software), Berlin, V1.0, 19.01.02.
In the context of the COACH QoS activities, the following papers, presentations, and publications were prepared or supported:
- Tom Ritter, Marc Born, Thomas Unterschütz, Torben Weis, A QoS Metamodel and its Realization in a CORBA Component Infrastructure, Hawaii International Conference on System Sciences 36 (HICSS 36), 2003.
- Object Management Group, CORBA Components 3.0, 2002, formal/02-06-65.
- Object Management Group, QoS for CCM RFP, 2003, mars/03-06-12.
- Object Management Group, UML for QoS & Fault Tolerance, 2002, ad/02-01-07.
- Michael Geipl, Thomas Unterschütz, Application Server im Umfeld von Web Services, Open Source und CORBA Component Model (CCM), TK Aktuell, Verlag für Wissenschaft und Leben, Heft 05/06, Juni 2003.
The CORBA Component Model (CCM) is the new generation of language- and platform-independent middleware. Qedo is a C++ implementation of the CCM and adds some more advanced features which might become standard features of CCM in future versions (security, QoS support, streaming support). The main parts of Qedo are the Qedo runtime, including the Qedo Distributed Computing Infrastructure (DCI) implementation for deployment and configuration, and the Qedo tool chain. Qedo is open source and is published under the terms of the GPL/LGPL: the code generator is published under the GPL and the container libraries under the LGPL. This enables the production of commercial applications based on Qedo. The Qedo implementation can be found at http://www.qedo.org
OpenCCM is a Java-based open source implementation of the CORBA Component Model (CCM) specification defined by the Object Management Group (OMG). The main parts of OpenCCM are:
- The OpenCCM Production Tool Chain, composed of a set of front-end compilers supporting the UML profile for CORBA Components and the OMG IDL/PSDL/CIDL languages, a central CORBA 3.0 Interface Repository, and a set of back-end generators;
- The OpenCCM Packaging/Assembling Tool Chain, providing a graphical user interface to edit any CORBA Components XML descriptors and component and assembly ZIP archives;
- The OpenCCM Distributed Computing Infrastructure (DCI), implementing the deployment and configuration of component assemblies;
- The OpenCCM Container Runtime Framework, hosting component instances in extensible containers; and
- The OpenCCM Management Framework, providing an extensible graphical user interface to explore any CORBA components, objects, and services.
OpenCCM is published under the terms of the GNU Lesser General Public License (LGPL). This enables the production of commercial applications based on OpenCCM. OpenCCM can be found at: http://openccm.objectweb.org/
The COACH test framework can test software systems at the component level. This means that the framework can be used to identify components that do not behave according to their specifications. Once a component containing a fault has been identified, further localization of the fault within the component can be done using the test and debug facilities that are usually part of the implementation-language-specific development environment. Tests on CCM components and the observation of interactions between components are expressed using IDL data types and are independent of the data types of the implementation language of the component. The ability to test components may be severely restricted when the components under test depend on interactions with other components that are not yet implemented. To reduce this restriction, the dependent components can be substituted by so-called Reactor components for the purpose of testing only. Reactor components provide the same set of facets, operations, and events as their real counterparts. Reactor component implementations can be generated automatically from the IDL specification. The Reactor components do not necessarily need to be implemented in the same language as the components they are substituting. For practical purposes we have chosen to generate Reactor components in Java. The implementation of the Reactor component is configurable to allow different kinds of responses: the response may be interactive, allowing the tester to examine the parameter values and construct a reply using an interactive IDL type editor, or the response is automated. The Reactor can be hard-coded to give an automated response, or it can execute a script that is loaded and interpreted at runtime. When an invocation arrives on a Reactor component facet, it can reply (within limits) as if the real component were in place.
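The behaviour of a scripted Reactor can be mimicked in a few lines. The sketch below is a toy model (the class and the reply-table format are invented, not the generated COACH code): it answers from a loadable "script", falls back to a hard-coded default response, and records every invocation for later inspection.

```python
# Toy Reactor: substitutes a not-yet-implemented component under test.

class Reactor:
    def __init__(self, scripted_replies=None, default=None):
        self.scripted = scripted_replies or {}   # loadable reply script
        self.default = default                   # hard-coded fallback
        self.invocations = []                    # record for inspection

    def invoke(self, operation, *args):
        self.invocations.append((operation, args))
        key = (operation, args)
        if key in self.scripted:
            return self.scripted[key]            # scripted response
        return self.default                      # automated response

# A Reactor standing in for an unimplemented account-lookup component.
stock = Reactor(
    scripted_replies={("lookup", ("alice",)): {"balance": 42}},
    default={"balance": 0},
)
```

Because the Reactor decides its reply from data rather than code, error conditions are easy to script, which is exactly why the text above notes that error scenarios are simpler to simulate with Reactors than with real implementations.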
This extends the range of possible test scenarios for the components under test and can reduce the probability of errors when the final components become available and replace the Reactor components. The presence of Reactor components can demonstrate correct behaviour of the components under test for various interaction scenarios. In particular, error conditions occurring in the Reactor components can usually be simulated more easily with Reactors than with real implementations. Even when real implementations become available, Reactor components remain useful for regression testing. Another part of the test framework is the Actor component, which acts as a general-purpose CCM client component that can invoke operations on other components. The Actor can also load and execute test scripts or can be run in interactive mode. In interactive mode the tester can interactively fill in parameter values for a selected operation, invoke the operation, and examine the result. References to other components may be passed as return values of operations. References to component facets can be obtained by using the navigation operations provided by the component interface. In addition to providing the tester with a means of testing components using an Actor and a Reactor, the CCM test framework allows the tester to trace and visualize the propagation of invocations between CCM components. Invocation tracing is useful for comparing the runtime behaviour of a planned system with its design specifications. The Tracer framework consists of two parts:
- The TraceServer, a CCM component containing a collection of events that occurred within the system under test. At each interaction point a trace event must be sent to the TraceServer component with timing and identity information about the interaction. This requires that the invocation flow at the interaction points is intercepted to allow for the additional actions to collect and send the trace information.
The CORBA Portable Interceptor (PI) is used to intercept the invocation flow of an operation on a CORBA object. Since CCM component facets are implemented as normal CORBA objects, this mechanism is also suitable for the implementation of invocation tracing for CCM component interactions. The PI mechanism also allows additional service data to be propagated transparently between CORBA invocations. The TraceServer responds to queries by returning the requested event data formatted in XML, including complex parameter data types.
- The TraceViewer, a combination of a web server and a web client. The web server translates HTTP requests from the web client into TraceServer queries using CORBA invocations and returns the result as plain-text XML. The web client visualizes the data received from the web server in a user-friendly manner.
With the combination of Actor, Reactor, and invocation tracing viewer, the implementers of CCM components have a powerful set of tools available to test their CCM components at an early stage.
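The interception-and-trace flow described above can be sketched without an ORB. In this toy version (all names invented; a Python decorator stands in for a Portable Interceptor), each intercepted operation sends a timestamped request event and a reply event to a TraceServer, which can later be queried per operation.

```python
# Toy invocation tracing: a decorator plays the role of an interceptor.
import time

class TraceServer:
    """Collects trace events from interception points (sketch only)."""
    def __init__(self):
        self.events = []

    def record(self, point, operation, ts):
        self.events.append({"point": point, "op": operation, "time": ts})

    def query(self, operation):
        """Return all recorded events for one operation, in order."""
        return [e for e in self.events if e["op"] == operation]

def intercept(server, operation):
    """Wrap an operation so request and reply are reported with timing."""
    def deco(fn):
        def wrapped(*args):
            server.record("request", operation, time.time())
            result = fn(*args)
            server.record("reply", operation, time.time())
            return result
        return wrapped
    return deco

ts = TraceServer()

@intercept(ts, "get_quote")
def get_quote(symbol):        # stands in for a component facet operation
    return 101.5
```

A viewer component would then query the server and render the request/reply pairs, which is the role the TraceViewer plays in the framework above.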
In our work we have introduced a proxy-based testing approach for component-based systems that provides generation of test components and supports the application of various test systems for both passive online monitoring and active (sub)system testing. Details of the proxy information types that are published by the proxy components and the automatic generation of the proxy components have been outlined. Furthermore, the application of testing technologies like TTCN-3 for a particular SUT component technology has been investigated. In our practical experiments we found that TTCN-3 is suitable for proxy-based testing of components but needs extra consideration for adaptation. The components related to testing can be found at http://www.qedo.org
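The core of the proxy idea can be shown in miniature (the `Proxy` class and the observer list are invented for illustration, not the generated COACH proxies): the proxy sits between the tester and the SUT component, forwards every call unchanged, and publishes what it observed so a monitoring test system can evaluate it passively.

```python
# Toy generated proxy for passive online monitoring of a SUT component.

class Proxy:
    """Forwards every call to the target and publishes
    (operation, arguments, result) to the registered observers."""
    def __init__(self, target, observers):
        self._target = target
        self._observers = observers

    def __getattr__(self, name):
        method = getattr(self._target, name)
        def forwarded(*args):
            result = method(*args)            # transparent forwarding
            for obs in self._observers:
                obs.append((name, args, result))
            return result
        return forwarded

class Counter:
    """Stands in for the SUT component implementation."""
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n
        return self.value

log = []                       # the published proxy information
sut = Proxy(Counter(), [log])
```

A TTCN-3 test system in the role of the observer would consume these published records instead of a plain Python list, which is where the adaptation effort mentioned above comes in.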
For our open source implementation of the CCM (Qedo) we have provided a complete MOF- and MDA-based development tool chain. This tool chain allows the specification of graphical models of the application components with UML and facilitates automatic generation of implementation code. Furthermore, necessary additional artifacts like deployment descriptors, security properties, etc. are also generated automatically from the CCM models. The heart of this tool chain is a model repository generated automatically from the CCM MOF metamodel. The models can be specified with the UML profile for CCM or using the textual CIDL language, and stored in the repositories. On the other side, several back-ends are available which connect to the repository and transform the contained models to obtain all necessary artifacts (code, descriptors, properties, etc.). The realization of the CCM development tool chain with the CCM repository as the central component allows establishing a real MDA development chain with a further platform-independent abstraction layer on top of the CCM. This abstraction layer can be a domain-specific modelling language which is independent of CORBA or CCM. Models of this language can then be transformed into CCM models, and the CCM back-ends can be used to generate all CCM-specific artifacts.
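The repository-plus-back-end pattern can be illustrated with a deliberately tiny back-end (the dict-based "repository" and the `generate_idl` function are invented; real back-ends walk a MOF repository and emit full IDL3, descriptors, and more): it reads a component model and emits IDL-like text.

```python
# Toy back-end: transform a stored component model into IDL-like text.

def generate_idl(model):
    """Emit a CCM-style component declaration from a model record."""
    lines = ["component %s {" % model["name"]]
    for facet, iface in model["facets"].items():
        lines.append("  provides %s %s;" % (iface, facet))
    for receptacle, iface in model["receptacles"].items():
        lines.append("  uses %s %s;" % (iface, receptacle))
    lines.append("};")
    return "\n".join(lines)

# A model as it might be retrieved from the repository (invented example).
model = {
    "name": "Billing",
    "facets": {"invoicing": "Invoicing"},
    "receptacles": {"rates": "RateTable"},
}
```

Other back-ends connected to the same repository would emit deployment descriptors or security properties from the very same model, which is what makes the repository the central component of the chain.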
In the COACH project, ObjectSecurity developed OpenPMF, a framework for the definition, management, and enforcement of security policies in complex distributed systems. It currently supports access control policies for CORBA and CCM, but can be extended to other security policy types (e.g. information filtering), policies in general (e.g. Quality of Service), security mechanisms (e.g. smart cards for authentication), and platforms like Microsoft .NET, Enterprise JavaBeans, Web Services, or ERP systems. OpenPMF is inspired by the Object Management Group's Model Driven Architecture and the Meta Object Facility. Our starting point for the development of OpenPMF was an abstract model of distributed systems and middleware, for which we defined a likewise abstract model of security policies. Then we transformed (mapped) this platform-independent model (PIM) to platform-specific models (PSM) for the platforms developed in COACH, CORBA and CCM, and to different security mechanisms (SSL and CSIv2). In a similar manner, OpenPMF can also be mapped to other platforms. OpenPMF consists of a compiler for the Policy Definition Language (PDL), a Policy Repository, a generic Policy Evaluator, and mappings to CORBA 2.x and the CORBA Component Model (Qedo). The Policy Definition Language is used to describe a security policy in a human-readable form. PDL is based on an abstract notation of the entities in distributed systems: initiator, client, target, and operation to invoke. It supports roles, groups, and different delegation modes, and has a single formal model from the protocol level to the abstract policy level. The PDL compiler feeds the policy into a Policy Repository (PR). The PR is derived from a MetaPolicy, a MOF model for policies. This integrates OpenPMF with the Model Driven Architecture and other repositories, e.g.
for UML or CCM models, and allows different types of processing, including checking the stored policy for internal contradictions or consistency, and integration with MDA tools or GUIs. Since the repository is generated from a MOF model, it is also possible to use other technologies for access, for example web services or XMI instead of CORBA. During startup, the application to be protected obtains its policy from the Policy Repository and instantiates an internal representation. At runtime, the invocations are intercepted, and the Policy Evaluator checks whether a call is permitted or not. The information for this decision is obtained by Transformers, which are the interfaces to the underlying security mechanism. Special attention is paid to runtime efficiency. We specified and implemented the following parts of OpenPMF:
- Policy Definition Language (PDL) compiler;
- Policy Repository;
- Adapter for CORBA 2.x based on Portable Interceptors;
- Adapter for CCM based on Component Portable Interceptors (COPI);
- Transformer for CORBASec version 1.7 with support for pulling security attributes from a directory server with different cache modes and a user-configurable mapping;
- Transformer for the SL3 API.
OpenPMF uses MICO and Qedo as reference platforms for CORBA and CCM. It is itself based on MICO for internal communication between the PDL compiler, the repository, and the different applications. A lot of effort had to be spent on adapting MICO to the needs of OpenPMF and Qedo, for example to fix bugs and to implement the low-level functionality used by OpenPMF, mainly authentication, message protection, and the generation, transport, and delegation of security information and credentials.
These enhancements of MICO are part of the MICO open source project (http://www.mico.org):
- CSIv2 Level 1 and 2 protocol;
- Enhanced SL3 API for CSIv2 (SL3 was originally developed by Adiron LLC, used with permission);
- ATLAS server for the generation of authorization tokens for CSIv2, with a directory server interface.
An evaluation showed that OpenPMF is well suited for access control for the CORBA and CCM platforms. We plan to enhance and extend OpenPMF both in the direction of functionality (information filtering, OCL-based constraints) and of other platforms like EJB, .NET, and Web Services. OpenPMF has many benefits compared to older security systems. First of all, it provides much richer functionality, e.g. fine-grained access control based on advanced attributes and delegation. It also reduces the costs and effort for the definition, management, and enforcement of complex policies in heterogeneous distributed systems, since security policies are defined in a uniform and manageable manner. OpenPMF is especially useful in component-based applications, since it provides a clear separation of functional aspects (implemented in the component) and non-functional aspects (described by policies and enforced in the container). The component developer no longer needs to care about the non-functional aspects. This allows much better reuse of software components and greatly reduces development effort and costs.
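The evaluation step described above (interception, then a permit/deny decision) can be sketched with a toy evaluator. This is only a minimal illustration of the PDL idea of rules over initiator, target, and operation; the rule format and class below are invented, and real PDL additionally covers groups, delegation modes, and attribute transformers.

```python
# Toy policy evaluation: rules over (role, target, operation),
# evaluated deny-by-default at the interception point.

class PolicyEvaluator:
    def __init__(self, rules):
        # Each rule: (initiator role, target, operation, verdict).
        self.rules = rules

    def check(self, roles, target, operation):
        """Return True only if a matching rule permits the call."""
        for role, rtarget, rop, verdict in self.rules:
            if role in roles and rtarget == target and rop == operation:
                return verdict == "permit"
        return False            # deny by default

# A compiled policy as the repository might deliver it (invented example).
policy = PolicyEvaluator([
    ("admin", "AccountManager", "close_account", "permit"),
    ("clerk", "AccountManager", "close_account", "deny"),
])
```

In OpenPMF the roles would come from the security mechanism via a Transformer rather than being passed in directly, and the evaluator would run inside the interceptor of each protected invocation.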
The objective of the Container Virtual Machine (CVM) is to dynamically adapt the services associated with CCM component-oriented applications. The CVM prototype runs in OpenCCM, an open source application server implementation of the CORBA Component Model (CCM) specification defined by the Object Management Group (OMG). It is executed on Sun's standard JVM, which does not offer any adaptation mechanism. The CVM solution introduces the concept of entry points into CCM applications. These entry points are used to configure the associations between the various components and services by indicating to OpenCCM which modifications to make. Thus, the CVM allows the instantaneous adaptation of system services during execution, reducing the cost of reconfiguration. It allows an administrator to specify and deploy dynamically system properties (like monitoring, tracing, or QoS) not planned initially.
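The entry-point idea can be illustrated with a toy container (the classes and method names below are invented, not the CVM or OpenCCM API): the container keeps a mutable list of services per entry point, so an administrator can attach a service like monitoring at runtime without touching or redeploying the component.

```python
# Toy container with runtime-reconfigurable entry points.

class Container:
    def __init__(self, component):
        self.component = component
        self.entry_points = {}            # operation name -> services

    def attach(self, operation, service):
        """Associate a service with an entry point while running."""
        self.entry_points.setdefault(operation, []).append(service)

    def detach(self, operation, service):
        self.entry_points[operation].remove(service)

    def invoke(self, operation, *args):
        for service in self.entry_points.get(operation, []):
            service(operation, args)      # e.g. monitoring or tracing
        return getattr(self.component, operation)(*args)

class Echo:                               # the unmodified component
    def ping(self, msg):
        return msg

seen = []
def monitor(op, args):                    # a system property added later
    seen.append((op, args))

container = Container(Echo())
container.attach("ping", monitor)         # dynamic adaptation at runtime
```

Attaching and detaching only edits the entry-point table, which is why such reconfiguration is instantaneous compared with stopping and redeploying the application.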
The management operations of a given InfoCom system are accomplished by the Management Gateway (MG) that hosts the Element Management System (EMS). This gateway assumes the responsibility to manage a set of elements and contains the necessary services to autonomously perform management operations. The management gateway represents the “managed system” to a higher-level “managing system” (also referred to as “manager” or “client”) and for this purpose contains the Management Information Base (MIB) of the element(s) it is responsible for. The gateway becomes the abstraction of the managed system, which can be treated like a single managed element whose MIB is the collection of the resource elements within the managed system. The management gateway contains the application logic and is element-specific. Nevertheless, the design and implementation of management gateways can benefit greatly from the availability of a generic element management framework (EMF) that contains the basic services and core MIB representations. ITU-T has proposed the architecture for such a framework and provides the standardised approach to its upper-bound interfaces. The standardisation of the upper-bound interfaces is an important factor for the realisation of large-scale EMS systems, since it allows the “plug-and-play” aggregation of MGs, even when these come from third-party providers (conforming to the ITU-T standards). The guidelines and specifications for the TMN EMF can be found in the following ITU-T Recommendations:
- ITU-T Q.816, CORBA TMN Services.
- ITU-T Q.816.1, CORBA-Based TMN services extensions to support coarse-grained interfaces.
- ITU-T X.780, Guidelines for the definition of CORBA managed objects.
- ITU-T X.780.1, TMN guidelines for defining coarse-grained CORBA managed object interfaces.
- ITU-T Recommendation M.3120, CORBA-Based Generic Network Information Model.
The key point in this COACH result is thus the EMF prototype, which is re-usable for any element management system and realises the standardised upper-bound interface. This result provides the following building blocks:
- Element Management System (EMS) Application: This part is specific to the elements that comprise the management system and their functionality. The prototype has been scoped to MIB-II routers and their configuration and monitoring.
- Element Management Framework (EMF): Even though the EMS’s logic varies depending on the elements it manages, a lot of functionality is common. All EMS systems need mechanisms to identify the elements they manage, ways to retrieve them, and services to enable operations on them. These requirements can be met by a common and re-usable framework (the EMF), upon which each EMS can be based to realise the application-specific functionality.
- CORBA Component Infrastructure: The EMF itself should be built utilising flexible design, development, and deployment principles, independent as much as possible from computational platforms, programming language capabilities, and system engineering constraints. The CORBA Component Model fulfils these requirements, and therefore the key point for the EMF designer and implementer is the maximum utilisation of the potential offered. CCM encapsulates the CORBA logic and, through the Container, hides the complexity of building distributed applications by abstracting core services (such as CORBA Naming and Notification). Part of the specification objective is to clarify which common object services are essential for the EMF and whether the abstraction offered by the container can meet the application requirements. The COACH result has reached a certain maturity level but still needs further work.
This entails:
- Populating the EMF with all the ITU-specified object logic,
- Exploiting the capability of the EMF through the development of more complicated and demanding EMS systems (DSLAMs, SGSN nodes, etc.),
- Carrying out extensive tests for delays and dependability,
- Progressing the whole EMF logic towards a more autonomous handling of management actions, and
- Evolving the prototype in a direction where it will constitute an integral and valuable part of a complete service management infrastructure (i.e. an infrastructure whose focus is not solely on network elements but on integrated service and application provision and accountability).
Currently, this COACH result offers valuable information, providing hands-on experience with the materialization of the ITU specifications, which can be used for validation while building more enhanced telecommunication management systems.
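The common EMF functionality named above (identify managed elements, retrieve them, operate on them) can be sketched as a minimal registry. This toy (class and attribute names invented; the real EMF follows the ITU-T X.780/Q.816 managed-object interfaces) shows the gateway's MIB as a collection of managed elements behind one uniform interface.

```python
# Minimal sketch of the generic EMF: a MIB of managed elements with
# identification, retrieval, and attribute operations.

class ManagedElement:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind
        self.attributes = {"operational_state": "enabled"}

class ElementManagementFramework:
    def __init__(self):
        self._mib = {}                    # the gateway's collected MIB

    def register(self, element):          # identify a managed element
        self._mib[element.name] = element

    def retrieve(self, name):             # retrieve it by name
        return self._mib[name]

    def list_by_kind(self, kind):
        return sorted(e.name for e in self._mib.values() if e.kind == kind)

    def set_attribute(self, name, attr, value):   # operate on it
        self._mib[name].attributes[attr] = value

emf = ElementManagementFramework()
emf.register(ManagedElement("router-1", "router"))
emf.register(ManagedElement("dslam-1", "dslam"))
```

An element-specific EMS (for routers, DSLAMs, or SGSN nodes) would add its application logic on top of this common layer, which is the reuse argument made for the EMF above.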