Periodic Reporting for period 1 - iTrust6G (Intelligent Trust and Security Orchestration for 6G Distributed Cloud Environments)
Reporting period: 2024-01-01 to 2024-12-31
The project aims to improve the accuracy and efficiency of (i) the methodologies used for trust determination (e.g. threat posture assessment from collected and generated cyber-threat intelligence, cyber-risk assessment based on vulnerability assessment procedures, and the number of assurance conformity checks), (ii) the performance and coverage of the security procedures involved (e.g. the number of classes of monitored resources, mean time to detect a threat, and mean time to react), and (iii) the overhead induced on network resource exploitation (e.g. performance overhead due to the programmability models).
The overall objectives of iTrust6G are:
Objective 1. Design an End-to-End system security architecture that capitalizes on zero-trust principles to enable a trustworthy 6G service management platform, addressing dynamic network and application-level security requirements.
Objective 2. Exploit AI to detect novel threats on operated assets and generate pertinent cyber threat intelligence, enabling their proactive protection.
Objective 3. Conceive novel Trust Algorithms (TA) exploiting AI, integrated into a trust management (TM) system and accounting for forensic evidence.
Objective 4. Design and implement intelligent solutions for AI-driven security orchestration, across extreme edge, edge and public clouds (across the continuum) exploiting programmability models for pervasive enforcement.
Objective 5. Specify, develop, and integrate intent-based security policies/engine, for explainable and automated E2E security orchestration.
Objective 6. Develop solutions for dynamic, configurable and intelligent placement of network functions over network slices and applications, to secure service design and delivery.
Objective 7. Perform trial-based validation of solutions in trusted execution environments (TEE) and on specialized hardware (accelerators, etc.), based on defined use-cases requirements.
Objective 8. Ensure dissemination of project results, contributions to standardisation, exploitation of results and innovation management.
Initial approaches for exploiting artificial intelligence to detect threats on operated assets and to generate threat intelligence (Technical Objective 2) have been investigated. Initial 5G/6G datasets have been selected to define monitoring requirements and train machine learning models for threat detection. This enabled initial work on threat intelligence generation through (i) the construction of STIX 2.1 reports, (ii) their storage in a Cyber Threat Intelligence (CTI) platform, and (iii) the application of the gathered CTI to compute trust scores on operated assets from their threat exposure.
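The chain described above (build a STIX 2.1 report, store it, use matched intelligence to adjust an asset's trust score) can be sketched as follows. This is a minimal illustration using plain dictionaries shaped like STIX 2.1 objects rather than a STIX library, and the multiplicative penalty model for the trust score is a hypothetical example, not the iTrust6G trust algorithm.

```python
import json
import uuid
from datetime import datetime, timezone

def make_indicator(name, pattern):
    """Build a minimal STIX 2.1-shaped Indicator as a plain dict."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,
        "pattern_type": "stix",
        "valid_from": now,
    }

def trust_score(base_score, matched_indicators, penalty=0.15):
    """Illustrative trust computation: each CTI indicator matched
    against the asset lowers its trust score multiplicatively."""
    return base_score * (1 - penalty) ** matched_indicators

# A bundle-shaped report ready for storage in a CTI platform.
report = {
    "type": "bundle",
    "id": f"bundle--{uuid.uuid4()}",
    "objects": [
        make_indicator("Suspicious UPF traffic",
                       "[ipv4-addr:value = '203.0.113.7']"),
    ],
}
serialized = json.dumps(report, indent=2)
score = trust_score(1.0, matched_indicators=2)  # asset hit by 2 indicators
```

The pattern syntax (`[ipv4-addr:value = '…']`) follows STIX patterning; a production pipeline would validate objects against the STIX 2.1 schema before ingestion.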
The conception of a novel trust management system and trust algorithms (Technical Objective 3) started during this reporting period. A first version of the remote attestation framework has been developed and integrated into Kubernetes. A preliminary identity management solution has been built by integrating Keycloak with the Lightweight Directory Access Protocol (LDAP), and an access control policy framework has been implemented with Open Policy Agent (OPA) and Apache APISIX for fine-grained access control.
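The core of such a remote attestation framework is evidence appraisal: comparing a measured digest reported by a host or container against a known-good reference value before admitting the workload. The sketch below illustrates that step only, with hypothetical component names and golden values; real attestation evidence would additionally be signed by a hardware root of trust.

```python
import hashlib

# Hypothetical reference (golden) measurements for attested components.
REFERENCE_MEASUREMENTS = {
    "node-1/kernel": hashlib.sha256(b"kernel-image-v1").hexdigest(),
    "pod-a/rootfs": hashlib.sha256(b"container-rootfs-v3").hexdigest(),
}

def verify_evidence(component, evidence_bytes):
    """Appraise attestation evidence: the digest measured from the
    evidence must match the expected reference for that component."""
    measured = hashlib.sha256(evidence_bytes).hexdigest()
    expected = REFERENCE_MEASUREMENTS.get(component)
    return expected is not None and measured == expected

ok = verify_evidence("node-1/kernel", b"kernel-image-v1")
tampered = verify_evidence("pod-a/rootfs", b"tampered-rootfs")
```

In a Kubernetes integration, a failed appraisal would typically surface as an admission-control rejection or a lowered trust score for the node, rather than a bare boolean.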
The preparation of an intelligent solution for AI-driven security orchestration (Technical Objective 4) has delivered several design and implementation results. The security programmability models and the architecture of the resource generator have been established, defining several interfaces for generating service images. The main components of the security orchestrator have been established and positioned for deployment. End-to-end workflows have been proposed to enable data ingestion, enrichment and correlation from diverse sources.
The consortium also initiated several activities to build an intent-based security policy format and engine (Technical Objective 5). Specifically, a trust model has been defined in iTrust6G D2.3, together with the specification of intents to enact policies. The design foundations of a notary service to maintain trust records and support accountability have also been laid.
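An intent-based engine of this kind translates a declarative, high-level security intent into concrete enforceable rules. The sketch below shows the idea with a hypothetical intent vocabulary and rule set; the actual iTrust6G intent schema is specified in the project deliverables, not here.

```python
def compile_intent(intent):
    """Translate a declarative security intent into concrete policy
    rules. Intent fields and rule actions are illustrative only."""
    rules = []
    if intent.get("confidentiality") == "high":
        rules.append({"action": "enable_tls", "min_version": "1.3"})
    if "isolation" in intent.get("properties", []):
        rules.append({"action": "deny_cross_slice_traffic"})
    if intent.get("min_trust_score") is not None:
        rules.append({"action": "require_trust_score",
                      "threshold": intent["min_trust_score"]})
    return rules

# A single intent expands into several enforceable rules.
policy = compile_intent({"confidentiality": "high",
                         "properties": ["isolation"],
                         "min_trust_score": 0.8})
```

Keeping the intent declarative (what to guarantee) and the rules imperative (how to enforce it) is what makes the resulting orchestration explainable: each enforced rule can be traced back to the intent clause that produced it.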
Finally, the consortium laid the foundation of a solution for the placement of network functions over network slices and applications (Technical Objective 6). An initial version of a secure service orchestrator is under design, iterating over the Open Source MANO solution. A policy verification module has been implemented to feed a trustable placement service; the module also supports slice management.
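A trustable placement service of this kind selects, among candidate nodes, one that satisfies both the network function's resource requirements and a minimum trust constraint. The following is a minimal sketch under assumed node and requirement fields (`free_cpu`, `trust_score`, `min_trust`), not the project's actual placement algorithm.

```python
def place_function(nf_requirements, candidate_nodes):
    """Pick the highest-trust node satisfying the network function's
    resource and trust constraints; None when no node qualifies."""
    eligible = [
        n for n in candidate_nodes
        if n["free_cpu"] >= nf_requirements["cpu"]
        and n["trust_score"] >= nf_requirements["min_trust"]
    ]
    if not eligible:
        return None
    return max(eligible, key=lambda n: n["trust_score"])

# Two candidate nodes with different capacity and trust levels.
nodes = [
    {"name": "edge-1", "free_cpu": 4, "trust_score": 0.9},
    {"name": "edge-2", "free_cpu": 8, "trust_score": 0.6},
]
chosen = place_function({"cpu": 2, "min_trust": 0.7}, nodes)
```

Treating the trust score as a hard constraint plus a ranking criterion keeps placement decisions auditable: a rejected node can always be explained by the constraint it violated.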
• elaboration of use-cases and relevant stakeholders,
• definition of a first version of iTrust6G system architecture,
• definition of the main processes and functions to be implemented by each component of the architecture,
• requirements for the trust model definition, intent-based policy format and explainability framework.
• the design of Federated Machine Learning (ML) techniques for both supervised and unsupervised learning,
• research on 5G/6G networks and AI applications for cybersecurity threat detection, as well as reporting protocols,
• initiated the design of a Risk Assessment Component, including the clarification of its relations with other components,
• proposed the first design of high-volume metric collection agents.
• design and initial development of a framework for supply chain analysis,
• design and initial development of a compliance checker,
• design and development of an attestation framework for both physical hosts and containers,
• development of an authentication and authorization system that leverages the trust score to block/allow access to a resource.
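The last item combines static role-based rules with a dynamic trust score. The sketch below captures that decision logic in plain Python; in the project this check is expressed as a policy evaluated by OPA and enforced at the Apache APISIX gateway, and the threshold and attribute names here are assumptions for illustration.

```python
# Hypothetical threshold below which even a correctly-rolled
# subject is denied access (e.g. after failed attestation).
TRUST_THRESHOLD = 0.7

def authorize(subject, resource):
    """Allow access only when the subject holds a permitted role
    AND its current trust score meets the configured threshold."""
    role_ok = resource["allowed_role"] in subject["roles"]
    trust_ok = subject["trust_score"] >= TRUST_THRESHOLD
    return role_ok and trust_ok

operator = {"roles": ["operator"], "trust_score": 0.9}
degraded = {"roles": ["operator"], "trust_score": 0.4}  # trust dropped
api = {"allowed_role": "operator"}
```

The key zero-trust property is that the trust score is evaluated on every request, so a subject whose score degrades mid-session loses access without any change to its role assignments.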
• design and development of a security programmability model, defining key enablers for dynamic security deployment,
• evaluation of Kata Containers technology to generate OCI-standardised containers, and design of a vulnerability assessment service,
• design Asset Discovery, Service Discovery, and Vulnerability Assessment services for comprehensive network scanning,
• proposing workflows for security data ingestion, enrichment and correlation, and designing a Policy Verification Module (PVF).
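The ingestion, enrichment and correlation workflow in the last bullet can be sketched as a small pipeline: enrich each raw event with asset context, then group related events into incident candidates. The source fields, asset inventory and time-window correlation rule below are illustrative assumptions, not the project's actual workflow.

```python
def enrich(event, asset_inventory):
    """Enrichment: attach asset context to a raw security event."""
    out = dict(event)
    out["asset"] = asset_inventory.get(
        event["src_ip"], {"criticality": "unknown"})
    return out

def correlate(events, window=60):
    """Naive correlation: events sharing a source IP within the same
    time window form one incident candidate (>= 2 events)."""
    groups = {}
    for ev in events:
        key = (ev["src_ip"], ev["timestamp"] // window)
        groups.setdefault(key, []).append(ev)
    return [evs for evs in groups.values() if len(evs) > 1]

inventory = {"10.0.0.5": {"criticality": "high"}}
raw = [
    {"src_ip": "10.0.0.5", "timestamp": 10, "alert": "port-scan"},
    {"src_ip": "10.0.0.5", "timestamp": 30, "alert": "brute-force"},
    {"src_ip": "10.0.0.9", "timestamp": 35, "alert": "port-scan"},
]
enriched = [enrich(e, inventory) for e in raw]
incidents = correlate(enriched)  # the two 10.0.0.5 events correlate
```

Running enrichment before correlation means incident candidates already carry asset criticality, which lets downstream components prioritise response without a second lookup.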