Periodic Reporting for period 2 - CONVERGE (Telecommunications and Computer Vision Convergence Tools for Research Infrastructures)
Reporting period: 2024-02-01 to 2025-01-31
The CONVERGE project introduces a pioneering vision-radio paradigm that bridges the gap between wireless communications and computer vision by leveraging Integrated Sensing and Communication (ISAC) to enable a dual "View-to-Communicate, Communicate-to-View" approach. CONVERGE offers tools that merge wireless communications and computer vision. Four major tools will be developed as part of the CONVERGE RI: 1) the vision-aided LIS, 2) the vision-aided fixed and mobile base station, 3) the vision-radio simulator and 3D environment modeler, and 4) the machine learning (ML) algorithms for multimodal data. These tools will integrate with the CONVERGE Chamber (Figure below), enabling the collection of experimental data from radio communications, radio sensing, and vision sensing; the CONVERGE Simulator, where data collection will be possible using the CONVERGE Digital Twin; and the CONVERGE Core, responsible for the application function, session and data orchestration, time synchronization, the open data repository, and the ML algorithms. CONVERGE is aligned with the ESFRI SLICES-RI. It will serve as an RI providing the scientific community with open datasets of both experimental and simulated data. This framework enables research in 6G and beyond, addressing various verticals including telecommunications, automotive, manufacturing, media, and health.
WP1
The work started by describing the set of four tools proposed in the project as well as the research questions that a researcher can address using each tool. The target user groups were described according to the SLICES-RI classification, and a relevant set of use cases was identified along five main vertical markets: telecommunications, automotive, manufacturing, media, and health. For each use case, we described its context and relevance, identified the tools that may be used to address it and how they can help solve specific needs, and described the data types involved. Moreover, an initial high-level service-oriented architecture of the CONVERGE infrastructure was proposed. This first phase resulted in the delivery of document D1.1, which provided the guidelines for WP2 and WP3. ALLBESMART LDA organized regular bi-weekly online meetings to ensure the necessary coordination in the preparation of deliverables, allowing for tight control of the internal milestones. The second phase of WP1, which started in M7, focused on the definition of the tools' access policies and the interfaces' specifications. An example of a CONVERGE use case test session was detailed, along with a step-by-step flowchart and instructions for initiating and configuring a test session in the CONVERGE Chamber. This work was reported in D1.2. A final version will be delivered by M18 (to be referenced as D1.3), where both the access policies and the interfaces are expected to be described in more detail, allowing the implementation planned within WP3 to begin. The CONVERGE research infrastructures will position themselves with respect to the SLICES access policies and guidelines, which have been extensively presented and discussed. SORBONNE held substantial exchanges with the CONVERGE community about the SLICES access principles and the current state of play.
WP2
WP2's objective is the development and validation of the individual CONVERGE tools according to the requirements and interface specifications defined in WP1. In Task 2.1, led by GREENERWAVE, work is ongoing on the design and fabrication of two Large Intelligent Surfaces operating in the 5G frequency ranges FR1 and FR2. In Task 2.2, led by EURECOM, a 5G base station is under development based on OpenAirInterface (OAI), with 7.2 split functionality for both FR1 and FR2 Radio Units, and interoperability testing is being carried out with different types of User Equipment (UE), both commercial devices and specialized modems. In Task 2.3, led by UOULU, an environment modeller is being developed to produce electromagnetically augmented 3D environment models for the vision-radio simulator being developed in Task 2.4. In Task 2.4, led by INRIA, a vision-radio simulator is under development. This simulator comprises the following modules: electromagnetic modelling of radio signal propagation; radio signal path computation; an antenna simulation engine; a data transmission simulation engine; and graphics rendering tools. In Task 2.5, led by BSC CNS, we are developing a framework to run Machine Learning (ML) algorithms, considering two types of users, non-experts and experienced ML practitioners, with the following characteristics: modular solutions and a pipeline of functional, interoperable modules; transversal support to the rest of the tools; and a default configuration with visual tools as well as capabilities to customize functionalities. In Task 2.6, led by INESC TEC, a camera network system is under development. During this period, the hardware to be acquired was defined in order to fulfil the requirements of the use cases and the integration with the other tools. Its characteristics are the following: depth cameras covering the entire room area with overlapping fields of view (FOV), and a motion capture system.
In the future, additional devices could be integrated, such as a 360º camera in the centre of the room and cameras with specific modalities (e.g. IR).
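To illustrate the Task 2.5 design principle of a pipeline of functional, interoperable modules with a default configuration that experienced practitioners can customize, the following is a minimal sketch. All class, function, and parameter names here are hypothetical illustrations, not the project's actual API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

# Hypothetical sketch: each module is a named, configurable transformation,
# and a pipeline chains modules so the output of one feeds the next.

@dataclass
class Module:
    name: str
    run: Callable[[Any], Any]              # transformation applied to incoming data
    config: dict = field(default_factory=dict)  # per-module settings experts may override

@dataclass
class Pipeline:
    modules: List[Module]

    def execute(self, data: Any) -> Any:
        # Pass data through every module in order.
        for module in self.modules:
            data = module.run(data)
        return data

# Default configuration for non-expert users: a simple normalize -> threshold chain.
default_pipeline = Pipeline(modules=[
    Module("normalize", run=lambda xs: [x / max(xs) for x in xs]),
    Module("threshold", run=lambda xs: [1 if x > 0.5 else 0 for x in xs]),
])

print(default_pipeline.execute([1.0, 2.0, 4.0]))  # → [0, 0, 1]
```

An experienced user could swap in or reconfigure individual modules (e.g. a different normalization) without touching the rest of the chain, which is the interoperability property the task description emphasizes.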