Building Presence through Localization for Hybrid Telematic Systems

Deliverables

This is a theoretical result. It will primarily affect the research community rather than directly produce business, but its potential for use is high. The problem addressed in PeLoTe is how to control multiple entities of different kinds. The PeLoTe system consists of two layers: operational and supervisory. The operator is a human supervising remote entities from a distance, while the entities cooperate in the real environment. This poses a new kind of challenge for presence research: PeLoTe is neither a teleoperation project nor a virtual-reality project, and it does not only study what presence is or how human presence is formed. While traditional approaches involve a single teleoperated robot, the PeLoTe system is more complex because it can incorporate many entities, both robots and humans. The operator controls several entities, so he/she must maintain an overview of the actual states of all of them and then direct attention to the entity that needs it most urgently. It is therefore not appropriate for the operator to be fully immersed in any single entity's role. Moreover, controlling multiple entities makes it impossible to use an egocentric frame of reference, which would increase the operator's subjective telepresence. Instead, a graphical user interface based on an exocentric frame of reference has been proposed, which provides better control of the positions and states in a multi-entity system. Teleoperated entities are fully or semi-autonomous. This simplifies the operator's work, because he/she does not have to solve simple tasks such as obstacle avoidance or object following. Instead, the operator can concentrate on coordinating the entities and resolving unexpected situations; from this point of view, the operator is more intuitively called a coordinator. On the other hand, autonomous entities can change their state frequently, send large amounts of information to the operator, and cause many events, both expected and unexpected.
All these data should be processed and filtered so that the operator is informed only about the relevant items. Moreover, events caused by autonomous entities disturb the operator's attention to the task currently being performed and consequently decrease his/her feeling of presence. Teleoperating a human also differs from operating a robot: a robot can be controlled by a set of predefined commands (e.g. go to [x,y]), whereas a human understands spoken language, so a different terminology is used. Common Presence: PeLoTe's solution to the problem presented above is called common presence. Common presence is a model for controlling multiple hybrid telematic entities, based on a shared understanding of the mission and the environment. The term means that all entities share a common space, which they understand in a similar way and through which they exchange information. Common presence can be understood as a virtual working environment for different types of entities, in which the objects are understandable to all of them. This kind of virtual space does not satisfy all the components of a virtual environment for a human, but some important key features are satisfied, in particular the key phrase "being there". All entities have a location that places them inside this virtual space, and all entities can modify the environment through mapping, inserting new objects, etc. Through the model, the entities see each other identically. This feature makes the system very general and applicable to various applications that combine multiple dynamic entities operating in a shared space. At the end of the project, the theoretical framework was still work in progress. The result will be disseminated mainly by presenting the theory at scientific conferences (some ideas were already presented at FSR03 and TA2004).
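The difference between robot and human command channels described above can be sketched as a thin dispatch layer that translates one shared command into each entity's own terminology. The class and method names below are purely illustrative assumptions, not the actual PeLoTe interfaces:

```python
from dataclasses import dataclass

@dataclass
class GoTo:
    """One shared command in the common-presence space."""
    x: float
    y: float

class RobotEntity:
    """A robot receives the command as machine-readable coordinates."""
    def dispatch(self, cmd: GoTo) -> str:
        return f"CMD GOTO {cmd.x:.1f} {cmd.y:.1f}"

class HumanEntity:
    """A human receives the same command phrased in natural language,
    anchored to a named landmark from the shared map."""
    def __init__(self, landmarks):
        self.landmarks = landmarks  # (x, y) -> landmark name

    def dispatch(self, cmd: GoTo) -> str:
        # pick the landmark nearest to the target to phrase the instruction
        near = min(self.landmarks,
                   key=lambda p: (p[0] - cmd.x) ** 2 + (p[1] - cmd.y) ** 2)
        return f"Please go to the {self.landmarks[near]}"
```

Both entities then look identical to the coordinator, who only issues `GoTo` commands into the common space.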
Motivation: In a rescue situation it is essential to know the position of all human entities. Above all this is a safety issue, but the modelling and visualization of human perception data also depend on accurate position information. Knowing one's position allows efficient coordination of the teams and makes sharing spatial information possible. Sharing spatial information is seen as a key factor when modelling common presence between humans and robots. Unlike robots, humans only rarely know their accurate position. Human beings mostly use their eyes for localization, which is based on recognizing an object and estimating the distance to it. If sight is reduced by darkness, smoke, etc., the accuracy of localization suffers. Localization is also always relative in nature: only when a human can identify a known object, such as a corridor crossing or a stairway, can he know his accurate position relative to that object. Thus, transferring the position information to another person or to a computer system can be difficult. A Personal Navigation System was developed to solve these problems. In this system, localization is based on dead reckoning and laser scan matching, methods widely used in mobile robotics but less common in human localization. Short Description of the PeNa System: The Personal Navigation system (PeNa) is a system for localizing a human indoors. In a rescue situation, no localization infrastructure can be assumed, so the system is designed as a stand-alone localization system. Localization is based on dead reckoning and map-based localization. Dead reckoning includes step measurement (pedometer), magnetic sensors (compass) and inertial measurements (gyro and accelerometers). A laser range finder is used for mapping and position refinement, which is done by laser odometry and map matching.
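The dead-reckoning step described above, with step length from the pedometer and heading from the compass/gyro, reduces to a simple pose propagation. This is a minimal sketch under that assumption, not the PeNa implementation:

```python
import math

def dead_reckon(pose, steps):
    """Propagate a 2-D pose (x, y, heading) from a sequence of
    (step_length_m, heading_rad) measurements, e.g. pedometer plus
    compass/gyro. Error grows without bound until a map-matching
    correction (laser odometry) refines the estimate."""
    x, y, heading = pose
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return (x, y, heading)
```

For example, two 0.7 m steps, the second after a 90-degree turn, move the walker from the origin to roughly (0.7, 0.7).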
The PeNa hardware includes: batteries, power conversion, a Stride Length Measurement Unit (SiLMU), a fiber-optic gyro, a compass, a SICK laser scanner and a laptop. All the hardware is mounted in a backpack, with the laptop installed at the front so that it also serves as a display for the user interface. Results: The existing PeNa hardware is a demonstration prototype that is able to localize a human indoors with bounded error. The fact that PeNa does this without any predefined infrastructure makes it unique in relation to previous work. The system is used to incorporate a human into a telematic system of hybrid entities; in this sense, too, PeNa is unique. It allows the operator to control a human as if the human were a teleoperated robot. Future Work and application potential: Personal navigation has application potential as is. Localizing humans enables applications such as PeLoTe, and also enables location-based services, a hot topic at the moment. Furthermore, the ever-growing intelligence of living environments has a clear need for a new type of interfacing. As seen in PeLoTe, interfacing through location and conceptualisation is efficient. Sharing information, programming robots and visualizing information become easy, and conceptualisation allows the use of natural language in commanding entities. Environment conceptualisation is made in two parts:
- a part where PeNa is used for automatic (or semi-automatic) mapping of the environment, and
- another part where the located objects in the environment are conceptualised using human cognition.
The (semi-)automatic mapping is done through sensor fusion. PeNa will be upgraded with PMD or similar 3D perception sensors to extract objects in the environment. A human then conceptualises these objects; conceptualisation includes giving each object a name, properties and functions (which can be anything). The outcome is a virtual description of the real environment containing a large number of objects.
This virtual environment can be used as an interface to service robots, home automation systems, control of surveillance equipment, etc. In practice, this kind of world allows the control of any device that is part of the system (static or dynamic), and the variety of possible applications is unlimited.
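A conceptualised object as described above, with a name, properties and functions attached to a mapped location, could be represented roughly as follows. The field names are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptObject:
    """An environment object after conceptualisation: its geometry comes
    from (semi-)automatic mapping, while the name, properties and
    functions are assigned by a human."""
    name: str
    position: tuple                    # (x, y) in the shared map frame
    properties: dict = field(default_factory=dict)
    functions: list = field(default_factory=list)

# a door extracted by the mapper and labelled by a human
door = ConceptObject("main exit", (12.4, 3.1),
                     properties={"type": "door", "fire_rated": True},
                     functions=["open", "close"])
```

Because every entity reads the same record, a natural-language command such as "open the main exit" can be resolved to the object's position and functions.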
Integration of autonomous robots and human entities addresses not only the standard concept of tele-presence and tele-operation but an utterly novel hybrid approach to data and knowledge sharing. Since in this concept robots and humans cooperate and coordinate their activity on the same level, being equal partners in the given task, they also simultaneously profit from their different nature and abilities. The common data and knowledge representation is used mainly for cooperative localization and mapping, cooperative planning and data sharing. For data and knowledge sharing, new types of map were designed. The standard Search and Rescue Map (SRM) is a proposal for a standard map to be used in fire or rescue situations. It would be stored in the database of the local rescue centre, from which rescue personnel can download the corresponding SRM after an alarm. The SRM consists of object layers presenting information relevant for fire and rescue tasks. The aim is that the a-priori information, containing the ground plan and some additional important information such as the location of dangerous or flammable materials, is generated already when a building is planned and is available in digital form. During an operation, information perceived by team members in place is continuously updated in the map. To keep the SRM as simple as possible, the map is divided into a digitised ground plan (base layer) and several object layers, which are represented as a database. The SRM is an enhanced version of the fire rescue map used nowadays in buildings that have a direct fire-alarm connection to an alarm control centre. That map includes the ground plan of the building, exits, the fire alarm board, the sprinkler centre, the sprinkler coverage map, and the location of toxic and flammable materials. The SRM includes still more information for the firemen, but on the other hand the existing information is already too much on the same paper or display.
For example, the important sprinkler map partially overlays the ground plan and makes it difficult to read. By using a layered structure it is possible to limit the visible information according to certain rules or on demand of the user. In a telepresence system it is necessary to share an up-to-date global model of the environment and the current positions of all entities for the purpose of mutual localization. Unfortunately, knowing the current positions of the other entities within a global map of the environment is not sufficient to determine the position of a lost entity. To localize itself based on information from other entities, a technique is needed for finding the distances to those entities. Knowing the distances to other entities, it is possible to recalibrate the entity's position using triangulation methods. To accomplish the cooperation task it is also important to define the actual states of entities (e.g. busy, free, need_help, helping_to, searching, ...). On the other hand, the mechanism of data storage depends on the communication model mentioned above, i.e. whether most of the information will be stored in the operation centre or distributed among the entities and stored locally. The main question in communication is which data should be exchanged directly between entities and which data should be shared through the operation centre. In general, the following important types of up-to-date information to be shared among entities were identified: the global map of the environment, position in the global map, path plans, and the operating states of entities.
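The recalibration from inter-entity distances mentioned above can be illustrated with standard 2-D trilateration from three entities at known positions. This is textbook geometry, not the project's specific method:

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Recover a lost entity's (x, y) from measured distances r_i to
    three entities at known positions p_i, by linearising the three
    circle equations (subtracting them pairwise)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d          # zero if the three entities are collinear
    x = (c * e - b * f) / det
    y = (a * f - c * d) / det
    return (x, y)
```

With only two reference entities there are two candidate positions, which is why a third distance (or a map constraint) is needed for a unique fix.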
This result addresses the question of optimal activity planning in cooperating teams of entities. The central novelty of the approach lies in investigating a robust planning technology for heterogeneous teams consisting of robots and human actors sharing a joint task in a common environment. Combining diverse types of entities takes advantage of the complementary properties (or abilities) the entities themselves provide, which is efficient for solving the task. On the other hand, substantially different entities also have diverse constraints on their capabilities, which the approach has to take into account. Moreover, the suggested method incorporates the planning constraints in a flexible way that can be extended to other features as well as modified over time. The option of time-varying conditions supports on-line re-planning, which allows current plans to be adapted to variations in the task setup or environment status. Although the search & rescue task has been chosen as the verification scenario and the research has mainly focused on designing appropriate techniques for it, the developed algorithms can be used in a wide range of applications. Typical application areas include, for example, planning of cooperative delivery of goods to customers, distribution of materials in a plant or of food and medicine to patients in hospitals, planning for cooperative guarding or demining, etc. Moreover, the algorithms can easily be modified to plan routes for mowing with multiple machines, for cooperative cleaning, etc.
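As a toy illustration of capability-constrained assignment in a heterogeneous team (not the planning technology developed in the project), a greedy allocator might match each task to the nearest free entity whose abilities cover it:

```python
def assign_tasks(entities, tasks):
    """Greedy sketch: each task goes to the nearest idle entity whose
    capability set covers the task's requirements. Unassignable tasks
    are left for a later re-planning round."""
    plan = {}
    busy = set()
    for task in tasks:
        candidates = [e for e in entities
                      if e["name"] not in busy
                      and task["needs"] <= e["skills"]]   # subset check
        if not candidates:
            continue  # no capable idle entity; re-plan later
        best = min(candidates,
                   key=lambda e: abs(e["pos"][0] - task["pos"][0])
                               + abs(e["pos"][1] - task["pos"][1]))
        plan[task["id"]] = best["name"]
        busy.add(best["name"])
    return plan
```

For example, a task requiring the "rescue" skill is never given to a mapping-only robot, even if the robot is closer; the real planner additionally handles time-varying constraints and on-line re-planning.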
The PeLoTe system concept is suitable for the next generation of teleoperation and telediagnosis systems incorporating both humans and (semi-)autonomous robots. Integrating autonomous robots (non-living) and human (living) entities addresses not only the standard concept of tele-presence and tele-operation but an utterly novel hybrid approach. The solutions dealt with within the project consider both humans and autonomous robots as participants at the remote end of the tele-operated task, splitting the steps and actions towards the solution between the robots and humans in the system. Since in this concept robots and humans cooperate and coordinate their activity on the same level, being equal partners in the given task, they also simultaneously profit from their different nature and abilities. The original motivation for this early concept was a rescue mission in a large public building with a highly complex internal structure, e.g. a space split into many small offices, storage rooms, meeting rooms and hallways, with staircases, elevators and similar facilities, all located on multiple floors. Another similar scenario might be a search and rescue mission in a mine, or a factory environment with a diagnostic (or repair) task at a production line carried out via a tele-maintenance system for any of many possible reasons: large distance to the factory, a dangerous environment, inaccessibility for a human, etc. So far, existing solutions for such systems, autonomous, cooperating and coordinated through a tele-operation centre, have not been supported by any global approach enabling the integration of human actors sharing the given task at the remote end. The requirement for research and development of a unifying approach was therefore straightforward, and it has enabled further development of hybrid-based systems.
Therefore, this project aimed to make a breakthrough by providing a general and unifying scheme for the global integration of humans and robots at the remote end of a telematic application, based on research and development of particular methods for recovery of (tele)presence via navigation at a certain level of autonomy, knowledge reuse based on a common reference model, and finally application of the concepts in a telepresence task accompanied by a practical experiment. The research results provide methods for effectively sharing and maintaining knowledge among all entities and/or between an entity and the teleoperation centre, in order to provide a robust, flexible and autonomous level of cooperation. Our approach uses shared data structures incorporating robust and effective data transfer. The information sharing is supported by the newly designed format for an electronic version of a map for search and rescue operations. Each entity in the system can be controlled through the system, and it is not significant whether a human or a robot is used. The results were tested in real experiments in which robots and humans cooperated and shared data and knowledge in a common environment.
AGV - Teleoperated Mobile Robot: The mobile micro-robot MERLIN (Mobile Experimental Robots for Locomotion and Intelligent Navigation) was designed and realized for a broad spectrum of indoor and outdoor tasks on the basis of standardized functional modules such as sensors, actuators, communication and locomotion. With its on-board sensors, MERLIN was capable of obstacle detection, modelling of the environment, position estimation and navigation in a global co-ordinate system. The algorithms for sensor pre- and post-processing, as well as the control algorithms needed for path planning and obstacle avoidance, were implemented on an 80C167 micro-controller. The sensors were needed to characterize the remote mobile robot by its position and status, and to characterize its operational environment in order to provide information about areas inaccessible or dangerous for humans. The tele-diagnosis infrastructure was addressed, allowing a remote tele-operations centre to receive the crucial sensor data, submit control commands and establish a robust link through a software implementation of this generic tele-diagnosis approach. In particular, the integration of the rover and its sensors into this infrastructure was addressed. A survey on sensors for tele-diagnosis applications, such as the PMD camera, was carried out. The architecture of a distributed tele-diagnosis solution that would not only monitor these sensor data streams but also troubleshoot and diagnose subsystem failures automatically was analysed. Experience in telematics based on several on-board sensors was gathered. Two approaches for tele-operating the mobile robot MERLIN by telematic methods were implemented: haptic-interface joystick control, and path following with obstacle avoidance. Communication was possible both via radio link and via wireless LAN. The software interfaces, Merlin Client and Merlin Server, were programmed in the Java language.
Effects related to delays in signal transfer were investigated. A PC104 board and the 80C167 microcontroller were selected to provide fast, high-performance computation during robot navigation, fast real-time interrupt response for actuator control, sensor data diagnosis, and client-server communication via the TCP/IP protocol. The sensors selected for on-board use were an ultrasonic sensor, an infrared sensor, a gyroscope, hall sensors (odometer), a bumper, a 3D compass, a laser scanner (range finder), and a Photonic Mixer Device (PMD) 3D vision camera. As an alternative for search and rescue missions, a tracked mobile robot was also designed and produced for rough-terrain outdoor navigation, with a tank-style chain-wheel structure. Autonomous obstacle collision avoidance was implemented using the ultrasonic and infrared sensors and the PMD camera. For trajectory following, trajectory planning, wall following and manoeuvring were developed. The localization techniques were implemented based on a Kalman filter. Many tests and development steps of the sensors for accuracy, reliability and stability, including time-delay analysis of the teleoperated control, were performed.
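Kalman-filter-based localization fuses a motion prediction with noisy measurements. The one-dimensional sketch below illustrates only the predict/update cycle; the robot's actual filter state (position and heading) was multi-dimensional:

```python
def kalman_1d(est, var, measurements, meas_var, motion=0.0, motion_var=0.01):
    """Scalar Kalman filter sketch: repeatedly predict with a motion
    model (uncertainty grows), then correct with a measurement weighted
    by the Kalman gain (uncertainty shrinks)."""
    history = []
    for z in measurements:
        # predict: apply the motion, variance grows by process noise
        est += motion
        var += motion_var
        # update: blend prediction and measurement by relative confidence
        k = var / (var + meas_var)        # Kalman gain in [0, 1]
        est += k * (z - est)
        var *= (1 - k)
        history.append(est)
    return est, var, history
```

With repeated measurements of the same quantity the estimate converges toward the measured value while its variance shrinks, which is the behaviour exploited when fusing odometry with range-sensor data.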
