CORDIS - EU research results
Content archived on 2024-05-24

Interacting With Eyes: Gaze Assisted Access To Information In Multiple Languages

CORDIS provides links to public deliverables and publications of HORIZON projects.

Links to deliverables and publications from FP7 projects, as well as links to some specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

The aim of the I-EYE project was to develop new, natural forms of interaction for multilingual applications, and to evaluate their usability. iTutor supports "just-in-time" standardized maintenance: performing, tracking, and monitoring maintenance activities close to any machine. In other words, the final product is a set of mobile on-line services for industrial maintenance. iTutor has been conceived as a multimedia application that provides information of various kinds through more natural human-machine interaction modes: it exploits eye tracking and speech recognition for browsing information in several output modalities (audio, text, animation and video). The general goal of iTutor is to support and ease decision-making in an industrial setting where users must keep their hands free to work on the target machine. The multimedia information usually stored in the company's information system (electrical and mechanical schematics, complete information on parts, maintenance procedures, parts availability, etc.) is provided to maintainers in response to natural eye fixations and speech requests. Moreover, by using speech recognition or a wrist keyboard, the user can insert short notes about a machine or an intervention, perform on-site checking of parameter values, and ask for help in real time if needed.

At present, iTutor can be regarded as an advanced prototype of a maintenance support system for industrial application. All the basic functionalities that such an application must offer have been identified, developed and integrated. Nevertheless, the performance and usability evaluation, carried out in a real automotive plant with actual maintainers, made clear that more functionalities should be added in future versions of iTutor.
Moreover, to improve iTutor's ability to fulfil end-user needs, the plant structure and the characteristics of the company's specific workflow must be carefully taken into account. In other words, the presently available prototype of iTutor represents a robust kernel on which specialized and customized services can be built: we offer both the acquired expertise and the developed technology and system architecture to realize such services. The possibility of establishing joint ventures with interested end users and/or companies already involved in industrial maintenance-related activities will be evaluated on a case-by-case basis. From a different perspective, further R&D activity is needed to make iTutor a worldwide leading-edge product in the market of maintenance support systems. For instance, suitable functionalities for managing 3D content in a context-aware virtual/augmented reality environment should be developed. To this end further funding has to be found, and a partnership with companies or research centres already working in the VR/AR field might be sought. The evolution of iTutor driven by further research will make it possible to investigate other market sectors and alternative applications for the proposed technology, besides industrial maintenance:
- remote maintenance in dangerous or unhealthy areas;
- interactive and iterative evaluation of the usability, functionality and ergonomic characteristics of an environment directly during the design phase (architectural design, civil and naval engineering, space engineering);
- hostile-environment training for people who may have to face unusual situations in difficult environmental conditions, so that they are prepared and accustomed to doing so (nuclear disasters, fires, natural hazards, etc.);
- virtual and customized tours of museums, archaeological sites, and unusual or inaccessible environments, for educational purposes and cultural heritage dissemination.
The objective of the I-EYE project was to develop new, natural forms of interaction for multilingual applications, and to evaluate their usability. The Conexor Lexical Module (CLM) was specified and implemented in the course of the I-EYE project. The component runs on Windows 98 and Windows 2000 platforms as a COM server. As described in API Specifications for Conexor Modules (I-EYE project deliverable D4), CLM maintains external linguistic sources such as bilingual dictionaries (as in the current project), monolingual dictionaries or encyclopedic databases. The information in these sources is converted into a standardized format called Custom Dictionary Format (CDF). There is also a separate interface for maintaining user databases in the same format. The input for the CLM is a CNXQuery consisting of:
- a pointer to an existing dictionary;
- a Normalised Query Expression (NQE);
- type and specifier information.
The output is a CNXQueryStruct, which contains the dictionary information matching the NQE, ordered by frequency, type and specifier information provided by the CIE module. Ideally, the component outputs the contextually most appropriate dictionary headword in the target language for a text token in the source-language text. CLM is implemented in Visual Basic. A developer's guide describes how to use CLM as a system component.
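The CNXQuery/CNXQueryStruct exchange described above can be sketched as follows. This is a minimal, hypothetical Python model of the interface as the text describes it: all field names and the tiny sample dictionary are illustrative assumptions, and the actual component is a COM server written in Visual Basic.

```python
from dataclasses import dataclass, field

@dataclass
class CNXQuery:
    dictionary: str          # pointer/handle to an existing CDF dictionary
    nqe: str                 # Normalised Query Expression (e.g. a lemma)
    word_type: str           # part-of-speech type supplied by the CIE module
    specifier: str = ""      # extra disambiguation info from the CIE module

@dataclass
class DictionaryEntry:
    headword: str            # target-language headword
    frequency: int           # usage frequency used for ranking

@dataclass
class CNXQueryStruct:
    entries: list = field(default_factory=list)

def lookup(cdf: dict, query: CNXQuery) -> CNXQueryStruct:
    """Return entries matching the NQE and type, ordered by frequency (highest first)."""
    matches = cdf.get((query.nqe, query.word_type), [])
    ranked = sorted(matches, key=lambda e: e.frequency, reverse=True)
    return CNXQueryStruct(entries=ranked)

# Illustrative English-Finnish mini-dictionary (frequencies invented)
cdf = {("bank", "noun"): [DictionaryEntry("rahalaitos", 5),
                          DictionaryEntry("pankki", 120)]}
result = lookup(cdf, CNXQuery("en-fi.cdf", "bank", "noun"))
```

The highest-frequency headword comes first, which matches the module's goal of surfacing the contextually most likely translation for the token.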
The overall objective of the I-EYE project was to develop new, natural forms of interaction for multilingual applications, and to evaluate their usability. iDict is a gaze-assisted environment for reading electronic documents written in a foreign language (English). The use of iDict begins with a calibration of the eye tracker. Then, while the user reads the text, instant translations are provided automatically as soon as the user hesitates over a word or a phrase. Thanks to the underlying linguistic analysis, the translations are shown in a grammatically correct form: verbs are translated as verbs, nouns as nouns. If the reader wants more help than the primary translation, she can get it by simply looking at the translation frame beside the text window. To interact, the user does not have to do anything but read the text. In addition, translations can also be triggered manually, by clicking the difficult word with the mouse. The target language for the translations can be chosen from Finnish, Italian, German or English.
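The hesitation-detection idea behind iDict can be sketched very simply: trigger a translation when a fixation on a word lasts noticeably longer than the reader's own baseline. This is a hypothetical illustration of that principle, not the project's actual algorithm; the multiplier and the durations below are invented for the example.

```python
from statistics import mean

def should_translate(fixation_ms: float, baseline_ms: list, factor: float = 1.8) -> bool:
    """Trigger help when a fixation exceeds the reader's mean fixation by `factor`.

    `baseline_ms` holds the reader's recent fixation durations (milliseconds);
    `factor` is an illustrative assumption, not a value from the project.
    """
    if not baseline_ms:
        return False  # no baseline yet, e.g. right after calibration
    return fixation_ms > factor * mean(baseline_ms)
```

With a baseline of roughly 200-240 ms fixations, a 600 ms dwell on a word would trigger a translation, while a 250 ms fixation would not.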
The objective of the I-EYE project was to develop new, natural forms of interaction for multilingual applications, and to evaluate their usability. UNO performed a wealth of experiments that generated data for developing the thresholds used to trigger functions from eye movements. This experimentation examined various eye movement measures for behaviour related to both the iDict and iTutor applications. The data generated provided norms and differential values for eye movement behaviour in reading, problem solving and search. These data give an indication of baseline values, and of efficient and problematic behaviour, which can be used to ascertain when additional information is required. The data relating to the iDict application (another application developed in the framework of the I-EYE project) provided eye movement measures indicative of a comprehension breakdown in reading, for readers reading text in either a native or a second language, and also indicated differences between scanning and vigilant reading. The data relating to iTutor provide information on efficient and problematic problem solving, as well as on differences between various search strategies. The data also identified differences in eye movement measures when processing information across different information formats. All the data are of scientific interest and can also be used internally to aid the development of algorithms that use eye movements to trigger software functions.
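One common way to turn such norms into a trigger threshold is to place the cut-off a few standard deviations above the mean fixation duration observed in unproblematic behaviour. The sketch below illustrates that statistical idea only; the k = 2 choice and the sample durations are assumptions, not values from the UNO data.

```python
from statistics import mean, stdev

def trigger_threshold(normal_fixations_ms: list, k: float = 2.0) -> float:
    """Fixations above mean + k * std of normal behaviour count as problematic.

    `normal_fixations_ms` would come from the collected norms for a task
    (reading, problem solving or search); `k` tunes the false-trigger rate.
    """
    return mean(normal_fixations_ms) + k * stdev(normal_fixations_ms)

# Illustrative norms: mean 220 ms, sample std 20 ms -> threshold 260 ms
threshold = trigger_threshold([200, 220, 240], k=2.0)
```

Because the norms differed between tasks and formats, a system would keep a separate threshold per behaviour type rather than one global value.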
