This project aims to carry out research and technology development for deploying innovative tele-medicine services. The new services will be based on an Intelligent Dialog System (IDS), designed and developed to effectively manage an incremental dialog between a tele-medicine system and a patient, taking into account the user's needs, preferences and the time course of his or her disease. The dialog system will require dynamic adaptation, including understanding the patient's medical problems and the physician's goals, and handling misunderstandings of therapy suggestions. To adapt the dialog, an ontology of the medical domain and the history of user-system interactions need to be used. Patient interactions can be multi-modal, i.e. both voice and graphical interfaces will be accessible through a fixed or a mobile network. Some prototype services, accessible through both a Web Call Centre and standard Internet connections and browsers, will be developed.
The project aims at achieving a better integration between the healthcare domain and spoken dialogue technology through an intelligent and cooperative dialogue system. The dialogue system should adapt both to the clinical context and to the type of patient. The research goals of the project are:
- to implement dialog strategies for monitoring chronic patients, capable of adapting to the patient's clinical course and to the physician's goals;
- to specify a language for defining multimodal interactions and to realise the corresponding interpreter;
- to develop techniques for estimating (or adapting) patient-specific language models;
- to study and develop a high-level description of a service from which to automatically generate the dialog strategy;
- to efficiently embed semantic dictionaries (i.e. an ontology of tasks for tele-medicine services) in the language models used for speech recognition.
Some prototypes will be developed based on a Web call centre.
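As a rough illustration of the patient-specific language-model goal, the sketch below interpolates a general unigram model with counts collected from one patient's interactions, so that terms frequent for that patient gain probability. The vocabulary, counts and weight `lam` are invented for the example and are not taken from the project.

```python
from collections import Counter

def adapted_unigram(general_counts, patient_counts, lam=0.5):
    """Interpolate a general unigram model with patient-specific counts.

    lam weights the general model; (1 - lam) weights the patient model.
    Both inputs are word -> count mappings.
    """
    def to_probs(counts):
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    p_gen = to_probs(general_counts)
    p_pat = to_probs(patient_counts)
    vocab = set(p_gen) | set(p_pat)
    return {w: lam * p_gen.get(w, 0.0) + (1 - lam) * p_pat.get(w, 0.0)
            for w in vocab}

# Example: "insulin" is rare in the general corpus but frequent for this patient.
general = Counter({"pain": 50, "fever": 30, "insulin": 2})
patient = Counter({"insulin": 8, "pain": 2})
model = adapted_unigram(general, patient)
```

In practice the project would adapt full recognition language models rather than unigrams, but the interpolation idea is the same: service-usage data shifts probability mass toward each patient's own vocabulary.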
DESCRIPTION OF WORK
The project will start with the specification of the telephone infrastructure (a web call centre) in WP2. The main software development will take place in WP3 and WP7, in which a Multimodal Interpreter will be developed and existing speech recognisers will be extended to support spoken dialogue interaction with adaptation capabilities. The result of these work packages will be a generic multimodal interpreter, based on the interpretation of XML schemas (e.g. VoiceXML), for accessing information on the Internet. The interpreter will be able to communicate with standard telephones or with next-generation mobile phones.

Most of the research goals of the project will be pursued in work packages WP4, WP5 and WP6. In WP4, methodologies for estimating a semantic model of patient interaction will be investigated. In particular, a patient-specific conceptual model will be derived from data collected during the usage of the service itself. Such a model should make it possible to modify the dialogue strategy, and specifically the Language Models, to better match the requests of each patient. The understanding process will benefit from a domain ontology database represented by a lexicon that relates concepts to medical terms in several languages. In WP5, methods for efficiently embedding these semantic lexicons in both the language and dialogue models of the Dialog Manager (DM) will be developed. In WP6, methods for automatically producing the dialogue specifications for the DM will be investigated and corresponding tools will be developed. The task specification language will integrate all the available information sources (general medical knowledge, patient-specific knowledge, etc.) according to the current dialogue state and its progression.

In WP8 some show cases will be evaluated, and WP9 will summarise the experiences gained in the project, addressing them toward the appropriate market sectors.
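As a minimal illustration of an interpreter driven by XML schemas, the sketch below parses a small VoiceXML-like fragment and extracts the dialog steps (field names and prompts) in order. The document content, form id and field names are invented for the example; a real interpreter would of course also handle grammars, events and platform output.

```python
import xml.etree.ElementTree as ET

# Illustrative VoiceXML-like fragment (all names are invented for this sketch).
VXML = """\
<vxml version="2.0">
  <form id="daily_report">
    <field name="glucose_level">
      <prompt>Please say your blood glucose reading.</prompt>
    </field>
    <field name="symptoms">
      <prompt>Do you have any new symptoms today?</prompt>
    </field>
  </form>
</vxml>"""

def extract_dialog_steps(document: str):
    """Walk the XML document and return (field name, prompt text) pairs
    in the order the dialog would visit them."""
    root = ET.fromstring(document)
    steps = []
    for field in root.iter("field"):
        prompt = field.find("prompt")
        text = prompt.text.strip() if prompt is not None and prompt.text else ""
        steps.append((field.get("name"), text))
    return steps

for name, prompt in extract_dialog_steps(VXML):
    print(f"{name}: {prompt}")
```

A generic interpreter built this way stays independent of any single service: changing the XML document changes the dialog, which is what lets the same engine serve different tele-medicine prototypes.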
Funding Scheme: CSC - Cost-sharing contracts
WC2A 3PX London