
Content and Ontology based Search and Retrieval of Medical Images

Final Report Summary - COSERMI (Content and ontology based search and retrieval of medical images)

The aim of the multidisciplinary research proposed in this grant application is to integrate content and context information for search and retrieval of digital medical images.

With advances in medical imaging technology, the number of digital medical images acquired, stored, indexed and managed at healthcare centres is increasing rapidly. Medical experts often refer to previously seen cases for the diagnosis and treatment planning of a new patient. Automated search and retrieval of similar cases from large medical databases may especially improve the diagnosis and treatment of diseases whose causes and effects have not yet been fully unravelled.

Conventional database management systems employed at healthcare centres today, such as the hospital information system (HIS), radiology information system (RIS) and picture archiving and communication system (PACS), permit only keyword-based search over attributes such as surname, age and sex. Combining information captured from image content with anatomical, clinical and disease ontologies, in addition to keyword-based search, may not only result in a breakthrough in computer-based search and retrieval of medical images, but also improve patient care and lower healthcare costs by increasing the acceptance and usage of such search tools in clinical settings.

The human brain, the centre of the nervous system, is a highly complex organ composed of billions of interconnected neurons. Neurons use this complex topology to transmit information crucial for the normal functioning of the human body. Thanks to continuing scientific progress, we now understand the functioning of individual neurons in detail, but how they cooperate in ensembles is still an open question. In certain neurological disorders, such as Alzheimer's disease and multiple sclerosis, neurodegeneration, the progressive loss of neurons, occurs. Unfortunately, (early) detection, diagnosis and prognosis of most neurodegenerative diseases are not yet fully solved.

Practice guidelines for the diagnosis and management of Alzheimer's and other related brain disorders recommend combining the outcomes of neuroimaging investigation (preferably magnetic resonance imaging) with those of cognitive evaluation, behavioural assessment, laboratory examinations and genetic testing. Accordingly, in this multidisciplinary project we propose to combine content information present in brain magnetic resonance (MR) images with context information available in patient demographics, clinical findings, and knowledge encoded in anatomical and disease ontologies for search and retrieval of medical cases from large repositories. For this purpose, we plan to:

1. integrate content and context information to describe and index medical images:

a. represent images with content attributes such as colour, texture and shape;
b. customise context information, such as patient demographic information, clinical findings, anatomical and disease knowledge encoded in ontologies, to the search;
c. describe images with the customised context knowledge;
d. describe images with the integrated content and context information;

2. use this integrated description for search and retrieval among large databases:

a. investigate a similarity metric that can capture both content and context descriptions with high sensitivity;
b. design and implement a search and retrieval tool that realises the integrated medical image search;
c. validate the search tool with large databases;
d. create a graphical user interface that facilitates smooth interaction between the user and the search tool.
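The plan above hinges on a similarity metric that captures both content and context descriptions (step 2a). As an illustrative sketch only, and not the project's actual metric, the following Python code blends a content-feature distance and a context-attribute distance into a single retrieval score; all function and key names (`combined_similarity`, `retrieve`, `features`, `context`) are hypothetical, and both vectors are assumed to be pre-normalised to comparable scales.

```python
import numpy as np

def combined_similarity(query, case, w_content=0.5, w_context=0.5):
    """Blend content and context similarities into one retrieval score.

    `query` and `case` are dicts with hypothetical keys:
      'features': np.ndarray of image content attributes (e.g. shape, texture),
      'context' : np.ndarray of numeric context attributes (e.g. age, test scores).
    Both arrays are assumed to be normalised to comparable scales.
    """
    # Content similarity: inverse of Euclidean distance between feature vectors.
    d_content = np.linalg.norm(query['features'] - case['features'])
    s_content = 1.0 / (1.0 + d_content)

    # Context similarity: same form, over demographic/clinical attributes.
    d_context = np.linalg.norm(query['context'] - case['context'])
    s_context = 1.0 / (1.0 + d_context)

    return w_content * s_content + w_context * s_context

def retrieve(query, database, k=5, **weights):
    """Rank database cases by combined similarity and return the top k."""
    ranked = sorted(database,
                    key=lambda c: combined_similarity(query, c, **weights),
                    reverse=True)
    return ranked[:k]
```

In practice the weights would be tuned per task, and the context term could be replaced by ontology-based semantic similarity rather than a plain Euclidean distance.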

To this end, we present a novel image search system for large medical image repositories that strengthens content information with contextual knowledge from demographic and clinical data and from ontologies for improved retrieval. As an exemplary usage scenario, search and retrieval of neuroimaging data from dementia, Alzheimer's and Parkinson's cases is realised. The predictive power of the proposed system is first evaluated with content information only; it is shown that while lateral ventricle shape change can discriminate demented cases from healthy controls and converters, it is not discriminative enough for the Alzheimer's or Parkinson's cases. When content information is extended with additional image features and/or supported with contextual knowledge, such as neuropsychological test results, the discriminative capability of the search system improves.
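The effect described above, where content features alone cannot separate cases but adding a context attribute can, is easy to illustrate with a toy nearest-neighbour sketch. This is a synthetic example under assumed data, not the project's evaluation: the feature values, labels and helper names (`fuse`, `nearest_label`) are all hypothetical, with the context attribute standing in for something like a neuropsychological test score.

```python
import numpy as np

def nearest_label(query_vec, case_vecs, labels):
    """Return the label of the nearest stored case (1-NN, Euclidean)."""
    d = np.linalg.norm(case_vecs - query_vec, axis=1)
    return labels[int(np.argmin(d))]

def fuse(content, context, alpha=0.5):
    """Concatenate content and context features with a blending weight alpha."""
    return np.concatenate([alpha * content, (1.0 - alpha) * context])

# Two stored cases with identical (toy) shape features but different
# neuropsychological scores: content alone cannot tell them apart.
content_db = np.array([[0.5], [0.5]])
context_db = np.array([[10.0], [28.0]])   # e.g. hypothetical cognitive scores
labels = ['disease', 'control']
fused_db = np.array([fuse(c, x) for c, x in zip(content_db, context_db)])
```

A query with the same shape feature but a near-normal score is then correctly matched to the control case once the context is fused in, whereas the content-only comparison is a tie.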

The proposed system can be useful in applications such as diagnosis and education, where experts, medical students and residents from the same clinical site, as well as from distant locations, can benefit from it. In the more common case of users located at different clinical sites, effective usage (and integration into the diagnostic or educational process) of such a search system will require efficient and low-cost networking solutions that handle technical issues such as limited bandwidth and protection of data privacy.

Comparison of multiple patients' (imaging, demographic and clinical) data by using the proposed search system may further help:

1. neurology-radiology experts to identify previously unknown relations of brain disorders;
2. medical doctors to acquire experience rapidly by searching through large databases and learning from previously diagnosed cases;
3. researchers in information technology to identify and hopefully solve new technical problems related to the multimodal (content and context) search defined in this proposal.