Multi-Modal Query Expansion Based on Local Analysis for Medical Image Retrieval.
A unified medical image retrieval framework integrating visual and text keywords using a novel multi-modal query expansion (QE) is presented. For the content-based image search, visual keywords are modeled using support vector machine (SVM)-based classification of local color and texture patches from image regions. For the text-based search, keywords are extracted from the associated annotations and indexed. The correlations between the keywords in both the visual and text feature spaces are analyzed for QE by considering local feedback information. The QE approach can propagate user-perceived semantics from one modality to another and improves retrieval effectiveness when the modalities are combined in a multi-modal search. An evaluation of the method on the ImageCLEFmed '08 dataset and topics yields a mean average precision (MAP) score of 0.15, outperforming comparable searches without QE or using only a single modality.
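The local-analysis QE described above can be illustrated with a minimal sketch of pseudo-relevance (local) feedback: candidate keywords are scored by their co-occurrence with the original query keywords in the top-ranked results, and the strongest candidates are appended to the query. The function name, scoring rule, and the example keyword sets are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter

def expand_query(query_terms, top_docs, n_expansion=2):
    """Local-analysis query expansion (pseudo-relevance feedback):
    rank candidate terms by co-occurrence with the query terms in the
    top-ranked documents, then append the strongest ones to the query.
    This is an illustrative sketch, not the paper's exact method."""
    qset = set(query_terms)
    scores = Counter()
    for doc in top_docs:
        terms = set(doc)
        overlap = qset & terms
        if overlap:                       # doc mentions at least one query term
            for t in terms - qset:        # candidate expansion terms
                scores[t] += len(overlap) # weight by query-term overlap
    expansion = [t for t, _ in scores.most_common(n_expansion)]
    return list(query_terms) + expansion

# Hypothetical feedback set: keywords taken from the top-ranked results
top_docs = [
    ["chest", "xray", "lung", "nodule"],
    ["chest", "ct", "lung", "tumor"],
    ["abdomen", "ct", "liver"],
]
print(expand_query(["chest", "lung"], top_docs, n_expansion=2))
```

In the multi-modal setting described in the abstract, the same co-occurrence analysis would be applied across modalities, so that visual keywords correlated with the text query (or vice versa) can be added as expansion terms.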