Novel Method for Storyboarding Biomedical Videos for Medical Informatics.
We propose a novel method for developing static storyboards for video clips included with biomedical research literature. The technique uses both the visual and audio content of a video to select candidate key frames for the storyboard. From the visual channel, intra-coded frames (I-frames) are extracted using the FFmpeg tool. The IBM Watson speech-to-text service is used to extract words from the audio channel, from which clinically significant concepts (key concepts) are identified using the U.S. National Library of Medicine's Repository for Informed Decision Making (RIDeM) service. These concepts are synchronized with the key frames, from which our algorithm selects relevant frames to highlight in the storyboard. To test the system, we first created a reference set through a semiautomatic approach and measured system performance with informativeness and fidelity metrics. Results from pilot testing, both subjective visual inspection and quantitative metrics, are promising. We plan to conduct a formal user evaluation in the future.
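The I-frame extraction step described above can be sketched with a standard FFmpeg invocation; the input and output file names here are illustrative, not taken from the paper:

```shell
# Keep only intra-coded frames (I-frames) from the video stream.
# select='eq(pict_type,PICT_TYPE_I)' passes a frame through only when
# its picture type is I; -vsync vfr writes one image per selected frame.
ffmpeg -i input_clip.mp4 \
  -vf "select='eq(pict_type,PICT_TYPE_I)'" \
  -vsync vfr keyframe_%04d.png
```

Each extracted image then serves as one candidate key frame to be synchronized with the time-stamped concepts from the audio channel.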