A simple error classification system for understanding sources of error in automatic speech recognition and human transcription.
To (1) discover the types of errors most commonly found in clinical notes generated either by automatic speech recognition (ASR) or by human transcription, and (2) develop efficient rules for classifying these errors into the categories found in (1). The purpose of classifying errors into categories is to understand the underlying processes that generate them, so that measures can be taken to improve those processes.
We integrated the Dragon NaturallySpeaking v4.0 speech recognition engine into the Regenstrief Medical Record System and captured the text output of the speech engine before the speaker corrected any errors. For comparison, we also acquired a set of human-transcribed but uncorrected notes. We then attempted to correct errors in these notes using context alone. Initially, three domain experts independently examined 104 ASR notes (containing 29,144 words) generated by a single speaker and 44 human-transcribed notes (containing 14,199 words) generated by multiple speakers. Collaborative group sessions were subsequently held in which error categories were determined, and rules for systematically examining the notes and classifying errors were developed and incrementally refined.
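The error-identification step described above, comparing an uncorrected note against its corrected counterpart, can be sketched as a word-level alignment. This is an illustrative reconstruction, not the tooling used in the study, and the sample sentences (including the "ileum"/"ilium" substitution) are invented for demonstration.

```python
import difflib

def word_diffs(uncorrected: str, corrected: str):
    """Align two transcripts word-by-word and return candidate errors
    as (operation, uncorrected_words, corrected_words) tuples."""
    a, b = uncorrected.split(), corrected.split()
    matcher = difflib.SequenceMatcher(a=a, b=b)
    diffs = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":  # keep only insertions, deletions, replacements
            diffs.append((op, a[i1:i2], b[j1:j2]))
    return diffs

# Invented example: one deleted word and one homonym substitution.
diffs = word_diffs(
    "patient has pain in the ileum region",
    "the patient has pain in the ilium region",
)
```

Each non-equal opcode marks a candidate error that a reviewer would then assign to a category by hand.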
We found that the errors could be classified into nine categories: (1) enunciation errors occurring due to speaker mispronunciation, (2) dictionary errors resulting from missing terms, (3) suffix errors caused by misrecognition of the appropriate tense of a word, (4) added words, (5) deleted words, (6) homonym errors resulting from substitution of a phonetically identical word, (7) spelling errors, (8) nonsense errors, words or phrases whose meaning could not be recovered by examining the context alone, and (9) critical errors, words or phrases that could lead a reader of the note to misunderstand the concept the speaker intended.
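The nine categories above can be represented directly as a small taxonomy, with classification rules attached per category. The category names follow the study; the homophone lookup table and the `classify` rule below are invented illustrations, not the rules the experts actually developed.

```python
from enum import Enum

class ErrorCategory(Enum):
    """The nine error categories identified in the study."""
    ENUNCIATION = 1   # speaker mispronunciation
    DICTIONARY = 2    # term missing from the recognition dictionary
    SUFFIX = 3        # wrong tense or inflection of a word
    ADDED_WORD = 4
    DELETED_WORD = 5
    HOMONYM = 6       # phonetically identical substitution
    SPELLING = 7
    NONSENSE = 8      # meaning unrecoverable from context alone
    CRITICAL = 9      # reader could misunderstand the intended concept

# Hypothetical lookup table of clinically relevant homophone pairs.
HOMOPHONES = {"ileum": "ilium", "ilium": "ileum",
              "mucus": "mucous", "mucous": "mucus"}

def classify(recognized: str, intended: str):
    """Minimal sketch of one classification rule (homonym detection).
    The remaining rules are omitted; returns None when no rule fires."""
    if HOMOPHONES.get(recognized.lower()) == intended.lower():
        return ErrorCategory.HOMONYM
    return None
```

In practice such rules would be applied in priority order, with unresolvable cases escalated to the nonsense or critical categories by a human reviewer.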
A simple method is presented for examining errors in transcribed documents and classifying them into meaningful and useful categories. Such a classification can help pinpoint the sources of these errors so that measures (such as better training of the speaker and improved dictionary and language modeling) can be taken to reduce error rates.