Abstract
We are developing technology to translate acoustic characteristics of speech into visual cues that can supplement speechreading when hearing is limited. Research and theory have established that perceivers are influenced by multiple sources of sensory and contextual information in spoken language processing. Previous research has also shown that additional sources of information can be learned and used to supplement those that are normally available but have been degraded by sensory impairment or difficult environments. We tested whether people can combine or integrate information from the face with information from newly learned cues in an optimal manner. Subjects first learned the visual cues and were then tested under three conditions: words were presented with just the face, just the visual cues, or both together. Performance was much better with the two sources together than with either one alone. As with previous results for audible and visible speech, the present results were well described by the Fuzzy Logical Model of Perception (Massaro, 1998), which predicts optimal, or maximally efficient, integration.
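For reference, a sketch of the integration rule assumed here (following the standard FLMP formulation in Massaro, 1998; the symbols $f_i$ and $c_i$ are our notation for the degree of support that the face and the learned visual cues give to word alternative $w_i$): each source is evaluated independently, and the two are combined multiplicatively and renormalized,

\[
P(w_i \mid F, C) = \frac{f_i \, c_i}{\sum_k f_k \, c_k} .
\]

For two alternatives this reduces to $f c / [\,f c + (1 - f)(1 - c)\,]$, which exceeds either source alone whenever both $f$ and $c$ are above 0.5, consistent with the superior performance observed when the face and the learned cues were presented together.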