Multimodal Contextually-Aware Communication Aids
For most of us, speaking is effortless. For individuals with severe speech and motor disabilities, it is not only difficult but may be unattainable. Over two million Americans require assistive devices to convey their daily needs and desires. To construct messages with such devices, users point at a series of words or icons that are then spoken aloud using speech synthesis. Unfortunately, communication with currently available assistive aids is extremely slow and physically fatiguing. Recent advances in sensor technologies and speech and language processing techniques, together with the falling cost of computers, allow us to explore promising new approaches to assistive communication.
At the Communication Analysis and Design Laboratory (CadLab) at Northeastern University, we have been exploring vocal control in individuals with severe speech and motor disabilities and leveraging their residual control as an alternative communication channel. We have also been designing and developing a novel, contextually-aware assistive communication interface called iconCHAT. In this talk we will demonstrate the current prototype and discuss some of the human-computer interaction issues involved in developing such a system.