- Study conducted by Oviatt to explore multimodal integration and synchronization patterns that occur during pen- and speech-based human-computer interaction.
- Evaluate the linguistic features of spoken multimodal interfaces and how they differ from unimodal speech-recognition interfaces.
- Determine how spoken and written modes are naturally integrated and synchronized during multimodal input construction.