Graphics and User Interface Research References
- Ang, J. C. (2002), Prosodic Cues for Emotion Recognition in Communicator
Dialogs. UC Berkeley.
- Ang, J., Dhillon, R., Krupski, A., Shriberg, E., & Stolcke, A. (2002),
Prosody-Based Automatic Detection of Annoyance and Frustration in Human-Computer
Dialog. ICSLP-2002.
- Bers, J., Miller, S., & Makhoul, J. (1998). Designing conversational
interfaces with multimodal interaction. DARPA Workshop on Broadcast
News Understanding Systems, 319-321.
- Bolt, R.A. (1980). Put-that-there: Voice and gesture at the graphics
interface. Computer Graphics, 14 (3), 262-270.
- Clarkson, J.D., & Yi, J. (1996). LeatherNet: a synthetic forces tactical
training system for the USMC commander. Proceedings of the Sixth Conference
on Computer Generated Forces and Behavioral Representation. Orlando,
Florida, 275-281.
- Cohen, P.R. (1992), The role of natural language in a multimodal
interface. Proceedings of the 5th Annual ACM Symposium on User Interface
Software and Technology, 143-149, November 15-18, 1992, Monterey,
California, United States.
- Cohen, P.R., Johnston, M., McGee, D., Oviatt, S., Pittman, J., Smith,
I., Chen, L., & Clow, J. (1997). QuickSet: Multimodal Interaction for
Distributed Applications. Proceedings of the Fifth ACM International
Conference on Multimedia, 31-40, New York: ACM Press.
- Cohen, P. R., Johnston, M., McGee, D., Oviatt, S. L., Clow, J., & Smith,
I. (1997). The Efficiency of Multimodal Interaction: A Case Study. Proceedings
of the International Conference on Spoken Language Processing.
- Cohen, P.R. & Oviatt S.L. (1995), The role of voice input for human-machine
communication. Proceedings of the National Academy of Sciences, 92(22):
p.9921-9927.
- Dellaert, F., Polzin, T., & Waibel, A. (1996), Recognizing Emotion
in Speech. Carnegie Mellon University, Pittsburgh, PA.
- Généreux, M., Klein, A., & Trost, H., (2000), A Multimodal Speech
Interface for Accessing Web Pages. Conference TALN.
- Grasso, M. A. (1996), Speech Input in Multimodal Environments:
A Proposal to Study the Effects of Reference Visibility, Reference Number,
and Task Integration. PhD Thesis Proposal, University of Maryland, Baltimore.
- Grasso, M. A. (1997), Speech Input in Multimodal Environments:
Effects of Perceptual Structure on Speed, Accuracy, and Acceptance.
PhD Dissertation, University of Maryland, Baltimore.
- Harada, S., Hwang, J., Lee, B., & Stone, M. (2003). "Put-That-There": What,
Where, How? Integrating Speech and Gesture in Interactive Workspaces.
UbiComp 2003 Workshop 7, October 12, 2003, UBIHCISYS 2003 Online Proceedings.
- Holzman, T. G. (1999). Computer-human interface solutions for emergency
medical care. Interactions, 6(3), 13-24.
- Johnston, M., Cohen, P. R., McGee D., Oviatt, S. L., Pittman, J.
A. & Smith, I. (1997). Unification-based multimodal integration. In
Proceedings of the Thirty-Fifth Annual Meeting of the Association for
Computational Linguistics and Eighth Conference of the European Chapter
of the Association for Computational Linguistics.
- Karat, C.-M., Halverson, C., Horn, D., & Karat, J. (1999). Patterns
of entry and correction in large vocabulary continuous speech recognition
systems. Proceedings of the International Conference for Computer-Human
Interaction (CHI '99), 568-575. ACM Press.
- Klein, J. T. (1996), Computer Response to User Frustration.
MIT Media Laboratory Vision and Modeling Group Technical Report TR#480.
- Kurtz, C. F., & Snowden, D. J. (2003), The New Dynamics of Strategy:
Sense-Making in a Complex and Complicated World. IBM Systems Journal,
Vol. 42, No. 3.
- Neal, J. G. & Shapiro, S. C. (1988). Intelligent multi-media interface
technology. In J. W. Sullivan and S. W. Tyler, editors, Proceedings
of the Workshop on Architectures for Intelligent Interfaces: Elements
and Prototypes, 69-91. ACM/Addison-Wesley, 1988.
- Oviatt, S.L. (1996), Multimodal interfaces for dynamic interactive maps.
CHI '96 Conference Proceedings, New York, ACM Press, 95-102.
- Oviatt, S., Angeli, A. D., & Kuhn, K. (1997). Integration and synchronization
of input modes during multimodal human-computer interaction. Proc. of
the Conference on Human Factors in Computing Systems (CHI '97).
- Oviatt, S. (1999). Ten Myths of Multimodal Interaction. Communications
of the ACM, Vol. 42, No. 11, 74-81.
- Oviatt, S. L. (1999), "Mutual disambiguation of recognition errors in a
multimodal architecture." Proc. of the Conference on Human Factors in
Computing Systems (CHI '99).
- Oviatt, S., Cohen, P., Wu, L., Vergo, J., Duncan, L., Suhm, B., Bers,
J., Holzman, T., Winograd, T., Landay, J., Larson, J., & Ferro, D. (2000),
Designing the User Interface for Multimodal Speech and Pen-Based Gesture
Applications: State-of-the-Art Systems and Future Research Directions.
In Human-Computer Interaction, Volume 15, 263-322, Lawrence Erlbaum
Associates.
- Oviatt, S. (2002). Multimodal Interfaces. Chapter to appear in Handbook
of Human-Computer Interaction, Lawrence Erlbaum: New Jersey, 2002.
- Picard, R.W. (2000), Toward Computers that Recognize and Respond
to User Emotion. IBM Systems Journal, Vol. 39.
- Shneiderman, B. (1998), Designing the User Interface: Strategies
for Effective Human-Computer Interaction, Third Edition. Addison-Wesley,
Reading, MA.
- Vergo, J. (1998). A statistical approach to multimodal natural language
interaction. Proceedings of the AAAI '98 Workshop on Representations
for Multimodal Human-Computer Interaction, 81-85. AAAI Press.
- Wahlster, W., & Kobsa, A. (1986). Distinguishing user models from
discourse models. In Kobsa, A., & Wahlster, W., editors, Computational
Linguistics, Special Issue on User Modeling, 14 (3), 101-103.
- Wahlster, W. (1991). User and discourse models for multimodal communication.
In Sullivan, J. W., Tyler, S. W., & Sherman, W., editors, Intelligent
User Interfaces. New York: ACM Press, 45-67.
- Wu, L., Oviatt, S. L., & Cohen, P. R. (1999), Multimodal Integration -- A
Statistical View. IEEE Transactions on Multimedia, Vol. 1, No. 4, December
1999, 334-341.
- Xiao, B., Girand, C., & Oviatt, S. (2002). Multimodal Integration Patterns
in Children. In Proc. of the International Conference on Speech and Language
Processing 2002, 629-632.