7th ICMI 2005: Trento, Italy
- Gianni Lazzari, Fabio Pianesi, James L. Crowley, Kenji Mase, Sharon L. Oviatt:
Proceedings of the 7th International Conference on Multimodal Interfaces, ICMI 2005, Trento, Italy, October 4-6, 2005. ACM 2005, ISBN 1-59593-028-0
- Marc O. Ernst:
The "puzzle" of sensory perception: putting together multisensory information. 1
Recognition and multimodal gesture patterns
- Bee-Wah Lee, Alvin W. Yeo:
Integrating sketch and speech inputs using spatial information. 2-9
- Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, David Demirdjian:
Distributed pointing for multimodal collaboration over sketched diagrams. 10-17
- Louis-Philippe Morency, Candace L. Sidner, Christopher Lee, Trevor Darrell:
Contextual recognition of head gestures. 18-24
- Marc Hanheide, Christian Bauckhage, Gerhard Sagerer:
Combining environmental cues & head gestures to interact with wearable devices. 25-31
Posters
- Oliver Brdiczka, Jérôme Maisonnasse, Patrick Reignier:
Automatic detection of interaction groups. 32-36
- Yingen Xiong, Francis K. H. Quek:
Meeting room configuration and multiple camera calibration in meeting analysis. 37-44
- Giancarlo Iannizzotto, Carlo Costanzo, Francesco La Rosa, Pietro Lanzafame:
A multimodal perceptual user interface for video-surveillance environments. 45-52
- Sy Bor Wang, David Demirdjian:
Inferring body pose using speech content. 53-60
- Kai Nickel, Tobias Gehrig, Rainer Stiefelhagen, John W. McDonough:
A joint particle filter for audio-visual speaker tracking. 61-68
- Maria Danninger, G. Flaherty, Keni Bernardin, Hazim Kemal Ekenel, Thilo Köhler, Robert G. Malkin, Rainer Stiefelhagen, Alex Waibel:
The connector: facilitating context-aware communication. 69-75
- Marc Erich Latoschik:
A user interface framework for multimodal VR interactions. 76-83
- Cyril Rousseau, Yacine Bellik, Frédéric Vernier:
Multimodal output specification / simulation platform. 84-91
- Silvia Berti, Fabio Paternò:
Migratory MultiModal interfaces in MultiDevice environments. 92-99
- Lynne Baillie, Raimund Schatz:
Exploring multimodality in the laboratory and the field. 100-107
Visual attention
- Helmut Prendinger, Chunling Ma, Jin Yingzi, Arturo Nakasone, Mitsuru Ishizuka:
Understanding the effect of life-like interface agents through users' eye movements. 108-115
- Jiazhi Ou, Lui Min Oh, Susan R. Fussell, Tal Blum, Jie Yang:
Analyzing and predicting focus of attention in remote collaborative tasks. 116-123
- Oleg Spakov, Darius Miniotas:
Gaze-based selection of standard-size menu items. 124-128
- Norimichi Ukita, Tomohisa Ono, Masatsugu Kidode:
Region extraction of a gaze object using the gaze point and view image sequences. 129-136
- Hiroshi Ishiguro:
Interactive humanoids and androids as ideal interfaces for humans. 137
Semantics and dialog
- Peter Gorniak, Deb Roy:
Probabilistic grounding of situated speech using plan recognition and reference resolution. 138-143
- Robin Senior, Roel Vertegaal:
Augmenting conversational dialogue by means of latent semantic googling. 144-150
- Shuyin Li, Axel Haasch, Britta Wrede, Jannik Fritsch, Gerhard Sagerer:
Human-style interaction with a robot for cooperative learning of scene objects. 151-158
- Norbert Reithinger, Simon Bergweiler, Ralf Engel, Gerd Herzog, Norbert Pfleger, Massimo Romanelli, Daniel Sonntag:
A look under the hood: design and development of the first SmartWeb system demonstrator. 159-166
Recognizing communication patterns
- Rebecca Lunsford, Sharon L. Oviatt, Rachel Coulston:
Audio-visual cues distinguishing self- from system-directed speech in younger and older adults. 167-174
- Koen van Turnhout, Jacques M. B. Terken, Ilse Bakx, Berry Eggen:
Identifying the intended addressee in mixed human-human and human-computer interaction from non-verbal features. 175-182
- Daniel Gatica-Perez, Guillaume Lathoud, Jean-Marc Odobez, Iain McCowan:
Multimodal multispeaker probabilistic tracking in meetings. 183-190
- Kazuhiro Otsuka, Yoshinao Takemae, Junji Yamato:
A probabilistic inference of multiparty-conversation structure based on Markov-switching models of gaze patterns, head directions, and utterances. 191-198
- Alex Pentland:
Socially aware computation and communication. 199
Affective interaction
- Elena Not, Koray Balci, Fabio Pianesi, Massimo Zancanaro:
Synthetic characters as multichannel interfaces. 200-207
- Koray Balci:
XfaceEd: authoring tool for embodied conversational agents. 208-213
- Alberto Battocchi, Fabio Pianesi, Dina Goren-Bar:
A first evaluation study of a database of kinetic facial expressions (DaFEx). 214-221
- Steve Yohanan, Mavis Chan, Jeremy Hopkins, Haibo Sun, Karon E. MacLean:
Hapticat: exploration of affective touch. 222-229
Posters
- Umberto Giraudo, Monica Bordegoni:
Using observations of real designers at work to inform the development of a novel haptic modeling system. 230-235
- Mounia Ziat, Olivier Gapenne, John Stewart, Charles Lenay:
A comparison of two methods of scaling on form perception via a haptic interface. 236-243
- Meghan Allen, Jennifer Gluck, Karon E. MacLean, Erwin Tang:
An initial usability assessment for symbolic haptic rendering of music parameters. 244-251
- Wen Qi, Jean-Bernard Martens:
Tangible user interfaces for 3D clipping plane interaction with volumetric data: a case study. 252-258
- Adrian Stanciulescu, Quentin Limbourg, Jean Vanderdonckt, Benjamin Michotte, Francisco Montero Simarro:
A transformational approach for multimodal web user interfaces based on UsiXML. 259-266
- Tomoyuki Morita, Yasushi Hirano, Yasuyuki Sumi, Shoji Kajita, Kenji Mase:
A pattern mining method for interpretation of interaction. 267-273
- Fang Chen, Eric H. C. Choi, Julien Epps, Serge Lichman, Natalie Ruiz, Yu (David) Shi, Ronnie Taib, Mike Wu:
A study of manual gesture-based selection for the PEMMI multimodal transport management interface. 274-281
- Liang-Guo Zhang, Xilin Chen, Chunli Wang, Yiqiang Chen, Wen Gao:
Recognition of sign language subwords based on boosted hidden Markov models. 282-287
- Jose L. Hernandez-Rebollar:
Gesture-driven American sign language phraselator. 288-292
- Md. Altab Hossain, Rahmadi Kurnia, Akio Nakamura, Yoshinori Kuno:
Interactive vision to detect target objects for helper robots. 293-300
Tangible interfaces and universal access
- Melanie Baljko:
The contrastive evaluation of unimodal and multimodal interfaces for voice output communication aids. 301-308
- Rami Saarinen, Janne Järvi, Roope Raisamo, Jouni Salo:
Agent-based architecture for implementing multimodal learning environments for visually impaired children. 309-316
- Anthony Tang, Peter McLachlan, Karen Lowe, Chalapati Rao Saka, Karon E. MacLean:
Perceiving ordinal data haptically under workload. 317-324
- Eiji Tokunaga, Hiroaki Kimura, Nobuyuki Kobayashi, Tatsuo Nakajima:
Virtual tangible widgets: seamless universal interaction with personal sensing devices. 325-332