35th UIST 2022: Bend, OR, USA - Adjunct Volume
- Maneesh Agrawala, Jacob O. Wobbrock, Eytan Adar, Vidya Setlur: The Adjunct Publication of the 35th Annual ACM Symposium on User Interface Software and Technology, UIST 2022, Bend, OR, USA, 29 October 2022 - 2 November 2022. ACM 2022, ISBN 978-1-4503-9321-8
- Sven Kratz, Andrés Monroy-Hernández, Rajan Vaish: What's Cooking? Olfactory Sensing Using Off-the-Shelf Components. 1:1-1:3
- Hanbit Kim, Jaeho Sung, Joon Hyub Lee, Seok-Hyung Bae: RCSketch: Sketch, Build, and Control Your Dream Vehicles. 2:1-2:2
- Ching-Wen Hung, Ruei-Che Chang, Hong-Sheng Chen, Chung-Han Liang, Liwei Chan, Bing-Yu Chen: Puppeteer: Manipulating Human Avatar Actions with Intuitive Hand Gestures and Upper-Body Postures. 3:1-3:3
- Angelica M. Bonilla Fominaya, Rong Kang (Ron) Chew, Matthew L. Komar, Jeremia Lo, Alexandra Slabakis, Ningjing (Anita) Sun, Yunyi (Joyce) Zhang, David Lindlbauer: MoonBuddy: A Voice-based Augmented Reality User Interface That Supports Astronauts During Extravehicular Activities. 4:1-4:4
- Don Samitha Elvitigala, Rukshani Somarathna, Yijun Yan, Gelareh Mohammadi, Aaron Quigley: Towards using Involuntary Body Gestures for Measuring the User Engagement in VR Gaming. 5:1-5:3
- Seokwoo Song, Doil Kwon: Bodyweight Exercise based Exergame to Induce High Intensity Interval Training. 6:1-6:4
- Shiina Takano, Arinobu Niijima: Involuntary Exhalation Control by Facial Vibration. 7:1-7:3
- Martin Feick, Anthony Tang, Antonio Krüger: HapticPuppet: A Kinesthetic Mid-air Multidirectional Force-Feedback Drone-based Interface. 8:1-8:3
- Zhuohao (Jerry) Zhang, Jacob O. Wobbrock: A11yBoard: Using Multimodal Input and Output to Make Digital Artboards Accessible to Blind Users. 9:1-9:4
- Jay Kolvenbag, Miguel Bruns, Amy Winters: Rapid Prototyping Dynamic Robotic Fibers for Tunable Movement. 10:1-10:4
- Arthur Caetano, Misha Sra: ARfy: A Pipeline for Adapting 3D Scenes to Augmented Reality. 11:1-11:3
- Banafsheh Mohajeri, Jinghui Cheng: "Inconsistent Performance": Understanding Concerns of Real-World Users on Smart Mobile Health Applications Through Analyzing App Reviews. 12:1-12:4
- Joanne Leong, Olivia Seow, Cathy Mengying Fang, Benny Jun-Hong Tang, Rajan Vaish, Pattie Maes: Wemoji: Towards Designing Complementary Communication Systems in Augmented Reality. 13:1-13:3
- Wanhui Li, Takuto Nakamura, Jun Rekimoto: RemoconHanger: Making Head Rotation in Remote Person using the Hanger Reflex. 14:1-14:3
- Shunyi Yang, Yun Suen Pai, Kouta Minamizawa: SpiceWare: Simulating Spice Using Thermally Adjustable Dinnerware to Bridge Cultural Gaps. 15:1-15:3
- Ashraful Islam, Beenish Moalla Chaudhry: Early Usability Evaluation of a Relational Agent for the COVID-19 Pandemic. 16:1-16:3
- Changsung Lim, Jina Kim, Myung Jin Kim: Thumble: One-Handed 3D Object Manipulation Using a Thimble-Shaped Wearable Device in Virtual Reality. 17:1-17:3
- Mizuki Yabutani, Yuichi Mashiba, Homura Kawamura, Suzuha Harada, Keiichi Zempo: Sharing Heartbeat: Toward Conducting Heartrate and Speech Rhythm through Tactile Presentation of Pseudo-heartbeats. 18:1-18:4
- Aryan Saini, Haotian Huang, Rakesh Patibanda, Nathalie Overdevest, Elise van den Hoven, Florian 'Floyd' Mueller: SomaFlatables: Supporting Embodied Cognition through Pneumatic Bladders. 19:1-19:4
- Peiyu Zhang, Wen Ying, Seongkook Heo: Fringer: A Finger-Worn Passive Device Enabling Computer Vision Based Force Sensing Using Moiré Fringes. 20:1-20:3
- Naoki Kimura: Self-Supervised Approach for Few-shot Hand Gesture Recognition. 21:1-21:4
- Marx Boyuan Wang, Zachary Duer, Scotty Hardwig, Sam Lally, Alayna Ricard, Myounghoon Jeon: Echofluid: An Interface for Remote Choreography Learning and Co-creation Using Machine Learning Techniques. 22:1-22:4
- Zhuoyue Lyu, Jackie (Junrui) Yang, Monica S. Lam, James A. Landay: HomeView: Automatically Building Smart Home Digital Twins With Augmented Reality Headsets. 23:1-23:6
- Felix Dollack, Ryuta Yamaguchi, Panote Siriaraya, Tomoki Yoshihisa, Shinji Shimojo, Yukiko Kawai: Detecting Changes in User Emotions During Bicycle Riding by Sampling Facial Images. 24:1-24:3
- Mohammad Imrul Jubair, Ali Ahnaf, Tashfiq Nahiyan Khan, Ullash Bhattacharjee, Tanjila Joti: PerSign: Personalized Bangladeshi Sign Letters Synthesis. 25:1-25:3
- Hansoo Lee, Sangwook Lee, Youngji Koh, Uichin Lee: LV-Linker: Supporting Fine-grained User Interaction Analyses by Linking Smartphone Log and Recorded Video Data. 26:1-26:4
- Hirotaka Hiraki, Jun Rekimoto: SilentWhisper: faint whisper speech using wearable microphone. 27:1-27:3
- Dishita Turakhia, Peiling Jiang, Brent Liu, Mackenzie Leake, Stefanie Müller: The Reflective Maker: Using Reflection to Support Skill-learning in Makerspaces. 28:1-28:4
- Mohammad Imrul Jubair, Arafat Ibne Yousuf, Tashfiq Ahmed, Hasanath Jamy, Foisal Reza, Mohsena Ashraf: DIY Graphics Tab: A Cost-Effective Alternative to Graphics Tablet for Educators. 29:1-29:3
- Dominik Schön, Thomas Kosch, Martin Schmitz, Florian Müller, Sebastian Günther, Johannes Kreutz, Max Mühlhäuser: TrackItPipe: A Fabrication Pipeline To Incorporate Location and Rotation Tracking Into 3D Printed Objects. 30:1-30:5
- Marx Boyuan Wang, Sang Won Lee: TaskScape: Fostering Holistic View on To-do List With Tracking Plan and Emotion. 31:1-31:4
- Chang Xiao, Ryan A. Rossi, Eunyee Koh: iMarker: Instant and True-to-scale AR with Invisible Markers. 32:1-32:3
- Yixiao Kang, Zhenglin Zhang, Meiqi Zhao, Xuanhui Yang, Xubo Yang: Tie Memories to E-souvenirs: Hybrid Tangible AR Souvenirs in the Museum. 33:1-33:3
- Kayhan Latifzadeh, Luis A. Leiva: Gustav: Cross-device Cross-computer Synchronization of Sensory Signals. 34:1-34:3
- K. J. Kevin Feng, Alice Gao, Johanna Suvi Karras: Towards Semantically Aware Word Cloud Shape Generation. 35:1-35:5
- Md. Aashikur Rahman Azim, Adil Rahman, Seongkook Heo: Over-The-Shoulder Training Between Redundant Wearable Sensors for Unified Gesture Interactions. 36:1-36:3
- Sang-Hyun Lee, Taegyu Jin, Joon Hyub Lee, Seok-Hyung Bae: WireSketch: Bimanual Interactions for 3D Curve Networks in VR. 37:1-37:3
- Kazuki Kawamura, Jun Rekimoto: AIx speed: Playback Speed Optimization using Listening Comprehension of Speech Recognition Models. 38:1-38:3
- Yuki Sakashita, Yoshio Ishiguro, Kento Ohtani, Takanori Nishino, Kazuya Takeda: Methods of Gently Notifying Pedestrians of Approaching Objects when Listening to Music. 39:1-39:4
- Luna Takagi, Shio Miyafuji, Jefferson Pardomuan, Hideki Koike: LUNAChair: Remote Wheelchair System that Links Up a Remote Caregiver and Wheelchair Surroundings. 40:1-40:3
- Derrek Chow, Gracie Xia, Jasmine Ou: FormSense: A Fabrication Method to Support Shape Exploration of Interactive Prototypes. 41:1-41:3
- Xiemin Wei, Zixia Zheng, Hongning Shi, Yaqing Chai, Jiajia Li: Little Garden: An augmented reality game for older adults to promote body movement. 42:1-42:3
- Michael Cross, Leping Qiu, Mingyuan Zhong, Yuntao Wang, Yuanchun Shi: One-Dimensional Eye-Gaze Typing Interface for People with Locked-in Syndrome. 43:1-43:3
- Jefferson Pardomuan: ASTREL: Prototyping Shape-changing Interface with Variable Stiffness Soft Robotics Module. 44:1-44:3
- Wenxin Sun, Mengjie Huang, Chenxin Wu, Rui Yang: Exploring Virtual Object Translation in Head-Mounted Augmented Reality for Upper Limb Motor Rehabilitation with Motor Performance and Eye Movement Characteristics. 45:1-45:3
- Mondo Saito, Yasuto Nakanishi: Amplified Carousel: Amplifying the Perception of Vertical Movement using Optical Illusion. 46:1-46:4
- Marcus Friedel: HapticLever: Kinematic Force Feedback using a 3D Pantograph. 47:1-47:4
- Zixiong Su, Shitao Fang, Jun Rekimoto: LipLearner: Customizing Silent Speech Commands from Voice Input using One-shot Lipreading. 48:1-48:3
- Pak Ming Fan, Santawat Thanyadit, Ting-Chuen Pong: VLOGS: Virtual Laboratory Observation Tool for Monitoring a Group of Students. 49:1-49:2
- Patrick Ebel, Kim Julian Gülle, Christoph Lingenfelder, Andreas Vogelsang: ICEBOAT: An Interactive User Behavior Analysis Tool for Automotive User Interfaces. 50:1-50:3
- Rachana Sreedhar, Niveditha Samudrala, Nicole Tan, Shrenik Sadalgi: Search with Space: Find and Visualize Furniture in Your Space. 51:1-51:6
- Tatsuya Maeda, Keita Kuwayama, Kodai Ito, Kazuyuki Fujita, Yuichi Itoh: FullPull: A Stretchable UI to Input Pulling Strength on Touch Surfaces. 52:1-52:3
- Marx Boyuan Wang, Daniel Manesh, Ruipu Hu, Sang Won Lee: iThem: Programming Internet of Things Beyond Trigger-Action Pattern. 53:1-53:5
- Ayumu Ogura, Kodai Ito, Yuichi Itoh: Transtiff: A Stick Interface with Various Stiffness by Artificial Muscle Mechanism. 54:1-54:3
Demo Session
- Shahabedin Sagheb, Frank Wencheng Liu, Alex Vuong, Shiling Dai, Ryan Wirjadi, Yueming Bao, Robert LiKamWa: Demonstration of Geppetteau: Enabling haptic perceptions of virtual fluids in various vessel profiles using a string-driven haptic interface. 55:1-55:3
- Ticha Sethapakdi, Mackenzie Leake, Catalina Monsalve Rodriguez, Miranda Cai, Stefanie Müller: KineCAM: An Instant Camera for Animated Photographs. 56:1-56:2
- Mai Ohira, Soya Eguchi, Claire Okabe, Hiroya Tanaka: Demonstrating ex-CHOCHIN: Shape/Texture-changing cylindrical interface with deformable origami tessellation. 57:1-57:3
- Stephen MacNeil, Parth Patel, Benjamin E. Smolin: Expert Goggles: Detecting and Annotating Visualizations using a Machine Learning Classifier. 58:1-58:3
- Jiani Zeng, Honghao Deng, Yunyi Zhu, Michael Wessely, Axel Kilian, Stefanie Müller: Demonstration of Lenticular Objects: 3D Printed Objects with Lenticular Lens Surfaces That Can Change their Appearance Depending on the Viewpoint. 59:1-59:3
- Keito Uwaseki, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura: ConfusionLens: Dynamic and Interactive Visualization for Performance Analysis of Multiclass Image Classifiers. 60:1-60:3
- Donghyeon Ko, Myeongseong Kim, Woohun Lee: Thermoformable Shell for Repeatable Thermoforming. 61:1-61:3
- Zeyu Wang, Cuong Nguyen, Paul Asente, Julie Dorsey: Point Cloud Capture and Editing for AR Environmental Design. 62:1-62:3
- Ryota Gomi, Kazuki Takashima, Kazuyuki Fujita, Yoshifumi Kitamura: A Triangular Actuating Device Stand that Dynamically Adjusts Mobile Screen's Position. 63:1-63:4
- Lei Gao, James Hardwick, Diego Martínez Plasencia, Sriram Subramanian, Ryuji Hirayama: DATALEV: Acoustophoretic Data Physicalisation. 64:1-64:3
- Masayasu Sumiya, Wataru Yamada, Keiichi Ochiai: Anywhere Hoop: Virtual Free Throw Training System. 65:1-65:3
- Mariko Chiba, Wataru Yamada, Keiichi Ochiai: Shadowed Speech: an Audio Feedback System which Slows Down Speech Rate. 66:1-66:3
- Wataru Yamada: M&M: Molding and Melting Method Using a Replica Diffraction Grating Film and a Laser for Decorating Chocolate with Structural Color. 67:1-67:3
- Yuhan Hu, Isabel Neto, Jin Ryu, Ali Shtarbanov, Hugo Nicolau, Ana Paiva, Guy Hoffman: Touchibo: Multimodal Texture-Changing Robotic Platform for Shared Human Experiences. 68:1-68:3
- Yen-Ting Yeh, Fabrice Matulic, Daniel Vogel: Demonstrating Finger-Based Dexterous Phone Gestures. 69:1-69:3
- Blair Subbaraman, Nadya Peek: Demonstrating p5.fab: Direct Control of Digital Fabrication Machines from a Creative Coding Environment. 70:1-70:3
- Masatoshi Hamanaka: Music Scope Pad: Video Selecting Interface by Natural Movement in VR Space. 71:1-71:3
- Yuiko Suyama, Tetsuaki Baba: Extail: a Kinetic Inconspicuous Wearable Hair Extension Device. 72:1-72:4
- Sungjae Cho, Jungeun Lee, Inseok Hwang: TouchVR: A Modality for Instant VR Experience. 73:1-73:3
- Myung Jin Kim, Andrea Bianchi: SpinOcchietto: A Wearable Skin-Slip Haptic Device for Rendering Width and Motion of Objects Gripped Between the Fingertips. 74:1-74:3
- Keiichi Zempo, Ryo Kashiwabara, Naoto Wakatsuki, Koichi Mizutani: Silent subwoofer system using myoelectric stimulation to present the acoustic deep bass experiences. 75:1-75:3
- Haruki Takahashi, Jeeeun Kim: Designing a Hairy Haptic Display using 3D Printed Hairs and Perforated Plates. 76:1-76:3
- Andreas Pointner, Thomas Preindl, Sara Mlakar, Roland Aigner, Mira Alida Haberfellner, Michael Haller: Knitted Force Sensors. 77:1-77:3
- Hinako Kuroki, Tetsuaki Baba: Calligraphy Z: A Fabricatable Pen Plotter for Handwritten Strokes with Z-Axis Pen Pressure. 78:1-78:4
- Evan Pezent, Aakar Gupta, Hank Duhaime, Marcia O'Malley, Ali Israr, Majed Samad, Shea Robinson, Priyanshu Agarwal, Hrvoje Benko, Nick Colonnese: Explorations of Wrist Haptic Feedback for AR/VR Interactions with Tasbi. 79:1-79:5
- Mustafa Doga Dogan, Veerapatr Yotamornsunthorn, Ahmad Taka, Aakar Gupta, Stefanie Müller: InfraredTags Demo: Invisible AR Markers and Barcodes Using Infrared Imaging and 3D Printing. 80:1-80:5
- Danli Luo, Nadya Peek: Demonstrating a Fabricatable Bioreactor Toolkit for Small-Scale Biochemical Automation. 81:1-81:3
- Hannah Twigg-Smith, Nadya Peek: Demonstrating Dynamic Toolchains for Machine Control. 82:1-82:3
- Sarah Anne Kushner, Paul H. Dietz, Alec Jacobson: Interactive 3D Zoetrope with a Strobing Flashlight. 83:1-83:3
- Ricardo E. Gonzalez Penuela, Wren Poremba, Christina Trice, Shiri Azenkot: Hands-On: Using Gestures to Control Descriptions of a Virtual Environment for People with Visual Impairments. 84:1-84:4
- Tomohito Suzuki, Yuhei Imai, Hiroyuki Manabe: A bonding technique for electric circuit prototyping using conductive transfer foil and soldering iron. 85:1-85:3
- Tyler Peng, Mora Pochettino, Stefanie Müller: CircuitAssist: Automatically Dispensing Electronic Components to Facilitate Circuit Building. 86:1-86:3
- Yuki Kakui, Kota Araki, Changyo Han, Shogo Fukushima, Takeshi Naemura: Using a Dual-Camera Smartphone to Recognize Imperceptible 2D Barcodes Embedded in Videos. 87:1-87:3
Doctoral Symposium
- Xuhai Xu: Towards Future Health and Well-being: Bridging Behavior Modeling and Intervention. 88:1-88:5
- Tingyu Cheng: Environmental physical intelligence: Seamlessly deploying sensors and actuators to our everyday life. 89:1-89:5
- Dishita Turakhia: Designing Tools for Autodidactic Learning of Skills. 90:1-90:4
- Jingyi Li: Extending Computational Abstractions with Manual Craft for Visual Art Tools. 91:1-91:5
- Junyi Zhu: Design and Fabricate Personal Health Sensing Devices. 92:1-92:4
- Zhongyi Zhou: Exploiting and Guiding User Interaction in Interactive Machine Teaching. 93:1-93:5
- Eunice Jun: Empowering domain experts to author valid statistical analyses. 94:1-94:5
- John Joon Young Chung: Artistic User Expressions in AI-powered Creativity Support Tools. 95:1-95:4
Student Innovation Contest
- Wei-Hsin Wang, Hong-En Chen, Mike Y. Chen: UltraBat: An Interactive 3D Side-Scrolling Game using Ultrasound Levitation. 96:1-96:2
- Jiatong Li, Chenfeng Gao, Ken Nakagaki: ShadowAstro: Levitating Constellation Silhouette for Spatial Exploration and Learning. 97:1-97:3
- Jiwan Kim, Hyunjae Gil: Top-Levi: Multi-User Interactive System Using Acoustic Levitation. 98:1-98:3
- Hiroki Kawahara, Kaito Yamao, Kentaro Oda: Magic Drops: Food 3D Printing of Colored Liquid Balls by Ultrasound Levitation. 99:1-99:2
- Shutaro Aoyama, Kei Asano: Shadow Play using Ultrasound Levitated Props. 100:1-100:3
- Mehrad Faridan, Marcus Friedel, Ryo Suzuki: UltraBots: Large-Area Mid-Air Haptics for VR with Robotically Actuated Ultrasound Transducers. 101:1-101:3
- Ching-Yi Tsai, Chen-Kuo Sun, Lung-Pan Cheng: Garnish into Thin Air. 102:1-102:3
- Marwa Alalawi, Brandon M. Wong: LeviCircuits: Adhoc Electrical Circuit Prototyping using Ultrasound Levitation. 103:1-103:3
- Antonius Naumann, Paul Methfessel: Improving 3D-Editing Workflows via Acoustic Levitation. 104:1-104:4
- Mai Kamihori, Ayumu Ogura, Kodai Ito, Yuichi Itoh: DAWBalloon: An Intuitive Musical Interface Using Floating Balloons. 105:1-105:2