Sharing Space with Virtual Avatars
We are moving headlong into a future where we will spend increasing amounts of time with virtual avatars of our co-workers, families, and friends. My work includes collaborations on hardware designs for digitally capturing humans, as well as research exploring the nuances of what it takes to accept and build rapport with an avatar. My early research in this area examined how life-sized screen representations had a less profound effect than those viewed in HMDs. Further inquiry led me to discover that factors beyond visual fidelity, including accurate gait, proper gaze, and highly localized audio, are critical to our acceptance of a virtual human or avatar. My core observation is that context is as important as appearance. David Krum and I applied this insight to a project enabling interaction with a virtual human over a mobile device: simulated artifacts, such as a shaky hand-held camera and cell-phone audio, enhanced the connection between the user and the avatar. I also work on hardware and optical designs with Paul Debevec to create displays for conjuring holographic-like virtual humans.
Selected Publications
Rendering for an Interactive 360° Light Field Display. 2007: ACM Transactions on Graphics 26(3), Article No. 40.
Virtual head mounted camera. Debevec, P., Bolas, M., Fyffe, G. 2015: U.S. Patent Application No. 14/723,066 (Pending publication)
Rapid avatar capture and simulation using commodity depth sensors. Suma, E., Medioni, G., Bolas, M., Shapiro, A., Feng, W., Wang, R. 2015: U.S. Patent Application No. 14/694,670 (Pending publication)
Virtual Rapport
Rapid Avatar Capture
Projection Systems vs. Head-Mounted Displays
Body Language Cues: Gait
Virtual Humans Over Mobile Devices
Spinning Lightfield Display: Holographic Avatar