Sharing Space with Virtual Avatars
We are moving headlong into a world where we will spend increasing amounts of time with virtual avatars of our co-workers, families, and friends. My work includes collaborations on hardware designs for digitally capturing humans, as well as research exploring the nuances of what it takes to accept and build rapport with an avatar. My early research in this area showed that life-sized screen representations were less profound than those viewed in HMDs. Further inquiry revealed that factors beyond visual fidelity, including accurate gait, proper gaze, and highly localized audio, are critical to our acceptance of a virtual human or avatar. My core observation is that context is as important as appearance. David Krum and I applied this insight to a project enabling interaction with a virtual human over a mobile device: simulated artifacts such as shaky hand-held camera motion and cell-phone audio enhanced the connection between the user and the avatar. I also work on hardware and optical designs with Paul Debevec to create displays that conjure holographic-like virtual humans.
Rapid Avatar Capture and Simulation Using Commodity Depth Sensors. Computer Animation and Virtual Worlds 25(3-4), pp. 201-211, 2014.
Achieving Eye Contact in a One-to-Many 3D Video Teleconferencing System. ACM Trans Graph 28(3), 8 pages, 2009.
Rendering for an Interactive 360° Light Field Display. ACM Trans Graph 26(3), Article No. 40, 2007.
An Automultiscopic Projector Array for Interactive Digital Humans. ACM SIGGRAPH 2015 Emerging Technologies, New York, NY, 1 page, 2015. To appear.
Spatial Misregistration of Virtual Human Audio: Implications of the Precedence Effect. International Conference on Intelligent Virtual Agents, pp. 139-145, Springer, 2012.
Head-Mounted Photometric Stereo for Performance Capture. ACM SIGGRAPH 2010 Emerging Technologies, New York, NY, Article 14, 1 page, 2010.
HeadSPIN: A One-to-Many 3D Video Teleconferencing System. ACM SIGGRAPH 2009 Emerging Technologies, 2009.
Rendering for an Interactive 360° Light Field Display. ACM SIGGRAPH 2007 Emerging Technologies, 1 page, 2007.
Virtual Coaches Over Mobile Video. International Conference on Computer Animation and Social Agents (CASA), 4 pages, 2014.
Automatic Acquisition and Animation of Virtual Avatars. IEEE Virtual Reality, pp. 185-186, 2014. Demonstration abstract.
Sharing Space in Mixed and Virtual Reality Environments Using a Low-Cost Depth Sensor. IEEE International Symposium on Virtual Reality Innovations, pp. 353-354, 2011. Poster.
Prototyping a Light Field Display Involving Direct Observation of a Video Projector Array. IEEE Computer Vision and Pattern Recognition Workshops (PROCAMS), pp. 15-20, 2011.
Rendering for an interactive 360 degree light field display. Debevec, P., Jones, A., Bolas, M., McDowall, I. 2013: U.S. Patent 8,432,436
Virtual head mounted camera. Debevec, P., Bolas, M., Fyffe, G. 2015: U.S. Patent Application 14/723,066 (pending publication)
Rapid avatar capture and simulation using commodity depth sensors. Suma, E., Medioni, G., Bolas, M., Shapiro, A., Feng, W., Wang, R. 2015: U.S. Patent Application 14/694,670
Rapid Avatar Capture
Projection Systems vs. Head-Mounted Displays
Body Language Cues: Gait
Virtual Humans Over Mobile Devices
Spinning Lightfield Display: Holographic Avatar