In recent decades, we have witnessed advances in both hardware and associated algorithms, resulting in unprecedented access to volumes of 2D and, more recently, 3D data capturing human movement. We are no longer satisfied with recovering human pose as an image-space 2D skeleton, but seek to obtain a full 3D human body representation. The main challenges in acquiring 3D human shape from such raw measurements are identifying which parts of the data relate to body measurements and recovering from partial observations, often arising from severe occlusion — for example, a person occluded by a piece of furniture, or self-occluded in a profile view. In this paper, we propose POP, a novel and efficient paradigm for the estimation and completion of human shape that produces a full parametric 3D model directly from single RGBD images, even under severe occlusion. At the heart of our method is a novel human body pose retrieval formulation that explicitly models and handles occlusion. The retrieved result is then refined by a robust optimization to yield a full representation of the human shape. We demonstrate our method on a range of challenging real-world scenarios and produce high-quality results not possible with competing alternatives. The method opens up exciting AR/VR application possibilities by working on 'in-the-wild' measurements of human motion.
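The retrieve-then-refine pipeline described above can be illustrated with a minimal sketch. This is not POP's actual algorithm; the masked-distance retrieval, the function names, and the completion rule are all illustrative assumptions: given a partial 3D joint observation with a per-joint visibility mask, score each database pose only on the visible joints, retrieve the best match, and use it to fill in the occluded joints.

```python
import numpy as np

def retrieve_pose(observed, visible, database):
    """Occlusion-aware retrieval (illustrative assumption, not POP's method).

    observed: (J, 3) array of observed joint positions (garbage where occluded)
    visible:  (J,)   boolean mask, True where the joint was observed
    database: (N, J, 3) candidate full-body poses
    Returns the database pose with the lowest mean error over visible joints.
    """
    diff = database - observed[None]           # (N, J, 3) per-joint offsets
    dist = np.linalg.norm(diff, axis=2)        # (N, J) per-joint errors
    # Average the error over visible joints only, ignoring occluded ones.
    score = (dist * visible[None]).sum(axis=1) / visible.sum()
    return database[np.argmin(score)]

def complete_pose(observed, visible, retrieved):
    """Keep observed joints; take occluded joints from the retrieved pose."""
    out = retrieved.copy()
    out[visible] = observed[visible]
    return out
```

In a full system, the completed pose would then seed a robust optimization over the parametric body model; here the completion step simply splices the retrieved joints into the occluded slots.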
Title: POP: Full Parametric model Estimation for Occluded People
Publication date: 2019
Appears in type: 04.01 Contribution in conference proceedings