Particle Based 3D Hair Reconstruction Using Kinect and High Resolution Camera

Description
Title: Particle Based 3D Hair Reconstruction Using Kinect and High Resolution Camera
Authors: Li, Zhongrui
Date: 2015
Abstract: Hair modeling based on real-life capture is a rising and challenging topic in the field of human modeling and animation. Typical automatic hair-capture methods use several 2D images to reconstruct a 3D hair model. Most of them adopt 3D polygons to represent hair strands, and a few recent strand-based methods require heavy hardware setups. We introduce an approach to capture real hair using affordable and common devices, a depth sensor and a camera, to reconstruct a 3D hair model based on a particle system. The Kinect™ sensor from Microsoft is chosen to capture 3D depth data. However, as Kinect depth data are known to be noisy and its 2D texture image is of low quality, an additional DSLR camera is employed in the system to capture a high-resolution image for hair-strand extraction. The proposed approach registers the 3D hair point cloud and the high-resolution image in the same space, extracts the hair strands manually from the image, and then generates 3D hair strands based on the Kinect depth information. Eventually, a particle-based 3D hair model is reconstructed. The proposed method captures 360-degree views by collecting datasets of real-life hair with four sets of Kinect sensors and DSLR cameras at four viewpoints. We register the DSLR camera image in the space of the Kinect to build the mapping relationship between 2D and 3D. The image from the DSLR camera can therefore be mapped onto the point cloud, replacing the existing Kinect texture image and resulting in a new high-quality texture for the 3D data. Next, we manually select the hair strands in the high-resolution image and use control points to represent each hair strand as a spline curve. These 2D control points are then projected onto the 3D point cloud to obtain the corresponding 3D information. In the 2D image, some hair strands are partially occluded by others, so an occluded strand is separated into two segments in 3D.
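The projection of 2D strand control points onto the point cloud can be sketched with a standard pinhole back-projection. This is a minimal illustration, assuming a calibrated camera with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`) and a registered per-pixel depth map; it is not the thesis's actual calibration pipeline.

```python
# Sketch: back-project 2D hair-strand control points (u, v) into 3D
# (x, y, z) using a registered depth map and pinhole intrinsics.
# fx, fy, cx, cy and the depth lookup are illustrative assumptions.

def backproject_strand(points_2d, depth, fx, fy, cx, cy):
    """Map 2D (u, v) control points to 3D points via the depth map."""
    strand_3d = []
    for u, v in points_2d:
        z = depth[v][u]          # depth (e.g. metres) at this pixel
        if z <= 0:               # Kinect reports 0 where depth is missing
            continue             # skip holes; they can be filled later
        x = (u - cx) * z / fx    # standard pinhole back-projection
        y = (v - cy) * z / fy
        strand_3d.append((x, y, z))
    return strand_3d

# Toy usage: a flat 4x4 depth map at 1.0 m, one control point at the
# (hypothetical) principal point maps to (0, 0, 1).
depth_map = [[1.0] * 4 for _ in range(4)]
pts = backproject_strand([(2, 1)], depth_map, fx=1.0, fy=1.0, cx=2.0, cy=1.0)
```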
An algorithm is applied to analyze and build the connections between the hair-strand segments. Meanwhile, the 3D hair strands are refined: filtering and interpolation techniques are applied to the 3D strand splines to generate smoother strands. Finally, we reconstruct the 3D hair model, in which the strands are represented in the particle system. Our method, combining a depth sensor and a high-resolution camera, is novel and has advantages that other approaches do not: (i) the hardware setup is simple and affordable; (ii) combining the high-quality DSLR image with the Kinect depth takes advantage of each device's strength; (iii) the combined 2D and 3D method allows us to repair and refine the 3D data; (iv) the spline-based hair representation can be used to construct a hair particle system, which offers many advantages for hair animation and simulation.
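The smoothing step above can be sketched as a simple moving-average filter over each 3D strand polyline. This is an illustrative stand-in under stated assumptions: the thesis names filtering and interpolation only in general, so the window-based averaging here is one plausible choice, not the author's exact method.

```python
# Sketch: smooth a 3D hair-strand polyline with a moving-average
# filter. The window size is an illustrative parameter; the thesis
# does not specify the exact filter used.

def smooth_strand(points, window=3):
    """Average each point with its neighbours inside the window."""
    half = window // 2
    smoothed = []
    for i in range(len(points)):
        lo = max(0, i - half)            # clamp window at the ends
        hi = min(len(points), i + half + 1)
        seg = points[lo:hi]
        n = len(seg)
        smoothed.append(tuple(sum(p[k] for p in seg) / n for k in range(3)))
    return smoothed

# Toy usage: a straight strand stays on its line; endpoints shift
# toward their single neighbour because the window is clamped.
strand = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
smoothed = smooth_strand(strand)
```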
URL: http://hdl.handle.net/10393/32068
http://dx.doi.org/10.20381/ruor-2763
Collection: Theses, 2011 -
Files
Li_Zhongrui_2015_Thesis.pdf (Thesis, 6.14 MB, Adobe PDF)
Li_Zhongrui_2015_video.wmv (146.81 MB, WMV, Windows Media Video)