Author: Cordea, Marius Daniel
Date: 2007
Source: Dissertation Abstracts International, Volume: 68-11, Section: B, page: 7534.
http://hdl.handle.net/10393/29467
http://dx.doi.org/10.20381/ruor-19762

The work of this thesis focuses on key areas of human-computer interaction (HCI): rigid facial motion recovery, and facial expression analysis and interpretation. Rigid motion recovery from image sequences is based on Structure-From-Motion (SFM) using Kalman Filter-based recursive algorithms. Facial expression analysis is performed by an Active Appearance Model (AAM), a statistical model based on the estimation of linear models of shape and texture variation. The thesis integrates the newly developed algorithms into an Automatic Facial Tracking System (AFTS) for a low-bit-rate videophone system. The first contribution of this thesis is a new method for modeling the shape and appearance of three-dimensional (3D) human faces using a constrained 3D Active Appearance Model (AAM). The proposed algorithm is an extension of the classical 2D Active Appearance Model. It uses a generic 3D wireframe model of the face based on two sets of controls: anatomically motivated muscle actuators to model facial expressions, and statistically based anthropometric controls to model different facial types (the 3D Anthropometric Muscle-Based Active Appearance Model, or 3D AMB AAM). This allows a facial image to be described in terms of a controlled model parameter set, providing both a natural and a constrained basis for face segmentation and analysis. The generated face models are consequently simpler and less memory-intensive than classical appearance-based models. The proposed method provides accurate fitting results by constraining solutions to be valid instances of a face model. Extensive image segmentation experiments demonstrate the accuracy of the proposed algorithm against the classical AAM.
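As a rough illustration of the linear shape model at the core of any AAM, the sketch below builds a PCA model from aligned landmark vectors and projects a new shape onto it. This is the standard textbook formulation s = s_mean + P_s·b, not the thesis's actual implementation; all function names and data here are invented for illustration.

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.999):
    """PCA shape model from aligned landmark vectors (one shape per row).

    Returns the mean shape and the principal modes P_s (as columns) that
    retain the requested fraction of shape variance.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal modes of variation.
    _, svals, vt = np.linalg.svd(centered, full_matrices=False)
    var = svals ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    return mean, vt[:k].T

def project(shape, mean, modes):
    """Shape parameters b for a new shape: b = P_s^T (s - s_mean)."""
    return modes.T @ (shape - mean)

def reconstruct(b, mean, modes):
    """Model instance s = s_mean + P_s b -- always a valid model shape."""
    return mean + modes @ b
```

Constraining b (e.g. clipping each component to a few standard deviations of its mode) is what keeps fitted solutions valid instances of the face model, which is the property the abstract credits for the method's accuracy.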
The second contribution of this thesis is a new 3D tracking algorithm allowing real-time recovery of the 3D position, orientation, and facial expressions of a moving head. The described method uses a recursive motion estimation algorithm, an Extended Kalman Filter (EKF), to extract the head pose (global motion), and the newly developed 3D AMB AAM to extract the facial expressions (local motion). The resulting motion tracking system works in a realistic environment: without makeup on the face, with an uncalibrated camera, and under unknown lighting conditions and background. To validate the accuracy of the 3D head tracking system, a rapid calibration technique was developed using a sequence of images of a synthetic "standard" 3D head in lieu of a real head.

159 p.
Language: en
Subject: Engineering, Electronics and Electrical
Title: A three-dimensional anthropometric muscle-based active appearance model for model-based video coding
Type: Thesis
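The predict/update recursion behind the head-pose EKF can be sketched as follows. The thesis tracks 3D pose through a nonlinear camera projection (hence the *Extended* Kalman Filter); this toy keeps the recursion linear (constant-velocity motion, direct position measurement) so only the structure of the recursion is shown. All matrices and names below are illustrative assumptions, not the thesis's models.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict + update cycle of the Kalman recursion."""
    # Predict: propagate the state and its covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z.
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the EKF variant, F and H are replaced by the Jacobians of the nonlinear motion and projection functions evaluated at the current estimate; the predict/update structure is otherwise unchanged.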