Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language

Description
Title: Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language
Authors: Zhi, Da
Date: 2018-06-07
Abstract: This thesis presents a novel depth camera-based real-time hand gesture recognition system for training a human-like robot hand to interact with humans through sign language. We developed a modular real-time Hand Gesture Recognition (HGR) system, which uses a multiclass Support Vector Machine (SVM) for training and recognition of static hand postures and N-Dimensional Dynamic Time Warping (ND-DTW) for dynamic hand gesture recognition. A 3D hand gesture training/testing dataset was recorded using a depth camera, tailored to accommodate the kinematic limitations of the human-like robotic hand's construction. Experimental results show that the multiclass SVM method achieves an overall 98.34% recognition rate in the HRI (Human-Robot Interaction) mode and a 99.94% recognition rate in the RRI (Robot-Robot Interaction) mode, as well as the lowest average run time compared to the k-NN (k-Nearest Neighbour) and ANBC (Adaptive Naïve Bayes Classifier) approaches. In dynamic gesture recognition, the ND-DTW classifier outperforms the DHMM (Discrete Hidden Markov Model), with a 97% recognition rate and a significantly shorter run time. In conclusion, the combination of multiclass SVM and ND-DTW provides an efficient solution for real-time recognition of the hand gestures used to train a robot arm to perform sign language.
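The thesis's own implementation is not reproduced in this record, but the ND-DTW classification idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the author's code: the function names, the per-frame Euclidean cost, and the nearest-template classifier are all assumptions.

```python
import numpy as np

def nd_dtw(a, b):
    """DTW distance between two N-dimensional sequences a and b,
    each shaped (num_frames, num_features); per-frame cost is the
    Euclidean distance between feature vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    m, n = len(a), len(b)
    # D[i, j] = cost of the best warping path aligning a[:i] with b[:j]
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]

def classify_gesture(query, templates):
    """Nearest-template classification: return the label whose
    template sequence has the smallest DTW distance to the query."""
    return min(templates, key=lambda label: nd_dtw(query, templates[label]))
```

Because DTW warps the time axis, two recordings of the same gesture performed at different speeds can still align with a small distance, which is what makes a simple nearest-template classifier viable for dynamic gestures.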
URL: http://hdl.handle.net/10393/37768
http://dx.doi.org/10.20381/ruor-22030
Collection: Theses, 2011 -
Files
Zhi_Da_2018_thesis.pdf (Da Zhi Masters thesis, 3.3 MB, Adobe PDF)