Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language

dc.contributor.author: Zhi, Da
dc.description.abstract: This thesis presents a novel depth camera-based real-time hand gesture recognition system for training a human-like robot hand to interact with humans through sign language. We developed a modular real-time Hand Gesture Recognition (HGR) system that uses a multiclass Support Vector Machine (SVM) for training and recognition of static hand postures and N-Dimensional Dynamic Time Warping (ND-DTW) for dynamic hand gesture recognition. A 3D hand gesture training/testing dataset was recorded with a depth camera and tailored to accommodate the kinematic and constructional limitations of the human-like robotic hand. Experimental results show that the multiclass SVM method achieves an overall 98.34% recognition rate in the HRI (Human-Robot Interaction) mode and a 99.94% recognition rate in the RRI (Robot-Robot Interaction) mode, as well as the lowest average run time compared to the k-NN (k-Nearest Neighbour) and ANBC (Adaptive Naïve Bayes Classifier) approaches. In dynamic gesture recognition, the ND-DTW classifier outperforms the DHMM (Discrete Hidden Markov Model), with a 97% recognition rate and a significantly shorter run time. In conclusion, the combination of multiclass SVM and ND-DTW provides an efficient solution for real-time recognition of the hand gestures used to train a robot arm to perform sign language.
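The thesis's ND-DTW classifier is not reproduced in this record; as a rough illustration of the underlying idea, the following is a minimal sketch of the standard dynamic time warping distance generalized to N-dimensional frame sequences, using a Euclidean per-frame cost. The function name, array shapes, and cost choice are assumptions for illustration only, not the thesis's implementation.

```python
import numpy as np

def nd_dtw(a, b):
    """DTW distance between two sequences of d-dimensional frames.

    a: array of shape (n, d); b: array of shape (m, d).
    Returns the cumulative alignment cost between the sequences.
    """
    n, m = len(a), len(b)
    # D[i, j] = minimal cost of aligning a[:i] with b[:j]
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # Euclidean frame distance
            # Extend the cheapest of the three allowed warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

In a nearest-template classifier of this kind, a test gesture would be assigned the label of the training sequence minimizing this distance; identical sequences yield a distance of zero regardless of dimensionality.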
dc.publisher: Université d'Ottawa / University of Ottawa
dc.subject: Human-Robot Interaction
dc.subject: Hand Gesture Recognition
dc.title: Depth Camera-Based Hand Gesture Recognition for Training a Robot to Perform Sign Language
dc.contributor.supervisor: Petriu, Emil / Engineering
uottawa.department: Science informatique et génie électrique / Electrical Engineering and Computer Science
Collection: Thèses, 2011 - // Theses, 2011 -

Zhi_Da_2018_thesis.pdf: Da Zhi, Master's thesis (3.3 MB, Adobe PDF)