
A Deep-Learning Approach for Marker-less Stride Parameters Analysis with Two Cameras

dc.contributor.authorDorrikhteh, Masoud
dc.contributor.supervisorPeyton, Liam
dc.date.accessioned2021-08-10T18:56:31Z
dc.date.available2021-08-10T18:56:31Z
dc.date.issued2021-08-10en_US
dc.description.abstractHuman gait analysis is an essential indicator of an individual's physical and neurological health. Recent developments in deep-learning approaches to computer vision enable new techniques for body-segment and joint detection from photos and video frames. In this thesis, we propose a deep-learning approach for non-invasive, video-based gait analysis using two RGB cameras that is suitable for routine gait monitoring in senior-care and rehabilitation centers. Because of its modularity and low implementation cost, it is an affordable solution for such centers. Furthermore, since the solution does not require any markers or sensors to be worn, it is unobtrusive and convenient for daily use. Our proposed deep-learning approach starts by calibrating both the intrinsic and extrinsic parameters of the cameras. Next, video streams captured from the two RGB cameras are used as input, and the OpenPose and HyperPose deep-learning frameworks are used to localize the main body key points, including the joints and skeleton, based on the Body 25 and COCO models, respectively. The 2D outputs from the frameworks are triangulated into 3D vector space for further analysis. To reduce the noise in our data, we applied median and dual-pass Butterworth filters. Finally, gait parameters were extracted, measured, and compared to ground-truth data obtained through manual measurement by a domain expert. The approach was evaluated in a laboratory setting resembling an institutional hallway in five types of trials: walking back and forth in a straight line while turning out of frame, walking back and forth in a straight line while turning in frame, circular walking, walking with a cane, and walking with a walker. The method yields promising results compared to more expensive and restrictive approaches that use up to 16 cameras and require markers or sensors.en_US
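The pipeline the abstract describes, triangulating 2D key points from two calibrated cameras into 3D and then smoothing the trajectories with a median filter followed by a dual-pass Butterworth filter, can be sketched as follows. This is a minimal illustration, not the thesis code: the projection matrices, key-point track, frame rate (30 fps), and cutoff frequency (6 Hz) are assumed placeholder values, and linear (DLT) triangulation stands in for whatever triangulation routine the thesis uses.

```python
# Minimal sketch of the two-camera pipeline described in the abstract:
# triangulate 2D key points into 3D, then denoise with a median filter
# followed by a dual-pass (zero-phase) Butterworth low-pass filter.
# Camera matrices, frame rate, and cutoff frequency are assumed values.
import numpy as np
from scipy.signal import butter, filtfilt, medfilt


def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one key point seen in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize


def project(P, X):
    """Project a 3D point into a camera with projection matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]


# Two calibrated projection matrices P = K [R | t] (identity intrinsics,
# second camera translated 1 m along x -- placeholder calibration).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Synthetic key-point track: a joint drifting slowly over 60 frames at z = 2 m.
frames = 60
truth = np.stack([np.linspace(0.5, 0.8, frames),
                  np.full(frames, 0.2),
                  np.full(frames, 2.0)], axis=1)

# Observe the point in both views, then triangulate frame by frame.
traj = np.array([triangulate_dlt(P1, P2, project(P1, X), project(P2, X))
                 for X in truth])

# Median filter suppresses outlier spikes; filtfilt then applies the
# Butterworth filter forward and backward -- a dual pass that cancels
# the phase lag a single pass would introduce into stride timing.
b, a = butter(N=4, Wn=6.0, btype="low", fs=30.0)  # 6 Hz cutoff, 30 fps assumed
smoothed = filtfilt(b, a, medfilt(traj, kernel_size=(5, 1)), axis=0)
```

SciPy's `filtfilt` is a natural fit for the "dual-pass" filtering the abstract mentions: running the filter forward and then backward doubles the effective attenuation while leaving the temporal location of gait events (heel strikes, toe-offs) unshifted.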
dc.identifier.urihttp://hdl.handle.net/10393/42511
dc.identifier.urihttp://dx.doi.org/10.20381/ruor-26731
dc.language.isoenen_US
dc.publisherUniversité d'Ottawa / University of Ottawaen_US
dc.subjectGait Analysisen_US
dc.subjectDeep-Learningen_US
dc.subjectMarker-less Analysisen_US
dc.subjectAIen_US
dc.subjectComputer Visionen_US
dc.subjectAI in Healthen_US
dc.titleA Deep-Learning Approach for Marker-less Stride Parameters Analysis with Two Camerasen_US
dc.typeThesisen_US
thesis.degree.disciplineGénie / Engineeringen_US
thesis.degree.levelMastersen_US
thesis.degree.nameMScen_US
uottawa.departmentScience informatique et génie électrique / Electrical Engineering and Computer Scienceen_US

Files

Original bundle
Name: Dorrikhteh_Masoud_2021_thesis.pdf
Size: 4.32 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 6.65 KB
Item-specific license agreed upon to submission