Author: Hu, Xiao
Date: 2020-08-13
Handle: http://hdl.handle.net/10393/40829
DOI: http://dx.doi.org/10.20381/ruor-25055
Title: DOE-SLAM: Dynamic Object Enhanced Visual SLAM
Type: Thesis
Language: en
Keywords: Visual SLAM; Motion Detection; AR

Abstract: A notable limitation of feature-based visual simultaneous localization and mapping (vSLAM) in dynamic environments is severe drift of the position estimate, which can result in a complete loss of localization. State-of-the-art dynamic monocular vSLAM methods mask out all foreground objects and use only background features. This improves accuracy but also reduces the number of usable features in many scenes, leading to unstable tracking. Instead, we formulate a novel strategy for monocular vSLAM that uses moving objects in the scene to improve accuracy, and we extend ORB-SLAM2 to adapt to dynamic environments, estimating not only the camera trajectory from background features but also the motion of foreground objects. When there are not enough background features for tracking, our method can use features from an object, together with a prediction of that object's motion, to approximate the camera pose. We evaluate our system on various datasets, and our analysis shows that we achieve better pose estimation accuracy and robustness than state-of-the-art monocular vSLAM systems.
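The fallback described in the abstract — recovering the camera pose from a tracked object when background features are scarce — can be illustrated with a small sketch. This is not the thesis's implementation; it only shows the underlying rigid-transform identity, assuming the object's world-frame pose can be predicted (e.g. by a constant-velocity motion model) and its camera-frame pose can be estimated from foreground object features (e.g. via PnP). The function name and both inputs are hypothetical.

```python
import numpy as np

def camera_pose_from_object(T_world_obj_pred, T_cam_obj):
    """Approximate the world-frame camera pose from a tracked object.

    T_world_obj_pred : 4x4 predicted object pose in the world frame
                       (assumed to come from an object motion model).
    T_cam_obj        : 4x4 object pose in the camera frame, assumed to be
                       estimated from foreground object features (e.g. PnP).

    Chaining the two transforms gives the camera-to-world pose:
        T_world_cam = T_world_obj_pred @ inv(T_cam_obj)
    """
    return T_world_obj_pred @ np.linalg.inv(T_cam_obj)
```

The identity holds exactly when the motion prediction and the object-feature pose estimate are exact; in practice both carry error, which is why the abstract presents this only as an approximation of the camera pose for scenes where background tracking fails.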