Author: Azadi, Amirhossein
Date: 2025-12-17
Handle: http://hdl.handle.net/10393/51184
DOI: https://doi.org/10.20381/ruor-31620

Abstract:
Ice hockey poses a significant risk of head impacts, increasing the likelihood of traumatic brain injuries (TBIs), including concussions. Youth players constitute the majority of the ice hockey population, accounting for approximately 69% of all registered players in the United States alone. Notably, 15-25% of all hockey-related injuries in this age group involve head trauma, underscoring the high prevalence of TBIs among youth participants. Younger players face distinct vulnerabilities due to their larger head-to-body ratio, lower skill levels, weaker neck muscles, and less refined motor control, all of which contribute to a playing style that elevates their risk of head impacts. Furthermore, youth leagues often lack access to professional-grade monitoring systems, such as helmet-based sensors or on-site clinical staff, making early detection of and response to brain injuries particularly challenging. To address these limitations, this thesis introduces a video-based, artificial intelligence (AI)-powered system designed to automatically detect and analyze head impacts in youth sports, facilitating both large-scale dataset generation and injury assessment. Chronologically, the first component of this work, referred to as Study 2 in the thesis, focused on head impact detection. This study was conducted using professional hockey footage, which offered consistent lighting, high video resolution, and clear player visibility, allowing for early testing under ideal visual conditions. In this phase, we manually annotated head impact moments by reviewing full-game recordings and cropping video segments to center on and isolate each player involved in the contact.
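The player-centered cropping step described above can be sketched as follows. This is an illustrative sketch only: the function name, the fixed 224-pixel crop size, and the clamping behaviour are assumptions for this example, not the thesis's actual preprocessing parameters.

```python
import numpy as np

def player_centered_crop(frame: np.ndarray, box: tuple, size: int = 224) -> np.ndarray:
    """Crop a square window centered on a player's bounding box.

    frame : H x W x 3 image array
    box   : (x1, y1, x2, y2) player bounding box in pixel coordinates
    size  : side length of the output crop (assumed value, not from the thesis)
    """
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) // 2, (y1 + y2) // 2  # center of the player's box
    half = size // 2
    h, w = frame.shape[:2]
    # Clamp the window so it stays fully inside the frame.
    left = min(max(cx - half, 0), max(w - size, 0))
    top = min(max(cy - half, 0), max(h - size, 0))
    return frame[top:top + size, left:left + size]

# Applying the same crop to every frame of an annotated segment yields
# a player-centered clip of the kind used to train the impact classifier.
```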
To automate this process and enable future scalability, we also introduced a detection and tracking pipeline tailored to professional players, combining the extra-large You Only Look Once version 8 model (YOLOv8x) for player detection - achieving a precision of 0.97, a recall of 0.97, and a mean average precision of 0.99 at a 50% intersection over union (IoU) threshold (mAP50) - with StrongSORT for maintaining player identity across frames. The resulting player-centered clips were used to train a long-term recurrent convolutional network (LRCN), which achieved an accuracy of 0.87 in distinguishing head impact from non-head impact clips. While this study demonstrated the feasibility of video-based head impact detection, it was limited by a small, manually constructed dataset, which restricted scalability and introduced potential inconsistencies in annotation. Building on the head impact detection framework, the second phase focused on developing a scalable, youth-specific contact detection pipeline. This shift was motivated by a key limitation of the first phase: head impacts are relatively rare events, especially in youth games, making it difficult to build large training datasets for reliable head impact classification. By broadening the scope to detect all physical contact events, not just head impacts, we were able to substantially increase the pool of relevant video clips and create a more scalable and efficient system for data generation and pre-filtering. A fine-tuned YOLOv8 model demonstrated strong player detection performance across diverse youth age groups and challenging visual environments, achieving a precision of 0.96, a recall of 0.97, an average precision at a 70% IoU threshold (AP0.7) of 0.94, an mAP50 of 0.97, and an mAP50-95 (averaged over IoU thresholds from 50% to 95%) of 0.82. For tracking, we evaluated the StrongSORT-based module on 100 benchmark clips characterized by high occlusion and rapid movement.
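The detection metrics above (mAP50, AP at a 70% IoU threshold) all hinge on the intersection over union between predicted and ground-truth boxes. A minimal IoU implementation, written here from the standard definition rather than taken from the thesis code, looks like:

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted box counts as a true positive at the mAP50 threshold when
# iou(pred, gt) >= 0.5, and at the stricter AP0.7 threshold when >= 0.7.
```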
The integration of an IoU-based cost term reduced identity switches from 172 to 53 (a 69% improvement) and improved Multiple Object Tracking Accuracy (MOTA) from 89.0% to 94.5%, confirming the system's robustness under realistic youth hockey conditions. A temporal shift module (TSM) network was then applied to classify physical contact events (e.g., player-to-player, player-to-board), achieving a recall of 0.94 and reducing manual review time from several hours to under 30 minutes per game. The final study introduced a deep learning model to estimate maximum principal strain (MPS), a validated biomechanical indicator of brain tissue deformation, directly from video-derived features. Using 477 reconstructed youth hockey impacts involving players aged 5-17, four features were extracted from video for each event - impact velocity, surface compliance, impact location, and elevation - with player age group included as an additional non-video input. A fully connected neural network trained on this dataset achieved strong predictive performance, with a test-set coefficient of determination (R2) of 0.89 and a mean squared error (MSE) of 0.0015. Permutation importance analysis confirmed that impact velocity was the most influential predictor. These results demonstrate the feasibility of estimating brain strain from video features and basic contextual information, offering a scalable, non-invasive approach for monitoring injury risk in youth sports.
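Permutation importance, used above to rank the strain predictors, scores each input feature by how much the model's R2 degrades when that feature's column is randomly shuffled. A minimal sketch on synthetic data, with a least-squares linear model standing in for the thesis's fully connected network (all data and coefficients here are illustrative, not thesis values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the five inputs: velocity, compliance,
# location, elevation, age group (illustrative data only).
X = rng.normal(size=(400, 5))
# Target depends most strongly on the first column ("velocity").
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=400)

# Fit a linear model by least squares (proxy for the trained network).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def r2(X, y, w):
    """Coefficient of determination for predictions X @ w."""
    resid = y - X @ w
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

base = r2(X, y, w)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])    # break the feature-target link
    importance.append(base - r2(Xp, y, w))  # R2 drop = feature importance

# The largest R2 drop occurs for column 0, the "velocity" stand-in,
# mirroring how velocity dominated the thesis's importance analysis.
```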
Together, these three studies form a comprehensive, video-based pipeline for detecting impactful events, identifying head impacts, and estimating head impact severity in youth ice hockey, advancing injury surveillance in settings where traditional tools are often unavailable.

Language: en
Keywords: Deep learning; Injury surveillance; Physical contact detection; Temporal Shift Module; Youth ice hockey; Brain Strain Prediction; MPS; Finite Element Brain Model
Title: A Deep Learning Pipeline for Impact Event Detection, Head Impact Classification, and Brain Strain Estimation in Youth Ice Hockey Videos
Type: Thesis