Quantitative Wind Speed Estimation Using AI-driven Image Processing
Publisher
Université d'Ottawa | University of Ottawa
Abstract
Conventional weather stations are spaced tens of kilometres apart, so localised wind events often go unmeasured. This thesis asks whether wind speed can be estimated from ordinary video of trees using convolutional neural networks (CNNs), and tests the idea against ground-truth measurements from Environment Canada weather stations. The work is divided into two phases. Phase 1 uses a stationary camera pointed at a single isolated tree at a rural site near Ottawa. A lightweight CNN (∼0.9M parameters) is trained to classify wind speed into eight 5 km/h bins and to predict continuous speed via regression. On a chronologically held-out test set, the classifier reaches 68.1% accuracy (random baseline: 12.5%) and the regressor achieves a mean absolute error (MAE) of 9.54 km/h. These results show that tree motion carries information related to wind speed, though the single-tree approach has clear limitations.
Phase 2 extends the pipeline to scenes containing multiple trees, using video collected by the National Research Council of Canada from a vehicle travelling under controlled conditions (constant speed, straight-line travel, no external disturbances); under these constraints, the moving-camera problem reduces to the stationary case. The author’s contribution was developing the detection, tracking, and estimation pipeline: a YOLOv8 detector finds tree crowns in each frame, a SORT tracker links them across frames, a motion CNN with an added optical-flow channel estimates wind speed per tree, and the per-tree estimates are combined by confidence-weighted averaging. On the Phase 2 test set, the pipeline achieves an MAE of 1.56 km/h and a Pearson correlation of 0.985. Ablation tests show that adding the optical-flow channel cuts per-tree MAE by 19%, and that aggregating over multiple trees cuts segment-level MAE by a further 33% compared to a single-tree estimate. The main contribution is this multi-tree pipeline and its experimental validation.
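The Phase 1 classifier targets eight 5 km/h bins. The thesis does not list the exact bin edges; a minimal sketch, assuming contiguous bins [0, 5), [5, 10), … with speeds at or above the top edge clamped into the last bin, would be:

```python
def speed_to_bin(speed_kmh: float, bin_width: float = 5.0, n_bins: int = 8) -> int:
    """Map a continuous wind speed (km/h) to one of n_bins class labels.

    Assumed convention (not stated in the abstract): bin i covers
    [i * bin_width, (i + 1) * bin_width), and out-of-range speeds are
    clamped into the first or last bin.
    """
    b = int(speed_kmh // bin_width)
    return min(max(b, 0), n_bins - 1)
```

For example, `speed_to_bin(12.0)` falls in the third bin (label 2), and any speed of 40 km/h or more is clamped to label 7 under this assumed scheme.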
Taken together, the two phases confirm that tree motion contains wind speed information and that aggregating over multiple trees reduces estimation error, within the conditions tested.
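The final aggregation step combines per-tree estimates by confidence-weighted averaging. A minimal sketch of that step, assuming each tracked tree yields a speed estimate and a detector confidence (the exact confidence definition is not given in the abstract):

```python
import numpy as np

def aggregate_wind_speed(per_tree_speeds, confidences):
    """Combine per-tree wind-speed estimates (km/h) into one segment-level
    estimate via confidence-weighted averaging.

    Falls back to a plain mean if all confidences are zero (assumed
    behaviour, not specified in the abstract).
    """
    speeds = np.asarray(per_tree_speeds, dtype=float)
    conf = np.asarray(confidences, dtype=float)
    if conf.sum() == 0.0:
        return float(speeds.mean())
    # np.average with weights computes sum(w_i * s_i) / sum(w_i)
    return float(np.average(speeds, weights=conf))
```

For instance, three trees estimated at 22.0, 25.5, and 24.0 km/h with confidences 0.9, 0.6, and 0.8 yield roughly 23.6 km/h, with the low-confidence track contributing least. This weighting is what lets many noisy per-tree estimates cancel out into the lower segment-level MAE reported above.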
Keywords
wind speed estimation, visual anemometry, convolutional neural network, tree canopy motion, object detection, YOLOv8, optical flow, image processing
