
Quantitative Wind Speed Estimation Using AI-driven Image Processing

dc.contributor.author: Kumar, Jatin
dc.contributor.supervisor: Pirani, Mohammad
dc.date.accessioned: 2026-03-19T16:06:29Z
dc.date.available: 2026-03-19T16:06:29Z
dc.date.issued: 2026-03-19
dc.description.abstract: Conventional weather stations are spaced tens of kilometres apart, which means localised wind events often go unmeasured. This thesis asks whether wind speed can be estimated from ordinary video of trees using convolutional neural networks (CNNs), and tests the idea against ground-truth measurements from Environment Canada weather stations. The work is divided into two phases. Phase 1 uses a stationary camera pointed at a single isolated tree at a rural site near Ottawa. A lightweight CNN (∼0.9M parameters) is trained to classify wind speed into eight 5 km/h bins and to predict continuous speed via regression. On a chronologically held-out test set, the classifier reaches 68.1% accuracy (random baseline: 12.5%) and the regressor achieves a mean absolute error (MAE) of 9.54 km/h. These results show that tree motion carries information related to wind speed, though the single-tree approach has clear limitations. Phase 2 extends the approach to scenes containing multiple trees, using video collected by the National Research Council of Canada from a vehicle travelling under controlled conditions (constant speed, straight-line travel, no external disturbances). Under these constraints, the moving-camera problem reduces to the stationary case. The author's contribution was developing the detection, tracking, and estimation pipeline: a YOLOv8 detector finds tree crowns in each frame, a SORT tracker links them across frames, a motion-CNN with an added optical-flow channel estimates wind speed per tree, and the per-tree estimates are combined by confidence-weighted averaging. On the Phase 2 test set, the pipeline achieves an MAE of 1.56 km/h and a Pearson correlation of 0.985. The main contribution is the multi-tree pipeline and its experimental validation. Ablation tests show that adding an optical-flow channel cuts per-tree MAE by 19%, and that aggregating over multiple trees cuts the segment-level MAE by a further 33% compared to a single-tree estimate. Taken together, the two phases confirm that tree motion contains wind speed information and that aggregating over multiple trees reduces estimation error, within the conditions tested.
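The confidence-weighted averaging step described in the abstract can be sketched as follows. This is a minimal illustration, not code from the thesis: the function name, example values, and the source of the confidence scores (e.g. detector/tracker confidence) are assumptions for demonstration.

```python
import numpy as np

def aggregate_wind_speed(estimates, confidences):
    """Confidence-weighted average of per-tree wind speed estimates (km/h).

    Equivalent to np.average(estimates, weights=confidences).
    """
    est = np.asarray(estimates, dtype=float)
    w = np.asarray(confidences, dtype=float)
    return float(np.sum(w * est) / np.sum(w))

# Hypothetical example: three tracked trees in one video segment,
# each with a per-tree CNN estimate and a confidence score.
speeds = [18.2, 21.5, 19.8]
confidences = [0.9, 0.6, 0.8]
print(round(aggregate_wind_speed(speeds, confidences), 2))  # 19.62
```

Weighting by confidence down-weights poorly tracked or partially occluded trees, which is one plausible mechanism behind the reported 33% segment-level MAE reduction from multi-tree aggregation.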
dc.identifier.uri: http://hdl.handle.net/10393/51461
dc.identifier.uri: https://doi.org/10.20381/ruor-31805
dc.language.iso: en
dc.publisher: Université d'Ottawa | University of Ottawa
dc.rights: Attribution-NonCommercial-ShareAlike 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/4.0/
dc.subject: wind speed estimation
dc.subject: visual anemometry
dc.subject: convolutional neural network
dc.subject: tree canopy motion
dc.subject: object detection
dc.subject: YOLOv8
dc.subject: optical flow
dc.subject: image processing
dc.title: Quantitative Wind Speed Estimation Using AI-driven Image Processing
dc.type: Thesis
thesis.degree.discipline: Génie / Engineering
thesis.degree.level: Masters
thesis.degree.name: MASc
uottawa.department: Science informatique et génie électrique / Electrical Engineering and Computer Science

Files

Original bundle

Name: Kumar_Jatin_2026_thesis.pdf
Size: 5.64 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 6.65 KB
Format: Item-specific license agreed upon to submission