
Performance Enhancement Schemes and Effective Incentives for Federated Learning

dc.contributor.author: Wang, Yuwei
dc.contributor.supervisor: Kantarci, Burak
dc.date.accessioned: 2021-11-16T20:06:03Z
dc.date.available: 2021-11-16T20:06:03Z
dc.date.issued: 2021-11-16
dc.description.abstract: The advent of artificial intelligence applications demands massive amounts of data to supplement the training of machine learning models. Traditional machine learning schemes require central processing of large volumes of data that may contain sensitive patterns such as user location, personal information, or transaction history. Federated Learning (FL) has been proposed to complement traditional centralized methods: multiple local models are trained and then aggregated on a centralized cloud server. However, the performance of FL needs further improvement, since its accuracy is not on par with traditional centralized machine learning approaches. Furthermore, due to the possibility of private information leakage, not enough clients are willing to participate in the FL training process. Common practice for the uploaded local models is evenly weighted aggregation, which assumes that each node of the network contributes equally to advancing the global model and is therefore unfair to the owners of higher-contribution models. This thesis focuses on three aspects of improving the whole federated learning pipeline: client selection, reputation-enabled weight aggregation, and an incentive mechanism. For client selection, a reputation score consisting of evaluation metrics is introduced to eliminate poorly performing model contributions. This scheme enhances the original implementation by up to 10% for non-IID datasets. We also reduce the training time of the selection scheme by roughly 27.7% compared to the baseline implementation. Then, a reputation-enabled weighted aggregation of the local models for distributed learning is proposed: the contribution of a local model, and hence its aggregation weight, is determined by its reputation score, formulated as above.
Numerical comparison of the proposed methodology, which assigns different aggregation weights based on the accuracy of each model, against a baseline that uses standard average aggregation weights shows an accuracy improvement of 17.175% over the baseline for non-independent and identically distributed (non-IID) scenarios in an FL network of 100 participants. Last but not least, for the incentive mechanism, participants can be rewarded based on data quality, data quantity, reputation, and resource allocation. In this thesis, we adopt a reputation-aware reverse auction that was earlier proposed to recruit dependable participants for mobile crowdsensing campaigns, and modify that incentive to adapt it to an FL setting in which user utility is defined as a function of the payment assigned by the central server and the user's service cost, such as battery and processor usage. Through numerical results, we show that: 1) the proposed incentive improves user utilities compared to the baseline approaches; 2) platform utility can be maintained at a value close to that under the baselines; and 3) the overall test accuracy of the aggregated global model can even slightly improve.
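The reputation-enabled weighted aggregation described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's exact formulation: the function names, the use of a generic per-client reputation score (e.g., validation accuracy) as the weight, and the threshold-based client filter are assumptions for illustration.

```python
import numpy as np

def reputation_weighted_aggregate(local_models, scores):
    """Aggregate local model parameters with reputation-derived weights.

    local_models: list of 1-D parameter arrays, one per client.
    scores: per-client reputation scores; a higher score yields a
            larger aggregation weight (vs. the evenly weighted baseline).
    """
    scores = np.asarray(scores, dtype=float)
    weights = scores / scores.sum()      # normalize so weights sum to 1
    stacked = np.stack(local_models)     # shape: (num_clients, num_params)
    return weights @ stacked             # weighted average of parameters

def select_clients(local_models, scores, threshold):
    """Drop poorly performing contributions before aggregation
    (illustrative threshold rule; the thesis's scoring details differ)."""
    kept = [(m, s) for m, s in zip(local_models, scores) if s >= threshold]
    return [m for m, _ in kept], [s for _, s in kept]
```

With equal scores this reduces to the standard evenly weighted (FedAvg-style) aggregation, which is the baseline the abstract compares against.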
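The incentive mechanism can likewise be sketched. The abstract only states that user utility is the assigned payment minus the service cost and that recruitment uses a reputation-aware reverse auction; the greedy reputation-per-bid ranking and budget rule below are assumptions for illustration, not the thesis's actual auction.

```python
def user_utility(payment, service_cost):
    """Per the abstract: utility = payment assigned by the central
    server minus the user's service cost (battery, processor usage)."""
    return payment - service_cost

def select_winners(bids, reputations, budget):
    """Illustrative reputation-aware reverse auction: rank clients by
    reputation per unit bid and recruit greedily within the budget."""
    order = sorted(range(len(bids)),
                   key=lambda i: reputations[i] / bids[i],
                   reverse=True)
    winners, spent = [], 0.0
    for i in order:
        if spent + bids[i] <= budget:   # recruit while budget allows
            winners.append(i)
            spent += bids[i]
    return winners
```

A client's bid reflects its claimed service cost, so a truthful winner paid its bid has non-negative utility under this toy rule.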
dc.identifier.uri: http://hdl.handle.net/10393/42926
dc.identifier.uri: http://dx.doi.org/10.20381/ruor-27143
dc.language.iso: en
dc.publisher: Université d'Ottawa / University of Ottawa
dc.subject: Federated Learning
dc.subject: Machine Learning
dc.title: Performance Enhancement Schemes and Effective Incentives for Federated Learning
dc.type: Thesis
thesis.degree.discipline: Génie / Engineering
thesis.degree.level: Masters
thesis.degree.name: MASc
uottawa.department: Science informatique et génie électrique / Electrical Engineering and Computer Science

Files

Original bundle
Name: Wang_Yuwei_2021_thesis.pdf
Size: 1.33 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 6.65 KB
Format: Item-specific license agreed upon to submission