AddShare+: An Efficient Additive Secret Sharing Approach for Private Federated Learning
Publisher
Université d'Ottawa | University of Ottawa
Abstract
Federated Learning (FL) has emerged as a transformative approach for collaboratively training Machine Learning (ML) models without the need to share raw data. Despite its benefits, FL is susceptible to significant privacy threats, such as inference attacks, in which adversaries reconstruct training data from shared model updates, and eavesdropping attacks, in which malicious actors intercept communications to access sensitive information. This thesis addresses these challenges by introducing the AddShare and AddShare+ frameworks, which leverage additive secret sharing and modern encryption techniques to protect model updates throughout the FL process.
The AddShare framework transforms each model update into multiple shares distributed among clients, so that no single client or adversary holding fewer than all the shares can reconstruct the original update. AddShare offers three primary advantages: enhanced privacy through additive secret sharing, secure data transmission using Rivest–Shamir–Adleman (RSA) encryption, and improved efficiency via client grouping. By organizing clients into groups and distributing shares within these groups, AddShare significantly reduces communication overhead while maintaining robust privacy guarantees.
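The core splitting step described above can be illustrated in a few lines. The following is a minimal sketch of additive secret sharing, not the thesis's actual protocol: function names are illustrative, weights are assumed to be encoded as fixed-point integers (real model updates are floats), and the RSA/ECIES encryption of each share in transit is omitted. Each value is split into n shares that sum to it modulo a public modulus, so any n−1 shares are statistically indistinguishable from uniform randomness.

```python
import secrets

MODULUS = 2**31  # public modulus for the additive sharing (illustrative choice)

def make_shares(values, n_parties, modulus=MODULUS):
    """Split each value into n_parties additive shares that sum to the
    value modulo `modulus`; any n_parties-1 shares reveal nothing."""
    shares = [[] for _ in range(n_parties)]
    for v in values:
        # n-1 uniformly random shares, plus one correction share
        parts = [secrets.randbelow(modulus) for _ in range(n_parties - 1)]
        last = (v - sum(parts)) % modulus
        for party, piece in zip(shares, parts + [last]):
            party.append(piece)
    return shares

def reconstruct(shares, modulus=MODULUS):
    """Sum the shares element-wise to recover the original values."""
    return [sum(col) % modulus for col in zip(*shares)]

# A fixed-point-encoded model update split among 3 clients and recovered:
update = [12345, 67890, 424242]
pieces = make_shares(update, n_parties=3)
assert reconstruct(pieces) == update
```

Because the sharing is additive, a server can sum the shares it receives from different clients and obtain the aggregate update directly, which is what makes this primitive a natural fit for secure aggregation in FL.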
Building on these foundations, AddShare+ introduces two key enhancements. First, it employs selective weight sharing, in which only a subset of model weights is shared, reducing computational and communication costs. AddShare+ uses a magnitude-based weight selection strategy, focusing on the most impactful weights to balance model performance against privacy. Second, it replaces RSA with the more efficient Elliptic Curve Integrated Encryption Scheme (ECIES), enabling faster and more secure share distribution. Together, these frameworks provide strong privacy preservation while maintaining model accuracy, making them suitable for sensitive applications such as healthcare, finance, and the Internet of Things (IoT).
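The magnitude-based selection strategy mentioned above amounts to ranking weights by absolute value and keeping the top fraction. The sketch below is an assumed illustration of that idea, not the thesis's exact criterion; the function name and the choice of a fixed top-k cutoff are hypothetical.

```python
def top_k_by_magnitude(weights, k):
    """Return the indices of the k largest-magnitude weights,
    i.e. the weights most likely to affect model behavior."""
    ranked = sorted(range(len(weights)), key=lambda i: abs(weights[i]), reverse=True)
    return ranked[:k]

# Only the selected indices would be secret-shared; the rest are withheld.
w = [0.05, -0.9, 0.3, -0.01, 0.7]
selected = top_k_by_magnitude(w, k=2)  # indices of -0.9 and 0.7
```

Sharing only high-magnitude weights shrinks both the number of shares to encrypt and the traffic per round, which is the source of the communication savings AddShare+ reports.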
Comprehensive experimental evaluations demonstrate the efficacy of these frameworks. For instance, AddShare+ achieves model accuracy comparable to traditional FL methods on the Fashion-Modified National Institute of Standards and Technology (F-MNIST) dataset, maintaining a Server-side Accuracy (SA) of 78.9%, compared to 79.3% with Federated Averaging (FedAvg), while reducing Federated Learning Time (FLT) by 13% relative to AddShare.
The thesis concludes with a case study on crop yield prediction, showcasing the practical application of AddShare+ in a smart farming context. While aggregated farming data holds immense potential for improving agricultural productivity and sustainability through ML and FL, farmers are often reluctant to share their operational data due to concerns about its impact on loan negotiations, insurance rates, land valuations, and competitive advantages in local markets. This real-world scenario demonstrates how the proposed framework can be applied to sensitive agricultural data, balancing the need for collaborative learning with data privacy concerns. These frameworks significantly enhance the privacy and security of FL systems against inference and eavesdropping attacks, promoting their broader adoption in various critical applications. Future research directions include further optimization of these frameworks and exploration of additional privacy-preserving techniques to enhance the security and efficiency of decentralized ML systems.
Keywords
federated learning, privacy preservation, additive secret sharing, secure aggregation, encryption
