Towards Fair and Privacy-Preserving Federated Deep Models
dc.contributor.author | Lyu, Lingjuan | |
dc.contributor.author | Yu, Jiangshan | |
dc.contributor.author | Nandakumar, Karthik | |
dc.contributor.author | Li, Yitong | |
dc.contributor.author | Ma, Xingjun | |
dc.contributor.author | Jin, Jiong | |
dc.contributor.author | Yu, Han | |
dc.contributor.author | Ng, Kee Siong | |
dc.date.accessioned | 2024-01-09T23:06:12Z | |
dc.date.issued | 2020 | |
dc.date.updated | 2022-09-25T08:16:29Z | |
dc.description.abstract | The current standalone deep learning framework tends to result in overfitting and low utility. This problem can be addressed by either a centralized framework that deploys a central server to train a global model on the joint data from all parties, or a distributed framework that leverages a parameter server to aggregate local model updates. However, server-based solutions are prone to a single point of failure. In this respect, collaborative learning frameworks, such as federated learning (FL), are more robust. Existing federated learning frameworks overlook an important aspect of participation: fairness. All parties are given the same final model regardless of their contributions. To address these issues, we propose a decentralized Fair and Privacy-Preserving Deep Learning (FPPDL) framework that incorporates fairness into federated deep learning models. In particular, we design a local credibility mutual evaluation mechanism to guarantee fairness, and a three-layer onion-style encryption scheme to guarantee both accuracy and privacy. Unlike the existing FL paradigm, under FPPDL each participant receives a different version of the FL model, with performance commensurate with its contributions. Experiments on benchmark datasets demonstrate that FPPDL balances fairness, privacy, and accuracy. It enables federated learning ecosystems to detect and isolate low-contribution parties, thereby promoting responsible participation. | en_AU
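To make the contribution-commensurate idea in the abstract concrete, the following is a minimal, hypothetical Python sketch, not the paper's implementation: parties mutually score each other's updates as a stand-in for the local credibility mutual evaluation, low-credibility parties are isolated, and each remaining party aggregates a number of peer updates proportional to its own credibility. The function names, the scoring rule, and the threshold below are illustrative assumptions.

```python
# Hypothetical sketch of credibility-weighted aggregation (not the FPPDL code).
import random

random.seed(0)

NUM_PARTIES = 4
UPDATE_DIM = 8          # length of a toy "gradient" vector
CRED_THRESHOLD = 0.5    # assumed cutoff: parties scored below this are isolated

def local_update(party_id):
    """Stand-in for a real local training step: returns a toy gradient."""
    return [random.gauss(0.0, 1.0) for _ in range(UPDATE_DIM)]

def score_update(update):
    """Stand-in for evaluating a peer's update on local validation data.
    In a real protocol each evaluator would use its own data; this toy
    score just squashes the update's mean magnitude into [0, 1]."""
    magnitude = sum(abs(x) for x in update) / len(update)
    return min(1.0, magnitude)

# 1. Each party produces a local update.
updates = {p: local_update(p) for p in range(NUM_PARTIES)}

# 2. Mutual evaluation: every party scores every other party's update;
#    the scores a party receives are averaged into its credibility value.
credibility = {}
for p in range(NUM_PARTIES):
    received = [score_update(updates[p]) for q in range(NUM_PARTIES) if q != p]
    credibility[p] = sum(received) / len(received)

# 3. Low-credibility (e.g., free-riding) parties are isolated.
trusted = [p for p, c in credibility.items() if c >= CRED_THRESHOLD]
if not trusted:
    raise RuntimeError("all parties fell below the credibility threshold")

# 4. Each trusted party aggregates a share of peer updates proportional
#    to its own credibility, so model quality tracks contribution.
best = max(credibility[p] for p in trusted)
for p in trusted:
    n_peers = max(1, round(credibility[p] / best * (len(trusted) - 1)))
    peers = [q for q in trusted if q != p][:n_peers]
    aggregate = [sum(updates[q][i] for q in peers + [p]) / (len(peers) + 1)
                 for i in range(UPDATE_DIM)]
    print(f"party {p}: credibility={credibility[p]:.2f}, "
          f"aggregated {len(peers) + 1} updates")
```

In the actual framework, the abstract states that exchanged information is additionally protected by a three-layer onion-style encryption scheme; that privacy layer is omitted from this sketch for brevity.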
dc.description.sponsorship | This work was supported, in part, by IBM PhD Fellowship; ANU Translational Fellowship; Nanyang Assistant Professorship (NAP); and NTU-WeBank JRI (NWJ-2019-007). This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200. | en_AU |
dc.format.mimetype | application/pdf | en_AU |
dc.identifier.issn | 1045-9219 | en_AU |
dc.identifier.uri | http://hdl.handle.net/1885/311293 | |
dc.language.iso | en_AU | en_AU |
dc.provenance | https://v2.sherpa.ac.uk/id/publication/3536..."The Accepted Version can be archived in a Non-Commercial Institutional Repository. " from SHERPA/RoMEO site (as at 15/1/2023). © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE Inc) | en_AU |
dc.relation | http://purl.org/au-research/grants/arc/LE170100200 | en_AU |
dc.rights | © 2020 IEEE | en_AU |
dc.source | IEEE Transactions on Parallel and Distributed Systems | en_AU |
dc.subject | Federated learning | en_AU |
dc.subject | privacy-preserving | en_AU |
dc.subject | deep learning | en_AU |
dc.subject | fairness | en_AU |
dc.subject | encryption | en_AU |
dc.title | Towards Fair and Privacy-Preserving Federated Deep Models | en_AU |
dc.type | Journal article | en_AU |
dcterms.accessRights | Open Access | |
local.bibliographicCitation.issue | 11 | en_AU |
local.bibliographicCitation.lastpage | 2541 | en_AU |
local.bibliographicCitation.startpage | 2524 | en_AU |
local.contributor.affiliation | Lyu, Lingjuan, National University of Singapore | en_AU |
local.contributor.affiliation | Yu, Jiangshan, Monash University | en_AU |
local.contributor.affiliation | Nandakumar, Karthik, IBM Singapore Lab | en_AU |
local.contributor.affiliation | Li, Yitong, The University of Melbourne | en_AU |
local.contributor.affiliation | Ma, Xingjun, The University of Melbourne | en_AU |
local.contributor.affiliation | Jin, Jiong, Swinburne University of Technology | en_AU |
local.contributor.affiliation | Yu, Han, Nanyang Technological University | en_AU |
local.contributor.affiliation | Ng, Kee Siong, College of Engineering and Computer Science, ANU | en_AU |
local.contributor.authoremail | u9914730@anu.edu.au | en_AU |
local.contributor.authoruid | Ng, Kee Siong, u9914730 | en_AU |
local.description.notes | Imported from ARIES | en_AU |
local.identifier.absfor | 460402 - Data and information privacy | en_AU |
local.identifier.absfor | 461103 - Deep learning | en_AU |
local.identifier.ariespublication | a383154xPUB13281 | en_AU |
local.identifier.citationvolume | 31 | en_AU |
local.identifier.doi | 10.1109/TPDS.2020.2996273 | en_AU |
local.identifier.scopusID | 2-s2.0-85086230067 | |
local.identifier.thomsonID | WOS:000543007900001 | |
local.identifier.uidSubmittedBy | a383154 | en_AU |
local.publisher.url | https://www.ieee.org/ | en_AU |
local.type.status | Accepted Version | en_AU |
Downloads
Original bundle
- Name: Towards Fair and Privacy-Preserving Federated Deep Models.pdf
- Size: 5.88 MB
- Format: Adobe Portable Document Format