Towards Fair and Privacy-Preserving Federated Deep Models

dc.contributor.author: Lyu, Lingjuan
dc.contributor.author: Yu, Jiangshan
dc.contributor.author: Nandakumar, Karthik
dc.contributor.author: Li, Yitong
dc.contributor.author: Ma, Xingjun
dc.contributor.author: Jin, Jiong
dc.contributor.author: Yu, Han
dc.contributor.author: Ng, Kee Siong
dc.date.accessioned: 2024-01-09T23:06:12Z
dc.date.issued: 2020
dc.date.updated: 2022-09-25T08:16:29Z
dc.description.abstract: The current standalone deep learning framework, in which each party trains only on its own data, tends to result in overfitting and low utility. This problem can be addressed either by a centralized framework that deploys a central server to train a global model on the joint data from all parties, or by a distributed framework that leverages a parameter server to aggregate local model updates. Server-based solutions, however, are prone to a single point of failure. In this respect, collaborative learning frameworks such as federated learning (FL) are more robust. Yet existing federated learning frameworks overlook an important aspect of participation: fairness. All parties receive the same final model regardless of their contributions. To address these issues, we propose a decentralized Fair and Privacy-Preserving Deep Learning (FPPDL) framework that incorporates fairness into federated deep learning models. In particular, we design a local credibility mutual evaluation mechanism to guarantee fairness, and a three-layer onion-style encryption scheme to guarantee both accuracy and privacy. Unlike the existing FL paradigm, under FPPDL each participant receives a different version of the FL model whose performance is commensurate with its contributions. Experiments on benchmark datasets demonstrate that FPPDL balances fairness, privacy, and accuracy, and that it enables federated learning ecosystems to detect and isolate low-contribution parties, thereby promoting responsible participation.
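
The abstract describes the FPPDL mechanisms only at a high level. As a rough, hypothetical illustration (not the authors' actual FPPDL protocol), the short Python sketch below shows how one round of peer mutual evaluation could turn scores into normalized credibility values, isolate a low-contribution party, and grant each remaining party a download budget so that the model version it can assemble reflects its contribution. The party names, scoring function, threshold, and budget rule are all invented for this sketch, and the onion-style encryption layer described in the abstract is omitted entirely.

# Hypothetical sketch, not the paper's algorithm: one round of
# credibility-based mutual evaluation in a decentralized FL setting.
import random

random.seed(0)

PARTIES = ["A", "B", "C", "D"]
LOW_CONTRIBUTOR = "D"          # assumed free-rider, for illustration only
CREDIBILITY_THRESHOLD = 0.15   # assumed isolation threshold

def local_score(evaluator, contributor):
    # Stand-in for mutual evaluation: how useful the contributor's shared
    # update looks to the evaluator (e.g. validation-loss improvement).
    if contributor == LOW_CONTRIBUTOR:
        return random.uniform(0.0, 0.2)   # low-quality updates
    return random.uniform(0.6, 1.0)       # useful updates

# Every party evaluates every other party's shared update.
raw_scores = {p: [local_score(q, p) for q in PARTIES if q != p] for p in PARTIES}
avg_scores = {p: sum(s) / len(s) for p, s in raw_scores.items()}

# Normalize into credibility values that sum to one.
total = sum(avg_scores.values())
credibility = {p: avg_scores[p] / total for p in PARTIES}

# Parties whose credibility falls below the threshold are flagged and isolated.
isolated = {p for p, c in credibility.items() if c < CREDIBILITY_THRESHOLD}

# Remaining parties earn a download budget proportional to credibility, so a
# higher contributor can assemble a better-performing model version.
budget = {p: round(10 * credibility[p]) for p in PARTIES if p not in isolated}

print("credibility:", {p: round(c, 2) for p, c in credibility.items()})
print("isolated:", isolated)
print("download budget:", budget)

In the paper's setting the shared updates would be exchanged in encrypted form and the reward rule is part of the FPPDL design; this toy round only illustrates the general idea of tying model access to evaluated contribution.
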
dc.description.sponsorship: This work was supported, in part, by an IBM PhD Fellowship; an ANU Translational Fellowship; the Nanyang Assistant Professorship (NAP); and the NTU-WeBank JRI (NWJ-2019-007). This research was undertaken using the LIEF HPC-GPGPU Facility hosted at The University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200.
dc.format.mimetype: application/pdf
dc.identifier.issn: 1045-9219
dc.identifier.uri: http://hdl.handle.net/1885/311293
dc.language.iso: en_AU
dc.provenance: https://v2.sherpa.ac.uk/id/publication/3536... "The Accepted Version can be archived in a Non-Commercial Institutional Repository." from the SHERPA/RoMEO site (as at 15/1/2023). © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE Inc)
dc.relation: http://purl.org/au-research/grants/arc/LE170100200
dc.rights: © 2020 IEEE
dc.source: IEEE Transactions on Parallel and Distributed Systems
dc.subject: Federated learning
dc.subject: privacy-preserving
dc.subject: deep learning
dc.subject: fairness
dc.subject: encryption
dc.title: Towards Fair and Privacy-Preserving Federated Deep Models
dc.type: Journal article
dcterms.accessRights: Open Access
local.bibliographicCitation.issue: 11
local.bibliographicCitation.lastpage: 2541
local.bibliographicCitation.startpage: 2524
local.contributor.affiliation: Lyu, Lingjuan, National University of Singapore
local.contributor.affiliation: Yu, Jiangshan, Monash University
local.contributor.affiliation: Nandakumar, Karthik, IBM Singapore Lab
local.contributor.affiliation: Li, Yitong, The University of Melbourne
local.contributor.affiliation: Ma, Xingjun, The University of Melbourne
local.contributor.affiliation: Jin, Jiong, Swinburne University of Technology
local.contributor.affiliation: Yu, Han, Nanyang Technological University
local.contributor.affiliation: Ng, Kee Siong, College of Engineering and Computer Science, ANU
local.contributor.authoremail: u9914730@anu.edu.au
local.contributor.authoruid: Ng, Kee Siong, u9914730
local.description.notes: Imported from ARIES
local.identifier.absfor: 460402 - Data and information privacy
local.identifier.absfor: 461103 - Deep learning
local.identifier.ariespublication: a383154xPUB13281
local.identifier.citationvolume: 31
local.identifier.doi: 10.1109/TPDS.2020.2996273
local.identifier.scopusID: 2-s2.0-85086230067
local.identifier.thomsonID: WOS:000543007900001
local.identifier.uidSubmittedBy: a383154
local.publisher.url: https://www.ieee.org/
local.type.status: Accepted Version

Downloads

Original bundle

Name: Towards Fair and Privacy-Preserving Federated Deep Models.pdf
Size: 5.88 MB
Format: Adobe Portable Document Format