Resource-Aware Heterogeneous Federated Learning with Specialized Local Models

EasyChair Preprint 13376 • 15 pages • Date: May 20, 2024

Abstract

Federated Learning (FL) is widely used to train AI/ML models in distributed, privacy-preserving settings. Participant edge devices in FL systems typically hold non-independent and identically distributed (Non-IID) private data and unevenly distributed computational resources. Preserving user data privacy while optimizing AI/ML models in a heterogeneous federated network therefore requires addressing both data heterogeneity and system/resource heterogeneity. To address these challenges, we propose Resource-aware Federated Learning (RaFL). RaFL allocates resource-aware specialized models to edge devices using Neural Architecture Search (NAS) and enables the deployment of heterogeneous model architectures through knowledge extraction and fusion. Combining NAS and FL allows on-demand, customized model deployment for resource-diverse edge devices. Furthermore, we propose a multi-model architecture fusion scheme that aggregates the distributed learning results. Results demonstrate RaFL's superior resource efficiency compared to the state of the art (SoTA).

Keyphrases: Edge Computing, Federated Learning, Neural Architecture Search
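The two ideas in the abstract — allocating a model that fits each device's resource budget, and fusing knowledge from heterogeneous architectures — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's algorithm: the candidate pool, cost numbers, budget values, and the logit-averaging fusion step are all hypothetical stand-ins for the actual NAS search and fusion scheme.

```python
# Hypothetical candidate architectures a NAS-style search might expose.
# Names, MFLOPs costs, and accuracy proxies are illustrative, not from the paper.
CANDIDATES = [
    {"name": "tiny",   "mflops": 10,  "accuracy_proxy": 0.70},
    {"name": "small",  "mflops": 40,  "accuracy_proxy": 0.78},
    {"name": "medium", "mflops": 120, "accuracy_proxy": 0.84},
    {"name": "large",  "mflops": 400, "accuracy_proxy": 0.88},
]

def allocate(budget_mflops):
    """Resource-aware allocation: pick the best-scoring candidate
    whose compute cost fits the device's budget."""
    feasible = [c for c in CANDIDATES if c["mflops"] <= budget_mflops]
    if not feasible:
        raise ValueError("no candidate fits the budget")
    return max(feasible, key=lambda c: c["accuracy_proxy"])

def fuse(logits_per_model):
    """Toy knowledge fusion: average per-sample logits produced by
    heterogeneous models on shared inputs (a common distillation-style
    proxy; the paper's actual fusion scheme may differ)."""
    n = len(logits_per_model)
    return [sum(vals) / n for vals in zip(*logits_per_model)]

# Devices with heterogeneous budgets receive specialized models.
devices = {"phone": 50, "sensor": 15, "gateway": 500}
assignment = {d: allocate(b)["name"] for d, b in devices.items()}
print(assignment)  # -> {'phone': 'small', 'sensor': 'tiny', 'gateway': 'large'}

# Server-side fusion of two models' outputs on two shared samples.
print(fuse([[1.0, 3.0], [3.0, 5.0]]))  # -> [2.0, 4.0]
```

The allocation step stands in for the NAS search (here reduced to filtering a fixed pool by a FLOPs budget), and the fused outputs stand in for the aggregated result that heterogeneous local models would contribute to.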