
Federated Learning Methods for Analytics of Big and Sensitive Distributed Data and Survey

EasyChair Preprint 9819

5 pages · Date: March 5, 2023

Abstract

This article focuses on the analytics of big, sensitive, distributed data using federated learning. The main focus is on the most commonly used technology platforms: TensorFlow Federated, PySyft, Flower, and IBM Federated Learning, considered from the point of view of edge-computing usability. Training PyTorch models with differential privacy (DP) is more scalable than existing state-of-the-art methods. Differential privacy is a mathematically rigorous framework for quantifying the anonymisation of sensitive data. It is often used in analytics, and interest in it is growing in the machine learning (ML) community. Training on distributed data at the edge is attractive because the data are privacy-sensitive and too large to transfer cheaply. Data sensitivity and data volume are therefore the main challenges in federated learning. Federated learning addresses both by keeping the data of many devices local and sharing only model updates.
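
The abstract's claim about scalable DP training of PyTorch models refers to the DP-SGD approach; below is a minimal sketch using the Opacus library, where the model, toy data, and hyperparameters are illustrative assumptions rather than the paper's setup.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy dataset standing in for sensitive edge data (hypothetical).
X = torch.randn(512, 20)
y = torch.randint(0, 2, (512,))
loader = DataLoader(TensorDataset(X, y), batch_size=64)

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

# PrivacyEngine wraps model, optimizer, and loader so that each step
# clips per-sample gradients and adds calibrated Gaussian noise (DP-SGD).
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # scale of the added noise (illustrative)
    max_grad_norm=1.0,      # per-sample gradient clipping bound (illustrative)
)

for epoch in range(3):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

# The accountant reports the privacy budget spent so far.
epsilon = privacy_engine.get_epsilon(delta=1e-5)
print(f"(epsilon = {epsilon:.2f}, delta = 1e-5)")

To make the last point of the abstract concrete, here is a hypothetical federated-averaging (FedAvg) aggregation step in plain PyTorch: clients send back only updated model weights, never raw data, and the server computes their weighted average. The function and variable names are illustrative and not tied to any of the surveyed frameworks.

from typing import Dict, List
import torch

def fed_avg(client_states: List[Dict[str, torch.Tensor]],
            client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Average client model state dicts, weighted by local dataset size."""
    total = sum(client_sizes)
    averaged = {}
    for key in client_states[0]:
        averaged[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return averaged

In each round, the coordinator would broadcast the averaged weights back to the clients for the next round of local training, so the devices' data never leave the edge.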

Keyphrases: Artificial Intelligence, Federated Learning, TensorFlow Federated, differential privacy, Flower

BibTeX entry
BibTeX has no entry type for preprints; the following workaround produces the correct reference:
@booklet{EasyChair:9819,
  author    = {Michal Staňo and Ladislav Hluchý and Martin Bobák and Peter Krammer and Viet Tran},
  title     = {Federated Learning Methods for Analytics of Big and Sensitive Distributed Data and Survey},
  howpublished = {EasyChair Preprint 9819},
  year      = {EasyChair, 2023}}