Robustness Analytics to Data Heterogeneity in Edge Computing

Hari Mohan Pandey, Jia Qian, Lars Hansen, Xenofon Fafoutis, Prayag Tiwari

Research output: Contribution to journal › Article (journal) › peer-review

4 Citations (Scopus)
55 Downloads (Pure)

Abstract

Federated Learning is a framework that trains a joint model on a remote centralized server without requiring access to the data stored on distributed edge machines. Much prior work assumes that the data generated by edge devices are independently and identically sampled from a common population distribution. However, such ideal sampling may not be realistic in many contexts, and mechanisms with intrinsic agency, such as active sampling schemes, may lead to highly biased sampling. A pressing question is therefore how robust Federated Learning is to biased sampling. In this work, we experimentally investigate two such scenarios. First, we study a centralized classifier aggregated from a collection of local classifiers trained on data with categorical heterogeneity. Second, we study a classifier aggregated from a collection of local classifiers trained on data acquired through active sampling at the edge. In both scenarios we present evidence that Federated Learning is robust to data heterogeneity when the number of local training iterations and the communication frequency are appropriately chosen.
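To make the aggregation scheme described in the abstract concrete, the following is a minimal sketch of federated-averaging-style training, assuming a plain logistic-regression model, synthetic label-skewed client data, and hypothetical settings LOCAL_ITERS and ROUNDS standing in for the local training iterations and communication frequency the abstract refers to. It is illustrative only and not the authors' implementation.

# Minimal FedAvg-style sketch (illustrative, not the paper's code).
# LOCAL_ITERS and ROUNDS are hypothetical knobs mirroring the two quantities
# the abstract highlights: local training iterations and communication frequency.
import numpy as np

rng = np.random.default_rng(0)
N_CLIENTS, DIM = 5, 10
LOCAL_ITERS, ROUNDS, LR = 20, 50, 0.1   # assumed values for illustration

def make_client(label_bias):
    # Synthetic client whose label distribution is skewed to mimic categorical heterogeneity.
    X = rng.normal(size=(200, DIM))
    w_true = rng.normal(size=DIM)
    p = 1 / (1 + np.exp(-(X @ w_true + label_bias)))
    y = (rng.random(200) < p).astype(float)
    return X, y

clients = [make_client(bias) for bias in np.linspace(-2, 2, N_CLIENTS)]

def local_update(w, X, y):
    # Run LOCAL_ITERS gradient steps on one client's data, starting from the global model.
    w = w.copy()
    for _ in range(LOCAL_ITERS):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= LR * X.T @ (p - y) / len(y)
    return w

w_global = np.zeros(DIM)
for _ in range(ROUNDS):
    # Each round: broadcast the global model, train locally on every client, average the results.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)

Raising LOCAL_ITERS while lowering ROUNDS trades communication for local computation, which is the regime in which the paper examines robustness to heterogeneous client data.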
Original language: English
Article number: COMCOM_2020_1377R2
Pages (from-to): 229-239
Journal: Computer Communications
Volume: 164
Early online date: 2 Nov 2020
DOIs
Publication status: E-pub ahead of print - 2 Nov 2020

Keywords

  • Intelligent Edge Computing
  • Fog Computing
  • Active Learning
  • Federated Learning
  • Distributed Machine Learning
  • User Data Privacy
