Robustness Analytics to Data Heterogeneity in Edge Computing

Hari Mohan Pandey, Jia Qiuan, Lars Hansen, Xenofon Fafoutis, Prayag Tiwari

Research output: Contribution to journal › Article › peer-review



Federated Learning is a framework that jointly trains a model, with full knowledge held only on a remotely placed centralized server, without requiring access to the data stored on distributed machines. Some work assumes that the data generated at edge devices are independently and identically sampled from a common population distribution. However, such ideal sampling may not be realistic in many contexts. Moreover, models with intrinsic agency, such as active sampling schemes, may lead to highly biased sampling. An imminent question is therefore: how robust is Federated Learning to biased sampling? In this work, we experimentally investigate two such scenarios. First, we study a centralized classifier aggregated from a collection of local classifiers trained on data with categorical heterogeneity. Second, we study a classifier aggregated from a collection of local classifiers trained on data obtained through active sampling at the edge. In both scenarios, we present evidence that Federated Learning is robust to data heterogeneity when the number of local training iterations and the communication frequency are appropriately chosen.
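The aggregation step described in the abstract, a centralized model built from a collection of local classifiers, is commonly realized as a FedAvg-style weighted average of local model parameters. A minimal sketch follows; the client weight vectors and dataset sizes are purely illustrative, and the abstract does not specify the exact aggregation rule used in the paper.

```python
import numpy as np

def federated_average(local_weights, local_sizes):
    """Aggregate local model parameters into a global model,
    weighting each client by its local dataset size (FedAvg-style)."""
    total = sum(local_sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, local_sizes))

# Three hypothetical edge clients with heterogeneous data volumes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 70]  # non-uniform: the third client dominates

global_w = federated_average(clients, sizes)
# Clients with more local data pull the global model toward their parameters,
# which is where data heterogeneity can bias the aggregate.
```

In a non-IID setting, the choice of how many local iterations each client runs between such aggregation rounds (the communication frequency) is exactly the knob the abstract identifies as critical for robustness.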
Original language: English
Article number: COMCOM_2020_1377R2
Pages (from-to): 229-239
Journal: Computer Communications
Early online date: 2 Nov 2020
Publication status: E-pub ahead of print - 2 Nov 2020


Keywords:

  • Intelligent Edge Computing
  • Fog Computing
  • Active Learning
  • Federated Learning
  • Distributed Machine Learning
  • User Data Privacy


