International Workshop on Federated Learning for User Privacy and Data Confidentiality
in Conjunction with ICML 2021 (FL-ICML'21)

Submission Due: 10 June, 2021 (23:59:59 AoE) (extended from 02 June, 2021)
Notification Due: 07 July, 2021 (extended from 28 June, 2021)
Workshop Date: 24 July, 2021
Venue: Online

Call for Papers

Training machine learning models in a centralized fashion often faces significant challenges due to regulatory and privacy concerns in real-world use cases. These challenges include training data distributed across sites, the computational cost of creating and maintaining a central data repository, and regulatory guidelines (e.g., GDPR, HIPAA) that restrict sharing sensitive data. Federated learning (FL) is a new paradigm in machine learning that can mitigate these challenges by training a global model over distributed data, without the need for data sharing. As machine learning is increasingly applied to analyze and draw insight from real-world, distributed, and sensitive data, FL is a relevant and timely topic that the scientific community needs to become familiar with and adopt.

Despite the advantages of FL, and its successful application in certain industry-based cases, this field is still in its infancy due to new challenges that are imposed by limited visibility of the training data, potential lack of trust among participants training a single model, potential privacy inferences, and in some cases, limited or unreliable connectivity.
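To make the paradigm described above concrete, the following is a minimal sketch of one standard FL baseline, federated averaging (FedAvg): each client runs a few local training steps on its own private data, and the server averages the resulting model weights, so no raw data ever leaves a client. This is an illustrative toy example (linear regression, synthetic data); all names and parameters are chosen for the sketch, not taken from any particular FL system.

```python
# Minimal federated averaging (FedAvg) sketch on a toy linear-regression task.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on its
    private data shard. Only the updated weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_weights, client_data):
    """One communication round: every client trains locally, then the
    server averages the clients' weight vectors."""
    updates = [local_update(global_weights, X, y) for X, y in client_data]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each client holds a private data shard drawn from the same underlying model.
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
print(w)  # converges close to the true weights [2, -1]
```

In a real deployment the averaging would typically be weighted by client data size, and many of the workshop topics below (partial participation, communication compression, privacy-preserving aggregation) modify exactly this round structure.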

The goal of this workshop is to bring together researchers and practitioners interested in FL. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world to understand the topic, identify technical challenges, and discuss potential solutions. These discussions will advance FL and broaden its impact; indeed, FL has become an increasingly popular topic in the ICML community in recent years.

Topics of interest include, but are not limited to, the following:
  • Adversarial attacks on FL
  • Applications of FL
  • Blockchain for FL
  • Beyond first-order methods in FL
  • Beyond local methods in FL
  • Communication compression in FL
  • Data heterogeneity in FL
  • Decentralized FL
  • Device heterogeneity in FL
  • Fairness in FL
  • Hardware for on-device FL
  • Variants of FL like split learning
  • Local methods in FL
  • Nonconvex FL
  • Operational challenges in FL
  • Optimization advances in FL
  • Partial participation in FL
  • Personalization in FL
  • Privacy concerns in FL
  • Privacy-preserving methods for FL
  • Resource-efficient FL
  • Systems and infrastructure for FL
  • Theoretical contributions to FL
  • Uncertainty in FL
  • Vertical FL

The workshop will have invited talks on a diverse set of topics related to FL. In addition, we plan to have an industrial panel (over Zoom) and booth (on GatherTown), where researchers from industry will talk about challenges and solutions from an industrial perspective.

More information on previous workshops can be found here.

Invited Talks


Title: Trustworthy Federated Learning

Speaker: Salman Avestimehr, University of Southern California (USC), USA

Salman Avestimehr is a Dean's Professor, the inaugural director of the USC-Amazon Center on Secure and Trusted Machine Learning (Trusted AI), and the director of the Information Theory and Machine Learning (vITAL) research lab at the Electrical and Computer Engineering Department of the University of Southern California. He is also an Amazon Scholar at Alexa AI. He received his Ph.D. in 2008 and his M.S. degree in 2005 in Electrical Engineering and Computer Science, both from the University of California, Berkeley. Prior to that, he obtained his B.S. in Electrical Engineering from Sharif University of Technology in 2003. His research interests include information theory and coding theory, large-scale distributed computing and machine learning, secure and private computing, and blockchain systems.

Dr. Avestimehr has received a number of awards for his research, including the James L. Massey Research & Teaching Award from the IEEE Information Theory Society, an Information Theory Society and Communication Society Joint Paper Award, a Presidential Early Career Award for Scientists and Engineers (PECASE) from the White House (President Obama), a Young Investigator Program (YIP) award from the U.S. Air Force Office of Scientific Research, a National Science Foundation CAREER award, the David J. Sakrison Memorial Prize, and several best paper awards at conferences. He has been an Associate Editor for the IEEE Transactions on Information Theory and General Co-Chair of the 2020 International Symposium on Information Theory (ISIT). He is a Fellow of the IEEE.


Title: Optimization Aspects of Personalized Federated Learning

Speaker: Filip Hanzely, Toyota Technological Institute at Chicago (TTIC), USA

Filip Hanzely is a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC). His research focuses mostly on various aspects of stochastic optimization for machine learning, and on designing provably efficient algorithms for solving big data problems. Filip is also interested in related topics such as federated learning, distributed optimization, optimization for deep learning, and higher-order methods. Filip received his PhD degree in Applied Mathematics and Computational Science from KAUST in 2020. Prior to that, he received an MSc degree in Mathematics from the University of Edinburgh. He was a recipient of an EPSRC CASE Award (industrial PhD scholarship funded by EPSRC and Amazon) and Dean's Award at KAUST (awarded to a few best incoming PhD students).


Title: Dreaming of Federated Robustness: Inherent Barriers and Unavoidable Tradeoffs

Speaker: Dimitris Papailiopoulos, The University of Wisconsin–Madison (UW–Madison), USA

Dimitris is an Assistant Professor of ECE at UW-Madison. His research interests span machine learning, information theory, and optimization, with a current focus on efficient large-scale learning algorithms and coding-theoretic techniques for robust machine learning. Between 2014 and 2016, Dimitris was a postdoc at UC Berkeley and a member of the AMPLab. He earned his Ph.D. in ECE from UT Austin in 2014, under the supervision of Alex Dimakis. He received his ECE Diploma in 2007 and his M.Sc. degree in 2009 from the Technical University of Crete, in Greece. Dimitris is a recipient of the NSF CAREER Award (2019), two Sony Faculty Innovation Awards (2019 and 2020), a joint IEEE ComSoc/ITSoc Best Paper Award (2020), an IEEE Signal Processing Society Young Author Best Paper Award (2015), the Vilas Associate Award (2021), the Emil Steiger Distinguished Teaching Award (2021), and the Benjamin Smith Reynolds Award for Excellence in Teaching (2019). In 2018, he co-founded MLSys, a new conference that targets research at the intersection of machine learning and systems. In 2018 and 2020 he was program co-chair for MLSys, and in 2019 he co-chaired the 3rd Midwest Machine Learning Symposium.


Title: Pandemic Response with Crowdsourced Data: The Participatory Privacy Preserving Approach

Speaker: Ramesh Raskar, Massachusetts Institute of Technology (MIT), USA

Ramesh Raskar is an Associate Professor at the MIT Media Lab and directs the Camera Culture research group. His focus is on AI and imaging for health and sustainability, spanning research in physical (e.g., sensors, health-tech), digital (e.g., automated and privacy-aware machine learning), and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), the ACM SIGGRAPH Achievement Award (2017), the DARPA Young Faculty Award (2009), an Alfred P. Sloan Research Fellowship (2009), a Marr Prize honorable mention (2009), the TR100 Award from MIT Technology Review (2004), the Global Indus Technovator Award (2003), the LAUNCH Health Innovation Award presented by NASA, USAID, the US State Dept, and NIKE (2010), and first place in the Vodafone Wireless Innovation Project (2011). He has worked on special research projects at Google [X] and Facebook and has co-founded/advised several companies.


Title: Algorithms for Efficient Federated and Decentralized Learning

Speaker: Sebastian U. Stich, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland

Sebastian Stich is a research scientist at EPFL. His research interests span machine learning, optimization, and statistics, with a current focus on efficient parallel algorithms for training ML models over decentralized datasets. Since 2016 he has been hosted in the machine learning and optimization lab of Prof. Martin Jaggi. Between 2014 and 2016 he was a postdoctoral researcher at UCLouvain with Prof. Yurii Nesterov, supported by an SNSF mobility grant. He received his PhD in Computer Science from ETH Zurich in 2014 and, prior to that, his MSc (2010) and BSc (2009) degrees in Mathematics from ETH Zurich. He is a co-founder of the workshop series "Advances in ML: Theory meets practice" run at the Applied Machine Learning Days 2018-2020 and a co-organizer of the "Optimization for Machine Learning" workshop in 2019 and 2020 (at NeurIPS). Since 2020 he has been a member of the European Lab for Learning and Intelligent Systems (ELLIS).


Title: Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing

Speaker: Ameet Talwalkar, Carnegie Mellon University (CMU), USA

Ameet Talwalkar is an Assistant Professor in the Machine Learning Department at CMU, and also co-founder and Chief Scientist at Determined AI. His interests are in the field of statistical machine learning. His current work is motivated by the goal of democratizing machine learning, with a focus on topics related to automation, fairness, interpretability, and federated learning. He led the initial development of the MLlib project in Apache Spark, is a co-author of the textbook 'Foundations of Machine Learning' (MIT Press), and created an award-winning edX MOOC on distributed machine learning. He also helped to create the MLSys conference, serving as the inaugural Program Chair in 2018, General Chair in 2019, and currently as President of the MLSys Board.

Proceedings and Dual Submission Policy

Our workshop has no formal proceedings. Accepted papers will be posted on the workshop webpage. We welcome submissions of unpublished papers, including those submitted to or accepted at other venues, provided that venue permits it. We will not accept papers that have already been published, because the goal of the workshop is to share recent results and discuss open problems.

Submission Instructions

Submissions are recommended (but not required) to be no more than 6 pages long, excluding references, and should follow the ICML-21 template. Reviewing is double-blind: author identities must not be revealed to the reviewers. An optional appendix of arbitrary length is allowed and should be placed at the end of the paper (after the references).

EasyChair submission link:

If you have any inquiries, please email us at:

Organizing Committee

Program Committee

Sponsored by


Organized by