International Workshop on Federated Learning for User Privacy and Data Confidentiality
in Conjunction with ICML 2021 (FL-ICML'21)


Submission Due: 10 June, 2021 (23:59:59 AoE) (extended from 02 June, 2021)
Notification Due: 07 July, 2021 (extended from 28 June, 2021)
Workshop Date: Saturday, 24 July, 2021
Venue: https://icml.cc/virtual/2021/workshop/8359 (ICML'21 Registration Required)

Workshop Program

Accepted papers can be accessed at: https://fl-icml.github.io/2021/papers/
  
  
Opening Remarks
Invited Talk 1: Algorithms for Efficient Federated and Decentralized Learning, by Sebastian U. Stich, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Contributed Oral Presentation Session 1 (15 minutes per talk including Q&A)
  1. Chuhan Wu, Fangzhao Wu, Yang Cao, Lingjuan Lyu, Yongfeng Huang and Xing Xie. FedGNN: Federated Graph Neural Network for Privacy-Preserving Recommendation
  2. Xinyi Xu and Lingjuan Lyu. A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning
  3. Best Student Paper: Dmitry Kovalev, Elnur Gasanov, Peter Richtarik and Alexander Gasnikov. Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization over Time-Varying Networks
  4. Peter Richtarik, Igor Sokolov and Ilyas Fatkhullin. EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
Invited Talk 2: The ML data center is dead: What comes next?, by Nic Lane, the University of Cambridge, UK
Break
Contributed Oral Presentation Session 2 (15 minutes per talk including Q&A)
  1. Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni and Richard Vidal. Federated Multi-Task Learning under a Mixture of Distributions
  2. Best Paper: Felix Grimberg, Mary-Anne Hartley, Sai Praneeth Karimireddy and Martin Jaggi. Optimal Model Averaging: Towards Personalized Collaborative Learning
Invited Talk 3: Pandemic Response with Crowdsourced Data: The Participatory Privacy Preserving Approach, by Ramesh Raskar, Massachusetts Institute of Technology (MIT), USA
Industrial Panel
Topic: Challenges and Experiences in Practical Federated Learning Systems
Questions:
  1. What challenges do you see in applying federated learning in practice?
  2. What are your current experiences in developing and deploying practical federated learning systems?
  3. What are the privacy requirements in practice and methods for preserving such privacy?
Panel:
  • Nathalie Baracaldo & Shiqiang Wang (IBM)
  • Peter Kairouz & Zheng Xu (Google)
  • Kshitiz Malik (Facebook)
  • Tao Zhang (Amazon)
Poster Session 1 (for both Oral and Poster Presenters) & Industrial Booths (GatherTown)
  1. Chuhan Wu, Fangzhao Wu, Yang Cao, Lingjuan Lyu, Yongfeng Huang and Xing Xie. FedGNN: Federated Graph Neural Network for Privacy-Preserving Recommendation
  2. Xinyi Xu and Lingjuan Lyu. A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning
  3. Dmitry Kovalev, Elnur Gasanov, Peter Richtarik and Alexander Gasnikov. Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization over Time-Varying Networks
  4. Peter Richtarik, Igor Sokolov and Ilyas Fatkhullin. EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
  5. Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni and Richard Vidal. Federated Multi-Task Learning under a Mixture of Distributions
  6. Felix Grimberg, Mary-Anne Hartley, Sai Praneeth Karimireddy and Martin Jaggi. Optimal Model Averaging: Towards Personalized Collaborative Learning
  7. Yen-Hsiu Chou, Shenda Hong, Chenxi Sun, Derun Cai, Moxian Song and Hongyan Li. GRP-FED: Addressing Client Imbalance in Federated Learning via Global-Regularized Personalization
  8. Dong-Jun Han, Hasnain Irshad Bhatti, Jungmoon Lee and Jaekyun Moon. Accelerating Federated Learning with Split Learning on Locally Generated Losses
  9. Jungwuk Park, Dong-Jun Han, Minseok Choi and Jaekyun Moon. Handling Both Stragglers and Adversaries for Robust Federated Learning
  10. Xiaolin Chen, Shuai Zhou, Kai Yang, Hao Fan, Zejin Feng, Zhong Chen, Yongji Wang and Hu Wang. Fed-EINI: An Efficient and Interpretable Inference Framework for Decision Tree Ensembles in Federated Learning
  11. Mher Safaryan, Rustem Islamov, Xun Qian and Peter Richtarik. FedNL: Making Newton-Type Methods Applicable to Federated Learning
  12. Grigory Malinovsky and Peter Richtárik. Federated Random Reshuffling with Compression and Variance Reduction
  13. Hyunsin Park, Hossein Hosseini and Sungrack Yun. Federated Learning with Metric Loss
  14. Bokun Wang, Mher Safaryan and Peter Richtarik. Smoothness-Aware Quantization Techniques
  15. Samuel Horvath, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, Stylianos Venieris and Nicholas Lane. FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
  16. Laurent Condat and Peter Richtárik. MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization
  17. Yongxin Guo, Tao Lin and Xiaoying Tang. A New Analysis Framework for Federated Learning on Time-Evolving Heterogeneous Data
  18. Ke Zhang, Carl Yang, Xiaoxiao Li, Lichao Sun and Siu Ming Yiu. Subgraph Federated Learning with Missing Neighbor Generation
  19. Hankyul Baek, Won Joon Yun, Jihong Park, Soyi Jung, Joongheon Kim, Mingyue Ji and Mehdi Bennis. Communication and Energy Efficient Slimmable Federated Learning via Superposition Coding and Successive Decoding
  20. Pengwei Xing, Songtao Lu, Lingfei Wu and Han Yu. BiG-Fed: Bilevel Optimization Enhanced Graph-Aided Federated Learning
  21. Jiacheng Liang, Wensi Jiang and Songze Li. OmniLytics: A Blockchain-based Secure Data Market for Decentralized Machine Learning
  22. Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh and Jungseul Ok. Gradient Inversion with Generative Image Prior
Break
Invited Talk 4: Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing, by Ameet Talwalkar, Carnegie Mellon University (CMU), USA
Contributed Oral Presentation Session 3 (15 minutes per talk including Q&A)
  1. John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael Rabbat, Mani Malek Esmaeili and Dzmitry Huba. Federated Learning with Buffered Asynchronous Aggregation
  2. Zachary Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian and Virginia Smith. On Large-Cohort Training for Federated Learning
  3. Charlie Hou, Kiran Thekumparampil, Giulia Fanti and Sewoong Oh. Multistage stepsize schedule in Federated Learning: Bridging Theory and Practice
  4. Xiyang Liu, Weihao Kong, Sham Kakade and Sewoong Oh. Robust and Differentially Private Mean Estimation
Invited Talk 5: Optimization Aspects of Personalized Federated Learning, by Filip Hanzely, Toyota Technological Institute at Chicago (TTIC), USA
Poster Session 2 (for both Oral and Poster Presenters) & Industrial Booths (GatherTown)
  1. John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael Rabbat, Mani Malek Esmaeili and Dzmitry Huba. Federated Learning with Buffered Asynchronous Aggregation
  2. Zachary Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian and Virginia Smith. On Large-Cohort Training for Federated Learning
  3. Charlie Hou, Kiran Thekumparampil, Giulia Fanti and Sewoong Oh. Multistage stepsize schedule in Federated Learning: Bridging Theory and Practice
  4. Xiyang Liu, Weihao Kong, Sham Kakade and Sewoong Oh. Robust and Differentially Private Mean Estimation
  5. Dmitrii Avdiukhin, Nikita Ivkin, Sebastian U. Stich and Vladimir Braverman. Bi-directional Adaptive Communication for Heterogenous Distributed Learning
  6. Ravikumar Balakrishnan, Tian Li, Tianyi Zhou, Nageen Himayat, Virginia Smith and Jeff Bilmes. Diverse Client Selection for Federated Learning: Submodularity and Convergence Analysis
  7. Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu and Gauri Joshi. Local Adaptivity in Federated Learning: Convergence and Consistency
  8. Jayanth Regatti, Hao Chen and Abhishek Gupta. BYGARS: Byzantine SGD with Arbitrary Number of Attackers Using Reputation Scores
  9. Siddharth Divi, Yi-Shan Lin, Habiba Farrukh and Z. Berkay Celik. New Metrics to Evaluate the Performance and Fairness of Personalized Federated Learning
  10. Elnur Gasanov, Ahmed Khaled, Samuel Horvath and Peter Richtarik. FedMix: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning
  11. Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari and Ramtin Pedarsani. Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity
  12. Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat and Pramod K. Varshney. Achieving Optimal Sample and Communication Complexities for Non-IID Federated Learning
  13. Amit Portnoy, Yoav Tirosh and Danny Hendler. Towards Federated Learning With Byzantine-Robust Client Weighting
  14. Chaoyang He, Emir Ceyani, Keshav Balasubramanian, Murali Annavaram and Salman Avestimehr. SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks
  15. Jiankai Sun, Yuanshun Yao, Weihao Gao, Junyuan Xie and Chong Wang. Defending against Reconstruction Attack in Vertical Federated Learning
  16. Han Xie, Jing Ma, Li Xiong and Carl Yang. Federated Graph Classification over Non-IID Graphs
  17. Parikshit Ram and Kaushik Sinha. FlyNN: Fruit-fly Inspired Federated Nearest Neighbor Classification
  18. Yatin Dandi, Luis Barba and Martin Jaggi. Implicit Gradient Alignment in Distributed and Federated Learning
  19. Edvin Listo Zec, Noa Onoszko, Gustav Karlsson and Olof Mogren. Decentralized federated learning of deep neural networks on non-iid data
  20. Nirupam Gupta, Thinh Doan and Nitin Vaidya. Byzantine Fault-Tolerance of Local Gradient-Descent in Federated Model under 2f-Redundancy
  21. Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu and Jinfeng Yi. Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
Break
Invited Talk 6: Dreaming of Federated Robustness: Inherent Barriers and Unavoidable Tradeoffs, by Dimitris Papailiopoulos, The University of Wisconsin–Madison (UW–Madison), USA
Invited Talk 7: Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning, by Salman Avestimehr, University of Southern California (USC), USA
Closing Remarks
  

Invited Talks

   

Title: Algorithms for Efficient Federated and Decentralized Learning

Speaker: Sebastian U. Stich, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland

Biography
Sebastian Stich is a research scientist at EPFL. His research interests span machine learning, optimization, and statistics, with a current focus on efficient parallel algorithms for training ML models over decentralized datasets. Since 2016 he has been hosted in the Machine Learning and Optimization Lab of Prof. Martin Jaggi. Between 2014 and 2016 he was a postdoctoral researcher at UCLouvain with Prof. Yurii Nesterov, supported by an SNSF mobility grant. He received his PhD in Computer Science from ETH Zurich in 2014, and prior to that his MSc (2010) and BSc (2009) degrees in Mathematics from ETH Zurich. He is a co-founder of the workshop series "Advances in ML: Theory meets practice", run at the Applied Machine Learning Days 2018-2020, and a co-organizer of the "Optimization for Machine Learning" workshop at NeurIPS in 2019 and 2020. Since 2020 he has been a member of the European Lab for Learning and Intelligent Systems (ELLIS).

   

Title: The ML data center is dead: What comes next?

Speaker: Nic Lane, the University of Cambridge, UK

Biography
Nic Lane is an Associate Professor at the University of Cambridge, where he leads the Machine Learning Systems lab (https://mlsys.cst.cam.ac.uk). Alongside his academic role, Nic is also a Director at Samsung AI in Cambridge, where his teams focus on on-device and distributed forms of machine learning. In 2018 and 2019, he and his co-authors received the ACM SenSys Test-of-Time Award and the ACM SIGMOBILE Test-of-Time Award for devising learning algorithms now used on devices such as smartphones. In 2020, Nic won the ACM SIGMOBILE Rockstar Award for his contributions to "the understanding of how resource-constrained mobile devices can robustly understand, reason and react to complex user behaviors and environments through new paradigms in learning algorithms and system design." Additional details are available from: http://niclane.org/.

   

Title: Pandemic Response with Crowdsourced Data: The Participatory Privacy Preserving Approach

Speaker: Ramesh Raskar, Massachusetts Institute of Technology (MIT), USA

Biography
Ramesh Raskar is an Associate Professor at the MIT Media Lab, where he directs the Camera Culture research group. His focus is on AI and imaging for health and sustainability, with projects spanning the physical (e.g., sensors, health-tech), digital (e.g., automated and privacy-aware machine learning), and global (e.g., geomaps, autonomous mobility) domains. He received the Lemelson Award (2016), the ACM SIGGRAPH Achievement Award (2017), the DARPA Young Faculty Award (2009), an Alfred P. Sloan Research Fellowship (2009), a Marr Prize honorable mention (2009), the TR100 Award from MIT Technology Review (2004), the Global Indus Technovator Award (2003), the LAUNCH Health Innovation Award presented by NASA, USAID, the US State Dept and NIKE (2010), and first place in the Vodafone Wireless Innovation Project Award (2011). He has worked on special research projects at Google [X] and Facebook and has co-founded or advised several companies.

   

Title: Federated Hyperparameter Tuning: Challenges, Baselines, and Connections to Weight-Sharing

Speaker: Ameet Talwalkar, Carnegie Mellon University (CMU), USA

Biography
Ameet Talwalkar is an assistant professor in the Machine Learning Department at CMU, and also co-founder and Chief Scientist at Determined AI. His interests are in the field of statistical machine learning. His current work is motivated by the goal of democratizing machine learning, with a focus on topics related to automation, fairness, interpretability, and federated learning. He led the initial development of the MLlib project in Apache Spark, is a co-author of the textbook 'Foundations of Machine Learning' (MIT Press), and created an award-winning edX MOOC on distributed machine learning. He also helped to create the MLSys conference, serving as the inaugural Program Chair in 2018, General Chair in 2019, and currently as President of the MLSys Board.

   

Title: Optimization Aspects of Personalized Federated Learning

Speaker: Filip Hanzely, Toyota Technological Institute at Chicago (TTIC), USA

Biography
Filip Hanzely is a Research Assistant Professor at the Toyota Technological Institute at Chicago (TTIC). His research focuses mostly on various aspects of stochastic optimization for machine learning, and on designing provably efficient algorithms for solving big data problems. Filip is also interested in related topics such as federated learning, distributed optimization, optimization for deep learning, and higher-order methods. Filip received his PhD degree in Applied Mathematics and Computational Science from KAUST in 2020. Prior to that, he received an MSc degree in Mathematics from the University of Edinburgh. He was a recipient of an EPSRC CASE Award (industrial PhD scholarship funded by EPSRC and Amazon) and Dean's Award at KAUST (awarded to a few best incoming PhD students).

   

Title: Dreaming of Federated Robustness: Inherent Barriers and Unavoidable Tradeoffs

Speaker: Dimitris Papailiopoulos, The University of Wisconsin–Madison (UW–Madison), USA

Biography
Dimitris is an Assistant Professor of ECE at the UW-Madison. His research interests span machine learning, information theory, and optimization, with a current focus on efficient large-scale learning algorithms and coding-theoretic techniques for robust machine learning. Between 2014 and 2016, Dimitris was a postdoc at UC Berkeley and a member of the AMPLab. He earned his Ph.D. in ECE from UT Austin in 2014, under the supervision of Alex Dimakis. In 2007 he received his ECE Diploma and in 2009 his M.Sc. degree from the Technical University of Crete, in Greece. Dimitris is a recipient of the NSF CAREER Award (2019), two Sony Faculty Innovation Awards (2019 and 2020), a joint IEEE ComSoc/ITSoc Best Paper Award (2020), an IEEE Signal Processing Society, Young Author Best Paper Award (2015), the Vilas Associate Award (2021), the Emil Steiger Distinguished Teaching Award (2021), and the Benjamin Smith Reynolds Award for Excellence in Teaching (2019). In 2018, he co-founded MLSys, a new conference that targets research at the intersection of machine learning and systems. In 2018 and 2020 he was program co-chair for MLSys, and in 2019 he co-chaired the 3rd Midwest Machine Learning Symposium.

   

Title: Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning

Speaker: Salman Avestimehr, University of Southern California (USC), USA

Biography
Salman Avestimehr is a Dean's Professor, the inaugural director of the USC-Amazon Center on Secure and Trusted Machine Learning (Trusted AI), and the director of the Information Theory and Machine Learning (vITAL) research lab at the Electrical and Computer Engineering Department of the University of Southern California. He is also an Amazon Scholar at Alexa AI. He received his Ph.D. in 2008 and his M.S. degree in 2005, both in Electrical Engineering and Computer Science from the University of California, Berkeley. Prior to that, he obtained his B.S. in Electrical Engineering from Sharif University of Technology in 2003. His research interests include information theory and coding theory, large-scale distributed computing and machine learning, secure and private computing, and blockchain systems.

Dr. Avestimehr has received a number of awards for his research, including the James L. Massey Research & Teaching Award from the IEEE Information Theory Society, an Information Theory Society and Communication Society Joint Paper Award, a Presidential Early Career Award for Scientists and Engineers (PECASE) from the White House (President Obama), a Young Investigator Program (YIP) Award from the U.S. Air Force Office of Scientific Research, a National Science Foundation CAREER Award, the David J. Sakrison Memorial Prize, and several best paper awards at conferences. He has been an Associate Editor for the IEEE Transactions on Information Theory and a General Co-Chair of the 2020 International Symposium on Information Theory (ISIT). He is a Fellow of the IEEE.


Awards


Accepted Papers (Oral Presentation)

  1. Zachary Charles, Zachary Garrett, Zhouyuan Huo, Sergei Shmulyian and Virginia Smith. On Large-Cohort Training for Federated Learning
  2. Chuhan Wu, Fangzhao Wu, Yang Cao, Lingjuan Lyu, Yongfeng Huang and Xing Xie. FedGNN: Federated Graph Neural Network for Privacy-Preserving Recommendation
  3. Xiyang Liu, Weihao Kong, Sham Kakade and Sewoong Oh. Robust and Differentially Private Mean Estimation
  4. Xinyi Xu and Lingjuan Lyu. A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning
  5. Charlie Hou, Kiran Thekumparampil, Giulia Fanti and Sewoong Oh. Multistage stepsize schedule in Federated Learning: Bridging Theory and Practice
  6. Dmitry Kovalev, Elnur Gasanov, Peter Richtarik and Alexander Gasnikov. Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization over Time-Varying Networks
  7. Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni and Richard Vidal. Federated Multi-Task Learning under a Mixture of Distributions
  8. Peter Richtarik, Igor Sokolov and Ilyas Fatkhullin. EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback
  9. Felix Grimberg, Mary-Anne Hartley, Sai Praneeth Karimireddy and Martin Jaggi. Optimal Model Averaging: Towards Personalized Collaborative Learning
  10. John Nguyen, Kshitiz Malik, Hongyuan Zhan, Ashkan Yousefpour, Michael Rabbat, Mani Malek Esmaeili and Dzmitry Huba. Federated Learning with Buffered Asynchronous Aggregation

Accepted Papers (Poster Presentation)

  1. Yatin Dandi, Luis Barba and Martin Jaggi. Implicit Gradient Alignment in Distributed and Federated Learning
  2. Edvin Listo Zec, Noa Onoszko, Gustav Karlsson and Olof Mogren. Decentralized federated learning of deep neural networks on non-iid data
  3. Yen-Hsiu Chou, Shenda Hong, Chenxi Sun, Derun Cai, Moxian Song and Hongyan Li. GRP-FED: Addressing Client Imbalance in Federated Learning via Global-Regularized Personalization
  4. Dong-Jun Han, Hasnain Irshad Bhatti, Jungmoon Lee and Jaekyun Moon. Accelerating Federated Learning with Split Learning on Locally Generated Losses
  5. Jungwuk Park, Dong-Jun Han, Minseok Choi and Jaekyun Moon. Handling Both Stragglers and Adversaries for Robust Federated Learning
  6. Amit Portnoy, Yoav Tirosh and Danny Hendler. Towards Federated Learning With Byzantine-Robust Client Weighting
  7. Chaoyang He, Emir Ceyani, Keshav Balasubramanian, Murali Annavaram and Salman Avestimehr. SpreadGNN: Serverless Multi-task Federated Learning for Graph Neural Networks
  8. Jiankai Sun, Yuanshun Yao, Weihao Gao, Junyuan Xie and Chong Wang. Defending against Reconstruction Attack in Vertical Federated Learning
  9. Xiaolin Chen, Shuai Zhou, Kai Yang, Hao Fan, Zejin Feng, Zhong Chen, Yongji Wang and Hu Wang. Fed-EINI: An Efficient and Interpretable Inference Framework for Decision Tree Ensembles in Federated Learning
  10. Parikshit Ram and Kaushik Sinha. FlyNN: Fruit-fly Inspired Federated Nearest Neighbor Classification
  11. Samuel Horvath, Stefanos Laskaridis, Mario Almeida, Ilias Leontiadis, Stylianos Venieris and Nicholas Lane. FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
  12. Laurent Condat and Peter Richtárik. MURANA: A Generic Framework for Stochastic Variance-Reduced Optimization
  13. Bokun Wang, Mher Safaryan and Peter Richtarik. Smoothness-Aware Quantization Techniques
  14. Mher Safaryan, Rustem Islamov, Xun Qian and Peter Richtarik. FedNL: Making Newton-Type Methods Applicable to Federated Learning
  15. Jiacheng Liang, Wensi Jiang and Songze Li. OmniLytics: A Blockchain-based Secure Data Market for Decentralized Machine Learning
  16. Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari and Ramtin Pedarsani. Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity
  17. Grigory Malinovsky and Peter Richtárik. Federated Random Reshuffling with Compression and Variance Reduction
  18. Hyunsin Park, Hossein Hosseini and Sungrack Yun. Federated Learning with Metric Loss
  19. Elnur Gasanov, Ahmed Khaled, Samuel Horvath and Peter Richtarik. FedMix: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning
  20. Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat and Pramod K. Varshney. Achieving Optimal Sample and Communication Complexities for Non-IID Federated Learning
  21. Yongxin Guo, Tao Lin and Xiaoying Tang. A New Analysis Framework for Federated Learning on Time-Evolving Heterogeneous Data
  22. Ke Zhang, Carl Yang, Xiaoxiao Li, Lichao Sun and Siu Ming Yiu. Subgraph Federated Learning with Missing Neighbor Generation
  23. Nirupam Gupta, Thinh Doan and Nitin Vaidya. Byzantine Fault-Tolerance of Local Gradient-Descent in Federated Model under 2f-Redundancy
  24. Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu and Jinfeng Yi. Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy
  25. Han Xie, Jing Ma, Li Xiong and Carl Yang. Federated Graph Classification over Non-IID Graphs
  26. Jianyu Wang, Zheng Xu, Zachary Garrett, Zachary Charles, Luyang Liu and Gauri Joshi. Local Adaptivity in Federated Learning: Convergence and Consistency
  27. Siddharth Divi, Yi-Shan Lin, Habiba Farrukh and Z. Berkay Celik. New Metrics to Evaluate the Performance and Fairness of Personalized Federated Learning
  28. Ravikumar Balakrishnan, Tian Li, Tianyi Zhou, Nageen Himayat, Virginia Smith and Jeff Bilmes. Diverse Client Selection for Federated Learning: Submodularity and Convergence Analysis
  29. Jayanth Regatti, Hao Chen and Abhishek Gupta. BYGARS: Byzantine SGD with Arbitrary Number of Attackers Using Reputation Scores
  30. Dmitrii Avdiukhin, Nikita Ivkin, Sebastian U. Stich and Vladimir Braverman. Bi-directional Adaptive Communication for Heterogenous Distributed Learning
  31. Hankyul Baek, Won Joon Yun, Jihong Park, Soyi Jung, Joongheon Kim, Mingyue Ji and Mehdi Bennis. Communication and Energy Efficient Slimmable Federated Learning via Superposition Coding and Successive Decoding
  32. Pengwei Xing, Songtao Lu, Lingfei Wu and Han Yu. BiG-Fed: Bilevel Optimization Enhanced Graph-Aided Federated Learning
  33. Jinwoo Jeon, Jaechang Kim, Kangwook Lee, Sewoong Oh and Jungseul Ok. Gradient Inversion with Generative Image Prior

Call for Papers

Training machine learning models in a centralized fashion often faces significant challenges due to regulatory and privacy concerns in real-world use cases. These challenges include the distributed nature of training data, the computational resources needed to create and maintain a central data repository, and regulatory guidelines (e.g., GDPR, HIPAA) that restrict the sharing of sensitive data. Federated learning (FL) is a new paradigm in machine learning that can mitigate these challenges by training a global model over distributed data, without any need for data sharing. As machine learning is increasingly applied to draw insight from real-world, distributed, and sensitive data, it is important for the scientific community to become familiar with, and adopt, this relevant and timely topic.
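
To make the paradigm concrete, here is a minimal, self-contained sketch of the federated averaging idea (in the spirit of FedAvg): each client trains on its own private data, and only model parameters, never raw data, are sent to the server for aggregation. All names and the toy one-parameter least-squares model below are illustrative, not taken from any particular FL framework.

```python
def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a client's private data
    for a 1-D least-squares model: minimize (w*x - y)^2."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, client_datasets):
    """Each client updates locally; the server averages the results,
    weighted by client dataset size (as in federated averaging)."""
    updates = [local_update(global_w, d) for d in client_datasets]
    sizes = [len(d) for d in client_datasets]
    total = sum(sizes)
    return sum(w * n for w, n in zip(updates, sizes)) / total

# Two clients whose (x, y) data follows y = 2x; raw data never leaves a client.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(200):
    w = federated_round(w, clients)
print(round(w, 3))  # converges toward the shared optimum w = 2.0
```

Real systems add the ingredients this sketch omits, and which the workshop topics below address: partial client participation, communication compression, secure aggregation, and privacy-preserving mechanisms.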

Despite the advantages of FL, and its successful application in certain industry settings, the field is still in its infancy: new challenges arise from limited visibility into the training data, a potential lack of trust among participants training a single model, potential privacy inference attacks, and, in some cases, limited or unreliable connectivity.

The goal of this workshop is to bring together researchers and practitioners interested in FL. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world to understand the topic, identify technical challenges, and discuss potential solutions. We expect this to advance FL as a field and broaden its impact, building on its growing popularity in the ICML community in recent years.

Topics of interest include, but are not limited to, the following:
  • Adversarial attacks on FL
  • Applications of FL
  • Blockchain for FL
  • Beyond first-order methods in FL
  • Beyond local methods in FL
  • Communication compression in FL
  • Data heterogeneity in FL
  • Decentralized FL
  • Device heterogeneity in FL
  • Fairness in FL
  • Hardware for on-device FL
  • Variants of FL like split learning
  • Local methods in FL
  • Nonconvex FL
  • Operational challenges in FL
  • Optimization advances in FL
  • Partial participation in FL
  • Personalization in FL
  • Privacy concerns in FL
  • Privacy-preserving methods for FL
  • Resource-efficient FL
  • Systems and infrastructure for FL
  • Theoretical contributions to FL
  • Uncertainty in FL
  • Vertical FL

The workshop will feature invited talks on a diverse set of topics related to FL. In addition, we plan to hold an industrial panel (over Zoom) and industrial booths (on GatherTown), where researchers from industry will discuss challenges and solutions from an industrial perspective.

More information on previous workshops can be found here.


Proceedings and Dual Submission Policy

Our workshop has no formal proceedings. Accepted papers will be posted on the workshop webpage. We welcome submissions of unpublished papers, including those submitted to or accepted at other venues, provided the other venue permits this. We will not, however, accept papers that have already been published, since the goal of the workshop is to share recent results and discuss open problems.


Submission Instructions

Submissions are recommended (but not required) to be no more than 6 pages long, excluding references, and should follow the ICML-21 template. Reviewing is double-blind: author identities must not be revealed to the reviewers. An optional appendix of arbitrary length may be included at the end of the paper (after the references).

Easychair submission link: https://easychair.org/conferences/?conf=flicml21

If you have any enquiries, please email us at: flicml21@easychair.org


Organizing Committee


Program Committee


Sponsored by

 

Organized by