International Workshop on Federated Learning for User Privacy and Data Confidentiality
in Conjunction with ICML 2020 (FL-ICML'20)

Workshop Date: July 18, 2020
How to join the workshop: Please go to https://icml.cc/virtual/2020/workshop/5730 and click on "Join Zoom" at the top of the page (requires ICML registration).

Workshop Program (all times in Eastern Daylight Time, EDT)

Time Activity
08:45 – 09:00 Presenters to connect and test the system
09:00 – 09:10 Opening Address
09:10 – 09:35 Keynote Session 1: Balancing Efficiency and Security in Federated Learning, by Qiang Yang (WeBank)
09:35 – 10:25 Technical Talks Session 1 (4 talks, 12 mins each)
  1. Best Student Paper Award: Wonyong Jeong, Jaehong Yoon, Eunho Yang and Sung Ju Hwang. Federated Semi-Supervised Learning with Inter-Client Consistency
  2. Ishika Singh, Haoyi Zhou, Kunlin Yang, Meng Ding, Bill Lin and Pengtao Xie. Differentially-private Federated Neural Architecture Search
  3. Laura Rieger, Rasmus Malik Thaarup Høegh and Lars Kai Hansen. Client Adaptation improves Federated Learning with Simulated Non-IID Clients
  4. Hanlin Lu, Changchang Liu, Ting He, Shiqiang Wang and Kevin S. Chan. Sharing Models or Coresets: A Study based on Membership Inference Attack
10:25 – 10:40 Break (Presenters should connect and test the system)
10:40 – 11:05 Keynote Session 2: Federated Learning in Enterprise Settings, by Rania Khalaf (IBM Research)
11:05 – 11:35 Lightning Talks Session 1 (9 talks, 3 mins each)
  1. Zhaohui Yang, Mingzhe Chen, Walid Saad, Choong Seon Hong, Mohammad Shikh-Bahaei, H. Vincent Poor and Shuguang Cui. Delay Minimization for Federated Learning Over Wireless Communication Networks
  2. Angel Navia Vázquez, Manuel-Alberto Vázquez-López and Jesús Cid-Sueiro. Double Confidential Federated Machine Learning Logistic Regression for Industrial Data Platforms
  3. Kun Li, Fanglan Zheng, Jiang Tian and Xiaojia Xiang. A Federated F-score Based Ensemble Model for Automatic Rule Extraction
  4. Hajime Ono and Tsubasa Takahashi. Locally Private Distributed Reinforcement Learning
  5. Yang Liu, Zhihao Yi and Tianjian Chen. Defending backdoor attacks in feature-partitioned collaborative learning
  6. Tianyi Chen, Xiao Jin, Yuejiao Sun and Wotao Yin. VAFL: a Method of Vertical Asynchronous Federated Learning
  7. Shahab Asoodeh and Flavio Calmon. Differentially Private Federated Learning: An Information-Theoretic Perspective
  8. Mathieu Andreux, Andre Manoel, Romuald Menuet, Charlie Saillard and Chloé Simpson. Federated Survival Analysis with Discrete-Time Cox Models
  9. Myungjae Shin, Chihoon Hwang, Joongheon Kim, Jihong Park, Mehdi Bennis and Seong-Lyun Kim. XOR Mixup: Privacy-Preserving Data Augmentation for One-Shot Federated Learning
11:35 – 12:05 Poster Session 1 (lightning talk presenters)
12:05 – 13:20 Lunch (Presenters to re-connect at 13:15 and test the system)
13:20 – 13:45 Keynote Session 3: Federated Learning Applications in Alexa, by Shiv Vitaladevuni (Amazon Alexa)
13:45 – 15:10 Technical Talks Session 2 (7 talks, 12 mins each)
  1. Jinhyun So, Basak Guler and A. Salman Avestimehr. Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning
  2. Chong Liu, Yuqing Zhu, Kamalika Chaudhuri and Yu-Xiang Wang. Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning
  3. Best Paper Award: Honglin Yuan and Tengyu Ma. Federated Accelerated Stochastic Gradient Descent
  4. Krishna Pillutla, Sham Kakade and Zaid Harchaoui. Robust Aggregation for Federated Learning
  5. Leighton Pate Barnes, Huseyin A. Inan, Berivan Isik and Ayfer Ozgur. rTop-k: A Statistical Estimation Approach to Distributed SGD
  6. Ashkan Yousefpour, Brian Nguyen, Siddartha Devic, Guanhua Wang, Abdul Rahman Kreidieh, Hans Lobel, Alexandre Bayen and Jason Jue. ResiliNet: Failure-Resilient Inference in Distributed Neural Networks
  7. Swanand Kadhe, Nived Rajaraman, O. Ozan Koyluoglu and Kannan Ramchandran. FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning
15:10 – 15:25 Break (Presenters should connect and test the system)
15:25 – 15:50 Keynote Session 4: The Shuffle Model and Federated Learning, by Ilya Mironov (Facebook)
15:50 – 16:15 Lightning Talks Session 2 (8 talks, 3 mins each)
  1. Avishek Ghosh, Jichan Chung, Dong Yin and Kannan Ramchandran. An Efficient Framework for Clustered Federated Learning
  2. Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, Amir Salman Avestimehr and Nageen Himayat. Coded Computing for Federated Learning at the Edge
  3. Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani and Ali Jadbabaie. Robust Federated Learning: The Case of Affine Distribution Shifts
  4. Mikhail Khodak, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith and Ameet Talwalkar. Weight-Sharing for Hyperparameter Optimization in Federated Learning
  5. Vaikkunth Mugunthan, Ravi Rahman and Lalana Kagal. BlockFLow: An Accountable and Privacy-Preserving Solution for Federated Learning
  6. Vaikkunth Mugunthan, Anton Peraire-Bueno and Lalana Kagal. PrivacyFL: A simulator for privacy-preserving and secure federated learning
  7. Hossein Hosseini, Sungrack Yun, Hyunsin Park, Christos Louizos, Joseph Soriaga and Max Welling. Federated Learning of User Authentication Models
  8. Xinwei Zhang, Mingyi Hong, Sairaj Dhople, Wotao Yin and Yang Liu. FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data
16:15 – 16:45 Poster Session 2 (lightning talk presenters)
16:45 – 17:00 Break
17:00 – 17:25 Keynote Session 5: Advances and Open Problems in Federated Learning, by Brendan McMahan (Google)
17:25 – 17:35 Closing Remarks

Keynote Abstracts

Keynote Session 1: Balancing Efficiency and Security in Federated Learning, by Qiang Yang (WeBank)

Abstract: Federated learning systems need to balance the efficiency and security of machine learning algorithms while maintaining model accuracy. In this talk we discuss this trade-off in two settings. One is when two collaborating organisations wish to transfer knowledge from one to the other via a federated learning framework. We present a federated transfer learning algorithm that improves both security and performance while preserving privacy. The other is when one exploits differential privacy in a federated learning framework to ensure efficiency, which may cause security degradation. To solve this problem, we employ a dual-headed network architecture that guarantees training data privacy by applying secret gradient perturbations to the original gradients, while maintaining high performance of the global shared model. We find that the combination of secret-public networks provides a preferable alternative to DP-based mechanisms in federated learning applications.
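
For context: the dual-headed secret-public architecture is specific to this talk, but the DP-based mechanism it is compared against is a standard baseline, which clips each client's update and adds Gaussian noise before aggregation. The sketch below illustrates only that baseline; the function name and parameter choices are ours, not the speaker's.

    import numpy as np

    def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
        """Average client updates with per-client clipping plus Gaussian noise,
        the standard DP gradient-perturbation baseline for federated learning."""
        rng = rng if rng is not None else np.random.default_rng()
        # Clip each update so no single client contributes more than
        # clip_norm in L2 norm to the average.
        clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
                   for u in client_updates]
        mean = np.mean(clipped, axis=0)
        # Noise scaled to the clipping bound masks any individual contribution.
        sigma = noise_multiplier * clip_norm / len(client_updates)
        return mean + rng.normal(0.0, sigma, size=mean.shape)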

Biography: Qiang Yang is the Chief Artificial Intelligence Officer of WeBank and a Chair Professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology. He is the Conference Chair of AAAI-21, President of the Hong Kong Society of Artificial Intelligence and Robotics (HKSAIR), and a former President of IJCAI (2017-2019). He is a fellow of AAAI, ACM, IEEE and AAAS. His research interests include transfer learning and federated learning. He is the founding Editor-in-Chief of two journals: IEEE Transactions on Big Data and ACM Transactions on Intelligent Systems and Technology.

Keynote Session 2: Federated Learning in Enterprise Settings, by Rania Khalaf (IBM Research)

Abstract: Federated learning in consumer scenarios has garnered a lot of interest. However, its application in large enterprises brings additional needs and guarantees to bear. In this talk, I will highlight key drivers for federated learning in enterprises, illustrate representative use cases, and summarize the requirements for a platform that can support it. I will then present the newly released IBM Federated Learning framework (git, white paper) and show how it can be used and extended by researchers. Finally, I will highlight recent advances in federated learning and privacy from IBM Research.

Biography: Rania Khalaf is the Director of AI Platforms and Runtimes at IBM Research where she leads teams pushing the envelope in AI platforms to make creating AI models and applications easy, fast, and safe for data scientists and developers. Her multi-disciplinary teams tackle key problems at the intersection of core AI, distributed systems, human computer interaction and cloud computing. Prior to this role, Rania was Director of Cloud Platform, Programming Models and Runtimes. Rania serves as a Judge for the MIT Solve AI for Humanity Prize, on the Leadership Challenge Group for MIT Solve's Learning for Girls and Women Challenge and on the Advisory Board of the Hariri Institute for Computing at Boston University. She has received several Outstanding Technical Innovation awards for major impact to the field of computer science and was a finalist for the 2019 MassTLC CTO of the Year award.

Keynote Session 3: Federated Learning Applications in Alexa, by Shiv Vitaladevuni (Amazon Alexa)

Abstract: Alexa is a virtual assistant AI technology launched by Amazon in 2014. One of its key enabling technologies is the wakeword, which allows users to interact with Alexa devices hands-free via voice. We present some of the unique ML challenges posed by wakeword, and how Federated Learning can be used to address them. We also present some considerations when bringing Federated Learning to consumer-grade, embedded applications.

Biography: Shiv Vitaladevuni is a Senior Manager in Machine Learning at Amazon Alexa, focusing on R&D for the Alexa family of devices such as Echo, Dot, FireTV, etc. At Amazon, Shiv leads a team of scientists and engineers inventing embedded speech and ML products used by millions of Alexa customers across all Alexa devices around the globe. His team conducts research in areas such as federated ML, large-scale semi-supervised and unsupervised learning, user diversity and fairness in ML, speaker adaptation and personalization, and memory-efficient deep learning models. Prior to Amazon, Shiv worked on video and text document analysis at Raytheon BBN Technologies, and on biomedical image analysis at the Howard Hughes Medical Institute.

Keynote Session 4: The Shuffle Model and Federated Learning, by Ilya Mironov (Facebook)

Abstract: The shuffle model of computation, also known as the Encode-Shuffle-Analyze (ESA) architecture, is a recently introduced and powerful approach to combining anonymization channels with differentially private distributed computations. We present general results about amplification-by-shuffling unlocked by ESA, as well as more specialized theoretical and empirical findings. We discuss the challenges of instantiating the shuffle model in practice.
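
To make the ESA pipeline concrete, here is a toy, self-contained illustration (ours, not the speaker's): each client encodes one private bit with local randomized response, a trusted shuffler uniformly permutes the reports to sever the link between reports and sender identities (the step that enables amplification-by-shuffling), and the analyzer debiases the aggregate.

    import math
    import random

    EPS = 1.0  # local differential privacy parameter epsilon
    P_KEEP = math.exp(EPS) / (1.0 + math.exp(EPS))

    def encode(bit):
        """E: local randomized response; keep the true bit with probability P_KEEP."""
        return bit if random.random() < P_KEEP else 1 - bit

    def shuffle(reports):
        """S: a trusted shuffler applies a uniformly random permutation,
        removing any link between a report and the client who sent it."""
        reports = list(reports)
        random.shuffle(reports)
        return reports

    def analyze(reports):
        """A: debias the randomized-response counts to estimate the true mean."""
        observed = sum(reports) / len(reports)
        return (observed + P_KEEP - 1.0) / (2.0 * P_KEEP - 1.0)

    # 10,000 clients, each holding one private bit; the true mean is 0.3.
    bits = [int(random.random() < 0.3) for _ in range(10_000)]
    print(round(analyze(shuffle(encode(b) for b in bits)), 3))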

Biography: Ilya Mironov obtained his Ph.D. in cryptography from Stanford in 2003. From 2003 to 2014 he was a member of Microsoft Research's Silicon Valley lab, where he contributed to early works on differential privacy. From 2015 to 2019 he worked at Google Brain. Since 2019 he has been part of Facebook AI, working on privacy-preserving machine learning.

Keynote Session 5: Advances and Open Problems in Federated Learning, by Brendan McMahan (Google)

Abstract: Motivated by the explosive growth in federated learning research, 22 Google researchers and 36 academics from 24 institutions collaborated on a paper titled Advances and Open Problems in Federated Learning. In this talk, I will survey some of the main themes from the paper, particularly the defining characteristics and challenges of different FL settings. I will then briefly discuss some of the ways FL increasingly powers Google products, and also highlight several exciting FL research results from Google.

Biography: Brendan McMahan is a research scientist at Google, where he leads efforts on decentralized and privacy-preserving machine learning. His team pioneered the concept of federated learning, and continues to push the boundaries of what is possible when working with decentralized data using privacy-preserving techniques. Previously, he has worked in the fields of online learning, large-scale convex optimization, and reinforcement learning. Brendan received his Ph.D. in computer science from Carnegie Mellon University.

Awards

  • Best Paper Award: Honglin Yuan and Tengyu Ma. Federated Accelerated Stochastic Gradient Descent
  • Best Student Paper Award: Wonyong Jeong, Jaehong Yoon, Eunho Yang and Sung Ju Hwang. Federated Semi-Supervised Learning with Inter-Client Consistency

Accepted Full Papers

  1. Laura Rieger, Rasmus Malik Thaarup Høegh and Lars Kai Hansen. Client Adaptation improves Federated Learning with Simulated Non-IID Clients
  2. Jinhyun So, Basak Guler and A. Salman Avestimehr. Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning
  3. Wonyong Jeong, Jaehong Yoon, Eunho Yang and Sung Ju Hwang. Federated Semi-Supervised Learning with Inter-Client Consistency
  4. Ishika Singh, Haoyi Zhou, Kunlin Yang, Meng Ding, Bill Lin and Pengtao Xie. Differentially-private Federated Neural Architecture Search
  5. Chong Liu, Yuqing Zhu, Kamalika Chaudhuri and Yu-Xiang Wang. Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning
  6. Honglin Yuan and Tengyu Ma. Federated Accelerated Stochastic Gradient Descent
  7. Krishna Pillutla, Sham Kakade and Zaid Harchaoui. Robust Aggregation for Federated Learning
  8. Leighton Pate Barnes, Huseyin A. Inan, Berivan Isik and Ayfer Ozgur. rTop-k: A Statistical Estimation Approach to Distributed SGD
  9. Ashkan Yousefpour, Brian Nguyen, Siddartha Devic, Guanhua Wang, Abdul Rahman Kreidieh, Hans Lobel, Alexandre Bayen and Jason Jue. ResiliNet: Failure-Resilient Inference in Distributed Neural Networks
  10. Hanlin Lu, Changchang Liu, Ting He, Shiqiang Wang and Kevin S. Chan. Sharing Models or Coresets: A Study based on Membership Inference Attack
  11. Swanand Kadhe, Nived Rajaraman, O. Ozan Koyluoglu and Kannan Ramchandran. FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning

Accepted Short Papers

  1. Zhaohui Yang, Mingzhe Chen, Walid Saad, Choong Seon Hong, Mohammad Shikh-Bahaei, H. Vincent Poor and Shuguang Cui. Delay Minimization for Federated Learning Over Wireless Communication Networks
  2. Angel Navia Vázquez, Manuel-Alberto Vázquez-López and Jesús Cid-Sueiro. Double Confidential Federated Machine Learning Logistic Regression for Industrial Data Platforms
  3. Kun Li, Fanglan Zheng, Jiang Tian and Xiaojia Xiang. A Federated F-score Based Ensemble Model for Automatic Rule Extraction
  4. Hajime Ono and Tsubasa Takahashi. Locally Private Distributed Reinforcement Learning
  5. Avishek Ghosh, Jichan Chung, Dong Yin and Kannan Ramchandran. An Efficient Framework for Clustered Federated Learning
  6. Yang Liu, Zhihao Yi and Tianjian Chen. Defending backdoor attacks in feature-partitioned collaborative learning
  7. Tianyi Chen, Xiao Jin, Yuejiao Sun and Wotao Yin. VAFL: a Method of Vertical Asynchronous Federated Learning
  8. Shahab Asoodeh and Flavio Calmon. Differentially Private Federated Learning: An Information-Theoretic Perspective
  9. Mathieu Andreux, Andre Manoel, Romuald Menuet, Charlie Saillard and Chloé Simpson. Federated Survival Analysis with Discrete-Time Cox Models
  10. Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, Amir Salman Avestimehr and Nageen Himayat. Coded Computing for Federated Learning at the Edge
  11. Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani and Ali Jadbabaie. Robust Federated Learning: The Case of Affine Distribution Shifts
  12. Mikhail Khodak, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith and Ameet Talwalkar. Weight-Sharing for Hyperparameter Optimization in Federated Learning
  13. Vaikkunth Mugunthan, Ravi Rahman and Lalana Kagal. BlockFLow: An Accountable and Privacy-Preserving Solution for Federated Learning
  14. Vaikkunth Mugunthan, Anton Peraire-Bueno and Lalana Kagal. PrivacyFL: A simulator for privacy-preserving and secure federated learning
  15. Myungjae Shin, Chihoon Hwang, Joongheon Kim, Jihong Park, Mehdi Bennis and Seong-Lyun Kim. XOR Mixup: Privacy-Preserving Data Augmentation for One-Shot Federated Learning
  16. Hossein Hosseini, Sungrack Yun, Hyunsin Park, Christos Louizos, Joseph Soriaga and Max Welling. Federated Learning of User Authentication Models
  17. Xinwei Zhang, Mingyi Hong, Sairaj Dhople, Wotao Yin and Yang Liu. FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data

Call for Papers

Training machine learning models in a centralized fashion often faces significant regulatory and privacy challenges in real-world use cases. These challenges include training data that is distributed across many parties, the computational resources needed to create and maintain a central data repository, and regulatory guidelines (e.g., GDPR, HIPAA) that restrict the sharing of sensitive data. Federated learning (FL) is a new paradigm in machine learning that can mitigate these challenges by training a global model over distributed data, without the data ever being shared. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data makes familiarization with and adoption of FL a relevant and timely topic for the scientific community.
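
For readers new to the area, the canonical FL training loop is Federated Averaging (FedAvg): the server broadcasts a global model, each client trains it locally on data that never leaves the device, and the server averages the returned weights. The sketch below is a minimal illustration using logistic regression as the local model; the function names and hyperparameters are ours, for exposition only.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=1):
        """One client's local gradient steps on its private data (logistic regression)."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-(X @ w)))
            w -= lr * X.T @ (preds - y) / len(y)
        return w

    def federated_averaging(clients, rounds=10, dim=5):
        """Server loop: broadcast the global model, collect locally trained
        weights, and average them weighted by client dataset size."""
        global_w = np.zeros(dim)
        for _ in range(rounds):
            updates, sizes = [], []
            for X, y in clients:  # raw data never leaves the client
                updates.append(local_update(global_w, X, y))
                sizes.append(len(y))
            global_w = np.average(updates, axis=0, weights=sizes)
        return global_w

    # Two toy clients with differently distributed private data:
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)),
               (rng.normal(1.0, 1.0, size=(50, 5)), rng.integers(0, 2, 50))]
    print(federated_averaging(clients))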

Despite the advantages of federated learning, and its successful application in certain industry settings, the field is still in its infancy, facing new challenges imposed by limited visibility into the training data, potential lack of trust among the participants training a single model, potential privacy inference attacks, and, in some cases, limited or unreliable connectivity.

The goal of this workshop is to bring together researchers and practitioners interested in FL. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world to understand the topic, identify technical challenges, and discuss potential solutions, advancing FL research and its impact on the community. Topics of interest include, but are not limited to, the following:

  • Adversarial attacks on FL
  • Blockchain for FL
  • Fairness in FL
  • Hardware for on-device FL
  • Novel applications of FL
  • Operational challenges in FL
  • Personalization in FL
  • Privacy concerns in FL
  • Privacy-preserving methods for FL
  • Resource-efficient FL
  • System and infrastructure for FL
  • Theoretical contributions to FL
  • Uncertainty in FL

Proceedings and Dual Submission Policy

Our workshop has no formal proceedings. Accepted papers will be posted on the workshop webpage. We welcome submissions of unpublished papers, including those submitted or accepted to other venues, provided the other venue allows it. However, we will not accept papers that have already been published, because the goal of the workshop is to share recent results and discuss open problems.

Submission Instructions

Submissions must be at most 6 pages long, excluding references, and must follow the ICML-20 template. Reviewing is single-blind: author identities will be revealed to the reviewers. An optional appendix of arbitrary length is allowed and should be placed at the end of the paper (after the references).

EasyChair submission link: https://easychair.org/conferences/?conf=flicml20

Submission Due (Final): June 10, 2020 (23:59 UTC-12; extended from May 17, 2020)

Notification Due: June 30, 2020 (23:59 UTC-12; extended from May 31, 2020)

If you have any enquiries, please email us at: flworkshop.icml.2020@gmail.com

Organizing Committee

  • Nathalie Baracaldo (IBM Research Almaden, USA)
  • Olivia Choudhury (Amazon, USA)
  • Gauri Joshi (Carnegie Mellon University, USA)
  • Ramesh Raskar (MIT Media Lab, USA)
  • Shiqiang Wang (IBM T. J. Watson Research Center, USA)
  • Han Yu (Nanyang Technological University, Singapore)

Program Committee

  • M. Hadi Amini (Carnegie Mellon University, USA)
  • Mehdi Bennis (University of Oulu, Finland)
  • Supriyo Chakraborty (IBM Research, USA)
  • Mingzhe Chen (Princeton University, USA)
  • Boi Faltings (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
  • Mingyi Hong (University of Minnesota, USA)
  • Mingyue Ji (University of Utah, USA)
  • Peter Kairouz (Google AI, USA)
  • Jakub Konecný (Google, USA)
  • Kin K. Leung (Imperial College, UK)
  • Changchang Liu (IBM Research, USA)
  • Dianbo Liu (Massachusetts Institute of Technology, USA)
  • Ji Liu (Stony Brook University, USA)
  • Yang Liu (Webank, China)
  • Mehrdad Mahdavi (Pennsylvania State University, USA)
  • Kshitiz Malik (Facebook, USA)
  • Jihong Park (Deakin University, Australia)
  • Stacy Patterson (Rensselaer Polytechnic Institute, USA)
  • Peter Richtarik (King Abdullah University of Science and Technology, Saudi Arabia)
  • Shahin Shahrampour (Texas A&M University, USA)
  • Sebastian Urban Stich (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
  • Andrew Trask (DeepMind, USA)
  • Praneeth Vepakomma (Massachusetts Institute of Technology, USA)
  • Lingfei Wu (IBM Research AI, USA)
  • Poonam Yadav (University of York, UK)
  • Ashkan Yousefpour (Facebook AI, USA)
  • Mikhail Yurochkin (IBM Research, USA)
  • Hongyuan Zhan (Facebook, USA)
