International Workshop on Federated Learning for User Privacy and Data Confidentiality
in Conjunction with NeurIPS 2019 (FL-NeurIPS'19)


Submission Due: September 9, 2019 (23:59 UTC-12)
Notification Due: September 30, 2019 (23:59 UTC-12)

Workshop Date: December 13, 2019
Venue: West 118–120, Vancouver Convention Center, Vancouver, BC, Canada

Overview

Privacy and security have become major concerns in recent years, particularly as companies and organizations collect increasingly detailed information about their products and users. This information can enable machine learning that produces more helpful products. At the same time, however, it expands the potential for misuse and heightens public concern about how companies use data, particularly when private data about individuals is involved. Recent research shows that privacy and utility need not be at odds; both can be achieved through careful design and analysis. The need for such research is reinforced by the recent introduction of new legal constraints, led by the European Union’s General Data Protection Regulation (GDPR), which is already inspiring novel legislative approaches around the world, such as the Cyber-security Law of the People’s Republic of China and the California Consumer Privacy Act of 2018.

A specific approach with the potential to address a number of problems in this space is Federated Learning. Federated Learning applies when one wants to train a machine learning model on a dataset stored across multiple locations, without the ability to move the data to any central location. This seemingly mild restriction renders many state-of-the-art machine learning techniques impractical. One class of applications arises when data is generated by different users of a smartphone app and stays on the users’ phones for privacy reasons; for example, Google’s Gboard mobile keyboard already uses federated learning in multiple places. Another class involves data collected by different organizations that cannot be shared for confidentiality reasons. The same restrictions can also arise independently of privacy concerns, as with data streams collected by IoT devices or self-driving cars, which need to be processed on-device because transmitting and storing the sheer volume of data is infeasible.
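To make the setting concrete, below is a minimal sketch of Federated Averaging (FedAvg), the canonical algorithm in this space: each client updates a copy of the model on its own data, and only the resulting parameters, never the raw data, are sent back to the server and averaged. The model, hyperparameters, and helper names are illustrative assumptions for a toy least-squares problem, not tied to any particular framework.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    # One client's local training: full-batch gradient descent on a
    # least-squares objective. The raw data (X, y) never leave the
    # client; only the updated weights are returned.
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(clients, dim, rounds=50):
    # Server loop: broadcast the global weights, collect each client's
    # locally updated weights, and average them weighted by local data
    # size. (In practice only a sampled subset of clients participates
    # in each round.)
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(w_global, X, y) for X, y in clients]
        sizes = [len(y) for _, y in clients]
        w_global = np.average(updates, axis=0, weights=sizes)
    return w_global

# Toy usage: three "clients" whose local feature distributions differ,
# a crude stand-in for the non-IID partitions seen in practice.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
clients = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(loc=shift, scale=1.0, size=(50, 3))
    clients.append((X, X @ w_true + 0.01 * rng.standard_normal(50)))
print(federated_averaging(clients, dim=3))
```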

At this moment, the pace of research innovation in federated learning is hampered by the relative complexity of properly setting up even simple experiments that reflect the practical setting. This issue is exacerbated in academic settings, which typically lack access to actual user data. Recently, multiple open-source projects were created to address this high barrier to entry. For example, LEAF is a benchmarking framework that contains preprocessed datasets, each with a “natural” partitioning that aims to reflect the type of non-identically distributed data partitions encountered in practical federated environments. Federated AI Technology Enabler (FATE), led by WeBank, is an open-source technical framework that enables distributed and scalable secure computation protocols based on homomorphic encryption and multi-party computation, supporting federated learning architectures with various machine learning algorithms. WeBank is also leading a related IEEE standard proposal. TensorFlow Federated (TFF), led by Google, is an open-source framework on top of TensorFlow for flexibly expressing arbitrary computation on decentralized data. TFF enables researchers to experiment with federated learning on their own datasets, or those provided by LEAF. Google has also published a systems paper describing the design of its production system, which supports tens of millions of mobile phones. We expect these projects to encourage academic researchers and industry engineers to work more closely together in addressing these challenges and eventually to make a significant positive impact. We support reproducible research and will sponsor a prize for the best contribution that also provides code to reproduce its results.
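For a flavor of what experimentation with these frameworks looks like, here is a minimal TFF simulation loop on synthetic data, loosely following the public TensorFlow Federated tutorials of this period. The API names used here (`tff.learning.from_keras_model`, `build_federated_averaging_process`, `client_optimizer_fn`) changed across early TFF releases, so treat this as a hedged sketch rather than a definitive usage.

```python
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

# Toy federated data: one small tf.data.Dataset per simulated client.
def make_client_dataset(seed):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(32, 784)).astype(np.float32)
    y = rng.integers(0, 10, size=32).astype(np.int32)
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(8)

federated_train_data = [make_client_dataset(i) for i in range(3)]

def model_fn():
    # A tiny linear classifier wrapped for TFF; a real experiment would
    # substitute a dataset from LEAF or elsewhere and a suitable model.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(784,))])
    return tff.learning.from_keras_model(
        model,
        input_spec=federated_train_data[0].element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

trainer = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.1))
state = trainer.initialize()
for round_num in range(5):
    # Each round: broadcast the model, run local training on every
    # client's dataset, and average the updates on the server.
    state, metrics = trainer.next(state, federated_train_data)
    print(round_num, metrics)
```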

The workshop aims to bring together academic researchers and industry practitioners with common interests in this domain. For industry participants, we intend to create a forum for communicating which problems are practically relevant. For academic participants, we hope to make it easier to become productive in this area. Overall, the workshop should provide an opportunity to share the most recent and innovative work in the area and to discuss open problems and relevant approaches. Encouraged topics include general computation based on decentralized data (i.e., not only machine learning) and how such computation can be combined with other research fields, such as differential privacy, secure multi-party computation, computational efficiency, and coding theory; a small example of one such combination is sketched below. Contributions in theory as well as applications are welcome, and proposals for novel system design are particularly encouraged.
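As one concrete example of such a combination, a differentially private variant of the FedAvg sketch above would clip each client's model delta and add calibrated Gaussian noise to the average, in the spirit of DP-FedAvg. The function below is an illustrative sketch only; `clip_norm` and `noise_mult` are assumed placeholder values, and the privacy accounting needed for a real deployment is omitted.

```python
import numpy as np

def dp_average(updates, w_global, clip_norm=1.0, noise_mult=0.5, rng=None):
    # Clip each client's delta to bound its influence, average the
    # clipped deltas, then add Gaussian noise scaled to the clipping
    # bound divided by the number of clients (the sensitivity of the
    # averaged query). Privacy-budget accounting is omitted.
    rng = rng or np.random.default_rng()
    deltas = []
    for w in updates:
        delta = w - w_global
        norm = np.linalg.norm(delta)
        deltas.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(deltas, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(updates), size=avg.shape)
    return w_global + avg + noise
```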

Workshop Program

08:55 – 09:00 Opening Remarks by Lixin Fan
09:00 – 09:30 Invited Talk by Qiang Yang - Federated Learning in Recommendation Systems
09:30 – 10:00 Invited Talk by Ameet Talwalkar - Personalized Federated Learning
10:00 – 10:30 Tea Break & Poster Exhibition
10:30 – 11:00 Invited Talk by Max Welling - Ingredients for Bayesian, Privacy Preserving, Distributed Learning
11:00 – 11:30 Invited Talk by Dawn Song - Decentralized Federated Learning with Data Valuation
Session 1. Effectiveness and Robustness
11:30 – 11:40 Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations
11:40 – 11:50 Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation
11:50 – 12:00 Daliang Li and Junpu Wang. FedMD: Heterogeneous Federated Learning via Model Distillation
12:00 – 12:10 Yihan Jiang, Jakub Konečný, Keith Rush and Sreeram Kannan. Improving Federated Learning Personalization via Model Agnostic Meta Learning
12:10 – 13:30 Lunch & Poster Exhibition
13:30 – 14:00 Invited Talk by Daniel Ramage - Federated Learning at Google – Systems, Algorithms, and Applications in Practice
14:00 – 14:30 Invited Talk by Francoise Beaufays - Applied Federated Learning – What it Takes to Make it Happen, and Deployment in Gboard, the Google Keyboard
Session 2. Communication and Efficiency
14:30 – 14:40 Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
14:40 – 14:50 Sebastian Caldas, Jakub Konečný, H. Brendan McMahan and Ameet Talwalkar. Mitigating the Impact of Federated Learning on Client Resources
14:50 – 15:00 Yang Liu, Yan Kang, Xinwei Zhang, Liping Li and Mingyi Hong. A Communication Efficient Vertical Federated Learning Framework
15:00 – 15:10 Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. Better Communication Complexity for Local SGD
15:10 – 15:30 Tea Break & Poster Exhibition
15:30 – 16:00 Invited Talk by Raluca Ada Popa - Helen: Coopetitive Learning for Linear Models
16:30 – 17:00 Invited Talk by Yiqiang Chen - FOCUS: Federated Opportunistic Computing for Ubiquitous Systems
Session 3. Privacy and Fairness
17:00 – 17:10 Xin Yao, Tianchi Huang, Rui-Xiao Zhang, Ruiyu Li and Lifeng Sun. Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating
17:10 – 17:20 Zhicong Liang, Bao Wang, Stanley Osher and Yuan Yao. Exploring Private Federated Learning with Laplacian Smoothing
17:20 – 17:30 Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele and Mario Fritz. Gradient-Leaks: Understanding Deanonymization in Federated Learning
17:30 – 17:40 Aleksei Triastcyn and Boi Faltings. Federated Learning with Bayesian Differential Privacy
17:40 – 18:00 Panel Discussion (Mediated by: Qiang Yang)
  1. Boi Faltings, Professor, EPFL, AAAI Fellow
  2. Chunyan Miao, Professor, Chair, School of Computer Science and Engineering, Nanyang Technological University, Singapore
  3. Daniel Ramage, Research Scientist, Google Research
  4. Dawn Song, Professor, University of California, Berkeley
  5. Max Welling, Professor, University of Amsterdam; VP Technologies, Qualcomm
  6. Yiqiang Chen, Professor, Institute of Computing Technology, Chinese Academy of Sciences
18:00 – 18:05 Closing Remarks by Brendan McMahan
End of the Workshop
Proceed to Reception Venue:
Vancouver Marriott Pinnacle Downtown Hotel, Level 3 Pinnacle Ball Room
WeBank AI Night, Reception & Award Ceremony
19:00 – 19:10 Welcome Speech by WeBank CAIO Prof. Qiang Yang
19:10 – 19:30 WeBank and MILA Partnership Announcement
19:30 – 19:50 WeBank and Tencent Partnership Announcement
19:50 – 20:10 FL-NeurIPS 2019 Award Ceremony
20:10 – 21:00 Reception and Networking

Awards

Distinguished Paper Awards:
  • Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation
  • Daliang Li and Junpu Wang. FedMD: Heterogeneous Federated Learning via Model Distillation

Distinguished Student Paper Awards:

  • Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations
  • Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling

Accepted Papers

  1. Ahmed Khaled and Peter Richtárik. Gradient Descent with Compressed Iterates
  2. Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. Better Communication Complexity for Local SGD
  3. Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. First Analysis of Local GD on Heterogeneous Data
  4. Aleksei Triastcyn and Boi Faltings. Federated Learning with Bayesian Differential Privacy
  5. Boyue Li, Shicong Cen, Yuxin Chen and Yuejie Chi. Communication-Efficient Distributed Optimization in Networks with Gradient Tracking
  6. Daliang Li and Junpu Wang. FedMD: Heterogeneous Federated Learning via Model Distillation
  7. Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation
  8. Dashan Gao, Ce Ju, Xiguang Wei, Yang Liu, Tianjian Chen and Qiang Yang. HHHFL: Hierarchical Heterogeneous Horizontal Federated Learning for Electroencephalography
  9. Felix Sattler, Klaus-Robert Müller and Wojciech Samek. Clustered Federated Learning
  10. Florian Hartmann, Sunah Suh, Arkadiusz Komarzewski, Tim D. Smith and Ilana Segall. Federated Learning for Ranking Browser History Suggestions
  11. Jack Goetz, Kshitiz Malik, Duc Bui, Seungwhan Moon, Honglei Liu and Anuj Kumar. Active Federated Learning
  12. Jiahuan Luo, Xueyang Wu, Yun Luo, Anbu Huang, Yunfeng Huang, Yang Liu and Qiang Yang. Real-World Image Datasets for Federated Learning
  13. Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
  14. Kai Yang, Tao Fan, Tianjian Chen, Yuanming Shi and Qiang Yang. A Quasi-Newton Method Based Vertical Federated Learning Framework for Logistic Regression
  15. Kartikeya Bhardwaj, Wei Chen and Radu Marculescu. FedMAX: Activation Entropy Maximization Targeting Effective Non-IID Federated Learning
  16. Khaoula El Mekkaoui, Paul Blomstedt, Diego Mesquita and Samuel Kaski. Towards federated stochastic gradient Langevin dynamics
  17. Mingshu Cong, Zhongming Ou, Yanxin Zhang, Han Yu, Xi Weng, Jiabao Qu, Siu Ming Yiu, Yang Liu and Qiang Yang. Neural Network Optimization for a VCG-based Federated Learning Incentive Mechanism
  18. Neta Shoham, Tomer Avidor, Aviv Keren, Nadav Israel, Daniel Benditkis, Liron Mor-Yosef and Itai Zeitak. Overcoming Forgetting in Federated Learning on Non-IID Data
  19. Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations
  20. Sebastian Caldas, Jakub Konečný, H. Brendan McMahan and Ameet Talwalkar. Mitigating the Impact of Federated Learning on Client Resources
  21. Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith and Ameet Talwalkar. LEAF: A Benchmark for Federated Settings
  22. Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei Chen and Tie-Yan Liu. Convergence and Regularization of Distributed Stochastic Variance Reduced Methods
  23. Songtao Lu, Yawen Zhang, Yunlong Wang and Christina Mack. Learn Electronic Health Records by Fully Decentralized Federated Learning
  24. Suyi Li, Yong Cheng, Yang Liu and Wei Wang. Abnormal Client Behavior Detection in Federated Learning
  25. Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele and Mario Fritz. Gradient-Leaks: Understanding Deanonymization in Federated Learning
  26. Tzu-Ming Harry Hsu, Hang Qi and Matthew Brown. Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification
  27. Xin Yao, Tianchi Huang, Rui-Xiao Zhang, Ruiyu Li and Lifeng Sun. Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating
  28. Yang Liu, Xiong Zhang, Shuqi Qin and Xiaoping Lei. Differentially Private Linear Regression over Fully Decentralized Datasets
  29. Yang Liu, Yan Kang, Xinwei Zhang, Liping Li and Mingyi Hong. A Communication Efficient Vertical Federated Learning Framework
  30. Yihan Jiang, Jakub Konečný, Keith Rush and Sreeram Kannan. Improving Federated Learning Personalization via Model Agnostic Meta Learning
  31. Zhaorui Li, Zhicong Huang, Chaochao Chen and Cheng Hong. Quantification of the Leakage in Federated Learning
  32. Zhicong Liang, Bao Wang, Stanley Osher and Yuan Yao. Exploring Private Federated Learning with Laplacian Smoothing
  33. Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh and Brendan McMahan. Backdoor Attacks on Federated Learning and Corresponding Defenses

Call for Contributions

We welcome high-quality submissions in the broad area of federated learning (FL). A few (non-exhaustive) topics of interest include:
  1. Optimization algorithms for FL, particularly communication-efficient algorithms tolerant of non-IID data
  2. Approaches that scale FL to larger models, including model and gradient compression techniques
  3. Novel applications of FL
  4. Theory for FL
  5. Approaches to enhancing the security and privacy of FL, including cryptographic techniques and differential privacy
  6. Bias and fairness in the FL setting
  7. Attacks on FL including model poisoning, and corresponding defenses
  8. Incentive mechanisms for FL
  9. Software and systems for FL
  10. Novel applications of techniques from other fields to the FL setting: information theory, multi-task learning, model-agnostic meta-learning, etc.
  11. Work on fully-decentralized (peer-to-peer) learning, which overlaps significantly with FL in both interests and techniques, will also be considered.

Submissions in the form of extended abstracts must be at most 4 pages long (excluding references), adhere to the NeurIPS 2019 format, and be anonymized. The workshop will not have formal proceedings, but authors of accepted contributions will be expected to present a poster at the workshop.

Submission link: https://easychair.org/conferences/?conf=flneurips2019

Co-Chairs

  • Lixin Fan (WeBank, China)
  • Jakub Konečný (Google, USA)
  • Yang Liu (WeBank, China)
  • Brendan McMahan (Google, USA)
  • Virginia Smith (Carnegie Mellon University, USA)
  • Han Yu (Nanyang Technological University, Singapore)

Program Committee

  • Adria Gascon (The Alan Turing Institute / University of Warwick, UK)
  • Anis Elgabli (University of Oulu, Finland)
  • Aurélien Bellet (Inria, France)
  • Ayfer Ozgur (Stanford University, USA)
  • Bingsheng He (National University of Singapore, Singapore)
  • Boi Faltings (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
  • Chaoping Xing (Nanyang Technological University, Singapore)
  • Chaoyang He (University of Southern California, USA)
  • Dimitrios Papadopoulos (Hong Kong University of Science and Technology, Hong Kong)
  • Fabio Casati (University of Trento, Italy)
  • Farinaz Koushanfar (University of California San Diego, USA)
  • Gauri Joshi (Carnegie Mellon University, USA)
  • Graham Cormode (University of Warwick, UK)
  • Jalaj Upadhyay (Apple, USA)
  • Ji Feng (Sinnovation Ventures AI Institute, China)
  • Jianshu Weng (AI Singapore, Singapore)
  • Jihong Park (University of Oulu, Finland)
  • Joshua Gardner (University of Michigan, USA)
  • Jun Zhao (Nanyang Technological University, Singapore)
  • Keith Bonawitz (Google, USA)
  • Lalitha Sankar (Arizona State University, USA)
  • Leye Wang (Peking University, China)
  • Marco Gruteser (Google, USA)
  • Martin Jaggi (Ecole Polytechnique Fédérale de Lausanne, Switzerland)
  • Mehdi Bennis (University of Oulu, Finland)
  • Mingshu Cong (The University of Hong Kong, Hong Kong)
  • Nguyen Tran (The University of Sydney, Australia)
  • Peter Kairouz (Google, USA)
  • Pingzhong Tang (Tsinghua University, China)
  • Praneeth Vepakomma (Massachusetts Institute of Technology, USA)
  • Prateek Mittal (Princeton University, USA)
  • Richard Nock (Data61, Australia)
  • Rui Lin (Chalmers University of Technology, Sweden)
  • Sewoong Oh (University of Illinois at Urbana-Champaign, USA)
  • Shiqiang Wang (IBM, USA)
  • Siwei Feng (Nanyang Technological University, Singapore)
  • Tara Javidi (University of California San Diego, USA)
  • Xi Weng (Peking University, China)
  • Yihan Jiang (University of Washington, USA)
  • Yong Cheng (WeBank, China)
  • Yongxin Tong (Beihang University, China)
  • Zelei Liu (Nanyang Technological University, Singapore)
  • Zheng Xu (University of Science and Technology of China, China)
