Workshop on Federated Learning for Data Privacy and Confidentiality (in Conjunction with NeurIPS 2019)

Time and Venue

  • Submission deadline: Sep 9, 2019, 23:59 (UTC-12)
  • Author notification: Sep 30, 2019, 23:59 (UTC-12)
  • Workshop: Dec 13, 2019
  • Venue: West 118 – 120, Vancouver Convention Center.

Awards

To be announced after the workshop.

Program

Time             Activity

08:55 – 09:00    Opening Remarks by Lixin Fan
09:00 – 09:30    Invited Talk by Qiang Yang: Federated Learning in Recommendation Systems
09:30 – 10:00    Invited Talk by Ameet Talwalkar: Personalized Federated Learning
10:00 – 10:30    Tea Break & Poster Exhibition
10:30 – 11:00    Invited Talk by Max Welling: Ingredients for Bayesian, Privacy Preserving, Distributed Learning
11:00 – 11:30    Invited Talk by Dawn Song (title TBD)

Session 1: Effectiveness and Robustness
11:30 – 11:40    Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations
11:40 – 11:50    Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation
11:50 – 12:00    Daliang Li and Junpu Wang. FedMD: Heterogenous Federated Learning via Model Distillation
12:00 – 12:10    Yihan Jiang, Jakub Konečný, Keith Rush and Sreeram Kannan. Improving Federated Learning Personalization via Model Agnostic Meta Learning

12:10 – 13:30    Lunch & Poster Exhibition
13:30 – 14:00    Invited Talk by Daniel Ramage: Federated learning at Google – systems, algorithms, and applications in practice
14:00 – 14:30    Invited Talk by Françoise Beaufays: Applied federated learning – what it takes to make it happen, and deployments in Gboard, the Google Keyboard

Session 2: Communication and Efficiency
14:30 – 14:40    Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
14:40 – 14:50    Sebastian Caldas, Jakub Konečný, H. Brendan McMahan and Ameet Talwalkar. Mitigating the Impact of Federated Learning on Client Resources
14:50 – 15:00    Yang Liu, Yan Kang, Xinwei Zhang, Liping Li and Mingyi Hong. A Communication Efficient Vertical Federated Learning Framework
15:00 – 15:10    Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. Better Communication Complexity for Local SGD

15:10 – 15:30    Tea Break & Poster Exhibition
15:30 – 16:00    Invited Talk by Raluca Ada Popa: Helen: Coopetitive Learning for Linear Models
16:30 – 17:00    Invited Talk by Yiqiang Chen: FOCUS: Federated Opportunistic Computing for Ubiquitous Systems

Session 3: Privacy and Fairness
17:00 – 17:10    Xin Yao, Tianchi Huang, Rui-Xiao Zhang, Ruiyu Li and Lifeng Sun. Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating
17:10 – 17:20    Zhicong Liang, Bao Wang, Stanley Osher and Yuan Yao. Exploring Private Federated Learning with Laplacian Smoothing
17:20 – 17:30    Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele and Mario Fritz. Gradient-Leaks: Understanding Deanonymization in Federated Learning
17:30 – 17:40    Aleksei Triastcyn and Boi Faltings. Federated Learning with Bayesian Differential Privacy

17:40 – 18:00    Panel Discussion (moderated by Qiang Yang), with panelists:
                 • Yiqiang Chen, Professor, Institute of Computing Technology, Chinese Academy of Sciences
                 • Boi Faltings, Professor, EPFL; AAAI Fellow
                 • Chunyan Miao, Professor and Chair, School of Computer Science and Engineering, Nanyang Technological University, Singapore
                 • Daniel Ramage, Research Scientist, Google Research
                 • Dawn Song, Professor, University of California, Berkeley
                 • Max Welling, Professor, University of Amsterdam; VP Technologies, Qualcomm

18:00 – 18:05    Closing Remarks by Brendan McMahan

End of the workshop. Attendees then proceed to the reception venue at the Vancouver Marriott Pinnacle Downtown Hotel (270 meters from the Vancouver Convention Center).

19:00 – 21:00    Reception & Award Ceremony @ WeBank AI Night

Overview

Privacy and security have become critical concerns in recent years, particularly as companies and organizations increasingly collect detailed information about their products and users. This information can enable machine learning methods that produce better products. However, it also has the potential for misuse, especially when private data about individuals is involved. Recent research shows that privacy and utility need not be at odds; both can be achieved through careful design and analysis. The need for such research is reinforced by the recent introduction of new legal constraints, led by the European Union’s General Data Protection Regulation (GDPR), which is already inspiring novel legislative approaches around the world, such as the Cybersecurity Law of the People’s Republic of China and the California Consumer Privacy Act of 2018.

An approach that has the potential to address a number of problems in this space is federated learning (FL). FL is an ML setting where many clients (e.g., mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g., a service provider), while keeping the training data decentralized. Organizations and mobile devices have access to increasing amounts of sensitive data, and scrutiny of ML privacy and data-handling practices is increasing correspondingly. These trends have produced significant interest in FL, since it provides a viable path to state-of-the-art ML without the need for centralized collection of training data – and the risks and responsibilities that come with such centralization. Nevertheless, significant challenges remain open in the FL setting, whose solution will require novel techniques from multiple fields, as well as improved open-source tooling for both FL research and real-world deployment.
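To make this setting concrete, here is a minimal sketch (ours, not part of the workshop materials) of one round-based orchestration in the style of federated averaging (FedAvg, McMahan et al., 2017): clients fit a shared model on locally held data, and the server only ever sees parameter vectors, never raw examples. The toy least-squares task, client data, and hyperparameters are all illustrative placeholders.

```python
import numpy as np

# Minimal FedAvg-style simulation: each client runs local gradient
# steps on its own private data; the server only receives model
# parameters, which it averages weighted by local dataset size.

def client_update(w, X, y, lr=0.1, epochs=5):
    """Local gradient steps on one client's private (X, y)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def server_round(w_global, clients):
    """One communication round: broadcast, local training, weighted average."""
    updates = [client_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Three clients holding disjoint, locally generated data (placeholders).
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = server_round(w, clients)
print(w)  # close to w_true, yet no raw data ever left a client
```

In a real deployment the server would sample a subset of clients per round and communicate over a network; the sketch collapses all of that into a single loop.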

This workshop aims to bring together academic researchers and industry practitioners with common interests in this domain. For industry participants, we intend to create a forum to communicate which problems are practically relevant. For academic participants, we hope to make it easier to become productive in this area. Overall, the workshop will provide an opportunity to share the most recent and innovative work in FL, and to discuss open problems and relevant approaches. Encouraged technical topics include general computation based on decentralized data (i.e., not only machine learning), and how such computations can be combined with other research areas, such as differential privacy, secure multi-party computation, computational efficiency, and coding theory. Contributions in theory as well as applications are welcome, including proposals for novel system design. Work on fully-decentralized (peer-to-peer) learning will also be considered, as there is significant overlap in both interest and techniques with federated learning.

Call for Contributions

We welcome high quality submissions in the broad area of federated learning (FL). A few (non-exhaustive) topics of interest include:

  • Optimization algorithms for FL, particularly communication-efficient algorithms tolerant of non-IID data
  • Approaches that scale FL to larger models, including model and gradient compression techniques
  • Novel applications of FL
  • Theory for FL
  • Approaches to enhancing the security and privacy of FL, including cryptographic techniques and differential privacy (see the sketch after this list)
  • Bias and fairness in the FL setting
  • Attacks on FL including model poisoning, and corresponding defenses
  • Incentive mechanisms for FL
  • Software and systems for FL
  • Novel applications of techniques from other fields to the FL setting: information theory, multi-task learning, model-agnostic meta-learning, etc.
  • Fully-decentralized (peer-to-peer) learning, which overlaps significantly with FL in both interest and techniques
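As one concrete illustration of the privacy topic above, the following sketch (a generic illustration, not a method endorsed by the workshop) shows the standard clip-and-noise Gaussian mechanism that underlies differentially private variants of federated averaging; the clipping norm and noise multiplier are hypothetical values, not a recommended configuration.

```python
import numpy as np

def dp_aggregate(updates, clip=1.0, noise_mult=1.1, rng=None):
    """Differentially private aggregation of client model updates.

    Each update's influence is bounded by L2-clipping, and Gaussian
    noise calibrated to the clipping norm is added to the average
    (the Gaussian mechanism used in DP-FedAvg-style training).
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Scale each update down so its L2 norm is at most `clip`.
    clipped = [u * min(1.0, clip / max(np.linalg.norm(u), 1e-12))
               for u in updates]
    mean = np.mean(clipped, axis=0)
    # Noise stddev on the mean: multiplier times max per-client
    # contribution (clip / n).
    sigma = noise_mult * clip / len(updates)
    return mean + rng.normal(scale=sigma, size=mean.shape)

# Example: aggregate three hypothetical 4-dimensional client updates.
updates = [np.random.randn(4) for _ in range(3)]
print(dp_aggregate(updates))
```

Clipping bounds any single client's influence on the average, which is what makes the added Gaussian noise sufficient for a differential privacy guarantee.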

Submissions in the form of extended abstracts must be at most 4 pages long (not including references), be anonymized, and adhere to the NeurIPS 2019 format. Submissions will be accepted as contributed talks or poster presentations. The workshop will not have formal proceedings, but accepted papers will be posted on the workshop website.

We support reproducible research and will sponsor a prize for the best contribution that provides code to reproduce its results.

Submission link:
https://easychair.org/conferences/?conf=flneurips2019

Organizers

  • Lixin Fan (WeBank)
  • Jakub Konečný (Google)
  • Yang Liu (WeBank)
  • Brendan McMahan (Google)
  • Virginia Smith (Carnegie Mellon University)
  • Han Yu (Nanyang Technological University)

Invited Speakers

  • Françoise Beaufays, Principal Researcher, Google
  • Yiqiang Chen, Professor, Institute of Computing Technology, Chinese Academy of Sciences
  • Raluca Ada Popa, Assistant Professor, University of California, Berkeley
  • Daniel Ramage, Research Scientist, Google Research
  • Dawn Song, Professor, University of California, Berkeley
  • Ameet Talwalkar, Assistant Professor, CMU; Chief Scientist, Determined AI
  • Max Welling, Professor, University of Amsterdam; VP Technologies, Qualcomm
  • Qiang Yang, Chair Professor, Hong Kong University of Science and Technology, Hong Kong; Chief AI Officer, WeBank

FAQ

  • Can supplementary material be added beyond the 4-page limit and are there any restrictions on it?
    • Yes, you may include additional supplementary material, but you should ensure that the main paper is self-contained, since looking at supplementary material is at the discretion of the reviewers. The supplementary material should also follow the same NeurIPS format as the paper and be limited to a reasonable amount (max 10 pages in addition to the main submission).
  • Can a submission to this workshop be submitted to another NeurIPS workshop in parallel?
    • We discourage this, as it leads to more work for reviewers across multiple workshops. Our suggestion is to pick one workshop to submit to.
  • Can a paper be submitted to the workshop that has already appeared at a previous conference with published proceedings?
    • We won’t be accepting such submissions unless they have been adapted to contain significantly new results (where novelty is one of the qualities reviewers will be asked to evaluate).
  • Can a paper be submitted to the workshop that is currently under review or will be under review at a conference during the review phase?
    • It is fine to submit a condensed version (i.e., 4 pages) of a parallel conference submission, provided this is also acceptable to the conference in question. Our workshop does not have archival proceedings, and therefore parallel submissions of extended versions to other conferences are acceptable.

Program Committee Members

  • Adria Gascon (The Alan Turing Institute / University of Warwick)
  • Anis Elgabli (University of Oulu)
  • Aurélien Bellet (Inria)
  • Ayfer Ozgur (Stanford University)
  • Bingsheng He (National University of Singapore)
  • Boi Faltings (Ecole Polytechnique Fédérale de Lausanne)
  • Chaoyang He (University of Southern California)
  • Dimitrios Papadopoulos (Hong Kong University of Science and Technology)
  • Fabio Casati (University of Trento)
  • Farinaz Koushanfar (University of California San Diego)
  • Gauri Joshi (Carnegie Mellon University)
  • Graham Cormode (University of Warwick)
  • Ji Feng (Sinnovation Ventures AI Institute)
  • Jianshu Weng (Swiss Re)
  • Jihong Park (University of Oulu)
  • Joshua Gardner (University of Michigan)
  • Jun Zhao (Nanyang Technological University)
  • Keith Bonawitz (Google)
  • Lalitha Sankar (Arizona State University)
  • Leye Wang (Peking University)
  • Marco Gruteser (Google)
  • Martin Jaggi (Ecole Polytechnique Fédérale de Lausanne)
  • Mehdi Bennis (University of Oulu)
  • Mingshu Cong (The University of Hong Kong)
  • Nguyen Tran (The University of Sydney)
  • Peter Kairouz (Google)
  • Praneeth Vepakomma (MIT)
  • Prateek Mittal (Princeton University)
  • Richard Nock (Data61)
  • Rui Lin (Chalmers University of Technology)
  • Sewoong Oh (University of Illinois at Urbana-Champaign)
  • Shiqiang Wang (IBM)
  • Siwei Feng (Nanyang Technological University)
  • Tara Javidi (University of California San Diego)
  • Xi Weng (Peking University)
  • Yihan Jiang (University of Washington)
  • Yong Cheng (WeBank)
  • Yongxin Tong (Beihang University)
  • Zelei Liu (Nanyang Technological University)
  • Zheng Xu (University of Science and Technology of China)

Accepted Papers

  1. Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations
  2. Xin Yao, Tianchi Huang, Rui-Xiao Zhang, Ruiyu Li and Lifeng Sun. Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating
  3. Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation
  4. Daliang Li and Junpu Wang. FedMD: Heterogenous Federated Learning via Model Distillation
  5. Sebastian Caldas, Jakub Konečný, H. Brendan McMahan and Ameet Talwalkar. Mitigating the Impact of Federated Learning on Client Resources
  6. Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
  7. Sebastian Caldas, Sai Meher Karthik Duddu, Peter Wu, Tian Li, Jakub Konečný, H. Brendan McMahan, Virginia Smith and Ameet Talwalkar. LEAF: A Benchmark for Federated Settings
  8. Yihan Jiang, Jakub Konečný, Keith Rush and Sreeram Kannan. Improving Federated Learning Personalization via Model Agnostic Meta Learning
  9. Zhicong Liang, Bao Wang, Stanley Osher and Yuan Yao. Exploring Private Federated Learning with Laplacian Smoothing
  10. Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele and Mario Fritz. Gradient-Leaks: Understanding Deanonymization in Federated Learning
  11. Yang Liu, Yan Kang, Xinwei Zhang, Liping Li and Mingyi Hong. A Communication Efficient Vertical Federated Learning Framework
  12. Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. Better Communication Complexity for Local SGD
  13. Yang Liu, Xiong Zhang, Shuqi Qin and Xiaoping Lei. Differentially Private Linear Regression over Fully Decentralized Datasets
  14. Florian Hartmann, Sunah Suh, Arkadiusz Komarzewski, Tim D. Smith and Ilana Segall. Federated Learning for Ranking Browser History Suggestions
  15. Aleksei Triastcyn and Boi Faltings. Federated Learning with Bayesian Differential Privacy
  16. Jack Goetz, Kshitiz Malik, Duc Bui, Seungwhan Moon, Honglei Liu and Anuj Kumar. Active Federated Learning
  17. Kartikeya Bhardwaj, Wei Chen and Radu Marculescu. FedMAX: Activation Entropy Maximization Targeting Effective Non-IID Federated Learning
  18. Mingshu Cong, Zhongming Ou, Yanxin Zhang, Han Yu, Xi Weng, Jiabao Qu, Siu Ming Yiu, Yang Liu and Qiang Yang. Neural Network Optimization for a VCG-based Federated Learning Incentive Mechanism
  19. Kai Yang, Tao Fan, Tianjian Chen, Yuanming Shi and Qiang Yang. A Quasi-Newton Method Based Vertical Federated Learning Framework for Logistic Regression
  20. Suyi Li, Yong Cheng, Yang Liu and Wei Wang. Abnormal Client Behavior Detection in Federated Learning
  21. Songtao Lu, Yawen Zhang, Yunlong Wang and Christina Mack. Learn Electronic Health Records by Fully Decentralized Federated Learning
  22. Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei Chen and Tie-Yan Liu. Convergence and Regularization of Distributed Stochastic Variance Reduced Methods
  23. Zhaorui Li, Zhicong Huang, Chaochao Chen and Cheng Hong. Quantification of the Leakage in Federated Learning
  24. Tzu-Ming Harry Hsu, Hang Qi and Matthew Brown. Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification
  25. Boyue Li, Shicong Cen, Yuxin Chen and Yuejie Chi. Communication-Efficient Distributed Optimization in Networks with Gradient Tracking
  26. Khaoula El Mekkaoui, Paul Blomstedt, Diego Mesquita and Samuel Kaski. Towards federated stochastic gradient Langevin dynamics
  27. Felix Sattler, Klaus-Robert Müller and Wojciech Samek. Clustered Federated Learning
  28. Ziteng Sun, Peter Kairouz, Ananda Theertha Suresh and Brendan McMahan. Backdoor Attacks on Federated Learning and Corresponding Defenses
  29. Neta Shoham, Tomer Avidor, Aviv Keren, Nadav Israel, Daniel Benditkis, Liron Mor-Yosef and Itai Zeitak. Overcoming Forgetting in Federated Learning on Non-IID Data
  30. Ahmed Khaled and Peter Richtárik. Gradient Descent with Compressed Iterates
  31. Jiahuan Luo, Xueyang Wu, Yun Luo, Anbu Huang, Yunfeng Huang, Yang Liu and Qiang Yang. Real-World Image Datasets for Federated Learning
  32. Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. First Analysis of Local GD on Heterogeneous Data
  33. Dashan Gao, Ce Ju, Xiguang Wei, Yang Liu, Tianjian Chen and Qiang Yang. HHHFL: Hierarchical Heterogeneous Horizontal Federated Learning for Electroencephalography