International Workshop on Federated Learning for User Privacy and Data Confidentiality
in Conjunction with NeurIPS 2019 (FL-NeurIPS'19)
Workshop Date: December 13, 2019
Venue: West 118–120 Vancouver Convention Center, Vancouver, BC, Canada
Overview
A specific approach with the potential to address a number of problems in this space is Federated Learning: training a machine learning model on a dataset stored across multiple locations, without the ability to move the data to any central location. This seemingly mild restriction renders many state-of-the-art machine learning techniques impractical. One class of applications arises when data is generated by different users of a smartphone app and stays on users’ phones for privacy reasons; for example, Google’s Gboard mobile keyboard already uses federated learning in several features. Another class involves data collected by different organizations that cannot share it for confidentiality reasons. The same restriction can also arise independent of privacy concerns, as with data streams collected by IoT devices or self-driving cars, which must be processed on-device because transmitting and storing the sheer volume of data is infeasible.
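To make the setting concrete, below is a minimal sketch of Federated Averaging (FedAvg), the canonical federated learning algorithm: each client trains locally on its own data, and only model weights — never raw data — are communicated to a server for aggregation. The toy linear-regression task, client setup, and all function names are illustrative, not part of any framework mentioned here.

```python
# Minimal FedAvg sketch: clients keep their data; the server only averages weights.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n=50):
    """Generate one client's private dataset (never leaves the 'device')."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client_data() for _ in range(5)]

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few steps of gradient descent on a single client's data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

w_global = np.zeros(2)
for _ in range(20):  # communication rounds
    # Each client trains locally; the server sees only the resulting weights.
    local_ws = [local_update(w_global, X, y) for X, y in clients]
    # Server aggregates by (here, unweighted) averaging.
    w_global = np.mean(local_ws, axis=0)

print(w_global)  # approaches true_w = [2, -1]
```

In practice the average is weighted by each client's dataset size, only a subset of clients participates per round, and the updates may additionally be protected by secure aggregation or differential privacy — the research directions this workshop covers.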
At this moment, the pace of research innovation in federated learning is hampered by the relative complexity of properly setting up even simple experiments that reflect the practical setting. The issue is exacerbated in academic settings, which typically lack access to actual user data. Recently, multiple open-source projects were created to address this high barrier to entry. For example, LEAF is a benchmarking framework that contains preprocessed datasets, each with a “natural” partitioning that aims to reflect the non-identically distributed data partitions encountered in practical federated environments. Federated AI Technology Enabler (FATE), led by WeBank, is an open-source technical framework that enables distributed and scalable secure computation protocols based on homomorphic encryption and multi-party computation, supporting federated learning architectures with various machine learning algorithms; WeBank is also leading a related IEEE standard proposal. TensorFlow Federated (TFF), led by Google, is an open-source framework on top of TensorFlow for flexibly expressing arbitrary computations on decentralized data; it enables researchers to experiment with federated learning on their own datasets or on those provided by LEAF. Google has also published a systems paper describing the design of its production system, which supports tens of millions of mobile phones. We expect these projects to encourage academic researchers and industry engineers to work more closely together on these challenges and eventually make a significant positive impact. We support reproducible research and will sponsor a prize for the best contribution that also provides code to reproduce its results.
The workshop aims to bring together academic researchers and industry practitioners with common interests in this domain. For industry participants, we intend to create a forum to communicate which problems are practically relevant. For academic participants, we hope to make it easier to become productive in this area. Overall, the workshop should provide an opportunity to share the most recent and innovative work in the field and to discuss open problems and relevant approaches. Topics of interest include general computation based on decentralized data (i.e., not only machine learning) and how such computations can be combined with other research fields, such as differential privacy, secure multi-party computation, computational efficiency, and coding theory. Contributions in theory as well as applications are welcome, particularly proposals for novel system design.
Workshop Program
Time | Activity |
---|---|
08:55 – 09:00 | Opening Remarks by Lixin Fan |
09:00 – 09:30 | Invited Talk by Qiang Yang - Federated Learning in Recommendation Systems |
09:30 – 10:00 | Invited Talk by Ameet Talwalkar - Personalized Federated Learning |
10:00 – 10:30 | Tea Break & Poster Exhibition |
10:30 – 11:00 | Invited Talk by Max Welling - Ingredients for Bayesian, Privacy Preserving, Distributed Learning |
11:00 – 11:30 | Invited Talk by Dawn Song - Decentralized Federated Learning with Data Valuation |
Session 1: Effectiveness and Robustness | |
11:30 – 11:40 | Paul Pu Liang, Terrance Liu, Liu Ziyin, Russ Salakhutdinov and Louis-Philippe Morency. Think Locally, Act Globally: Federated Learning with Local and Global Representations |
11:40 – 11:50 | Daniel Peterson, Pallika Kanani and Virendra Marathe. Private Federated Learning with Domain Adaptation |
11:50 – 12:00 | Daliang Li and Junpu Wang. FedMD: Heterogeneous Federated Learning via Model Distillation |
12:00 – 12:10 | Yihan Jiang, Jakub Konečný, Keith Rush and Sreeram Kannan. Improving Federated Learning Personalization via Model Agnostic Meta Learning |
12:10 – 13:30 | Lunch & Poster Exhibition |
13:30 – 14:00 | Invited Talk by Daniel Ramage - Federated Learning at Google – Systems, Algorithms, and Applications in Practice |
14:00 – 14:30 | Invited Talk by Françoise Beaufays - Applied Federated Learning – What it Takes to Make it Happen, and Deployment in Gboard, the Google Keyboard |
Session 2: Communication and Efficiency | |
14:30 – 14:40 | Jianyu Wang, Anit Sahu, Zhouyi Yang, Gauri Joshi and Soummya Kar. MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling |
14:40 – 14:50 | Sebastian Caldas, Jakub Konečný, H. Brendan McMahan and Ameet Talwalkar. Mitigating the Impact of Federated Learning on Client Resources |
14:50 – 15:00 | Yang Liu, Yan Kang, Xinwei Zhang, Liping Li and Mingyi Hong. A Communication Efficient Vertical Federated Learning Framework |
15:00 – 15:10 | Ahmed Khaled, Konstantin Mishchenko and Peter Richtárik. Better Communication Complexity for Local SGD |
15:10 – 15:30 | Tea Break & Poster Exhibition |
15:30 – 16:00 | Invited Talk by Raluca Ada Popa - Helen: Coopetitive Learning for Linear Models |
16:30 – 17:00 | Invited Talk by Yiqiang Chen - FOCUS: Federated Opportunistic Computing for Ubiquitous Systems |
Session 3: Privacy and Fairness | |
17:00 – 17:10 | Xin Yao, Tianchi Huang, Rui-Xiao Zhang, Ruiyu Li and Lifeng Sun. Federated Learning with Unbiased Gradient Aggregation and Controllable Meta Updating |
17:10 – 17:20 | Zhicong Liang, Bao Wang, Stanley Osher and Yuan Yao. Exploring Private Federated Learning with Laplacian Smoothing |
17:20 – 17:30 | Tribhuvanesh Orekondy, Seong Joon Oh, Yang Zhang, Bernt Schiele and Mario Fritz. Gradient-Leaks: Understanding Deanonymization in Federated Learning |
17:30 – 17:40 | Aleksei Triastcyn and Boi Faltings. Federated Learning with Bayesian Differential Privacy |
17:40 – 18:00 | Panel Discussion (Mediated by: Qiang Yang) |
18:00 – 18:05 | Closing Remarks by Brendan McMahan |
End of the Workshop | |
Proceed to Reception Venue: Vancouver Marriott Pinnacle Downtown Hotel, Level 3 Pinnacle Ballroom | |
WeBank AI Night, Reception & Award Ceremony | |
19:00 – 19:10 | Welcome Speech by WeBank CAIO Prof Qiang Yang |
19:10 – 19:30 | WeBank and MILA Partnership Announcement |
19:30 – 19:50 | WeBank and Tencent Partnership Announcement |
19:50 – 20:10 | FL-NeurIPS 2019 Award Ceremony |
20:10 – 21:00 | Reception and Networking |
Awards
Distinguished Student Paper Awards:
Accepted Papers
Call for Contributions
Submissions in the form of extended abstracts must be at most 4 pages long (excluding references) and adhere to the NeurIPS 2019 format. Submissions should be anonymized. The workshop will not have formal proceedings, but authors of accepted contributions will be expected to present a poster at the workshop.
Submission link: https://easychair.org/conferences/?conf=flneurips2019
Co-Chairs
Program Committee
Organized by
In Collaboration with