International Workshop on Federated Learning: Recent Advances and New Challenges
in Conjunction with NeurIPS 2022 (FL-NeurIPS'22)


Final Submission Deadline: September 22, 2022 (23:59:59 AoE)
Notification Due: October 20, 2022
Workshop Date: Friday, December 2, 2022
Venue: New Orleans Convention Center, New Orleans, LA, USA

Call for Papers

Training machine learning models in a centralized fashion often faces significant regulatory, privacy, and practical challenges in real-world use cases. These challenges include the distributed nature of training data, the computational resources needed to create and maintain a central data repository, and regulatory guidelines (e.g., GDPR, HIPAA) that restrict the sharing of sensitive data. Federated learning (FL) is a new paradigm in machine learning that can mitigate these challenges by training a global model over distributed data, without the need for data sharing. The growing use of machine learning to analyze and draw insight from real-world, distributed, and sensitive data makes this a relevant and timely topic for the scientific community to learn about and adopt.
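As a concrete illustration of the paradigm, the following is a minimal sketch of federated averaging (FedAvg) on a synthetic least-squares task: each client trains locally on data that never leaves it, and the server only aggregates model weights. The local objective, learning rate, and toy data here are illustrative assumptions, not anything prescribed by the workshop.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few steps of gradient descent
    on a least-squares objective (a stand-in for any local learner)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, dim=3):
    """FedAvg: in each round, clients train locally on their private
    data and the server averages the resulting models, weighted by
    local dataset size. Raw data is never shared."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:          # (X, y) stays on the client
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        global_w = np.average(updates, axis=0, weights=np.array(sizes, float))
    return global_w

# Toy example: three clients whose data comes from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n in (40, 60, 80):
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ true_w))
w = federated_averaging(clients)
```

On this homogeneous toy problem the averaged model recovers the underlying linear model; the data heterogeneity, partial participation, and privacy questions listed below arise precisely when the clients' data distributions differ.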

Despite these advantages, and FL's successful application in certain industry use cases, the field is still in its infancy: new challenges arise from limited visibility into the training data, potential lack of trust among participants training a single model, potential privacy inference attacks, and, in some cases, limited or unreliable connectivity.

The goal of this workshop is to bring together researchers and practitioners interested in FL, which has become an increasingly popular topic in the machine learning community in recent years. This day-long event will facilitate interaction among students, scholars, and industry professionals from around the world, helping them understand the topic, identify technical challenges, and discuss potential solutions, thereby advancing FL and broadening its impact in the community.

Topics of interest include, but are not limited to, the following:
  • Adversarial attacks on FL
  • Applications of FL
  • Blockchain for FL
  • Beyond first-order methods in FL
  • Beyond local methods in FL
  • Communication compression in FL
  • Data heterogeneity in FL
  • Decentralized FL
  • Device heterogeneity in FL
  • Fairness in FL
  • Hardware for on-device FL
  • Variants of FL like split learning
  • Local methods in FL
  • Nonconvex FL
  • Operational challenges in FL
  • Optimization advances in FL
  • Partial participation in FL
  • Personalization in FL
  • Privacy concerns in FL
  • Privacy-preserving methods for FL
  • Resource-efficient FL
  • Systems and infrastructure for FL
  • Theoretical contributions to FL
  • Uncertainty in FL
  • Vertical FL

The workshop will have invited talks on a diverse set of topics related to FL. In addition, we plan to have an industry panel and booth, where researchers from industry will discuss challenges and solutions from an industrial perspective.

More information on previous workshops can be found here.


Submission Instructions

Submissions should be no more than 6 pages long, excluding references, and should follow the NeurIPS'22 template. Reviewing is double-blind: author identities must not be revealed to the reviewers, so the submitted PDF file should not include any identifying author information. An optional appendix of any length is allowed and should be placed at the end of the paper, after the references.

Submissions are collected on OpenReview at the following link: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/Federated_Learning.
Accepted papers and their review comments will be posted publicly on OpenReview. Due to the short timeline, there will be no rebuttal period, but authors are encouraged to interact and discuss with reviewers on OpenReview after the acceptance notifications are sent out. Rejected papers and their reviews will remain private and will not be posted publicly.

For questions, please contact: fl-neurips-2022@googlegroups.com


Proceedings and Dual Submission Policy

Our workshop does not have formal proceedings, i.e., it is non-archival. Accepted papers will be publicly available on OpenReview together with the reviewers' comments. Revisions to accepted papers will be allowed until shortly before the workshop date.

We welcome submissions of unpublished papers, including those under review at other venues, provided those venues allow it. However, papers that have been accepted to an archival venue as of Sept. 21, 2022 should not be resubmitted to this workshop, because the goal of the workshop is to share recent results and discuss open problems. In particular, papers that have been accepted to the NeurIPS'22 main conference should not be resubmitted to this workshop.


Presentation Format

The workshop will take place primarily in person. Presenters who cannot attend in person should be able to connect remotely over Zoom for the oral talks; however, the poster sessions will be in-person only. Depending on the situation, we may hold a lightning-talk session for accepted poster presentations whose presenters cannot attend physically, or organize a separate virtual session after the official workshop date. If a paper is accepted as an oral talk, the NeurIPS organizers require a pre-recording of the presentation by early November, which will be made available for virtual participants to view. All accepted papers will be posted on OpenReview and linked on our webpage.


Invited Talks


Title: Trustworthy Federated Learning

Speaker: Bo Li, Assistant Professor, University of Illinois at Urbana–Champaign (UIUC)

Biography
Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. She is the recipient of the IJCAI Computers and Thought Award, an Alfred P. Sloan Research Fellowship, an NSF CAREER Award, the MIT Technology Review TR-35 Award, the Dean's Award for Excellence in Research, the C.W. Gear Outstanding Junior Faculty Award, the Intel Rising Star Award, a Symantec Research Labs Fellowship, a Rising Star Award, research awards from tech companies such as Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, security, privacy, and game theory. She has designed several scalable frameworks for trustworthy machine learning and privacy-preserving data publishing systems. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and the New York Times.


Title: Scalable and Communication-Efficient Vertical Federated Learning

Speaker: Stacy Patterson, Associate Professor, Rensselaer Polytechnic Institute

Biography
Stacy Patterson is an Associate Professor in the Department of Computer Science at Rensselaer Polytechnic Institute. She received her MS and PhD in computer science from UC Santa Barbara in 2003 and 2009, respectively. From 2009 to 2011, she was a postdoctoral scholar at the Center for Control, Dynamical Systems and Computation at UC Santa Barbara. From 2011 to 2013, she was a postdoctoral fellow in the Department of Electrical Engineering at Technion - Israel Institute of Technology. Dr. Patterson is the recipient of a Viterbi postdoctoral fellowship, the IEEE CSS Axelby Outstanding Paper Award, and an NSF CAREER award. She serves as an Associate Editor for the IEEE Transactions on Control of Network Systems. Her research interests include distributed algorithms, cooperative control, and edge and cloud computing.


Title: Asynchronous Optimization: Delays, Stability, and the Impact of Data Heterogeneity

Speaker: Konstantin Mishchenko, Research Scientist, Samsung

Biography
Konstantin Mishchenko is a Research Scientist at Samsung in Cambridge, UK, working on optimization theory and federated learning. He received a double-degree MSc from Paris-Dauphine and École normale supérieure Paris-Saclay in 2017, and completed his PhD under the supervision of Peter Richtárik from 2017 to 2021. From December 2021 to October 2022, he was a postdoc in the group of Francis Bach at Inria Paris. Konstantin has held research internships at Google Brain and Amazon, has been recognized as an outstanding reviewer for NeurIPS19, ICML20, AAAI20, ICLR21, ICML21, NeurIPS21, ICLR22, and ICML22, and served as an Area Chair for ACML 2022. He was named a Rising Star in Data Science by the University of Chicago in 2021 and has published 11 conference papers at ICML, ICLR, NeurIPS, AISTATS, and UAI.


Title: On the Unreasonable Effectiveness of Federated Averaging with Heterogeneous Data

Speaker: Jianyu Wang, Research Scientist, Meta

Biography
Jianyu Wang is a research scientist at Meta. He received his Ph.D. from the ECE department at Carnegie Mellon University in 2022 and his B.Eng. in Electrical Engineering from Tsinghua University in 2017. He was a research intern with Google Research in 2020 and 2021, and with Facebook AI Research in 2019. His research interests are federated learning, distributed optimization, and systems for large-scale machine learning. His awards and honors include the Qualcomm Ph.D. Fellowship (2019), the best student paper award at the NeurIPS 2019 federated learning workshop, and the best poster award at the NSF CEDO workshop (2021).


Organizing Committee


Program Committee

  • Ali Anwar (University of Minnesota)
  • Ang Li (Duke University)
  • Anran Li (Nanyang Technological University)
  • Ashkan Yousefpour (Meta)
  • Aurélien Bellet (INRIA)
  • Bing Luo (Duke University)
  • Bingsheng He (National University of Singapore)
  • Carlee Joe-Wong (Carnegie Mellon University)
  • Chao Ren (Nanyang Technological University)
  • Chaoyang He (University of Southern California)
  • Chuizheng Meng (University of Southern California)
  • Dianbo Liu (University of Montreal)
  • Divyansh Jhunjhunwala (Carnegie Mellon University)
  • Farzin Haddadpour (Yale University)
  • Grigory Malinovsky (KAUST)
  • Hongyi Wang (Carnegie Mellon University)
  • Hongyuan Zhan (Meta)
  • Jayanth Reddy Regatti (Ohio State University)
  • Jia Liu (Ohio State University)
  • Jiankai Sun (ByteDance Inc.)
  • Jianyu Wang (Facebook)
  • Jiayi Wang (University of Utah)
  • Jihong Park (Deakin University)
  • Jinhyun So (University of Southern California)
  • Kallista Bonawitz (Google)
  • Kevin Hsieh (Microsoft)
  • Konstantin Mishchenko (Ecole Normale Supérieure de Paris)
  • Kshitiz Malik (University of Illinois, Urbana-Champaign)
  • Mehrdad Mahdavi (Pennsylvania State University)
  • Mi Zhang (Ohio State University)
  • Michael Rabbat (McGill University)
  • Mingyi Hong (Iowa State University)
  • Mingzhe Chen (University of Miami)
  • Ningning Ding (Chinese University of Hong Kong)
  • Paulo Ferreira (Dell)
  • Pengchao Han (Chinese University of Hong Kong, Shenzhen)
  • Pranay Sharma (Carnegie Mellon University)
  • Rui Lin (Chalmers University of Technology)
  • Samuel Horváth (Mohamed bin Zayed University of Artificial Intelligence)
  • Sebastian U Stich (CISPA Helmholtz Center for Information Security)
  • Shangwei Guo (Chongqing University)
  • Songtao Lu (IBM Research)
  • Songze Li (The Hong Kong University of Science and Technology)
  • Swanand Kadhe (IBM Research)
  • Tara Javidi (University of California, San Diego)
  • Theodoros Salonidis (IBM Research)
  • Tianyi Chen (Rensselaer Polytechnic Institute)
  • Victor Valls (Trinity College, Dublin)
  • Yae Jee Cho (Carnegie Mellon University)
  • Yang Liu (Tsinghua University)
  • Yi Zhou (IBM Research)
  • Zehui Xiong (Singapore University of Technology and Design)
  • Zheng Xu (Google)

Sponsored by


Organized by