International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS’23)


Final Submission Deadline: October 02, 2023 (23:59:59 AoE)
Notification Due: October 27, 2023
Workshop Date: Saturday, December 16, 2023
Venue: New Orleans Convention Center, New Orleans, LA, USA

Call for Papers

Training machine learning models in a centralized fashion often faces significant challenges in real-world use cases: training data are distributed across many sources, creating and maintaining a central data repository demands substantial computational resources, and regulatory guidelines (e.g., GDPR, HIPAA) restrict the sharing of sensitive data. Federated learning (FL) is a machine learning paradigm that mitigates these challenges by training a global model on distributed data, without the need for data sharing. As machine learning is increasingly applied to analyze and draw insight from real-world, distributed, and sensitive data, familiarity with and adoption of FL is becoming essential for the scientific community.

Recently, foundation models such as ChatGPT have revolutionized the field of machine learning by demonstrating remarkable capabilities across a wide range of tasks. These models have democratized machine learning development, allowing developers to focus on adapting a foundation model to their specific task rather than building complex models from scratch. This paradigm shift has the potential to lower the barriers to entry for machine learning development and to enable a broader community of developers to create high-quality models.

However, as the model development process itself becomes increasingly accessible, new bottlenecks emerge: compute power and data access. While foundation models have the potential to perform exceptionally well across various tasks, they pose two challenges: 1) training them requires vast amounts of data and compute power, and 2) fine-tuning them to specific applications requires specialized and potentially sensitive data. Acquiring and centralizing datasets for both training and fine-tuning raises several concerns, including data privacy, legal constraints (such as GDPR and HIPAA), and computational burden.

FL is a promising solution to address these challenges in the era of foundation models. The fundamental goal of federated learning is to train models collaboratively across decentralized devices or data silos while keeping the data securely on those devices or within specific organizations. By adopting federated learning approaches, we can leverage the vast amounts of distributed data and compute available across different sources while respecting privacy regulations and data ownership.
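
As a concrete illustration of this idea, the sketch below shows one round of federated averaging (FedAvg), a widely used FL algorithm: each client trains a copy of the global model on its private data, and only the resulting model parameters, never the raw data, are sent back and averaged by the server. This is a minimal sketch in Python/NumPy under our own simplifying assumptions; names such as local_update and fedavg_round are illustrative and not tied to any particular FL framework.

  # Minimal FedAvg sketch (illustrative only): each client trains on its own
  # data and shares model parameters, never raw data, with the server.
  import numpy as np

  def local_update(global_weights, X, y, lr=0.1, epochs=1):
      """One client's local training: a few gradient-descent steps on a
      linear model, starting from the current global weights."""
      w = global_weights.copy()
      for _ in range(epochs):
          grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient
          w -= lr * grad
      return w

  def fedavg_round(global_weights, client_datasets):
      """One communication round: clients train locally, the server averages
      the returned weights, weighted by local dataset size."""
      updates, sizes = [], []
      for X, y in client_datasets:
          updates.append(local_update(global_weights, X, y))
          sizes.append(len(y))
      return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

  # Toy usage: two clients whose private data never leave them.
  rng = np.random.default_rng(0)
  clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]
  w = np.zeros(3)
  for _ in range(10):
      w = fedavg_round(w, clients)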

The rise of foundation models amplifies the importance and relevance of FL as a crucial research direction. With foundation models becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the full potential of foundation models, enabling efficient and scalable training while safeguarding sensitive data.

With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the age of foundation models. Since the emergence of foundation models has been a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform to facilitate interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field. The workshop topics include but are not limited to the following.
Theory and algorithmic foundations:
  • Impact of heterogeneity in FL of large models
  • Multi-stage model training (e.g., base model + fine-tuning)
  • Optimization advances in FL (e.g., beyond first-order and local methods)
  • Prompt tuning in federated settings
  • Self-supervised learning in federated settings
Leveraging foundation models to improve federated learning:
  • Adaptive aggregation strategies for FL in heterogeneous environments
  • Foundation model enhanced FL knowledge distillation
  • Overcoming data interoperability challenges using foundation models
  • Personalization of FL with foundation models
Federated learning for training and tuning foundation models:
  • Fairness, bias, and interpretability challenges in FL with foundation models
  • Federated transfer learning with foundation models
  • FL techniques for training large-scale foundation models
  • Hardware for FL with foundation models
  • Optimization algorithms for federated training of foundation models
  • Privacy-preserving mechanisms in FL with foundation models
  • Resource-efficient FL with foundation models
  • Security and robustness considerations in FL with foundation models
  • Systems and infrastructure for FL with foundation models
  • Vertical federated learning with foundation models
  • Vulnerabilities of FL with foundation models

More information on previous workshops can be found here.


Submission Instructions

Submissions should be no more than 6 pages long, excluding references, and should follow the NeurIPS'23 template. Reviewing is double-blind (author identities shall not be revealed to the reviewers), so the submitted PDF should not include any identifying author information. An optional appendix of any length is allowed and should be placed at the end of the paper (after the references).

Submissions are collected on OpenReview at the following link: https://openreview.net/group?id=NeurIPS.cc/2023/Workshop/Federated_Learning.
Accepted papers and their review comments will be posted publicly on OpenReview. Due to the short timeline, there will be no rebuttal period, but authors are encouraged to interact and discuss with reviewers on OpenReview after the acceptance notifications are sent out. Rejected papers and their reviews will remain private and will not be posted publicly.

For questions, please contact: flfm-neurips-2023@googlegroups.com


Proceedings and Dual Submission Policy

Our workshop does not have formal proceedings, i.e., it is non-archival. Accepted papers will be publicly available on OpenReview together with the reviewers' comments. Revisions to accepted papers will be allowed until shortly before the workshop date.

We welcome submissions of unpublished papers, including those submitted to other venues if that venue permits it. However, papers that have been accepted to an archival venue as of Sept. 28, 2023 should not be resubmitted to this workshop, because the goal of the workshop is to share recent results and discuss open problems. In particular, papers accepted to the NeurIPS'23 main conference should not be resubmitted to this workshop.


Presentation Format

The workshop will primarily take place in person. For presenters who cannot attend in person, we plan to make it possible to connect remotely over Zoom for the oral talks; the poster sessions, however, will be in-person only. Depending on the situation, we may include a lightning-talk session for accepted poster presentations whose presenters cannot attend physically, or organize a separate virtual session after the official workshop date. If a paper is accepted as an oral talk, the NeurIPS organizers require a pre-recording of the presentation by early November, which will be made available for virtual participants to view. All accepted papers will be posted on OpenReview and linked on our webpage.


Invited Talks

   

Title: TBA

Speaker: Cho-Jui Hsieh, Associate Professor, University of California, Los Angeles, USA

Biography
Cho-Jui Hsieh is an associate professor of Computer Science at UCLA. He received his Ph.D. from UT Austin, where he worked with Prof. Inderjit Dhillon, and his master's degree from National Taiwan University under the supervision of Prof. Chih-Jen Lin. Before joining UCLA, he worked as an Assistant Professor in Computer Science and Statistics at UC Davis for three years, and he has been a visiting scholar at Google since summer 2018. He is interested in developing new algorithms and optimization techniques for large-scale machine learning problems. Currently, he is working on developing new machine learning models as well as improving the model size, training speed, prediction speed, and robustness of popular (deep learning) models.

   

Title: TBA

Speaker: Michael I. Jordan, Distinguished Professor, University of California, Berkeley, USA

Biography
Michael I. Jordan has been a world-leading researcher in the field of statistical machine learning for nearly four decades. His contributions at the interface between computer science and statistics include the variational approach to statistical inference and learning, inference methods based on graphical models and Bayesian nonparametrics, and characterizations of trade-offs between statistical risk and computational complexity. Jordan developed recurrent neural networks as a cognitive model; in recent years his work has been driven less by a cognitive perspective and more by the foundations of traditional statistics. He popularized Bayesian networks in the machine learning community and is known for pointing out links between machine learning and statistics. He has also been prominent in the formalization of variational methods for approximate inference and the popularization of the expectation-maximization algorithm in machine learning.

   

Title: TBA

Speaker: Jayashree Kalpathy-Cramer, Professor, University of Colorado, Anschutz, USA

Biography
Jayashree Kalpathy-Cramer, PhD, has been named chief of the new Division of Artificial Medical Intelligence in Ophthalmology at the University of Colorado (CU) School of Medicine. In her new role, Kalpathy-Cramer will translate novel artificial intelligence (AI) methods into effective patient care practices at the Sue Anschutz-Rodgers Eye Center. Kalpathy-Cramer is currently director of the QTIM lab and the Center for Machine Learning at the Athinoula A. Martinos Center for Biomedical Imaging and an associate professor of radiology at Harvard Medical School. Her research lies at the intersection of machine learning, statistics, informatics, and image acquisition and analysis, with a goal of clinical translation. She is an electrical engineer by training, having received a B.Tech in EE from IIT Bombay and a PhD in EE from Rensselaer Polytechnic Institute. Her current projects include quantitative imaging in cancer, image analysis and decision support for retinal imaging, cloud computing, mathematical modeling of drug delivery in cancer, crowdsourcing and challenges, algorithm development, and deep learning.

   

Title: When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions

Speaker: Lingjuan Lyu, Head of Privacy and Security, Sony AI, Japan

Biography
Lingjuan Lyu is the Head of the Privacy and Security team at Sony AI. Her current research interest is trustworthy AI. She has published over 100 papers in top conferences and journals, including NeurIPS, ICML, ICLR, and Nature. Her papers have won a long list of best or outstanding paper awards from major venues, including ICML, ACL, CIKM, and IEEE, and she was a winner of the IBM Ph.D. Fellowship Worldwide.

   

Title: TBA

Speaker: Peter Richtárik, Professor, King Abdullah University of Science and Technology, Saudi Arabia

Biography
Peter Richtarik is a professor of Computer Science at the King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia, where he leads the Optimization and Machine Learning Lab. At KAUST, he has a courtesy affiliation with the Applied Mathematics and Computational Sciences program and the Statistics program, and is a member of the Visual Computing Center, and the Extreme Computing Research Center. Prof Richtarik is a founding member and a Fellow of the Alan Turing Institute (UK National Institute for Data Science and Artificial Intelligence), and an EPSRC Fellow in Mathematical Sciences. During 2017-2019, he was a Visiting Professor at the Moscow Institute of Physics and Technology. Prior to joining KAUST, he was an Associate Professor of Mathematics at the University of Edinburgh, and held postdoctoral and visiting positions at Université Catholique de Louvain, Belgium, and University of California, Berkeley, USA, respectively. He received his PhD in 2007 from Cornell University, USA.

   

Title: TBA

Speaker: Zheng Xu, Senior Research Scientist, Google Research, USA

Biography
Zheng Xu is a research scientist at Google working on federated learning. He received his PhD in optimization and machine learning from the University of Maryland, College Park.


Organizing Committee


Program Committee

  • Alp Yurtsever (Umeå University)
  • Ambrish Rawat (International Business Machines)
  • Anastasios Kyrillidis (Rice University)
  • Ang Li (University of Maryland, College Park)
  • Anirban Das (Capital One)
  • Anran Li (Nanyang Technological University)
  • Aurélien Bellet (INRIA)
  • Berivan Isik (Google)
  • Bing Luo (Duke Kunshan University)
  • Bingsheng He (National University of Singapore)
  • Bo Zhao (Nanyang Technological University)
  • Chao Ren (Nanyang Technological University)
  • Charles Lu (Massachusetts Institute of Technology)
  • Christian Makaya (École Polytechnique de Montréal, Université de Montréal)
  • Chuizheng Meng (University of Southern California)
  • Chulin Xie (University of Illinois, Urbana Champaign)
  • Dimitrios Dimitriadis (Amazon)
  • Divyansh Jhunjhunwala (Carnegie Mellon University)
  • Egor Shulgin (KAUST)
  • Farzin Haddadpour (Biogen)
  • Feng Yan (University of Houston)
  • Giulio Zizzo (International Business Machines)
  • Grigory Malinovsky (King Abdullah University of Science and Technology)
  • Haibo Yang (Rochester Institute of Technology)
  • Herbert Woisetschläger (Technische Universität München)
  • Hongyi Wang (Carnegie Mellon University)
  • Hongyuan Zhan (Meta)
  • Javier Fernandez-Marques (Samsung AI)
  • Jayanth Regatti (Ohio State University)
  • Jianyu Wang (Apple)
  • Jiayi Wang (University of Utah)
  • Jihong Park (Deakin University)
  • Jinhyun So (Samsung)
  • Junyi Li (University of Pittsburgh)
  • Kallista Bonawitz (Google)
  • Karthik Prasad (Facebook AI)
  • Kevin Hsieh (Microsoft)
  • Konstantin Mishchenko (Samsung)
  • Lie He (Swiss Federal Institute of Technology Lausanne)
  • Liping Yi (Nankai University)
  • Matthias Reisser (Qualcomm)
  • Michael Kamp (Institute for AI in Medicine IKIM)
  • Minghong Fang (Duke University)
  • Minhao Cheng (Hong Kong University of Science and Technology)
  • Narasimha Raghavan Veeraragavan (Cancer Registry of Norway)
  • Paulo Ferreira (Dell Technologies)
  • Pengchao Han (The Chinese University of Hong Kong, Shenzhen)
  • Pranay Sharma (Carnegie Mellon University)
  • Prashant Khanduri (Wayne State University)
  • Radu Marculescu (University of Texas, Austin)
  • Samuel Horváth (Mohamed bin Zayed University of Artificial Intelligence)
  • Se-Young Yun (KAIST)
  • Sebastian Stich (CISPA Helmholtz Center for Information Security)
  • Shangwei Guo (Chongqing University)
  • Siyao Zhou (McMaster University)
  • Songze Li (Southeast University)
  • Stefanos Laskaridis (Brave Software)
  • Taha Toghani (Rice University)
  • Tahseen Rabbani (University of Maryland, College Park)
  • Virendra Marathe (Oracle)
  • Wenshuo Guo (University of California Berkeley)
  • Xianjie Guo (Hefei University of Technology)
  • Xiaoliang Fan (Xiamen University)
  • Yae Jee Cho (Carnegie Mellon University)
  • Yang Liu (Tsinghua University)
  • Yaosen Lin (Apple)
  • Yi Zhou (International Business Machines)
  • Yuanpu Cao (Pennsylvania State University)
  • Yujia Wang (Pennsylvania State University)
  • Zhanhong Jiang (Johnson Controls Inc.)
  • Zhaozhuo Xu (Rice University)
  • Zheng Xu (Google)

Organized by