Authors of selected workshop papers will be invited to extend them for re-review and publication as book chapters in the Lecture Notes in Artificial Intelligence (LNAI) series. More information can be found here.
Time | Activity
08:15 – 08:20 | Opening Remarks
08:20 – 09:00 | Keynote 1: Federated Large Language Models and Their Applications, by Qiang Yang
09:00 – 09:30 | Oral Presentation Session 1 (5 min per talk + 2 min Q&A)
09:30 – 10:00 | Coffee Break
10:00 – 10:30 | Oral Presentation Session 2 (5 min per talk + 2 min Q&A)
10:30 – 11:00 | Keynote 2: The first AGI will be Federated, by Nicholas D. Lane
11:00 – 12:30 | Oral Presentation Session 3 (5 min per talk + 2 min Q&A)
12:30 – 14:00 | Lunch Break
14:00 – 14:30 | Keynote 3: Transforming Multicenter Neurology Trials with Federated Learning: A New Era of Collaborative Medicine, by Martin J. McKeown
14:30 – 15:00 | Keynote 4: Federated Optimization Beyond Standard Empirical Risk Minimization, by Gauri Joshi
15:00 – 15:30 | Coffee Break
15:30 – 16:00 | Keynote 5: Machine Learning from Imbalanced Data Sources, by Shiqiang Wang
16:00 – 17:00 | Oral Presentation Session 4 (5 min per talk + 2 min Q&A)
17:00 – 17:15 | Award Ceremony
Title: Federated Large Language Models and Their Applications
Speaker: Qiang Yang, Chief AI Officer (CAIO), WeBank / Professor Emeritus, Hong Kong University of Science and Technology

Title: The first AGI will be Federated
Speaker: Nicholas D. Lane, Professor, University of Cambridge / Co-Founder and CSO, Flower Labs

Title: Transforming Multicenter Neurology Trials with Federated Learning: A New Era of Collaborative Medicine
Speaker: Martin J. McKeown, Professor, The University of British Columbia

Title: Federated Optimization Beyond Standard Empirical Risk Minimization
Speaker: Gauri Joshi, Associate Professor, Carnegie Mellon University

Title: Machine Learning from Imbalanced Data Sources
Speaker: Shiqiang Wang, Staff Research Scientist, IBM T. J. Watson Research Center
Foundation models (FMs) are typically associated with large language models (LLMs), like ChatGPT, and are characterized by their scale and broad applicability. While these models provide transformative capabilities, they also introduce significant challenges, particularly concerning distributed model management and the related issues of data privacy, efficiency, and scalability. Training foundation models is data and resource intensive, and conventional methods are typically centralized; this creates significant challenges in real-world use cases, including handling distributed training data, provisioning computational resources to manage distributed data repositories, and complying with regulatory guidelines (e.g., GDPR) that restrict the sharing of sensitive data.
Federated learning (FL) is an emerging paradigm that can mitigate these challenges by collaboratively training a global model on data that remains distributed across its owners. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data makes familiarity with and adoption of FL a relevant and timely need for the general scientific community. Because FL allows self-interested data owners to collaboratively train models, end-users can become co-creators of AI solutions. By adopting federated learning approaches, we can leverage the distributed data and computing power available across different sources while respecting user privacy.
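The snippet below is a minimal, illustrative sketch of the FedAvg-style recipe behind this idea: each client runs a few steps of local training on its private data, and a server averages the resulting models weighted by client dataset size. It is not the code of any system discussed at the workshop; the synthetic linear-regression task, client sizes, and hyperparameters are assumptions made purely for illustration.

```python
# Minimal FedAvg sketch (illustrative only). Assumptions: a synthetic
# linear-regression task, NumPy only, and the basic recipe of local
# gradient steps on each client followed by weighted model averaging.
import numpy as np

rng = np.random.default_rng(0)
d = 5                                    # model dimension
w_true = rng.normal(size=d)              # ground-truth weights (synthetic data)

def make_client(n):
    """Create one client's private dataset (never shared with the server)."""
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (50, 80, 120)]   # heterogeneous data sizes

def local_update(w, X, y, lr=0.05, epochs=5):
    """Run a few epochs of full-batch gradient descent on one client's data."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)            # MSE gradient
        w -= lr * grad
    return w

w_global = np.zeros(d)
for _ in range(20):                                  # communication rounds
    local_models = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregates: average client models weighted by dataset size.
    w_global = np.average(local_models, axis=0, weights=sizes)

print("distance to ground truth:", np.linalg.norm(w_global - w_true))
```

Only model parameters travel between clients and the server in this sketch; the raw data stays on each client, which is the privacy-preserving property the paragraph above refers to.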
The rise of FMs amplifies the importance and relevance of FL as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the use of FMs, enabling efficient and scalable training while safeguarding sensitive data.
FMs such as GPT-4, which encode vast knowledge and exhibit powerful emergent abilities, have achieved remarkable success in various natural language processing and computer vision tasks. Grounding FMs, by adapting them to domain-specific tasks or augmenting them with domain-specific knowledge, enables us to exploit their full potential. However, grounding FMs faces several challenges, stemming primarily from constrained computing resources, data privacy, model heterogeneity, and model ownership. Federated Transfer Learning (FTL), the combination of FL and transfer learning, provides promising solutions to address these challenges. In recent years, the need for grounding FMs with FTL, coined FTL-FM, has grown strongly in both academia and industry.
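As a hedged illustration of this FTL-FM idea, the sketch below fine-tunes only a small adapter head on top of a frozen, shared backbone and federates just those adapter parameters. The "foundation model" is stood in for by a fixed random feature map, and the data, adapter shape, and training settings are assumptions for illustration only, not a reference implementation.

```python
# Sketch of federated parameter-efficient tuning: clients train a small
# adapter on private data while the shared backbone stays frozen; the
# server averages only the adapters (FedAvg applied to adapter parameters).
import numpy as np

rng = np.random.default_rng(1)
d_in, d_feat = 10, 32

# Frozen "foundation model": a fixed random feature extractor shared by all.
W_frozen = rng.normal(size=(d_in, d_feat)) / np.sqrt(d_in)
def features(X):
    return np.tanh(X @ W_frozen)          # frozen backbone, never updated

w_label = rng.normal(size=d_feat)         # hidden labelling rule (synthetic data)

def make_client(n, shift):
    """Each client holds private data from a slightly shifted input distribution."""
    X = rng.normal(size=(n, d_in)) + shift
    y = (features(X) @ w_label > 0).astype(float)
    return X, y

clients = [make_client(100, s) for s in (0.0, 0.5, -0.5)]

def local_adapter_update(a, X, y, lr=0.5, steps=20):
    """Train only the adapter head (logistic regression on frozen features)."""
    a = a.copy()
    Z = features(X)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Z @ a)))
        a -= lr * Z.T @ (p - y) / len(y)
    return a

adapter = np.zeros(d_feat)
for _ in range(10):                        # communication rounds
    updates = [local_adapter_update(adapter, X, y) for X, y in clients]
    adapter = np.mean(updates, axis=0)     # server averages adapters only

X_test, y_test = make_client(500, 0.0)
pred = (features(X_test) @ adapter > 0).astype(float)
print("test accuracy of the federated adapter:", (pred == y_test).mean())
```

Because only the small adapter is communicated, raw data stays local and the large backbone is neither transmitted nor retrained, which is one way FTL-style approaches can ease the compute, privacy, and ownership constraints mentioned above.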
With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the era of foundation models. Since the emergence of foundation models has been a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform to facilitate interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field.
This workshop aims to bring together academic researchers and industry practitioners to address open issues in this interdisciplinary research area. For industry participants, we intend to create a forum to communicate problems that are practically relevant. For academic participants, we hope to make it easier for them to become productive in this area. The workshop will focus on the theme of combining FL with FMs to open up opportunities for addressing new challenges. The workshop topics include but are not limited to:
Theory and algorithmic foundations:
Federated learning for training and tuning foundation models:
More information on previous workshops can be found here.
The main text of a submitted paper can be between 4 and 9 content pages, including all figures and tables, following the NeurIPS'24 template. Additional pages containing references do not count as content pages. An optional appendix of any length is allowed and should be placed at the end of the paper (after the references). Reviewing is double-blind (author identities shall not be revealed to the reviewers), so the submitted PDF file should not include any identifying information about the authors.
Late-breaking papers are papers that were reviewed for NeurIPS'24 but not accepted. Authors who wish to submit such papers can follow the same submission link and do so by 27 September 2024, including the NeurIPS'24 review comments in the appendix of the paper. These papers will not go through another round of peer review; instead, the organizing committee will determine whether they are accepted into the FL@FM-NeurIPS'24 workshop.
Submissions are collected on OpenReview at the following link: https://openreview.net/group?id=NeurIPS.cc/2024/Workshop/Federated_Learning.
Accepted papers and their review comments will be posted publicly on OpenReview.
Due to the short timeline, we will not have a rebuttal period, but the authors are encouraged to interact and discuss with reviewers on OpenReview after the acceptance notifications are sent out.
Rejected papers and their reviews will remain private and will not be posted publicly.
Sai Praneeth Karimireddy (USC)
Xiaoxiao Li (UBC)
Songtao Lu (IBM)
Stacy Patterson (RPI)
Pascal Poupart (U Waterloo)
Han Yu (NTU)