Authors of selected workshop papers will be invited to extend their work for re-review and publication as book chapters in the Lecture Notes in Artificial Intelligence (LNAI). More information can be found here.
Time | Activity
08:30 – 09:00 | Keynote 1: Federated Learning in Healthcare: Opportunities and Barriers, by Ross Mitchell
09:00 – 09:30 | Keynote 2: Building Multi-Foundation Agent Systems through Trustworthy Auction-based Federated Learning, by Han Yu
09:30 – 10:00 | Keynote 3: Easy and Scalable Federated Learning in the Age of Large Language Models with NVIDIA FLARE, by Holger Roth (PDF)
10:00 – 10:30 | Tea Break
10:30 – 11:00 | Keynote 4: Uncertainty Quantification in Federated Learning, by Pascal Poupart
11:00 – 12:00 | Oral Presentation Session (10 min per talk, including Q&A)
12:00 – 12:30 | Keynote 5: Networked AI Learning: A Non-Federated Learning Approach, by Wen Tong (PDF)
Title: Federated Learning in Healthcare: Opportunities and Barriers
Speaker: Ross Mitchell, Alberta Health Services (AHS) Chair in AI in Health, Professor in the Department of Medicine, and Adjunct Professor in the Department of Computer Science, University of Alberta, Canada
Title: Building Multi-Foundation Agent Systems through Trustworthy Auction-based Federated Learning
Speaker: Han Yu, Nanyang Assistant Professor, Nanyang Technological University, Singapore
Title: Easy and Scalable Federated Learning in the Age of Large Language Models with NVIDIA FLARE
Speaker: Holger Roth, Principal Federated Learning Scientist, NVIDIA, USA
Title: Uncertainty Quantification in Federated Learning
Speaker: Pascal Poupart, Professor, University of Waterloo, Canada
Title: Networked AI Learning: A Non-Federated Learning Approach
Speaker: Wen Tong, CTO, Huawei Wireless, Huawei Technologies Company Ltd., Ottawa, ON, Canada
Foundation models (FMs), typified by large language models (LLMs) such as ChatGPT, are characterized by their scale and broad applicability. While these models provide transformative capabilities, they also introduce significant challenges, particularly in distributed model management and the associated concerns of data privacy, efficiency, and scalability. Training foundation models is data- and resource-intensive, and conventional training methods are typically centralized; this creates significant obstacles in real-world use cases: training data are often distributed across sites, substantial computational resources are needed to manage distributed data repositories, and regulatory guidelines (e.g., GDPR) restrict the sharing of sensitive data.
Federated learning (FL) is an emerging paradigm that can mitigate these challenges by training a shared global model over distributed data without centralizing it. As machine learning is increasingly applied to draw insight from real-world, distributed, and sensitive data, familiarity with and adoption of FL within the general scientific community becomes essential. Because FL allows self-interested data owners to train models collaboratively, end-users can become co-creators of AI solutions. By adopting federated learning approaches, we can leverage the distributed data and computing power available across different sources while respecting user privacy.
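To make the mechanism concrete, the following minimal sketch (Python with NumPy) illustrates federated averaging in the style of FedAvg: each client performs a few local gradient steps on its private data, and a server averages the resulting parameters weighted by local dataset size, so raw data never leaves its owner. The linear model, synthetic client datasets, and hyperparameters are illustrative assumptions, not part of any workshop contribution.

    # Minimal federated-averaging sketch. The linear-regression model and
    # synthetic client data below are hypothetical, for illustration only.
    import numpy as np

    def local_update(w_global, X, y, lr=0.1, epochs=5):
        # One client's local training: gradient steps on private (X, y).
        w = w_global.copy()
        for _ in range(epochs):
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        return w

    def fedavg_round(w_global, clients):
        # Server aggregation: average client models weighted by data size.
        # Only model parameters are exchanged; raw data stays local.
        sizes = [len(y) for _, y in clients]
        updates = [local_update(w_global, X, y) for X, y in clients]
        return np.average(updates, axis=0, weights=sizes)

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for n in (50, 80, 30):  # uneven data across clients, as in practice
        X = rng.normal(size=(n, 2))
        clients.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

    w = np.zeros(2)
    for _ in range(20):  # communication rounds
        w = fedavg_round(w, clients)
    print(w)  # approaches true_w without pooling any client's data

A full FL system would run these exchanges over a network, often with secure aggregation and differential privacy; the sketch only captures the core principle that data stays local while model updates are shared.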
The rise of FMs amplifies the importance and relevance of FL as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the use of FMs, enabling efficient and scalable training while safeguarding sensitive data.
With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the era of foundation models. Since the emergence of foundation models has been a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform to facilitate interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field. The workshop topics include but are not limited to:
- Theory and algorithmic foundations
- Federated learning for training and tuning foundation models
More information on previous workshops can be found here.
Submitted papers must be written in English, with a maximum length of 6 printed pages. Papers that do not comply with the length limit will not be reviewed. Use the standard IEEE Transactions templates for Microsoft Word or LaTeX, available at: https://www.ieee.org/conferences/publishing/templates.html. All submitted papers will undergo single-blind peer review for novelty, technical quality, and impact; submissions may therefore contain author details. Submissions will be accepted via the EasyChair submission website.
Easychair submission site: https://easychair.org/conferences/?conf=flfmicme2024
For enquiries, please email: flfmicme2024@easychair.org
General Co-Chairs: Randy Goebel (U Alberta), Xiaoxiao Li (UBC), Han Yu (NTU)
Program Co-Chairs: Jane Z. Wang (UBC), Ross Mitchell (U Alberta)