Selected workshop papers are invited to be extended and re-reviewed for publication as book chapters in the Lecture Notes in Artificial Intelligence (LNAI). More information can be found here.
Time | Activity
08:55 – 09:00 | Opening Remarks
09:00 – 09:45 | Keynote 1: Federated Large Language Models and Their Applications, by Qiang Yang
09:45 – 10:30 | Keynote 2: Data-driven Federated Optimisation: From Small Surrogates to LLMs, by Yaochu Jin
10:30 – 11:00 | Coffee Break
11:00 – 11:30 | Keynote 3: Advanced Federated Learning Paradigms in Extreme Heterogeneities and in the Era of Foundation Models, by Xiaoxiao Li
11:30 – 12:30 | Oral Presentation Session 1 (10 min per talk + 2 min Q&A)
12:30 – 14:00 | Lunch Break
14:00 – 14:30 | Keynote 4: Trustworthy Federated Learning and its Large-scale Model Applications, by Xiaoliang Fan
14:30 – 15:30 | Oral Presentation Session 2 (10 min per talk + 2 min Q&A)
15:30 – 16:00 | Coffee Break
16:00 – 16:30 | Keynote 5: Exploring Trustworthy Machine Learning under Imperfect Data, by Bo Han
16:30 – 16:45 | Award Ceremony
Keynote 1
Title: Federated Large Language Models and Their Applications
Speaker: Qiang Yang, Chief AI Officer (CAIO), WeBank / Professor Emeritus, Hong Kong University of Science and Technology
Keynote 2
Title: Data-driven Federated Optimisation: From Small Surrogates to LLMs
Speaker: Yaochu Jin, Chair Professor of AI, Westlake University
Keynote 3
Title: Advanced Federated Learning Paradigms in Extreme Heterogeneities and in the Era of Foundation Models
Speaker: Xiaoxiao Li, Assistant Professor, The University of British Columbia
Keynote 4
Title: Trustworthy Federated Learning and its Large-scale Model Applications
Speaker: Xiaoliang Fan, Senior Research Specialist, Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University
Keynote 5
Title: Exploring Trustworthy Machine Learning under Imperfect Data
Speaker: Bo Han, Assistant Professor in Machine Learning, Hong Kong Baptist University
Foundation models (FMs), typified by large language models (LLMs) such as ChatGPT, are characterized by their massive scale and broad applicability. While these models provide transformative capabilities, they also introduce significant challenges, particularly concerning distributed model management and the related issues of data privacy, efficiency, and scalability. Training a foundation model is data- and resource-intensive, and conventional training methods are typically centralized, which creates significant obstacles in real-world use cases: training data are spread across many parties, substantial computational resources are needed to manage distributed data repositories, and regulatory guidelines (e.g., GDPR) restrict the sharing of sensitive data.
Federated learning (FL) is an emerging paradigm that can mitigate these challenges by training a global model in a distributed manner, over data that remains decentralized. As machine learning is increasingly applied to analyze and draw insight from real-world, distributed, and sensitive data, familiarity with and adoption of FL becomes a relevant and timely concern for the general scientific community. Because FL allows self-interested data owners to train models collaboratively, end-users can become co-creators of AI solutions. By adopting federated learning approaches, we can leverage the distributed data and computing power available across different sources while respecting user privacy.
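To make the paradigm concrete, here is a minimal FedAvg-style sketch in Python/NumPy. It is an illustrative toy under stated assumptions (a linear least-squares model, synthetic client data, hand-picked learning rate and round counts), not a prescribed implementation: each client trains the current global model on its own data, and the server only ever receives model weights, which it averages in proportion to client dataset sizes.

import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    # Client-side step: a few epochs of gradient descent on a linear
    # least-squares model, standing in for any local training routine.
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, dim, rounds=20):
    # Server loop: broadcast the global weights, collect each client's
    # locally trained weights, and average them weighted by dataset size.
    # Raw data never leaves the clients; only weight vectors are exchanged.
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(w_global, X, y) for X, y in clients]
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        w_global = np.average(updates, axis=0, weights=sizes / sizes.sum())
    return w_global

# Toy demo: three clients whose inputs follow different distributions,
# a mild form of the statistical heterogeneity FL must cope with.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for shift in (0.0, 1.0, -1.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ w_true + rng.normal(0.0, 0.1, size=50)))

print(fedavg(clients, dim=2))  # converges near [2, -1] without pooling raw data

The same broadcast-train-aggregate loop underlies FL systems at any scale; what changes for foundation models is chiefly the size of the exchanged updates, which motivates the parameter-efficient variants sketched further below.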
The rise of FMs amplifies the importance and relevance of FL as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the use of FMs, enabling efficient and scalable training while safeguarding sensitive data.
FMs such as GPT-4, which encode vast knowledge and exhibit powerful emergent abilities, have achieved remarkable success in various natural language processing and computer vision tasks. Grounding FMs, by adapting them to domain-specific tasks or augmenting them with domain-specific knowledge, enables us to exploit their full potential. However, grounding FMs faces several challenges, stemming primarily from constrained computing resources, data privacy, model heterogeneity, and model ownership. Federated Transfer Learning (FTL), the combination of FL and transfer learning, provides promising solutions to these challenges. In recent years, the need to ground FMs by leveraging FTL, an approach coined FTL-FM, has grown strongly in both academia and industry.
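As one concrete, hypothetical instance of the FTL-FM pattern, the sketch below fine-tunes a frozen "foundation" weight matrix with LoRA-style low-rank adapters in a federated loop: only the small adapter factors travel between clients and server, so both the private data and the full pretrained weights stay where they are. All names, dimensions, data, and the plain-NumPy optimizer are illustrative assumptions rather than any specific system's API.

import numpy as np

rng = np.random.default_rng(42)
d, r = 8, 2                         # toy model width and adapter rank (r << d)
W_frozen = rng.normal(size=(d, d))  # stand-in pretrained layer: never updated, never sent

def local_adapter_update(A, B, X, Y, lr=0.05, steps=25):
    # Client-side fine-tuning: fit Y ~ X @ (W_frozen + B @ A).T by gradient
    # descent on the low-rank factors A (r x d) and B (d x r) only.
    A, B = A.copy(), B.copy()
    for _ in range(steps):
        E = X @ (W_frozen + B @ A).T - Y   # residuals on this client's data
        gB = (E.T @ X) @ A.T / len(X)      # gradient w.r.t. B (constants folded into lr)
        gA = B.T @ (E.T @ X) / len(X)      # gradient w.r.t. A
        A -= lr * gA
        B -= lr * gB
    return A, B

# Federated rounds: broadcast the shared adapter, train it locally on each
# client's private data, then average the returned factors on the server.
A_g = rng.normal(scale=0.01, size=(r, d))
B_g = np.zeros((d, r))                     # zero-init so adaptation starts at W_frozen
clients = [(rng.normal(size=(32, d)), rng.normal(size=(32, d))) for _ in range(4)]
for _ in range(10):
    updates = [local_adapter_update(A_g, B_g, X, Y) for X, Y in clients]
    A_g = np.mean([A for A, _ in updates], axis=0)
    B_g = np.mean([B for _, B in updates], axis=0)

Each round ships only 2 x r x d numbers per client instead of the d^2 (for a real FM, billions) in the full layer, which is what makes federated tuning of large models tractable. Note that averaging the factors A and B separately is a heuristic, since the average of the products B @ A is not the product of the averaged factors; handling this mismatch under client heterogeneity is exactly the kind of open problem this workshop targets.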
With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the era of foundation models. Since the emergence of foundation models is a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform that facilitates interaction among students, scholars, and industry professionals from around the world, to discuss the latest advancements, share insights, and identify future directions in this exciting field.
This workshop aims to bring together academic researchers and industry practitioners to address open issues in this interdisciplinary research area. For industry participants, we intend to create a forum for communicating problems that are practically relevant. For academic participants, we hope to make it easier to become productive in this area. The workshop will focus on the theme of combining FL with FMs to open up opportunities for addressing new challenges. The workshop topics include but are not limited to:
Theory and algorithmic foundations:
Federated learning for training and tuning foundation models:
More information on previous workshops can be found here.
Each submission can be up to 7 pages of content plus up to 2 additional pages of references and acknowledgements. Submitted papers must be written in English, in PDF format, and follow the IJCAI'24 template. All submitted papers will undergo single-blind peer review and be assessed for novelty, technical quality, and impact. Submissions may contain author details, and will be accepted via the EasyChair submission website.
Per the requirements of IJCAI'24, at least one author of each accepted paper must travel to the IJCAI venue in person. In addition, submitting the same paper to more than one IJCAI workshop is forbidden.
Easychair submission site: https://easychair.org/conferences/?conf=flfm-ijcai-24
For enquiries, please email: flfm-ijcai-24@easychair.org
Steering Co-Chairs | General Co-Chairs | Program Co-Chairs
Qiang Yang (WeBank/HKUST) | Randy Goebel (U Alberta) | Xiaoxiao Li (UBC)
Yaochu Jin (Westlake U) | Lixin Fan (WeBank) | Han Yu (NTU)
Publicity Co-Chairs | Publications Co-Chairs
Alysa Ziying Tan (NTU) | Ying Wei (NTU)
Jiankai Sun (Pinterest) | Zengxiang Li (ENN)