International Workshop on Federated Learning in the Age of Foundation Models
In Conjunction with IJCAI 2024 (FL@FM-IJCAI'24)


Final Submission Deadline: May 14, 2024 (23:59:59 AoE)
Notification Due: June 01, 2024 (23:59:59 AoE)
Workshop Date: Monday, August 05, 2024
Venue: Room 1F-Baeknok B, International Convention Center (ICC), Jeju Island, South Korea

Post-Workshop Publications


Selected workshop papers will be invited to be extended and re-reviewed for publication as book chapters in the Lecture Notes in Artificial Intelligence (LNAI) series. More information can be found here.


Workshop Program (Monday, August 05, 2024)

Time            Activity
08:55 – 09:00 Opening Remarks
09:00 – 09:45 Keynote 1: Federated Large Language Models and Their Applications, by Qiang Yang
09:45 – 10:30 Keynote 2: Data-driven Federated Optimisation: From Small Surrogates to LLMs, by Yaochu Jin
10:30 – 11:00 Coffee Break
11:00 – 11:30 Keynote 3: Advanced Federated Learning Paradigms in Extreme Heterogeneities and in the Era of Foundation Models, by Xiaoxiao Li
11:30 – 12:30 Oral Presentation Session 1 (10 min per talk + 2 min Q&A)
  1. [Best Paper Award] Jiaqi Wang, Jingtao Li, Weiming Zhuang, Chen Chen, Fenglong Ma and Lingjuan Lyu. Scaling Vision Foundation Models with Federated Adapter Generalization
  2. [Best Student Paper Award] Shaoyuan Chen, Linlin You, Rui Liu, Shuo Yu and Ahmed M. Abdelmoniem. Federated Knowledge Transfer Fine-tuning Large Server Model with Resource-Constrained IoT Clients
  3. Xueming Yan, Ziqi Wang and Yaochu Jin. Federated Incomplete Multi-View Clustering with Heterogeneous Graph Neural Networks
  4. Yuping Yan, Yizhi Wang, Yingchao Yu and Yaochu Jin. Privacy-preserving Quantification of Non-IID Degree in Federated Learning
  5. Holger Roth, Daniel Beutel, Yan Cheng, Javier Fernandez Marques, Heng Pan, Chester Chen, Zhihong Zhang, Yuhong Wen, Sean Yang, Isaac Te-Chung Yang, Yuan-Ting Hsieh, Ziyue Xu, Daguang Xu, Nicholas Lane and Andrew Feng. Supercharging Federated Learning with Flower and NVIDIA FLARE
12:30 – 14:00 Lunch Break
14:00 – 14:30 Keynote 4: Trustworthy Federated Learning and its Large-scale Model Applications, by Xiaoliang Fan
14:30 – 15:30 Oral Presentation Session 2 (10 min per talk + 2 min Q&A)
  1. Minbiao Han, Kumar Kshitij Patel, Han Shao and Lingxiao Wang. On the Effect of Defections in Federated Learning and How to Prevent Them
  2. Alka Luqman, Brandon Yeow Wei Liang and Anupam Chattopadhyay. Federated Learning Optimization: A Comparative Study of Data and Model Exchange Strategies in Dynamic Networks
  3. Yingchao Yu, Yuping Yan, Jisong Cai and Yaochu Jin. Heterogeneous Federated Learning with Convolutional and Spiking Neural Networks
  4. Fatima Abacha, Sin G Teo, Lucas Cordeiro and Mustafa Mustafa. Synthetic Data Aided Federated Learning Using Foundation Models
  5. Hao Yan and Yuhong Guo. Lightweight Unsupervised Federated Learning
15:30 – 16:00 Coffee Break
16:00 – 16:30 Keynote 5: Exploring Trustworthy Machine Learning under Imperfect Data, by Bo Han
16:30 – 16:45 Award Ceremony

Keynote Speakers


Title: Federated Large Language Models and Their Applications

Speaker: Qiang Yang, Chief AI Officer (CAIO), WeBank / Professor Emeritus, Hong Kong University of Science and Technology

Biography
Qiang Yang is the head of the AI Department at WeBank (Chief AI Officer) and Professor Emeritus at the Computer Science and Engineering (CSE) Department of the Hong Kong University of Science and Technology (HKUST), where he formerly served as head of the CSE Department and founding director of the Big Data Institute (2015-2018). His research interests include AI, machine learning, and data mining, especially transfer learning, automated planning, federated learning, and case-based reasoning. He is a fellow of several international societies, including ACM, AAAI, IEEE, IAPR, and AAAS. He received his Ph.D. in Computer Science in 1989 and his M.Sc. in Astrophysics in 1985, both from the University of Maryland, College Park, and his B.Sc. in Astrophysics from Peking University in 1982. He was a faculty member at the University of Waterloo (1989-1995) and Simon Fraser University (1995-2001). He was the founding Editor-in-Chief of the ACM Transactions on Intelligent Systems and Technology (ACM TIST) and of the IEEE Transactions on Big Data (IEEE TBD). He served as President of the International Joint Conferences on Artificial Intelligence (IJCAI, 2017-2019) and as an executive council member of the Association for the Advancement of AI (AAAI, 2016-2020). Qiang Yang is a recipient of several awards, including the 2004/2005 ACM KDDCUP Championship, the ACM SIGKDD Distinguished Service Award (2017), and AAAI Innovative Application Awards (2018, 2020, and 2022). He was the founding director of Huawei's Noah's Ark Lab (2012-2014) and a co-founder of 4Paradigm Corp, an AI platform company. He is the author of several books, including Intelligent Planning (Springer), Crafting Your Research Future (Morgan & Claypool), and Constraint-based Design Recovery for Software Engineering (Springer).


Title: Data-driven Federated Optimisation: From Small Surrogates to LLMs

Speaker: Yaochu Jin, Chair Professor of AI, Westlake University

Biography
Yaochu Jin obtained the B.Sc., M.Sc., and Ph.D. degrees, all in automatic control, from the Electrical Engineering Department, Zhejiang University, China, in 1988, 1991, and 1996, respectively, and the Dr.-Ing. degree from the Institute of Neuroinformatics, Ruhr-University Bochum, Germany, in 2001. He is currently a Chair Professor of Artificial Intelligence with the School of Engineering, Westlake University. Before joining Westlake University, he was an Alexander von Humboldt Professor for Artificial Intelligence, endowed by the German Federal Ministry of Education and Research, at Bielefeld University, Germany, from 2021 to 2023, and a Surrey Distinguished Chair Professor in Computational Intelligence at the University of Surrey, Guildford, U.K., from 2010 to 2021. He was a "Finland Distinguished Professor" at the University of Jyväskylä, Finland, and a "Changjiang Distinguished Visiting Professor" at Northeastern University, China, from 2015 to 2017. Prof. Jin is presently the President of the IEEE Computational Intelligence Society and the Editor-in-Chief of Complex & Intelligent Systems. He was the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems, an IEEE Distinguished Lecturer (2013-2015 and 2017-2019), and the Vice President for Technical Activities of the IEEE Computational Intelligence Society (2015-2016). He is a Member of Academia Europaea and a Fellow of IEEE. For his outstanding contributions to evolutionary optimization of complex systems, he received the IEEE Frank Rosenblatt Award.


Title: Advanced Federated Learning Paradigms in Extreme Heterogeneities and in the Era of Foundation Models

Speaker: Xiaoxiao Li, Assistant Professor, The University of British Columbia

Biography
Dr. Xiaoxiao Li is an Assistant Professor in the Electrical and Computer Engineering Department at the University of British Columbia (UBC), a Faculty Member at the Vector Institute, and a CIFAR AI Chair. Before joining UBC, Dr. Li was a Postdoctoral Research Fellow in the Computer Science Department at Princeton University. Dr. Li obtained her Ph.D. degree from Yale University in 2020 and her Bachelor's degree from Zhejiang University in 2015. In recent years, Dr. Li has published over 50 papers in leading machine learning conferences and journals, including NeurIPS, ICML, ICLR, CVPR, IJCAI, MICCAI, IPMI, ECCV, AAAI, and Nature Methods. Dr. Li's research has been recognized with the OHBM Merit Abstract Award, the MLMI Best Paper Award, the DART Best Paper Award, and the FL@FM-TheWebConf'24 Best Paper Award.


Title: Trustworthy Federated Learning and its Large-scale Model Applications

Speaker: Xiaoliang Fan, Senior Research Specialist, Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University

Biography
Xiaoliang Fan is a Senior Research Specialist at the Fujian Key Laboratory of Sensing and Computing for Smart Cities, School of Informatics, Xiamen University, China. He received his Ph.D. degree from University Pierre and Marie Curie, France, in 2012. His research interests include trustworthy AI and federated learning, spatio-temporal data mining, and LLM applications. He has published 80+ papers in journals (IEEE TSC/TMC/TITS, etc.) and top conferences (AAAI, KDD, IJCAI, WWW, etc.). His work is funded by NSFC and many industry collaborators. Dr. Fan is an IEEE Senior Member and a CCF Senior Member.


Title: Exploring Trustworthy Machine Learning under Imperfect Data

Speaker: Bo Han, Assistant Professor in Machine Learning, Hong Kong Baptist University

Biography
Bo Han is currently an Assistant Professor in Machine Learning at Hong Kong Baptist University and a BAIHO Visiting Scientist at the RIKEN Center for Advanced Intelligence Project (RIKEN AIP), where his research focuses on machine learning, deep learning, foundation models, and their applications. He was a Visiting Research Scholar at MBZUAI MLD (2024), a Visiting Faculty Researcher at Microsoft Research (2022) and Alibaba DAMO Academy (2021), and a Postdoctoral Fellow at RIKEN AIP (2019-2020). He received his Ph.D. degree in Computer Science from the University of Technology Sydney (2015-2019). He has co-authored two machine learning monographs: Machine Learning with Noisy Labels (MIT Press) and Trustworthy Machine Learning under Imperfect Data (Springer Nature). He has served as a Senior Area Chair of NeurIPS; as an Area Chair of NeurIPS, ICML, and ICLR; as an Associate Editor of IEEE TPAMI, MLJ, and JAIR; and as an Editorial Board Member of JMLR and MLJ. He received the Outstanding Paper Award at NeurIPS, Most Influential Paper at NeurIPS, Notable Area Chair at NeurIPS, Outstanding Area Chair at ICLR, and Outstanding Associate Editor at IEEE TNNLS. He has also received the RGC Early CAREER Scheme, the NSFC General Program, an IJCAI Early Career Spotlight, the RIKEN BAIHO Award, the Dean's Award for Outstanding Achievement, the Microsoft Research StarTrack Program, and Faculty Research Awards from ByteDance, Baidu, Alibaba, and Tencent.


Accepted Papers

  1. [Best Paper Award] Jiaqi Wang, Jingtao Li, Weiming Zhuang, Chen Chen, Fenglong Ma and Lingjuan Lyu. Scaling Vision Foundation Models with Federated Adapter Generalization
  2. [Best Student Paper Award] Shaoyuan Chen, Linlin You, Rui Liu, Shuo Yu and Ahmed M. Abdelmoniem. Federated Knowledge Transfer Fine-tuning Large Server Model with Resource-Constrained IoT Clients
  3. Xueming Yan, Ziqi Wang and Yaochu Jin. Federated Incomplete Multi-View Clustering with Heterogeneous Graph Neural Networks
  4. Yuping Yan, Yizhi Wang, Yingchao Yu and Yaochu Jin. Privacy-preserving Quantification of Non-IID Degree in Federated Learning
  5. Holger Roth, Daniel Beutel, Yan Cheng, Javier Fernandez Marques, Heng Pan, Chester Chen, Zhihong Zhang, Yuhong Wen, Sean Yang, Isaac Te-Chung Yang, Yuan-Ting Hsieh, Ziyue Xu, Daguang Xu, Nicholas Lane and Andrew Feng. Supercharging Federated Learning with Flower and NVIDIA FLARE
  6. Minbiao Han, Kumar Kshitij Patel, Han Shao and Lingxiao Wang. On the Effect of Defections in Federated Learning and How to Prevent Them
  7. Alka Luqman, Brandon Yeow Wei Liang and Anupam Chattopadhyay. Federated Learning Optimization: A Comparative Study of Data and Model Exchange Strategies in Dynamic Networks
  8. Yingchao Yu, Yuping Yan, Jisong Cai and Yaochu Jin. Heterogeneous Federated Learning with Convolutional and Spiking Neural Networks
  9. Fatima Abacha, Sin G Teo, Lucas Cordeiro and Mustafa Mustafa. Synthetic Data Aided Federated Learning Using Foundation Models
  10. Hao Yan and Yuhong Guo. Lightweight Unsupervised Federated Learning

Call for Papers

Foundation models (FMs) are typically associated with large language models (LLMs) such as ChatGPT and are characterized by their scale and broad applicability. While these models provide transformative capabilities, they also introduce significant challenges, particularly concerning distributed model management and the related issues of data privacy, efficiency, and scalability. Training foundation models is data- and resource-intensive, and conventional methods are typically centralized; this creates significant challenges in real-world use cases, including distributed training data, the computational resources needed to manage distributed data repositories, and the development of and alignment with regulatory guidelines (e.g., GDPR) that restrict the sharing of sensitive data.

Federated learning (FL) is an emerging paradigm that can mitigate these challenges by training a shared global model over distributed data without centralizing it. The extensive application of machine learning to analyze and draw insights from real-world, distributed, and sensitive data makes familiarity with and adoption of this relevant and timely topic important for the general scientific community. Because FL allows self-interested data owners to collaboratively train models, end-users can become co-creators of AI solutions. By adopting federated learning approaches, we can leverage the distributed data and computing power available across different sources while respecting user privacy.
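
To make this training loop concrete, below is a minimal, illustrative Python sketch of one FedAvg-style federated round on a toy least-squares task. All names and the toy setup (local_update, federated_round, the simulated clients) are our own assumptions chosen for illustration, not a system presented at the workshop; production FL stacks add client sampling, secure aggregation, and differential privacy on top of this pattern.

    import numpy as np

    def local_update(w, X, y, lr=0.1, epochs=5):
        # One client's local training: a few gradient-descent steps of
        # least-squares regression on its private data (X, y), which
        # never leave the client.
        w = w.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    def federated_round(w_global, client_datasets):
        # One FedAvg-style communication round: each client returns its
        # locally updated weights, and the server averages them weighted
        # by local dataset size. Only weights travel, never raw data.
        updates, sizes = [], []
        for X, y in client_datasets:
            updates.append(local_update(w_global, X, y))
            sizes.append(len(y))
        total = sum(sizes)
        return sum((n / total) * u for n, u in zip(sizes, updates))

    # Toy demo: three clients whose private data come from shifted
    # distributions, simulating mildly non-IID federated data.
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    clients = []
    for shift in (0.0, 1.0, -1.0):
        X = rng.normal(shift, 1.0, size=(50, 2))
        y = X @ true_w + rng.normal(0.0, 0.1, size=50)
        clients.append((X, y))

    w = np.zeros(2)
    for _ in range(20):
        w = federated_round(w, clients)
    print("federated estimate:", w)  # approaches true_w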

The rise of FMs amplifies the importance and relevance of FL as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the use of FMs, enabling efficient and scalable training while safeguarding sensitive data.

FMs such as GPT-4, encoded with vast knowledge and powerful emergent abilities, have achieved remarkable success in various natural language processing and computer vision tasks. Grounding FMs, by adapting them to domain-specific tasks or augmenting them with domain-specific knowledge, enables us to exploit their full potential. However, grounding FMs faces several challenges, stemming primarily from constrained computing resources, data privacy, model heterogeneity, and model ownership. Federated Transfer Learning (FTL), the combination of FL and transfer learning, provides promising solutions to these challenges. In recent years, the need for grounding FMs by leveraging FTL, coined FTL-FM, has arisen strongly in both academia and industry.
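
One way such grounding can stay within constrained client resources is parameter-efficient federated fine-tuning: the large backbone remains frozen, and clients train and exchange only a small adapter head. The sketch below illustrates this pattern under stated assumptions; the "backbone" is mocked by a fixed random feature extractor, and all names are hypothetical rather than a method proposed at the workshop.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in for a frozen foundation-model backbone: a fixed, shared
    # feature extractor that no client ever updates or transmits.
    W_backbone = rng.normal(size=(16, 4))

    def backbone_features(X):
        # Frozen forward pass; in practice this would be a large
        # pre-trained network, possibly hosted server-side.
        return np.tanh(X @ W_backbone)

    def local_adapter_update(adapter, X, y, lr=0.05, steps=10):
        # Each client fine-tunes only the small adapter head on its
        # private data; the communication cost is the adapter, not the FM.
        a = adapter.copy()
        H = backbone_features(X)
        for _ in range(steps):
            grad = 2 * H.T @ (H @ a - y) / len(y)
            a = a - lr * grad
        return a

    def federated_adapter_round(adapter, client_datasets):
        # The server averages the clients' adapter updates, weighted by
        # local dataset size (FedAvg applied to adapter parameters only).
        updates = [(local_adapter_update(adapter, X, y), len(y))
                   for X, y in client_datasets]
        total = sum(n for _, n in updates)
        return sum((n / total) * a for a, n in updates)

    # Toy demo: two clients collaboratively adapt the frozen backbone.
    a_true = rng.normal(size=4)
    clients = []
    for _ in range(2):
        X = rng.normal(size=(40, 16))
        y = backbone_features(X) @ a_true + rng.normal(0.0, 0.05, size=40)
        clients.append((X, y))

    adapter = np.zeros(4)
    for _ in range(30):
        adapter = federated_adapter_round(adapter, clients)
    print("adapter error:", np.linalg.norm(adapter - a_true))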

With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the era of foundation models. Since the emergence of foundation models is a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform that facilitates interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field.

This workshop aims to bring together academic researchers and industry practitioners to address open issues in this interdisciplinary research area. For industry participants, we intend to create a forum for communicating problems that are practically relevant. For academic participants, we hope to make it easier to become productive in this area. The workshop will focus on the theme of combining FL with FMs to open up opportunities for addressing new challenges. The workshop topics include but are not limited to:
Theory and algorithmic foundations:
  • Impact of heterogeneity in FL of large models
  • Multi-stage model training (e.g., base model + fine tuning)
  • Optimization advances in FL (e.g., beyond first-order and local methods)
  • Prompt tuning in federated settings
  • Self-supervised learning in federated settings
Leveraging foundation models to improve federated learning:
  • Adaptive aggregation strategies for FL in heterogeneous environments
  • Foundation model enhanced FL knowledge distillation
  • Overcoming data interoperability challenges using foundation models
  • Personalization of FL with foundation models
Federated learning for training and tuning foundation models:
  • Fairness, bias, and interpretability challenges in FL with foundation models
  • Federated transfer learning with foundation models
  • FL techniques for training large-scale foundation models
  • Hardware for FL with foundation models
  • Optimization algorithms for federated training of foundation models
  • Privacy-preserving mechanisms in FL with foundation models
  • Resource-efficient FL with foundation models
  • Security and robustness considerations in FL with foundation models
  • Systems and infrastructure for FL with foundation models
  • Vertical federated learning with foundation models
  • Vulnerabilities of FL with foundation models

More information on previous workshops can be found here.


Submission Instructions

Each submission can be up to 7 pages of content plus up to 2 additional pages of references and acknowledgements. Submitted papers must be written in English and in PDF format according to the IJCAI'24 template. All submitted papers will undergo single-blind peer review for novelty, technical quality, and impact; submissions may therefore contain author details. Submissions will be accepted via the EasyChair submission website.

In accordance with IJCAI'24 requirements, at least one author of each accepted paper must travel to the IJCAI venue in person. In addition, multiple submissions of the same paper to more than one IJCAI workshop are forbidden.

Easychair submission site: https://easychair.org/conferences/?conf=flfm-ijcai-24

For enquiries, please email: flfm-ijcai-24@easychair.org


Organizing Committee

Steering Co-Chairs: Qiang Yang (WeBank/HKUST) and Yaochu Jin (Westlake U)

General Co-Chairs: Randy Goebel (U Alberta) and Lixin Fan (WeBank)

Program Co-Chairs: Xiaoxiao Li (UBC) and Han Yu (NTU)

Publicity Co-Chairs: Alysa Ziying Tan (NTU) and Jiankai Sun (Pinterest)

Publications Co-Chairs: Ying Wei (NTU) and Zengxiang Li (ENN)


Program Committee

  • Alysa Ziying Tan (Alibaba-NTU Singapore Joint Research Institute)
  • Anran Li (Yale University, USA)
  • Chun-Yin Huang (University of British Columbia, Canada)
  • Guojun Zhang (Huawei Noah's Ark Lab, China)
  • Hongyi Peng (Nanyang Technological University, Singapore)
  • Jiangtian Nie (Nanyang Technological University, Singapore)
  • Jiankai Sun (The Ohio State University, USA)
  • Jihong Park (Deakin University, Australia)
  • Jinhyun So (University of Southern California, USA)
  • Jun Luo (Huawei Noah's Ark Lab, China)
  • Peng Zhang (Guangzhou University, China)
  • Rui Liu (Nanyang Technological University, Singapore)
  • Shiqiang Wang (IBM Thomas J. Watson Research Center, USA)
  • Siwei Feng (Soochow University, China)
  • Songze Li (Southeast University, China)
  • Wei Yang Bryan Lim (Nanyang Technological University, Singapore)
  • Wenlong Deng (University of British Columbia, Canada)
  • Xi Chen (Huawei, Canada)
  • Xiaohu Wu (Beijing University of Posts and Telecommunications, China)
  • Xiaoli Tang (Nanyang Technological University, Singapore)
  • Xu Guo (Nanyang Technological University, Singapore)
  • Yanci Zhang (Nanyang Technological University, Singapore)
  • Yang Zhang (Nanjing University of Aeronautics and Astronautics, China)
  • Yulan Gao (Nanyang Technological University, Singapore)
  • Yuxin Shi (Nanyang Technological University, Singapore)
  • Zelei Liu (Unicom (Shanghai) Industrial Internet Co. Ltd., China)
  • Zhuan Shi (EPFL, Switzerland)
  • Zichen Chen (University of California Santa Barbara, USA)
