International Workshop on Federated Foundation Models for the Web 2024
(FL@FM-TheWebConf'24)


Final Submission Deadline: February 15, 2024 (23:59:59 AoE)
Notification Due: March 05, 2024 (23:59:59 AoE)
Workshop Date: May 14, 2024
Venue: Pisces 2, Level 1, Resorts World Convention Centre, Singapore


Post Workshop Publications


Authors of selected workshop papers will be invited to extend their papers for re-review and publication as book chapters in the Lecture Notes in Artificial Intelligence (LNAI) series. More information can be found here.


Workshop Program (Tuesday, May 14, 2024)

All times below are in Singapore Time (UTC+8).

08:55 – 09:00 Opening Remarks
09:00 – 09:30 Distinguished Keynote Lecture: Trustworthy Federated Learning and LLM, by Qiang Yang
09:30 – 10:30 Oral Presentation Session 1 (Session Chair: Guodong Long) (12 min per talk + 3 min Q&A each)
  1. [In-Person] Yun-Wei Chu, Dong-Jun Han and Christopher Brinton. Only Send What You Need: Learning to Communicate Efficiently in Federated Multilingual Machine Translation
  2. [Virtual] Holger Roth, Ziyue Xu, Yuan-Ting Hsieh, Adithya Renduchintala, Isaac Yang, Zhihong Zhang, Yuhong Wen, Sean Yang, Kevin Lu, Kristopher Kersten, Camir Ricketts, Daguang Xu, Chester Chen, Yan Cheng and Andrew Feng. Empowering Federated Learning for Massive Models with NVIDIA FLARE
  3. [Virtual] [Best Paper Award] Ruinan Jin, Minghui Chen, Qiong Zhang and Xiaoxiao Li. F2L2: Forgettable Federated Linear Learning with Certified Data Removal
  4. [In-Person] Xiaoli Tang and Han Yu. Multi-Session Multi-Objective Budget Optimization for Auction-based Federated Learning
10:30 – 11:00 Tea Break
11:00 – 12:30 Oral Presentation Session 2 (Session Chair: Han Yu) (12 min per talk + 3 min Q&A each)
  1. [Virtual] Zengxiang Li, Zhaoxiang Hou, Hui Liu, Tongzhi Li, Ying Wang, Chao Shi, Longfei Xie, Chengyi Yang, Weishan Zhang, Liang Xu and Zelei Liu. Federated Learning in Large Model Era: Vision-Language Model for Smart City Safety Operation Management
  2. [Virtual] Mahdi Morafah, Matthias Reisser, Bill Lin and Christos Louizos. Stable Diffusion-based Data Augmentation for Federated Learning with Non-IID Data
  3. [Virtual] Fiona Victoria Stanley Jothiraj and Afra Mashhadi. Phoenix: A Federated Generative Diffusion Model
  4. [In-Person] Zhihan Guo, Yifei Zhang, Zhuo Zhang, Zenglin Xu and Irwin King. FedLRC: Efficient Federated Low-Rank Adaption with Clustering for Multilingual Modeling
  5. [In-Person] Xiaoli Tang, Han Yu, Run Tang, Chao Ren, Anran Li and Xiaoxiao Li. Dual Calibration-based Personalised Federated Learning
12:30 – 13:30 End of Workshop & Lunch Break

Distinguished Keynote Lecture


Title: Trustworthy Federated Learning and LLM

Speaker: Qiang Yang, Chief AI Officer (CAIO), WeBank / Professor Emeritus, Hong Kong University of Science and Technology (HKUST), Hong Kong, China

Biography
Qiang Yang is the head of the AI Department at WeBank (Chief AI Officer) and Professor Emeritus in the Department of Computer Science and Engineering (CSE) at the Hong Kong University of Science and Technology (HKUST), where he was formerly head of the CSE Department and founding director of the Big Data Institute (2015-2018). His research interests include AI, machine learning, and data mining, especially transfer learning, automated planning, federated learning, and case-based reasoning. He is a fellow of several international societies, including ACM, AAAI, IEEE, IAPR, and AAAS.

He received his Ph.D. in Computer Science in 1989 and his M.Sc. in Astrophysics in 1985, both from the University of Maryland, College Park, and his B.Sc. in Astrophysics from Peking University in 1982. He was a faculty member at the University of Waterloo (1989-1995) and Simon Fraser University (1995-2001).

He was the founding Editor-in-Chief of the ACM Transactions on Intelligent Systems and Technology (ACM TIST) and of the IEEE Transactions on Big Data (IEEE TBD). He served as President of the International Joint Conferences on Artificial Intelligence (IJCAI, 2017-2019) and as an executive council member of the Association for the Advancement of Artificial Intelligence (AAAI, 2016-2020). His honors include the 2004/2005 ACM KDD Cup championships, the ACM SIGKDD Distinguished Service Award (2017), and AAAI Innovative Application Awards (2018, 2020, and 2022). He was the founding director of Huawei's Noah's Ark Lab (2012-2014) and is a co-founder of 4Paradigm Corp, an AI platform company. His books include Intelligent Planning (Springer), Crafting Your Research Future (Morgan & Claypool), and Constraint-based Design Recovery for Software Engineering (Springer).


Accepted Papers

  1. [Best Paper Award] Ruinan Jin, Minghui Chen, Qiong Zhang and Xiaoxiao Li. F2L2: Forgettable Federated Linear Learning with Certified Data Removal
  2. Yun-Wei Chu, Dong-Jun Han and Christopher Brinton. Only Send What You Need: Learning to Communicate Efficiently in Federated Multilingual Machine Translation
  3. Holger Roth, Ziyue Xu, Yuan-Ting Hsieh, Adithya Renduchintala, Isaac Yang, Zhihong Zhang, Yuhong Wen, Sean Yang, Kevin Lu, Kristopher Kersten, Camir Ricketts, Daguang Xu, Chester Chen, Yan Cheng and Andrew Feng. Empowering Federated Learning for Massive Models with NVIDIA FLARE
  4. Xiaoli Tang and Han Yu. Multi-Session Multi-Objective Budget Optimization for Auction-based Federated Learning
  5. Xiaoli Tang, Han Yu, Run Tang, Chao Ren, Anran Li and Xiaoxiao Li. Dual Calibration-based Personalised Federated Learning
  6. Zengxiang Li, Zhaoxiang Hou, Hui Liu, Tongzhi Li, Ying Wang, Chao Shi, Longfei Xie, Chengyi Yang, Weishan Zhang, Liang Xu and Zelei Liu. Federated Learning in Large Model Era: Vision-Language Model for Smart City Safety Operation Management
  7. Zhihan Guo, Yifei Zhang, Zhuo Zhang, Zenglin Xu and Irwin King. FedLRC: Efficient Federated Low-Rank Adaption with Clustering for Multilingual Modeling
  8. Mahdi Morafah, Matthias Reisser, Bill Lin and Christos Louizos. Stable Diffusion-based Data Augmentation for Federated Learning with Non-IID Data
  9. Fiona Victoria Stanley Jothiraj and Afra Mashhadi. Phoenix: A Federated Generative Diffusion Model

Call for Papers

Foundation models (FMs), typified by large language models (LLMs) such as ChatGPT, are characterized by their massive scale and broad applicability. While these models provide transformative capabilities, they also introduce significant challenges, particularly around distributed model management and the associated data privacy, efficiency, and scalability concerns. Training foundation models is data- and resource-intensive, and conventional methods are typically centralized; this creates significant obstacles in real-world use cases, including training data that is spread across many parties, the computational resources needed to manage distributed data repositories, and compliance with regulatory guidelines (e.g., GDPR) that restrict the sharing of sensitive data.

Federated learning (FL) is an emerging paradigm that can mitigate these challenges by collaboratively training a shared global model over distributed data, without the data ever leaving its owners. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data makes this a relevant and timely topic for the broader scientific community. Because FL allows self-interested data owners to train models collaboratively, end-users can become co-creators of AI solutions. By adopting federated learning approaches, we can leverage the distributed data and computing power available across different sources while respecting user privacy.
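To make this concrete: in the canonical federated averaging (FedAvg) algorithm of McMahan et al. (2017), each of $K$ clients updates the current model on its own data, and the server aggregates the local models weighted by dataset size, so raw data never leaves its owner:

\[
w_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\, w_{t+1}^{k}, \qquad n = \sum_{k=1}^{K} n_k,
\]

where $w_{t+1}^{k}$ is client $k$'s locally updated model at round $t+1$ and $n_k$ is the size of its local dataset. This baseline is given for orientation only; many of the topics below concern how such schemes must change when $w$ has billions of parameters.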

The rise of FMs amplifies the importance of FL as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy preservation and distributed learning. Advancements in FL methods have the potential to unlock the full value of FMs, enabling efficient and scalable training while safeguarding sensitive data.

With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the era of foundation models. Since the emergence of foundation models has been a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform to facilitate interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field. The workshop topics include but are not limited to:
Theory and algorithmic foundations:
  • Impact of heterogeneity in FL of large models
  • Multi-stage model training (e.g., base model + fine tuning)
  • Optimization advances in FL (e.g., beyond first-order and local methods)
  • Prompt tuning in federated settings
  • Self-supervised learning in federated settings
Leveraging foundation models to improve federated learning:
  • Adaptive aggregation strategies for FL in heterogeneous environments
  • Foundation model enhanced FL knowledge distillation
  • Overcoming data interoperability challenges using foundation models
  • Personalization of FL with foundation models
Federated learning for training and tuning foundation models:
  • Fairness, bias, and interpretability challenges in FL with foundation models
  • Federated transfer learning with foundation models
  • FL techniques for training large-scale foundation models
  • Hardware for FL with foundation models
  • Optimization algorithms for federated training of foundation models
  • Privacy-preserving mechanisms in FL with foundation models
  • Resource-efficient FL with foundation models
  • Security and robustness considerations in FL with foundation models
  • Systems and infrastructure for FL with foundation models
  • Vertical federated learning with foundation models
  • Vulnerabilities of FL with foundation models

More information on previous workshops can be found here.


Submission Instructions

Formatting Requirements. Submissions must be written in English, in double-column format, and must adhere to the ACM template and format (also available in Overleaf). Word users may use the Word Interim Template. The recommended setting for LaTeX is:

\documentclass[sigconf,review]{acmart}

Submissions must be a single PDF file of up to eight (8) pages for the main paper, with up to two (2) additional pages for references and an optional appendix.
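For reference, a minimal skeleton consistent with this setting might look as follows (a sketch only: the title, author, and bibliography file are placeholders, and the official ACM template remains authoritative):

\documentclass[sigconf,review]{acmart}

\begin{document}

\title{Paper Title}
\author{Author Name}
\affiliation{\institution{Institution}\city{City}\country{Country}}

% In acmart, the abstract is set before \maketitle.
\begin{abstract}
Abstract text.
\end{abstract}

\maketitle

% Main text (up to 8 pages), then references and optional appendix (up to 2 pages).

\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % placeholder bibliography file

\end{document}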

Authorship. Submissions are not anonymous; authors should therefore list their names and affiliations.

Submission site: https://easychair.org/conferences/?conf=thewebconf2024_workshops (make sure to select the "FL@FM-TheWebConf'24: International Workshop on Federated Foundation Models for the Web" track when making a submission).

For enquiries, please email the workshop general/program co-chairs.


Publications

Authors of accepted papers can choose whether to have them included in the WWW'24 Companion proceedings. A paper included in the proceedings is regarded as published in an archival venue and cannot subsequently be submitted to other conferences or journals. Papers that opt out can still be submitted for consideration at other conferences and journals.

We will contact the authors of accepted papers in due course to make this decision.


Organizing Committee


Program Committee

  • Aaqib Saeed (Philips Research)
  • Alysa Ziying Tan (Alibaba-NTU Singapore Joint Research Institute)
  • Antoun Yaacoub (ESIEA)
  • Bo Zhao (Nanjing University of Aeronautics and Astronautics)
  • Cheng-Ying Yang (University of Taipei)
  • Christian Prehofer (Technical University of Munich)
  • Hanlin Gu (The Hong Kong University of Science and Technology)
  • Hongyi Peng (Nanyang Technological University)
  • Jiangtian Nie (Nanyang Technological University)
  • Jiankai Sun (The Ohio State University)
  • Jianshu Weng (Swiss Re)
  • Jihong Park (Deakin University)
  • Jinhyun So (University of Southern California)
  • Liping Yi (Nankai University)
  • Ljubomir Rokvic (EPFL)
  • Muhammad Intizar Ali (Dublin City University)
  • Paulo Ferreira (Dell Technologies)
  • Peng Zhang (Guangzhou University)
  • Rui Liu (Nanyang Technological University)
  • Sheng Wan (Hong Kong University of Science and Technology)
  • Shengchao Chen (University of Technology Sydney)
  • Siba Haidar (ESIEA)
  • Siwei Feng (Soochow University)
  • Wei Yang Bryan Lim (Nanyang Technological University)
  • Weiming Zhuang (Nanyang Technological University)
  • William Lindskog (DENSO Automotive Deutschland)
  • Xianjie Guo (Hefei University of Technology)
  • Xiaohu Wu (Beijing University of Posts and Telecommunications)
  • Xiaoli Tang (Nanyang Technological University)
  • Xu Guo (Nanyang Technological University)
  • Yanci Zhang (Shandong University)
  • Yang Zhang (Nanjing University of Aeronautics and Astronautics)
  • Yiqiang Chen (Institute of Computing Technology, Chinese Academy of Sciences)
  • Yulan Gao (Nanyang Technological University)
  • Zelei Liu (China Unicom, Shanghai)
  • Zhuan Shi (EPFL)
  • Zichen Chen (University of California, Santa Barbara)
