Foundation models (FMs), typified by large language models (LLMs) such as ChatGPT, are characterized by their scale and broad applicability. While these models provide transformative capabilities, they also introduce significant challenges, particularly concerning distributed model management and the associated data privacy, efficiency, and scalability concerns. Training foundation models is data- and resource-intensive, and conventional training pipelines are typically centralized, which creates significant obstacles in real-world use cases: training data are distributed across many owners, substantial computational resources are needed to manage distributed data repositories, and regulatory guidelines (e.g., GDPR) restrict the sharing of sensitive data.
Federated learning (FL) is an emerging paradigm that can mitigate these challenges by collaboratively training a global model over distributed data without centralizing it. The widespread use of machine learning to analyze and draw insight from real-world, distributed, and sensitive data makes familiarity with and adoption of this timely topic essential for the broader scientific community. Because FL allows self-interested data owners to collaboratively train models, end-users can become co-creators of AI solutions. By adopting federated learning approaches, we can leverage the distributed data and computing power available across different sources while respecting user privacy.
The rise of FMs amplifies the importance and relevance of FL as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the use of FMs, enabling efficient and scalable training while safeguarding sensitive data.
With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the era of foundation models. Because foundation models are a relatively recent development, their full impact on federated learning is not yet well explored or understood. We hope to provide a platform that facilitates interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field. The workshop topics include but are not limited to:
Theory and algorithmic foundations
Federated learning for training and tuning foundation models
More information on previous workshops can be found here.
Formatting Requirements. Submissions must be written in English, in double-column format, and must adhere to the ACM template and format (also available in Overleaf). Word users may use the Word Interim Template. The recommended setting for LaTeX is:
\documentclass[sigconf,review]{acmart}
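For illustration, a minimal (unofficial) skeleton built on this setting might look as follows; the title, author, affiliation, and bibliography file names are placeholders rather than part of the official template:

\documentclass[sigconf,review]{acmart}
\begin{document}
% Placeholder metadata -- replace with your paper's details
\title{Your Paper Title}
\author{First Author}
\affiliation{\institution{Your Institution} \city{Your City} \country{Your Country}}
\begin{abstract}
A brief summary of the submission.
\end{abstract}
\maketitle
\section{Introduction}
Body text in the ACM double-column format.
% Placeholder bibliography file name
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}
\end{document}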
Submissions must be a single PDF file of 4 (four) to 8 (eight) pages for the main paper, with up to 2 additional pages for references and an optional appendix.
Late-breaking papers are papers that were reviewed by WWW'25 but not accepted. Authors who wish to submit such papers can do so via the same submission link by January 21, 2025, and should include the WWW'25 review comments in the appendix of their paper. These papers will not go through another round of peer review; instead, the organizing committee will determine whether they are accepted into the FL@FM-TheWebConf'25 workshop.
Authorship. Submissions are not anonymous; authors should therefore list their names and affiliations.
Submission site: https://easychair.org/conferences/?conf=fedfm-www-25.
For enquiries, please email us at: fedfm-www-25@easychair.org.
For accepted papers, it is up to the authors to decide whether they want them to be included in the WWW'25 Companion proceedings. Papers included there are regarded as published in an archival venue and cannot subsequently be submitted to other conferences or journals. If the authors opt out, their papers can still be submitted for consideration at other conferences and journals.
Alternatively, selected high-quality papers will be invited for fast-track review and publication in the Journal of Computer Science & Technology (JCST), Springer. JCST is an SCI-indexed, CCF-B journal with an impact factor of 1.2. More information will be provided at a later time.
We will contact the authors of accepted papers in due course regarding this decision.
General Co-Chairs: Irwin King (CUHK), Guodong Long (UTS)
Program Co-Chairs: Zenglin Xu (Fudan), Han Yu (NTU)
Local Chair: Yifei Zhang (NTU)