International Workshop on Federated Learning with Generative AI
In Conjunction with IJCAI 2025 (FedGenAI-IJCAI'25)


Final Submission Deadline: May 08, 2025 (23:59:59 AoE)
Notification Due: June 08, 2025 (23:59:59 AoE)
Workshop Dates: August 16-18, 2025 | August 28-31, 2025
Venues: Montreal, QC, Canada | Guangzhou, China

Call for Papers

Generative AI (GenAI), particularly large language models (LLMs) like ChatGPT, has demonstrated transformative potential across diverse domains. However, deploying these models in real-world applications presents critical challenges in distributed model management, including data privacy, efficiency, and scalability. Training foundation models (FMs) is inherently data- and resource-intensive, traditionally relying on centralized methods that conflict with privacy regulations and real-world constraints. In decentralized settings, organizations must navigate fragmented training data, high computational demands, and stringent regulatory frameworks (e.g., GDPR) that limit data sharing. Federated Learning (FL) offers a compelling solution by enabling collaborative learning across distributed data sources while preserving privacy. As GenAI continues to reshape AI applications, FL is becoming increasingly essential for ensuring secure, scalable, and decentralized AI development. By allowing data owners to collaboratively train models without sharing raw data, Federated Generative AI (FedGenAI) bridges the gap between the power of foundation models and the need for privacy-preserving, distributed learning. Advancements in FL methodologies tailored for GenAI can unlock new opportunities for efficient model training, personalized adaptation, and responsible AI deployment while mitigating privacy risks and computational constraints.
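The collaborative training loop described above (clients improve a shared model without ever exchanging raw data) can be sketched with the canonical Federated Averaging (FedAvg) protocol. The toy scalar model, function names, and hyperparameters below are purely illustrative, not part of any workshop artifact:

```python
# Minimal FedAvg sketch: each client runs local SGD on its private
# (x, y) pairs for a toy model y ≈ w * x, and only the resulting model
# weights -- never the data -- are sent to the server for averaging.

def local_update(w, data, lr=0.1, epochs=5):
    """One client's local SGD on its private data; raw data stays local."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x   # d/dw of the squared error
            w -= lr * grad
    return w

def fedavg(w0, client_datasets, rounds=20):
    """Server loop: broadcast weights, collect local updates, average."""
    w = w0
    for _ in range(rounds):
        local_weights = [local_update(w, d) for d in client_datasets]
        sizes = [len(d) for d in client_datasets]
        # Average weighted by local dataset size, as in FedAvg.
        w = sum(wi * n for wi, n in zip(local_weights, sizes)) / sum(sizes)
    return w

# Two clients whose private data both follow y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w_final = fedavg(0.0, clients)
```

The same server/client separation underlies FL for generative models; there, the communicated "weights" are large and often heterogeneous, which is exactly where the challenges listed in this call arise.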

Foundation models like GPT-4, with their vast knowledge and emergent capabilities, have achieved remarkable success in natural language processing and computer vision. However, fully leveraging their potential in decentralized environments requires addressing challenges such as limited computing resources, data privacy concerns, model heterogeneity, and proprietary ownership. Federated Transfer Learning (FTL)—the integration of FL and transfer learning—offers promising solutions by enabling efficient model adaptation without compromising data privacy. The concept of FTL-FM, which applies FTL to foundation models, has gained significant traction in both academia and industry. As the intersection of federated learning and generative AI (FedGenAI) remains underexplored, this workshop aims to fill that gap. We invite original research contributions, position papers, and work-in-progress reports to advance our understanding of privacy-preserving, scalable, and decentralized generative AI. By bringing together researchers, students, and industry professionals, FedGenAI provides a unique platform to discuss the latest advancements, share insights, and shape the future of collaborative, privacy-conscious AI development.
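One common FTL-FM pattern motivated above is to freeze the shared pretrained base model and let each party train only a small additive delta (in the spirit of adapter- or LoRA-style tuning), so that clients exchange a compact update instead of the full model. A minimal one-parameter sketch, with all names and values illustrative:

```python
# Sketch of federated fine-tuning of a frozen base model: clients adapt
# only a small additive delta, and the server averages the deltas, so
# neither raw data nor the full foundation model needs to be exchanged.
# Toy 1-D "base model" y = W_BASE * x with one learnable delta.

W_BASE = 2.0  # frozen, pretrained weight shared by all parties

def local_delta_update(delta, data, lr=0.1, epochs=5):
    """Client-side SGD on the delta only; W_BASE stays frozen."""
    for _ in range(epochs):
        for x, y in data:
            pred = (W_BASE + delta) * x
            delta -= lr * 2 * (pred - y) * x
    return delta

def federated_finetune(client_datasets, rounds=20):
    """Server loop over the lightweight delta parameter only."""
    delta = 0.0
    for _ in range(rounds):
        local = [local_delta_update(delta, d) for d in client_datasets]
        sizes = [len(d) for d in client_datasets]
        delta = sum(dl * n for dl, n in zip(local, sizes)) / sum(sizes)
    return delta

# Clients' private data follow y = 3x, so the learned delta approaches 1.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
delta_final = federated_finetune(clients)
```

Communicating only the delta is what makes this pattern attractive for foundation models: the per-round payload is orders of magnitude smaller than the frozen base, easing both bandwidth and resource constraints on clients.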

This workshop aims to bring together academic researchers and industry practitioners to address open issues in the interdisciplinary research area of FedGenAI. For industry participants, we intend to create a forum for communicating problems that are practically relevant. For academic participants, we hope to make it easier to become productive in this area. The workshop will focus on combining FL with GenAI to open up opportunities for addressing new challenges. The topics of interest include, but are not limited to, the following:
Theory and algorithmic foundations:
  • Impact of heterogeneity in FL of GenAI
  • Multi-stage model training (e.g., base model + fine-tuning)
  • Optimization advances in FL, e.g., beyond first-order and local methods
  • Prompt tuning and design in federated settings
  • Self-supervised learning in federated settings
  • Federated in-context learning
  • Federated neuro-symbolic learning
Leveraging foundation models to improve federated learning:
  • Adaptive aggregation strategies for FL in heterogeneous environments
  • GenAI enhanced FL knowledge distillation
  • Overcoming data interoperability challenges using GenAI
  • Personalization of FL with GenAI
Federated learning for training and tuning foundation models:
  • Fairness, bias, and interpretability challenges in FL with foundation models
  • Federated transfer learning with GenAI
  • FL-empowered multi-agent foundation model systems
  • FL techniques for training large-scale foundation models
  • Hardware for FL with foundation models
  • Optimization algorithms for federated training of foundation models
  • Privacy-preserving mechanisms in FL with foundation models
  • Resource-efficient FL with foundation models
  • Security and robustness considerations in FL with foundation models
  • Systems and infrastructure for FL with foundation models
  • Vertical federated learning with GenAI
  • Vulnerabilities of FL with GenAI

More information on previous workshops can be found here.


Submission Instructions

Each submission can be up to 7 pages of content plus up to 2 additional pages of references and acknowledgements. Submitted papers must be written in English and in PDF format according to the IJCAI'25 template. All submissions will undergo double-blind peer review assessing novelty, technical quality, and impact, and must not contain author details. Submissions will be accepted via the EasyChair submission website.

In accordance with IJCAI'25 requirements, at least one author of each accepted paper must attend the IJCAI venue in person. In addition, submitting the same paper to more than one IJCAI workshop is forbidden.

EasyChair submission site: https://easychair.org/conferences/?conf=fedgenai-ijcai-25

For enquiries, please email: fedgenai-ijcai-25@easychair.org


Keynote Speakers

   

Title: Towards Real-life Federated Learning with Generative AI Models: Research and Development with NVIDIA FLARE

Speaker: Ziyue Xu, Senior Scientist, NVIDIA, USA

Biography
Ziyue Xu is a senior scientist at NVIDIA. His research interests lie in the areas of medical image analysis, computer vision, and federated learning. He has been working on collaborative AI development over the years along with fellow researchers and clinicians. Ziyue received his B.S. from Tsinghua University and his Ph.D. from the University of Iowa. He is an IEEE Senior Member, an Area Chair for major conferences, and an Associate Editor for several journals, including IEEE Transactions on Medical Imaging (TMI) and the International Journal of Computer Vision (IJCV).

   

Title: TBA

Speaker: Peter Kairouz, Research Scientist, Google, USA

Biography
Peter Kairouz is a research scientist at Google, where he leads research efforts on distributed, privacy-preserving, and robust machine learning. Prior to joining Google, he was a postdoctoral research fellow at Stanford University, and before that, he was a PhD student at the University of Illinois Urbana-Champaign (UIUC). He is the recipient of the 2012 Roberto Padovani Scholarship from Qualcomm's Research Center, the 2015 ACM SIGMETRICS Best Paper Award, the 2015 Qualcomm Innovation Fellowship Finalist Award, and the 2016 Harold L. Olesen Award for Excellence in Undergraduate Teaching from UIUC.

   

Title: TBA

Speaker: Xiaoxiao Li, Assistant Professor, the University of British Columbia, Canada

Biography
Dr. Xiaoxiao Li is an Assistant Professor in the Electrical and Computer Engineering Department at the University of British Columbia (UBC), a Faculty Member at the Vector Institute, and a CIFAR AI Chair. Before joining UBC, Dr. Li was a Postdoctoral Research Fellow in the Computer Science Department at Princeton University. Dr. Li obtained her PhD from Yale University in 2020 and her Bachelor's degree from Zhejiang University in 2015. In recent years, Dr. Li has published over 50 papers in leading machine learning conferences and journals, including NeurIPS, ICML, ICLR, CVPR, IJCAI, MICCAI, IPMI, ECCV, AAAI, and Nature Methods. Dr. Li's research has been recognized with the OHBM Merit Abstract Award, the MLMI Best Paper Award, the DART Best Paper Award, and the FL@FM-TheWebConf'24 Best Paper Award.

   

Title: TBA

Speaker: Bang Liu, Associate Professor, the University of Montreal, Canada

Biography
Bang Liu is an Associate Professor in the Department of Computer Science and Operations Research (DIRO) at the University of Montreal (UdeM). He is a member of the RALI laboratory (Applied Research in Computer Linguistics) of DIRO, a member of the Institut Courtois of UdeM, an associate member of Mila – Quebec Artificial Intelligence Institute, and a Canada CIFAR AI (CCAI) Chair. He received his B.Engr. degree in 2013 from the University of Science and Technology of China (USTC), and his M.S. and Ph.D. degrees from the University of Alberta in 2015 and 2020, respectively. His research interests primarily lie in the areas of natural language processing, multimodal & embodied learning, theory and techniques for AGI (e.g., understanding and improving large language models), and AI for science (e.g., health, material science, XR).


Co-Chairs



Jindong Wang
(William & Mary)
   

Xiaohu Wu
(NERCMNT)
   

Lingjuan Lyu
(Sony AI)
   

Dimitrios Dimitriadis
(Amazon)
   

Han Yu
(NTU)
   

Program Committee

  • Alysa Ziying Tan (Nanyang Technological University)
  • Anran Li (Yale University)
  • Enyue Yang (Shenzhen University)
  • Guanyu Gao (Nanjing University of Science and Technology)
  • Haoran Shi (Nanyang Technological University)
  • Hongyi Peng (Nanyang Technological University)
  • Huawei Huang (Sun Yat-Sen University)
  • Jing Tang (Hong Kong University of Science and Technology)
  • Jiankai Sun (The Ohio State University)
  • Minghui Chen (Nanyang Technological University)
  • Paulo Ferreira (Dell Technologies)
  • Peng Zhang (Guangzhou University)
  • Qicheng Lao (Beijing University of Posts and Telecommunications)
  • Rui Liu (Nanyang Technological University)
  • Wei Yang Bryan Lim (Nanyang Technological University)
  • Weixin An (Xidian University)
  • Wen Wu (Peng Cheng Laboratory)
  • Wenshuo Wang (Beijing University of Posts and Telecommunications)
  • Yang Zhang (Nanjing University of Aeronautics and Astronautics)
  • Yiqiang Chen (Institute of Computing Technology, Chinese Academy of Sciences)
  • Yuning Yang (Beijing University of Posts and Telecommunications)
  • Zhuang Qi (Shandong University)

Organized by