Federated Learning in the Age of Foundation Models
Summer School @ Singapore 2024 (FL@FM-Singapore'24)


Date: May 13-14, 2024
Venue: Infuse Level 14, A*STAR, Connexis South Tower, FP 1, Singapore (138632)
Register here for free, before May 10, 2024!
Lunch & coffee breaks provided!

Program


Day 1: Monday, May 13, 2024
  
Time Activity
  
09:00 – 09:30 Registration
09:30 – 10:10 Keynote Talk 1: Federated Learning and Transfer Learning in the Context of Continual Learning and LLM, by Qiang Yang
10:10 – 10:40 Keynote Talk 2: Advancing Time Series Sensor Data Analytics: Harnessing Federated Learning and Foundation Models, by Xiaoli Li
10:40 – 11:00 Coffee Break
11:00 – 11:30 Keynote Talk 3: Introduction to Federated Recommendations, by Guodong Long
11:30 – 12:00 Keynote Talk 4: Towards Auction-based Federated Learning, by Han Yu
12:00 – 14:00 Lunch
14:00 – 15:00 Briefings Session 1 (10 min per talk, including Q&A)
  1. Anran Li: Historical Embedding-Guided Efficient Large-Scale Federated Graph Learning
  2. Yanci Zhang: LR-XFL: Logical Reasoning-based Explainable Federated Learning
  3. Alysa Tan: FL-CLIP: Bridging Plasticity and Stability in Pre-Trained Federated Class-Incremental Learning Models
  4. Xiaoli Tang: Dual Calibration-based Personalised Federated Learning
  5. Liping Yi: FedSSA: Semantic Similarity-based Aggregation for Efficient Model-Heterogeneous Personalized Federated Learning
  6. Hongyan Chang: Bias Propagation in Federated Learning
15:00 – 15:20 Coffee Break
15:20 – 16:20 Briefings Session 2 (10 min per talk, including Q&A)
  1. Xianjie Guo: Sample Quality Heterogeneity-aware Federated Causal Discovery through Adaptive Variable Space Selection
  2. Yuxin Shi: Fairness-Aware Job Scheduling for Multi-Job Federated Learning
  3. Jiewei Chen: Enabling Foundation Models: A Distributed Collaboration Framework via Graph-based Federated Learning
  4. Lulu Wang: Reconciliation of Privacy Protection and Security Defense in Federated Learning
16:20 – 16:50 Invited Talk 5: Federated Continual Learning via Prompt-based Dual Knowledge Transfer, by Ying Wei
16:50 – 17:10 Training Session: How to look for a research topic for Trustworthy Federated Learning, by Lixin Fan
17:10 – 17:30 Formation of Discussion Groups
   

Day 2: Tuesday, May 14, 2024
  
Time Activity
  
09:00 – 09:30 Registration
09:30 – 10:00 Keynote Talk 6: Model Ownership in Federated Learning, by Chee Seng Chan
10:00 – 10:30 Coffee Break
10:30 – 12:00 Group Discussions
12:00 – 14:00 Lunch
14:00 – 15:00 Group Discussions
15:00 – 15:30 Coffee Break
15:30 – 17:30 Group Discussions
   

Overview

Foundation models (FMs) are typically associated with large language models (LLMs), such as ChatGPT, and are characterized by their scale and broad applicability. While these models provide transformative capabilities, they also introduce significant challenges, particularly concerning distributed model management and the related issues of data privacy, efficiency, and scalability. Training FMs is data- and resource-intensive, and conventional training pipelines are typically centralized; this creates significant challenges in real-world use cases, including handling distributed training data, provisioning the computational resources needed to manage distributed data repositories, and developing and complying with regulatory guidelines (e.g., GDPR) that restrict the sharing of sensitive data.

Federated learning (FL) is an emerging paradigm that can mitigate these challenges by training a shared global model over distributed data. The extensive application of machine learning to analyze and draw insights from real-world, distributed, and sensitive data makes familiarity with and adoption of this timely topic essential for the broader scientific community. Because FL allows self-interested data owners to collaboratively train models, end-users can become co-creators of AI solutions. By adopting federated learning approaches, we can leverage the distributed data and computing power available across different sources while respecting user privacy.
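To make the idea concrete, below is a minimal sketch of federated averaging (FedAvg), the canonical FL procedure, on a toy linear-regression task: each client trains locally on its own private data shard, and the server aggregates only the resulting model parameters, never the raw data. The clients, data sizes, and hyperparameters are hypothetical and purely illustrative; this is not the method of any particular talk at this school.

import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps on a linear model."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Hypothetical clients: each holds a private (X, y) shard that never leaves the device.
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

w_global = np.zeros(3)
for _ in range(10):
    # Each client trains locally starting from the current global model ...
    local_models = [local_update(w_global, X, y) for X, y in clients]
    # ... and only the model parameters are aggregated on the server,
    # weighted by each client's dataset size.
    sizes = np.array([len(y) for _, y in clients])
    w_global = np.average(local_models, axis=0, weights=sizes)

print("global model after 10 rounds:", w_global)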

The rise of FMs amplifies the importance and relevance of FL as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the use of FMs, enabling efficient and scalable training while safeguarding sensitive data.

FMs such as GPT-4, which encode vast knowledge and exhibit powerful emergent abilities, have achieved remarkable success in various natural language processing and computer vision tasks. Grounding FMs, i.e., adapting them to domain-specific tasks or augmenting them with domain-specific knowledge, enables us to exploit their full potential. However, grounding FMs faces several challenges, stemming primarily from constrained computing resources, data privacy, model heterogeneity, and model ownership. Federated transfer learning (FTL), the combination of FL and transfer learning, provides promising solutions to address these challenges. In recent years, the need to ground FMs by leveraging FTL, coined FTL-FM, has grown strongly in both academia and industry.

In this summer school, we bring together academic researchers and industry practitioners to present their latest research and discuss visions for addressing open issues in this interdisciplinary research area. The programme focuses on the theme of combining FL with FMs to open up opportunities for addressing new challenges, and provides a platform for students working in related areas to interact closely with leading scientists in the field.


Invited Speakers

   

Title: Federated Learning and Transfer Learning in the Context of Continual Learning and LLM

Speaker: Qiang Yang, Chief AI Officer (CAIO), WeBank / Professor Emeritus, Hong Kong University of Science and Technology (HKUST), Hong Kong, China

Biography
Qiang Yang is the head of the AI Department at WeBank (Chief AI Officer) and Professor Emeritus at the Computer Science and Engineering (CSE) Department of the Hong Kong University of Science and Technology (HKUST), where he was a former head of CSE Department and founding director of the Big Data Institute (2015-2018). His research interests include AI, machine learning, and data mining, especially in transfer learning, automated planning, federated learning, and case-based reasoning. He is a fellow of several international societies, including ACM, AAAI, IEEE, IAPR, and AAAS. He received his Ph.D. in Computer Science in 1989 and his M.Sc. in Astrophysics in 1985, both from the University of Maryland, College Park. He obtained his B.Sc. in Astrophysics from Peking University in 1982. He had been a faculty member at the University of Waterloo (1989-1995) and Simon Fraser University (1995-2001). He was the founding Editor-in-Chief of the ACM Transactions on Intelligent Systems and Technology (ACM TIST) and IEEE Transactions on Big Data (IEEE TBD). He served as the President of International Joint Conference on AI (IJCAI, 2017-2019) and an executive council member of Association for the Advancement of AI (AAAI, 2016-2020). Qiang Yang is a recipient of several awards, including the 2004/2005 ACM KDDCUP Championship, the ACM SIGKDD Distinguished Service Award (2017), and AAAI Innovative Application Awards (2018, 2020 and 2022). He was the founding director of Huawei's Noah's Ark Lab (2012-2014) and a co-founder of 4Paradigm Corp, an AI platform company. He is an author of several books including Intelligent Planning (Springer), Crafting Your Research Future (Morgan & Claypool), and Constraint-based Design Recovery for Software Engineering (Springer).

   

Title: Advancing Time Series Sensor Data Analytics: Harnessing Federated Learning and Foundation Models

Speaker: Xiaoli Li, Department Head (Machine Intellection), Institute for Infocomm Research (I2R), A*STAR, Singapore

Biography
Dr. Li Xiaoli is the Department Head and Principal Scientist of the Machine Intellection (MI) department at the Institute for Infocomm Research (I2R), A*STAR, Singapore. He also holds an adjunct full professor position at the School of Computer Science and Engineering, Nanyang Technological University. He has been a member of the Information Technology Standards Committee (ITSC) of ESG Singapore and IMDA since 2020, and has served as joint lab director with several major industry partners. He is an IEEE Fellow and a Fellow of the Asia-Pacific Artificial Intelligence Association (AAIA). Dr. Li has also served as a health innovation expert panel member for the Ministry of Health (MOH) and as an AI advisor for the Smart Nation and Digital Government Office (SNDGO), Prime Minister's Office, highlighting his extensive involvement in key government and industry initiatives. His research interests include AI, data mining, machine learning, and bioinformatics. He has served as a chair of many leading AI, data mining, and machine learning conferences and workshops (including KDD, ICDM, SDM, PKDD/ECML, ACML, PAKDD, WWW, IJCAI, AAAI, ACL, and CIKM). He currently serves as Editor-in-Chief of Annual Review of Artificial Intelligence, and as an associate editor of IEEE Transactions on Artificial Intelligence, Knowledge and Information Systems, and Machine Learning with Applications (Elsevier).

   

Title: Introduction to Federated Recommendations

Speaker: Guodong Long, Associate Professor, University of Technology Sydney (UTS), Australia

Biography
Dr. Guodong Long is an Associate Professor in the School of Computer Science, Faculty of Engineering and IT (FEIT), University of Technology Sydney (UTS), Australia. He is one of the core members of the Australian Artificial Intelligence Institute (AAII). He currently leads a research group conducting application-driven research on machine learning and data science, with a particular focus on application domains such as NLP, healthcare, smart homes, education, and social media. He is dedicated to exploring blue-sky research ideas with real-world value and impact. His group's research is funded by multiple industry grants and ARC grants. He has published more than 100 papers at top-tier conferences including ICLR, ICML, NeurIPS, AAAI, IJCAI, ACL, KDD, and WebConf, and in journals including IEEE TPAMI, TKDE, and TNNLS. His publications have attracted more than 10k citations. He will serve as a general co-chair for WebConf 2025, to be hosted in Sydney.

   

Title: Towards Auction-based Federated Learning

Speaker: Han Yu, Nanyang Assistant Professor, Nanyang Technological University (NTU), Singapore

Biography
Dr. Han Yu is a Nanyang Assistant Professor (NAP) in the College of Computing and Data Science (CCDS), Nanyang Technological University (NTU), Singapore. Between 2015 and 2018, he held the prestigious Lee Kuan Yew Post-Doctoral Fellowship (LKY PDF) at NTU. Before joining NTU, he worked as an Embedded Software Engineer at Hewlett-Packard (HP) Pte Ltd, Singapore. He obtained his PhD from the School of Computer Science and Engineering, NTU, in 2014. His work focuses on trustworthy federated ubiquitous learning (TrustFUL). He has published over 250 research papers in leading international conferences, journals, and book chapters. He co-authored Federated Learning, the first monograph on the topic of federated learning. His research work has been recognized with multiple scientific awards. In 2021, he co-founded the Trustworthy Federated Ubiquitous Learning (TrustFUL) Research Lab (https://trustful.federated-learning.org/). He serves as an Associate Editor of IEEE TNNLS. Since 2023, he has been appointed as the IJCAI Sponsorship Officer General by the IJCAI Board of Trustees. He is a Distinguished Member of CCF, and a Senior Member of AAAI and IEEE. For his continued contributions to the field of trustworthy AI and real-world impact on society, he has been identified as one of the World's Top 2% Scientists in AI, and was selected as one of the JCI Ten Outstanding Young Persons (TOYP) of Singapore (Scientific and/or Technological Development) in 2022.

   

Title: Federated Continual Learning via Prompt-based Dual Knowledge Transfer

Speaker: Ying Wei, Nanyang Assistant Professor, Nanyang Technological University (NTU), Singapore

Biography
I am currently a Nanyang Assistant Professor with the School of Computer Science and Engineering, Nanyang Technological University. I am generally interested in developing algorithms that equip machines with more general intelligence via knowledge transfer and compositionality. This includes allowing continuous transfer and adaptation of previously learned knowledge (nowadays in LLMs) to quickly learn the current task with minimal human supervision, and autonomously evaluating the success of knowledge transfer. I am also passionate about applying these algorithms to real-world applications with small data, e.g., drug discovery. Previously, I was an Assistant Professor at the Department of Computer Science, City University of Hong Kong, and a senior researcher at Tencent AI Lab. I completed my Ph.D. in Computer Science and Engineering at the Hong Kong University of Science and Technology under the supervision of Professor Qiang Yang, and my B.S. in Automation at Huazhong University of Science and Technology. I have also spent time interning at Microsoft Research Asia.

   

Title: Model Ownership in Federated Learning

Speaker: Chee Seng Chan, Professor, Universiti Malaya, Malaysia

Biography
My research interests include computer vision and machine learning, where I lead a young and energetic research team that has published more than 100 papers in related top peer-reviewed conferences and journals (e.g., CVPR, NeurIPS, TPAMI, TIP). I was the founding Chair of the IEEE Computational Intelligence Society, Malaysia Chapter. I currently serve as an Associate Editor of Pattern Recognition (Elsevier), and have co-organized several conferences, workshops, tutorials, and challenges related to computer vision and machine learning. I was the recipient of the Top Research Scientists Malaysia (TRSM) award in 2022, the Young Scientists Network Academy of Sciences Malaysia (YSN-ASM) in 2015, and the Hitachi Research Fellowship in 2013. I am also a Senior Member of IEEE, a Professional Engineer (BEM), and a Chartered Engineer (IET). During 2020-2022, I was seconded to the Ministry of Science, Technology and Innovation (MOSTI) as the Lead of the PICC Unit under the COVID-19 Immunisation Task Force (CITF), as well as the Undersecretary for the Division of Data Strategic and Foresight.


Briefings



Anran Li (NTU)

"Historical Embedding-Guided Efficient Large-Scale Federated Graph Learning," SIGMOD'24


Yanci Zhang (NTU)

"LR-XFL: Logical Reasoning-based Explainable Federated Learning," AAAI-24


Alysa Ziying Tan (NTU)

"FL-CLIP: Bridging Plasticity and Stability in Pre-Trained Federated Class-Incremental Learning Models," ICME'24
 


Xiaoli Tang (NTU)

"Dual Calibration-based Personalised Federated Learning," IJCAI'24


Liping Yi (NTU)

"FedSSA: Semantic Similarity-based Aggregation for Efficient Model-Heterogeneous Personalized Federated Learning," IJCAI'24


Xianjie Guo (NTU)

"Sample Quality Heterogeneity-aware Federated Causal Discovery through Adaptive Variable Space Selection," IJCAI'24
 


Yuxin Shi (NTU)

"Fairness-Aware Job Scheduling for Multi-Job Federated Learning," ICASSP'24


Jiewei Chen (SUTD)

"Enabling Foundation Models: A Distributed Collaboration Framework via Graph-based Federated Learning"


Lulu Wang (SUTD)

"Reconciliation of Privacy Protection and Security Defense in Federated Learning"
 


Hongyan Chang (NUS)

"Bias Propagation in Federated Learning"

Organizing Committee



Lixin Fan
(WeBank)
   

Sin Gee Teo
(A*STAR)
   

Xiaoxiao Li
(UBC)
   

Han Yu
(NTU)
   

Organized by