Time | Activity
09:00 – 09:30 | Registration
09:30 – 10:10 | Keynote Talk 1: Trustworthy Federated Learning: Research and Applications, by Qiang Yang
10:10 – 10:40 | Keynote Talk 2: Advancing Time Series Sensor Data Analytics: Harnessing Federated Learning and Foundation Models, by Xiaoli Li
10:40 – 11:00 | Coffee Break
11:00 – 11:30 | Keynote Talk 3: Introduction to Federated Recommendations, by Guodong Long
11:30 – 12:00 | Keynote Talk 4: Rethinking Benchmarks for Machine Learning Systems, by Bingsheng He
12:00 – 14:00 | Lunch
14:00 – 15:10 | Briefings Session 1 (10 min per talk, including Q&A)
15:10 – 15:30 | Coffee Break
15:30 – 16:40 | Briefings Session 2 (10 min per talk, including Q&A)
16:40 – 17:00 | Break
17:00 – 17:30 | Keynote Talk 5: Federated Continual Learning via Prompt-based Dual Knowledge Transfer, by Ying Wei
17:30 – 17:50 | Keynote Talk 6: Model Ownership in Federated Learning, by Chee Seng Chan
17:50 – 18:20 | Formation of Discussion Groups
Time | Activity
09:00 – 09:30 | Registration
09:30 – 10:00 | Keynote Talk 7: Trustworthy Federated Learning and Large Language Models, by Xiaojin Zhang
10:00 – 10:30 | Coffee Break
10:30 – 11:00 | Training Session: How to Look for a Research Topic for Trustworthy Federated Learning, by Lixin Fan
11:00 – 12:00 | Group Discussions
12:00 – 14:00 | Lunch
14:00 – 15:00 | Group Discussions
15:00 – 15:30 | Coffee Break
15:30 – 17:00 | Group Discussions
17:00 – 18:00 | Student Summarization
Foundation models (FMs) are typically associated with large language models (LLMs) such as ChatGPT and are characterized by their scale and broad applicability. While these models provide transformative capabilities, they also introduce significant challenges, particularly concerning distributed model management and the related issues of data privacy, efficiency, and scalability. Training foundation models is data- and resource-intensive, and the conventional methods are typically centralized; this creates significant obstacles in real-world use cases, including training data distributed across parties, the computational resources needed to manage distributed data repositories, and the development of and alignment with regulatory guidelines (e.g., GDPR) that restrict the sharing of sensitive data.
Federated learning (FL) is an emerging paradigm that can mitigate these challenges by training a shared global model in a distributed manner, using data that remain with their owners. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data makes familiarity with and adoption of this relevant and timely topic a necessity for the general scientific community. As FL allows self-interested data owners to collaboratively train models, end-users can become co-creators of AI solutions. By adopting federated learning approaches, we can leverage the distributed data and computing power available across different sources while respecting user privacy.
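The collaborative training described above can be illustrated with a minimal federated averaging (FedAvg) sketch. This is an illustrative example only, not part of the summer school material: the linear least-squares model, the synthetic client data, and all function names are assumptions chosen to keep the sketch self-contained.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few epochs of gradient descent
    on a linear least-squares model (a stand-in for any local learner)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_data, rounds=20, dim=3):
    """Federated averaging: each client trains locally on its private data;
    only model weights (never raw data) are sent to the server, which
    aggregates them weighted by client dataset size."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        global_w = np.average(updates, axis=0, weights=np.array(sizes, float))
    return global_w

# Two clients holding private samples drawn from the same underlying model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n in (50, 80):
    X = rng.normal(size=(n, 3))
    clients.append((X, X @ true_w))

w = fed_avg(clients)
print(np.round(w, 2))  # recovers weights close to true_w without pooling data
```

The key design point, which also underpins the privacy argument in the text above, is that the server only ever sees model parameters, never the clients' raw samples.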
The rise of FMs amplifies the importance and relevance of FL as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the use of FMs, enabling efficient and scalable training while safeguarding sensitive data.
FMs such as GPT-4, encoded with vast knowledge and powerful emergent abilities, have achieved remarkable success in various natural language processing and computer vision tasks. Grounding FMs by adapting them to domain-specific tasks or augmenting them with domain-specific knowledge enables us to exploit their full potential. However, grounding FMs faces several challenges, stemming primarily from constrained computing resources, data privacy, model heterogeneity, and model ownership. Federated Transfer Learning (FTL), the combination of FL and transfer learning, provides promising solutions to these challenges. In recent years, the need to ground FMs by leveraging FTL, coined FTL-FM, has grown strongly in both academia and industry.
In this summer school, we bring together academic researchers and industry practitioners to present their latest research and discuss visions for addressing open issues in this interdisciplinary research area. It will focus on the theme of combining FL with FMs to open up opportunities for addressing new challenges, and it will be a useful platform for students working in related areas to interact closely with leading scientists in the field.
Title: Trustworthy Federated Learning: Research and Applications
Speaker: Qiang Yang, Chief AI Officer (CAIO), WeBank / Professor Emeritus, Hong Kong University of Science and Technology (HKUST), Hong Kong, China

Title: Advancing Time Series Sensor Data Analytics: Harnessing Federated Learning and Foundation Models
Speaker: Xiaoli Li, Department Head (Machine Intellection), Institute for Infocomm Research (I2R), A*STAR, Singapore

Title: Introduction to Federated Recommendations
Speaker: Guodong Long, Associate Professor, University of Technology Sydney (UTS), Australia

Title: Rethinking Benchmarks for Machine Learning Systems
Speaker: Bingsheng He, Professor and Vice-Dean (Research), National University of Singapore (NUS), Singapore

Title: Federated Continual Learning via Prompt-based Dual Knowledge Transfer
Speaker: Ying Wei, Nanyang Assistant Professor, Nanyang Technological University (NTU), Singapore

Title: Model Ownership in Federated Learning
Speaker: Chee Seng Chan, Professor, Universiti Malaya, Malaysia

Title: Trustworthy Federated Learning and Large Language Models
Speaker: Xiaojin Zhang, Assistant Professor, Huazhong University of Science and Technology, China
Anran Li (NTU), "Historical Embedding-Guided Efficient Large-Scale Federated Graph Learning," SIGMOD'24
Yanci Zhang (NTU), "LR-XFL: Logical Reasoning-based Explainable Federated Learning," AAAI-24
Alysa Ziying Tan (NTU), "FL-CLIP: Bridging Plasticity and Stability in Pre-Trained Federated Class-Incremental Learning Models," ICME'24

Xiaoli Tang (NTU), "Dual Calibration-based Personalised Federated Learning," IJCAI'24
Liping Yi (NTU), "FedSSA: Semantic Similarity-based Aggregation for Efficient Model-Heterogeneous Personalized Federated Learning," IJCAI'24
Xianjie Guo (NTU), "Sample Quality Heterogeneity-aware Federated Causal Discovery through Adaptive Variable Space Selection," IJCAI'24

Yuxin Shi (NTU), "Fairness-Aware Job Scheduling for Multi-Job Federated Learning," ICASSP'24
Pengwei Xing (NTU), "Federated Neuro-Symbolic Learning," ICML'24
Hongyi Peng (NTU), "FedCal: Achieving Local and Global Calibration in Federated Learning via Aggregated Parameterized Scaler," ICML'24

Jiewei Chen (SUTD), "Enabling Foundation Models: A Distributed Collaboration Framework via Graph-based Federated Learning"
Lulu Wang (SUTD), "Reconciliation of Privacy Protection and Security Defense in Federated Learning"
Hongyan Chang (NUS), "Bias Propagation in Federated Learning"

Yiqun Diao (NUS), "Towards Addressing Label Skews in One-shot Federated Learning"
Sixu Hu (NUS), "Communication-Efficient Generalized Neuron Matching for Federated Learning"

Lixin Fan (WeBank)
Sin Gee Teo (A*STAR)
Xiaoxiao Li (UBC)
Han Yu (NTU)