Workshop on Federated Learning in Australasia: When FL Meets Foundation Models, in Conjunction with AJCAI 2023 (FL@FM-AJCAI'23)


Workshop Date: 09:00-16:00, Tuesday, November 28, 2023
Venue: Room 132, Sir Llew Edwards Building (14), The University of Queensland (UQ), Brisbane, Australia

Workshop Program

Time (Brisbane)  Activity (Zoom link: https://zoom.uts.edu.au/j/82109382247)
09:00 – 10:00    Invited Talk 1: Tilted Losses in Machine Learning: Theory and Applications, by Tian Li, Assistant Professor, The University of Chicago, USA
10:00 – 10:15    Break
10:15 – 11:15    Invited Talk 2: Federated Causal Discovery, by Mingming Gong, Senior Lecturer in Data Science, ARC DECRA Fellow, The University of Melbourne, Australia
11:15 – 11:30    Break
11:30 – 12:30    Invited Talk 3: When Foundation Model Meets Federated Learning, by Lingjuan Lyu, Head of Privacy and Security, Sony AI, Japan
12:30 – 14:00    Lunch Break
14:00 – 16:00    Tutorial: Heterogeneous Federated Learning, by Guodong Long and Yue Tan

Overview

The aim of this workshop is to bring together academic researchers and industry professionals in the Australasian region to discuss, explore, and address the potential and challenges of combining federated learning (FL) techniques with foundation models. The workshop serves as a platform for sharing insights and innovative solutions for building robust and secure AI systems that address the distinctive issues foundation models raise: their large number of learnable parameters, which strains edge computing and communication; privacy and security concerns arising from learning on vast amounts of data; and limited opportunities for personalization. By fostering collaboration and the exchange of ideas, the workshop seeks to push the boundaries of the current understanding and application of federated learning in the context of foundation models, thereby advancing AI technology in Australasia.


Invited Talks


Title: Tilted Losses in Machine Learning: Theory and Applications

Speaker: Tian Li, Assistant Professor, The University of Chicago, USA

Abstract:
Heterogeneity not only affects the convergence of federated learning (FL) models, but also poses challenges to a number of other critical constraints including fairness. In this talk, I first introduce a fair federated learning objective, q-Fair FL (q-FFL), to promote consistent quality of service for all clients in the network. Partly motivated by q-FFL and exponential tilting, I then focus on a more general framework to address limitations of empirical risk minimization via tilting, named tilted empirical risk minimization (TERM). I make connections between TERM and related approaches, such as Value-at-Risk, Conditional Value-at-Risk, and distributionally robust optimization, and present batch and stochastic first-order optimization methods for solving TERM at scale. Finally, I show that this approach can be used for a multitude of applications in machine learning, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance—delivering state-of-the-art performance relative to more complex, bespoke solutions for these problems.
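The tilted objective itself is one line of math: R̃(t; θ) = (1/t) log( (1/N) Σ_i exp(t · ℓ_i(θ)) ). As a rough illustration only (the authors' reference code is linked below), here is a minimal NumPy sketch of that objective; the function name `term_objective` and the toy loss values are ours, not from the paper:

```python
import numpy as np
from scipy.special import logsumexp  # numerically stable log-sum-exp

def term_objective(losses, t):
    """Tilted empirical risk: (1/t) * log( mean( exp(t * loss_i) ) ).

    t -> 0 recovers standard ERM (the average loss); t > 0 magnifies
    high-loss samples (fairness); t < 0 suppresses them (outlier robustness).
    """
    if t == 0.0:
        return float(np.mean(losses))
    return float((logsumexp(t * np.asarray(losses)) - np.log(len(losses))) / t)

losses = [0.1, 0.2, 5.0]                  # toy losses with one outlier
print(term_objective(losses, t=0.0))      # ~1.77, the plain average
print(term_objective(losses, t=2.0))      # ~4.45, pulled toward the max loss
print(term_objective(losses, t=-2.0))     # ~0.35, pulled toward the min loss
```

Sweeping the tilt t thus interpolates between average-case ERM (t → 0), a focus on the worst-off samples (t → +∞), and a focus on the best-fit samples (t → −∞), which is what lets a single knob cover fairness, outlier mitigation, and class imbalance.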

Biography:
Tian Li will be joining the Department of Computer Science at the University of Chicago as an Assistant Professor in the summer of 2024. Her research centers on distributed optimization, federated learning, and trustworthy ML. She is interested in designing, analyzing, and evaluating principled learning algorithms, under practical constraints, that address accuracy, scalability, trustworthiness, and their interplay. Tian received her Ph.D. in Computer Science from Carnegie Mellon University. Prior to CMU, she received undergraduate degrees in Computer Science and Economics from Peking University. She received the Best Paper Award at the ICLR Workshop on Secure Machine Learning Systems, was invited to the EECS Rising Stars Workshop, and has been recognized as a Rising Star in Machine Learning/Data Science by multiple institutions.

  1. Code: https://github.com/litian96/TERM
  2. Blog post: https://blog.ml.cmu.edu/2021/04/02/term/
  3. Paper: https://www.jmlr.org/papers/v24/21-1095.html

Title: Federated Causal Discovery

Speaker: Mingming Gong, Senior Lecturer in Data Science, ARC DECRA Fellow, The University of Melbourne, Australia

Abstract:
To date, most approaches to learning causal directed acyclic graph (DAG) structures require the data to be stored on a central server. Out of concern for privacy, however, data owners increasingly refuse to share their raw data, cutting off this first step and making the task considerably harder. A puzzle therefore arises: how can we discover the underlying DAG structure from decentralized data? In this talk, focusing on data generated under the additive noise model (ANM) assumption, I will introduce our gradient-based learning framework, FedDAG, which learns the DAG structure without directly touching the local data and naturally handles data heterogeneity.
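FedDAG's full algorithm includes mechanisms for client heterogeneity; the paper is linked below. As a deliberately simplified, hypothetical sketch of the underlying pattern, the following NumPy code combines a NOTEARS-style continuous acyclicity penalty with federated gradient averaging, so the server updates a shared adjacency matrix without ever seeing client data (all names here are illustrative, not from the paper's code):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential for the acyclicity term

def acyclicity(W):
    """NOTEARS-style constraint h(W) = tr(exp(W*W)) - d, and its gradient."""
    E = expm(W * W)                    # elementwise square, then matrix exp
    return np.trace(E) - W.shape[0], E.T * 2.0 * W

def local_dag_grad(X, W, rho):
    """One client's gradient: linear-SEM least squares + acyclicity penalty."""
    n = X.shape[0]
    R = X - X @ W                      # residuals of the linear SEM X ≈ X W
    g_score = -(X.T @ R) / n           # gradient of 0.5/n * ||X - X W||_F^2
    h, g_h = acyclicity(W)
    return g_score + rho * h * g_h     # gradient of 0.5 * rho * h(W)^2

def federated_round(Xs, W, lr=0.01, rho=1.0):
    """Server step: average client gradients; raw data never leaves a client."""
    return W - lr * np.mean([local_dag_grad(X, W, rho) for X in Xs], axis=0)

rng = np.random.default_rng(0)
Xs = [rng.normal(size=(200, 4)) for _ in range(3)]  # 3 clients, 4 variables
W = np.zeros((4, 4))
for _ in range(100):                                 # communication rounds
    W = federated_round(Xs, W)
```

Only gradients of the graph parameters cross the network; how to reconcile clients whose local mechanisms differ is exactly the heterogeneity problem FedDAG targets.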

Biography:
Mingming Gong is a Senior Lecturer in Data Science at the School of Mathematics and Statistics and the Melbourne Centre for Data Science, The University of Melbourne (UoM), and an affiliated Associate Professor of Machine Learning at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He is the co-founder and co-director of the UoM Causal Learning & Reasoning Group and the Melbourne Deep Learning Group. Before joining UoM, he was a postdoctoral research fellow at the University of Pittsburgh and Carnegie Mellon University, working with Prof. Kayhan Batmanghelich and Prof. Kun Zhang. He obtained his PhD from the University of Technology Sydney, supervised by Prof. Dacheng Tao and co-supervised by Prof. Kun Zhang, his Master's degree from Huazhong University of Science and Technology, and his Bachelor's degree from Nanjing University. From March to October 2013, he was a research intern at the Max Planck Institute for Intelligent Systems (Prof. Bernhard Schölkopf's lab).

  1. FedDAG: Federated DAG Structure Learning

Title: When Foundation Model Meets Federated Learning

Speaker: Lingjuan Lyu, Head of Privacy and Security, Sony AI, Japan

Abstract:
Foundation models (FMs) have received tremendous attention in the past few years. However, the development of FMs also faces a series of bottlenecks, such as legally usable training data and heavy computational requirements. Federated learning (FL) emerges as a promising way to address these bottlenecks: it allows FMs to be trained, fine-tuned, or enriched by aggregating knowledge from distributed data sources without direct data sharing; it facilitates computation sharing; it mitigates the domain gap between training and test data; it democratizes the development of FMs; and it effectively handles the challenges posed by continuously growing data. Beyond the benefits FL can bring to FMs, FMs can also greatly contribute to the FL community. In this talk, I will discuss how FMs and FL will interplay and benefit from each other.
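One concrete way FL can meet the talk's bottlenecks (the communication cost of enormous parameter counts, and private data) is to federate only a small parameter-efficient adapter rather than the full model. The following is a hypothetical sketch of that pattern, not a method presented in the talk: each client locally trains a LoRA-style low-rank adapter on top of a frozen weight matrix, and the server averages just the adapter factors (all sizes and names are toy choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4                        # layer dims and adapter rank (toy)
W0 = rng.normal(scale=0.02, size=(d, k))   # frozen pretrained weight, never sent

def client_update(A, B, steps=5, lr=1e-2):
    """Fine-tune only the low-rank adapter B @ A on a client's private data."""
    for _ in range(steps):
        x = rng.normal(size=(32, d))       # stand-in for a private mini-batch
        target = rng.normal(size=(32, k))  # stand-in for local labels
        y = x @ (W0 + B @ A)               # adapted forward pass
        g = (y - target) / 32              # d(loss)/dy for 0.5 * mean sq. error
        gA, gB = B.T @ (x.T @ g), (x.T @ g) @ A.T  # chain rule through B @ A
        A, B = A - lr * gA, B - lr * gB
    return A, B

# One communication round over 4 clients: only the r*(d+k) adapter entries
# travel to the server, not the d*k entries of the frozen model itself.
A = np.zeros((r, k))
B = rng.normal(scale=0.01, size=(d, r))
updates = [client_update(A.copy(), B.copy()) for _ in range(4)]
A = np.mean([a for a, _ in updates], axis=0)   # FedAvg on the adapter factors
B = np.mean([b for _, b in updates], axis=0)
```

Note that averaging the factors A and B separately only approximates averaging the products B @ A; how best to aggregate such adapters across clients is itself one of the open questions at the FL-FM intersection.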

Biography:
Lingjuan Lyu is the Head of the Privacy-Preserving Machine Learning (PPML) team at Sony AI. A globally recognized expert in privacy and security, she leads a group of scientists and engineers on privacy- and security-related initiatives across the company. Prior to joining Sony AI, she spent more than eight years in academia and industry. Lingjuan received her Ph.D. from the University of Melbourne and was a recipient of the prestigious IBM PhD Fellowship Award Worldwide. Her current interests are in trustworthy AI, mainly federated learning, responsible foundation model development, data privacy, model robustness, IP protection, and on-device AI. She has published over 100 papers in top conferences and journals, including NeurIPS, ICML, ICLR, and Nature. She and her papers have won a long list of awards from top venues, including an ICML Outstanding Paper Award, an ACL Area Chair Award, a CIKM Best Paper Runner-Up Award (the only one awarded), an IEEE Outstanding Leadership Award, and many best paper awards from AAAI, IJCAI, WWW, KDD, and others.

  1. When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
  2. TARGET: Federated Class-Continual Learning via Exemplar-Free Distillation
  3. Taming Heterogeneity to Deal with Test-Time Shift in Federated Learning
  4. Towards Fair and Privacy-Preserving Federated Deep Models

Organizers

