International Workshop on Federated Foundation Models
In Conjunction with NeurIPS 2024 (FL@FM-NeurIPS'24)


New Submission Due: September 05, 2024 (23:59:59 AoE)
Late Breaking Paper (i.e., rejected NeurIPS'24 paper) Submission Due: September 27, 2024 (23:59:59 AoE)
Notification Due: October 01, 2024 (23:59:59 AoE)
Final Version Due: October 15, 2024 (23:59:59 AoE)
Workshop Date: Sunday, December 15, 2024
Venue: East Wing - Meeting Rooms 8 & 15, Vancouver Convention Centre, Vancouver, BC, Canada


Post-Workshop Publications

Selected workshop papers will be invited for extension and re-review for publication as book chapters in the Lecture Notes in Artificial Intelligence (LNAI) series. More information can be found here.


Workshop Program (Sunday, December 15, 2024)

Time Activity
08:15 – 08:20 Opening Remarks
08:20 – 09:00 Keynote 1: Federated Large Language Models and Their Applications, by Qiang Yang
09:00 – 09:30 Oral Presentation Session 1 (5 min per talk + 2 min Q&A)
  1. Outstanding Paper Award: Alex Iacob, Lorenzo Sani, Bill Marino, Preslav Aleksandrov, William F. Shen & Nicholas Donald Lane. Worldwide Federated Training of Language Models
  2. Alexander Bienstock, Antigoni Polychroniadou & Ujjwal Kumar. Distributed Matrix Mechanism for Federated Learning
  3. Christian Internò, Elena Raponi, Niki van Stein, Thomas Bäck, Markus Olhofer, Yaochu Jin & Barbara Hammer. Adaptive Model Hybrid Pruning in Federated Learning through Loss Exploration
  4. Filip Granqvist, Congzheng Song, Áine Cahill, Rogier van Dalen, Martin Pelikan, Yi Sheng Chan, Xiaojun Feng, Natarajan Krishnaswami, Vojta J & Mona Chitnis. pfl-research: Simulation Framework for Accelerating Research in Private Federated Learning
09:30 – 10:00 Coffee Break
10:00 – 10:30 Oral Presentation Session 2 (5 min per talk + 2 min Q&A)
  1. Harish Karthikeyan & Antigoni Polychroniadou. OPA: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
  2. Kai Yi, Timur Kharisov, Igor Sokolov & Peter Richtárik. Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning
  3. Outstanding Student Paper Award: Lei Shen, Zhenheng Tang, Lijun Wu, Yonggang Zhang, Xiaowen Chu, Tao Qin & Bo Han. Hot Pluggable Federated Learning
  4. Lorenzo Sani, Alex Iacob, Zeyu Cao, Bill Marino, Yan Gao, Tomas Paulik, Wanru Zhao, William F. Shen, Preslav Aleksandrov, Xinchi Qiu & Nicholas Donald Lane. The Future of Large Language Model Pre-training is Federated
10:30 – 11:00 Keynote 2: The first AGI will be Federated, by Nicholas D. Lane
11:00 – 12:30 Oral Presentation Session 3 (5 min per talk + 2 min Q&A)
  1. Lu Li, Tianyu Zhang, Zhiqi Bu, Suyuchen Wang, Huan He, Jie Fu, Yonghui Wu, Jiang Bian, Yong Chen & Yoshua Bengio. MAP: Model Merging with Amortized Pareto Front Using Limited Computation
  2. Mariel Werner, Sai Praneeth Karimireddy & Michael Jordan. Defection-Free Collaboration between Competitors in a Learning System
  3. Muxing Wang, Pengkun Yang & Lili Su. On the Convergence Rates of Federated Q-Learning across Heterogeneous Environments
  4. Rui Ye, Jingyi Chai, Xiangrui Liu, Yaodong Yang, Yanfeng Wang & Siheng Chen. Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models
  5. Rui Ye, Rui Ge, Fengting Yuchi, Jingyi Chai, Yanfeng Wang & Siheng Chen. Leveraging Unstructured Text Data for Federated Instruction Tuning of Large Language Models
  6. Rui Ye, Xinyu Zhu, Jingyi Chai, Lingjuan Lyu, Chen Xie, Yanfeng Wang & Siheng Chen. Federated Learning with Generative Content
  7. Sergio Zaera Mata & Roberto Gómez-Espinosa Martín. The SynapticCity Phenomenon: When All Foundation Models Marry Federated Learning and Blockchain
  8. Steffen Schotthöfer & M. Paul Laiu. Federated Dynamical Low-Rank Training with Global Loss Convergence Guarantees
  9. Sunny Gupta & Amit Sethi. FedStein: Enhancing Multi-Domain Federated Learning Through James-Stein Estimator
  10. Tao Yu, Congzheng Song, Jianyu Wang & Mona Chitnis. Momentum Approximation in Asynchronous Private Federated Learning
12:30 – 14:00 Lunch Break
14:00 – 14:30 Keynote 3: Transforming Multicenter Neurology Trials with Federated Learning: A New Era of Collaborative Medicine, by Martin J. McKeown
14:30 – 15:00 Keynote 4: Federated Optimization Beyond Standard Empirical Risk Minimization, by Gauri Joshi
15:00 – 15:30 Coffee Break
15:30 – 16:00 Keynote 5: Machine Learning from Imbalanced Data Sources, by Shiqiang Wang
16:00 – 17:00 Oral Presentation Session 4 (5 min per talk + 2 min Q&A)
  1. Vasileios Tsouvalas, Samaneh Mohammadi, Ali Balador, Tanir Özçelebi, Francesco Flammini & Nirvana Meratnia. EncCluster: Bringing Functional Encryption in Federated Foundational Models
  2. Outstanding Paper Award: Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yao Zhu, Yiqiang Chen, Qiang Yang, Xing Xie & Xiangyang Ji. ZOOPFL: Exploring Black-box Foundation Models for Personalized Federated Learning
  3. Outstanding Student Paper Award: Xianjie Guo, Liping Yi, Xiaohu Wu, Kui Yu & Gang Wang. Enhancing Causal Discovery in Federated Settings with Limited Local Samples
  4. Xiaochun Niu, Lili Su, Jiaming Xu & Pengkun Yang. Collaborative Learning with Shared Linear Representation: Statistical Rates and Optimal Algorithms
  5. Yao Shu, Wenyang Hu, See-Kiong Ng, Bryan Kian Hsiang Low & Fei Richard Yu. Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models
  6. Zexi Li, Jie Lin, Zhiqi Li, Didi Zhu, Rui Ye, Tao Shen, Tao Lin & Chao Wu. Improving Group Connectivity for Generalization of Federated Deep Learning
  7. Zhe Li, Bicheng Ying, Zidong Liu, Chaosheng Dong & Haibo Yang. DeComFL: Federated Learning with Dimension-Free Communication
  8. Zhilong Li, Xiaohu Wu, Xiaoli Tang, Tiantian He, Yew-Soon Ong, Mengmeng Chen, Qiqi Liu, Qicheng Lao & Han Yu. Benchmarking Data Heterogeneity Evaluation Approaches for Personalized Federated Learning
17:00 – 17:15 Award Ceremony

Keynote Speakers

Title: Federated Large Language Models and Their Applications

Speaker: Qiang Yang, Chief AI Officer (CAIO), WeBank / Professor Emeritus, Hong Kong University of Science and Technology

Biography
Qiang Yang is the head of the AI Department at WeBank (Chief AI Officer) and Professor Emeritus at the Computer Science and Engineering (CSE) Department of the Hong Kong University of Science and Technology (HKUST), where he formerly served as head of the CSE Department and founding director of the Big Data Institute (2015-2018). His research interests include AI, machine learning, and data mining, especially in transfer learning, automated planning, federated learning, and case-based reasoning. He is a fellow of several international societies, including ACM, AAAI, IEEE, IAPR, and AAAS. He received his Ph.D. in Computer Science in 1989 and his M.Sc. in Astrophysics in 1985, both from the University of Maryland, College Park. He obtained his B.Sc. in Astrophysics from Peking University in 1982. He was a faculty member at the University of Waterloo (1989-1995) and Simon Fraser University (1995-2001). He was the founding Editor-in-Chief of the ACM Transactions on Intelligent Systems and Technology (ACM TIST) and IEEE Transactions on Big Data (IEEE TBD). He served as President of the International Joint Conference on AI (IJCAI, 2017-2019) and as an executive council member of the Association for the Advancement of AI (AAAI, 2016-2020). Qiang Yang is a recipient of several awards, including the 2004/2005 ACM KDDCUP Championship, the ACM SIGKDD Distinguished Service Award (2017), and AAAI Innovative Application Awards (2018, 2020 and 2022). He was the founding director of Huawei's Noah's Ark Lab (2012-2014) and a co-founder of 4Paradigm Corp, an AI platform company. He is the author of several books, including Intelligent Planning (Springer), Crafting Your Research Future (Morgan & Claypool), and Constraint-based Design Recovery for Software Engineering (Springer).


Title: The first AGI will be Federated

Speaker: Nicholas D. Lane, Professor, University of Cambridge / Co-Founder and CSO, Flower Labs

Biography
Nic Lane (http://niclane.org) is a full Professor in the Department of Computer Science and Technology at the University of Cambridge and holds a Royal Academy of Engineering Chair in Decentralized AI. He is also a Fellow of St. John's College. At Cambridge, Nic leads the Cambridge Machine Learning Systems lab (CaMLSys; https://mlsys.cst.cam.ac.uk/). The mission of CaMLSys is to invent the next generation of breakthrough ML-centric systems. Alongside his academic roles, Nic is the co-founder and Chief Scientific Officer of Flower Labs (https://flower.ai), a venture-backed AI company (YC W23) behind the Flower open-source federated learning framework. Flower Labs seeks to enable an AI future that is collaborative, open and decentralized. Nic has received multiple best paper awards, including ACM/IEEE IPSN 2017 and two from ACM UbiComp (2012 and 2015). In 2018 and 2019, he (and his co-authors) received the ACM SenSys Test-of-Time award and ACM SIGMOBILE Test-of-Time award for pioneering research, performed during his PhD thesis, that devised machine learning algorithms used today on devices like smartphones. Nic was the 2020 ACM SIGMOBILE Rockstar award winner for his contributions to "the understanding of how resource-constrained mobile devices can robustly understand, reason and react to complex user behaviors and environments through new paradigms in learning algorithms and system design."


Title: Transforming Multicenter Neurology Trials with Federated Learning: A New Era of Collaborative Medicine

Speaker: Martin J. McKeown, Professor, The University of British Columbia

Biography
Dr. McKeown is the PPRI/UBC Chair in Parkinson's Research, Director of the Pacific Parkinson's Research Centre (PPRC), Professor in the Department of Medicine, and associate member in the Department of Electrical and Computer Engineering at the University of British Columbia, Canada. The PPRC is deemed an International Centre of Excellence by the (US-based) National Parkinson's Foundation. He received his training in Engineering Physics, Medicine, and Neurology at McMaster University, the University of Toronto, and the University of Western Ontario, respectively. He completed a three-year research fellowship at the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in San Diego before being hired as an Assistant Professor of Medicine and Biomedical Engineering at Duke University. He was recruited to UBC in 2003. He has been responsible for a variety of peer-reviewed research projects funded through the National Institutes of Health (US-NIH), the National Parkinson's Foundation (US-NPF), the Canadian Foundation for Innovation (CFI), the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canadian Institutes of Health Research (CIHR), the International Association of Translational Neuroscience, and the (US) Whitaker Foundation. He was a member of the Neuroscience A (NSA) Canadian CIHR Scientific peer review committee as well as a member of the Scientific Advisory Board of the Parkinson's Society of Canada. He has authored over 180 peer-reviewed papers and book chapters. His interests include examining novel treatments for Parkinson's and exploring how engineering methods can be used to enrich the lives of people with Parkinson's.


Title: Federated Optimization Beyond Standard Empirical Risk Minimization

Speaker: Gauri Joshi, Associate Professor, Carnegie Mellon University

Biography
Gauri Joshi is a faculty member in the ECE department at Carnegie Mellon University. Gauri completed her Ph.D. in EECS at MIT and received her B.Tech and M.Tech from the Indian Institute of Technology (IIT) Bombay. Her awards include the MIT Technology Review 35 Under 35 Award, the ONR Young Investigator Award, the NSF CAREER Award, Best Paper awards at MobiHoc 2022 and SIGMETRICS 2020, and the Institute Gold Medal of IIT Bombay (2010).


Title: Machine Learning from Imbalanced Data Sources

Speaker: Shiqiang Wang, Staff Research Scientist, IBM T. J. Watson Research Center

Biography
Shiqiang Wang is a Staff Research Scientist at IBM T. J. Watson Research Center, NY, USA. He received his Ph.D. from Imperial College London, United Kingdom, in 2015. His current research focuses on the intersection of distributed computing, machine learning, networking, and optimization, with a broad range of applications including data analytics, edge-based artificial intelligence (Edge AI), Internet of Things (IoT), and future wireless systems. He has made foundational contributions to edge computing and federated learning that generated both academic and industrial impact. Dr. Wang serves as an associate editor of the IEEE Transactions on Mobile Computing and IEEE Transactions on Parallel and Distributed Systems. He has also been actively organizing workshops at the intersection of edge computing and machine learning, and regularly participates in technical program committees (TPCs) of prominent conferences and review panels of research grants. He received the IEEE Communications Society (ComSoc) Leonard G. Abraham Prize in 2021, IEEE ComSoc Best Young Professional Award in Industry in 2021, IBM Outstanding Technical Achievement Awards (OTAA) in 2019, 2021, 2022, and 2023, multiple Invention Achievement Awards from IBM since 2016, Best Paper Finalist of the IEEE International Conference on Image Processing (ICIP) 2019, and Best Student Paper Award of the Network and Information Sciences International Technology Alliance (NIS-ITA) in 2015.


Awards

  • Outstanding Paper Award: Alex Iacob, Lorenzo Sani, Bill Marino, Preslav Aleksandrov, William F. Shen & Nicholas Donald Lane. Worldwide Federated Training of Language Models
  • Outstanding Paper Award: Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yao Zhu, Yiqiang Chen, Qiang Yang, Xing Xie & Xiangyang Ji. ZOOPFL: Exploring Black-box Foundation Models for Personalized Federated Learning
  • Outstanding Student Paper Award: Lei Shen, Zhenheng Tang, Lijun Wu, Yonggang Zhang, Xiaowen Chu, Tao Qin & Bo Han. Hot Pluggable Federated Learning
  • Outstanding Student Paper Award: Xianjie Guo, Liping Yi, Xiaohu Wu, Kui Yu & Gang Wang. Enhancing Causal Discovery in Federated Settings with Limited Local Samples

Accepted Papers

  1. Alex Iacob, Lorenzo Sani, Bill Marino, Preslav Aleksandrov, William F. Shen & Nicholas Donald Lane. Worldwide Federated Training of Language Models
  2. Alexander Bienstock, Antigoni Polychroniadou & Ujjwal Kumar. Distributed Matrix Mechanism for Federated Learning
  3. Christian Internò, Elena Raponi, Niki van Stein, Thomas Bäck, Markus Olhofer, Yaochu Jin & Barbara Hammer. Adaptive Model Hybrid Pruning in Federated Learning through Loss Exploration
  4. Filip Granqvist, Congzheng Song, Áine Cahill, Rogier van Dalen, Martin Pelikan, Yi Sheng Chan, Xiaojun Feng, Natarajan Krishnaswami, Vojta J & Mona Chitnis. pfl-research: Simulation Framework for Accelerating Research in Private Federated Learning
  5. Harish Karthikeyan & Antigoni Polychroniadou. OPA: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
  6. Kai Yi, Timur Kharisov, Igor Sokolov & Peter Richtárik. Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning
  7. Lei Shen, Zhenheng Tang, Lijun Wu, Yonggang Zhang, Xiaowen Chu, Tao Qin & Bo Han. Hot Pluggable Federated Learning
  8. Lorenzo Sani, Alex Iacob, Zeyu Cao, Bill Marino, Yan Gao, Tomas Paulik, Wanru Zhao, William F. Shen, Preslav Aleksandrov, Xinchi Qiu & Nicholas Donald Lane. The Future of Large Language Model Pre-training is Federated
  9. Lu Li, Tianyu Zhang, Zhiqi Bu, Suyuchen Wang, Huan He, Jie Fu, Yonghui Wu, Jiang Bian, Yong Chen & Yoshua Bengio. MAP: Model Merging with Amortized Pareto Front Using Limited Computation
  10. Mariel Werner, Sai Praneeth Karimireddy & Michael Jordan. Defection-Free Collaboration between Competitors in a Learning System
  11. Muxing Wang, Pengkun Yang & Lili Su. On the Convergence Rates of Federated Q-Learning across Heterogeneous Environments
  12. Rui Ye, Jingyi Chai, Xiangrui Liu, Yaodong Yang, Yanfeng Wang & Siheng Chen. Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models
  13. Rui Ye, Rui Ge, Fengting Yuchi, Jingyi Chai, Yanfeng Wang & Siheng Chen. Leveraging Unstructured Text Data for Federated Instruction Tuning of Large Language Models
  14. Rui Ye, Xinyu Zhu, Jingyi Chai, Lingjuan Lyu, Chen Xie, Yanfeng Wang & Siheng Chen. Federated Learning with Generative Content
  15. Sergio Zaera Mata & Roberto Gómez-Espinosa Martín. The SynapticCity Phenomenon: When All Foundation Models Marry Federated Learning and Blockchain
  16. Steffen Schotthöfer & M. Paul Laiu. Federated Dynamical Low-Rank Training with Global Loss Convergence Guarantees
  17. Sunny Gupta & Amit Sethi. FedStein: Enhancing Multi-Domain Federated Learning Through James-Stein Estimator
  18. Tao Yu, Congzheng Song, Jianyu Wang & Mona Chitnis. Momentum Approximation in Asynchronous Private Federated Learning
  19. Vasileios Tsouvalas, Samaneh Mohammadi, Ali Balador, Tanir Özçelebi, Francesco Flammini & Nirvana Meratnia. EncCluster: Bringing Functional Encryption in Federated Foundational Models
  20. Wang Lu, Hao Yu, Jindong Wang, Damien Teney, Haohan Wang, Yao Zhu, Yiqiang Chen, Qiang Yang, Xing Xie & Xiangyang Ji. ZOOPFL: Exploring Black-box Foundation Models for Personalized Federated Learning
  21. Xianjie Guo, Liping Yi, Xiaohu Wu, Kui Yu & Gang Wang. Enhancing Causal Discovery in Federated Settings with Limited Local Samples
  22. Xiaochun Niu, Lili Su, Jiaming Xu & Pengkun Yang. Collaborative Learning with Shared Linear Representation: Statistical Rates and Optimal Algorithms
  23. Yao Shu, Wenyang Hu, See-Kiong Ng, Bryan Kian Hsiang Low & Fei Richard Yu. Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models
  24. Zexi Li, Jie Lin, Zhiqi Li, Didi Zhu, Rui Ye, Tao Shen, Tao Lin & Chao Wu. Improving Group Connectivity for Generalization of Federated Deep Learning
  25. Zhe Li, Bicheng Ying, Zidong Liu, Chaosheng Dong & Haibo Yang. DeComFL: Federated Learning with Dimension-Free Communication
  26. Zhilong Li, Xiaohu Wu, Xiaoli Tang, Tiantian He, Yew-Soon Ong, Mengmeng Chen, Qiqi Liu, Qicheng Lao & Han Yu. Benchmarking Data Heterogeneity Evaluation Approaches for Personalized Federated Learning

Call for Papers

Foundation models (FMs) are typically associated with large language models (LLMs), like ChatGPT, and are characterized by their scale and broad applicability. While these models provide transformative capabilities, they also introduce significant challenges, particularly concerning distributed model management and the related issues of data privacy, efficiency, and scalability. Training foundation models is data- and resource-intensive, and conventional training methods are typically centralized; this creates significant challenges in real-world use cases, including distributed training data, the computational resources needed to manage distributed data repositories, and the development of and alignment with regulatory guidelines (e.g., GDPR) that restrict the sharing of sensitive data.

Federated learning (FL) is an emerging paradigm that can mitigate these challenges by collaboratively training a shared global model over distributed data, without ever centralizing that data. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data makes familiarity with, and adoption of, this relevant and timely topic essential within the general scientific community. Because FL allows self-interested data owners to collaboratively train models, end-users can become co-creators of AI solutions. By adopting federated learning approaches, we can leverage the distributed data and computing power available across different sources while respecting user privacy.
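To make the training loop concrete, below is a minimal, hedged sketch of one common FL recipe, federated averaging (FedAvg), on a toy least-squares problem. The names (local_update, fedavg_round) and the toy loss are illustrative assumptions rather than part of any specific framework; real deployments would add client sampling, secure aggregation, and privacy mechanisms on top.

    import numpy as np

    def local_update(global_weights, data, lr=0.1, epochs=5):
        # One client's local training on its private data (toy least-squares loss).
        w = global_weights.copy()
        X, y = data
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
            w -= lr * grad
        return w, len(y)                        # raw data never leaves the client

    def fedavg_round(global_weights, client_datasets):
        # Server aggregates client models, weighted by local sample counts.
        results = [local_update(global_weights, d) for d in client_datasets]
        total = sum(n for _, n in results)
        return sum(w * (n / total) for w, n in results)

    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    clients = []
    for _ in range(3):                          # three clients with private data shards
        X = rng.normal(size=(50, 5))
        clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

    w = np.zeros(5)
    for _ in range(20):                         # federated communication rounds
        w = fedavg_round(w, clients)
    print(np.linalg.norm(w - w_true))           # small after a few rounds

Only model weights cross the network in this loop; the design choice that makes FL privacy-friendly is precisely that aggregation operates on parameters, not on the clients' raw data.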

The rise of FMs amplifies the importance and relevance of FL as a crucial research direction. With FMs becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the use of FMs, enabling efficient and scalable training while safeguarding sensitive data.

FMs such as GPT-4, encoded with vast knowledge and powerful emergent abilities, have achieved remarkable success in various natural language processing and computer vision tasks. Grounding FMs, by adapting them to domain-specific tasks or augmenting them with domain-specific knowledge, enables us to exploit their full potential. However, grounding FMs faces several challenges, stemming primarily from constrained computing resources, data privacy, model heterogeneity, and model ownership. Federated Transfer Learning (FTL), the combination of FL and transfer learning, provides promising solutions to these challenges. In recent years, the need to ground FMs by leveraging FTL, coined FTL-FM, has grown strongly in both academia and industry.
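As a hedged illustration of how FTL-FM can keep both private data and a heavy FM backbone local, the sketch below averages only small low-rank adapter factors trained on top of a frozen backbone. This is one plausible recipe under stated assumptions, not a method prescribed by the workshop, and the names (client_finetune, aggregate) are hypothetical.

    import numpy as np

    D, R = 768, 8   # assumed frozen-backbone hidden size; low-rank adapter rank

    def client_finetune(A, B, rng):
        # Stand-in for a few local SGD epochs on a client's private task data,
        # updating only the adapter factors while the FM backbone stays frozen.
        return (A + rng.normal(scale=1e-3, size=A.shape),
                B + rng.normal(scale=1e-3, size=B.shape))

    def aggregate(updates):
        # Server averages adapter factors only: O(D*R) communication per client
        # per round, versus O(D*D) for exchanging a full weight matrix.
        As, Bs = zip(*updates)
        return np.mean(As, axis=0), np.mean(Bs, axis=0)

    rng = np.random.default_rng(1)
    A = rng.normal(scale=0.01, size=(D, R))     # delta_W = A @ B, added to a frozen layer
    B = np.zeros((R, D))
    for _ in range(5):                          # federated rounds
        updates = [client_finetune(A, B, rng) for _ in range(4)]  # 4 clients
        A, B = aggregate(updates)

The communication saving is the point: each round moves D*R + R*D adapter entries per client instead of D*D backbone entries, which is what makes federated tuning of large models tractable on constrained clients.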

With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the era of foundation models. Since the emergence of foundation models has been a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform to facilitate interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field.

This workshop aims to bring together academic researchers and industry practitioners to address open issues in this interdisciplinary research area. For industry participants, we intend to create a forum to communicate problems that are practically relevant. For academic participants, we hope to make it easier to become productive in this area. The workshop will focus on the theme of combining FL with FMs to open up opportunities for addressing new challenges. Workshop topics include, but are not limited to:
Theory and algorithmic foundations:
  • Federated in-context learning
  • Federated neuro-symbolic learning
  • Impact of heterogeneity in FL of large models
  • Multi-stage model training (e.g., base model + fine tuning)
  • Optimization advances in FL (e.g., beyond first-order and local methods)
  • Privacy-preserving machine learning
  • Prompt tuning and design in federated settings
  • Self-supervised learning in federated settings
Leveraging foundation models to improve federated learning:
  • Adaptive aggregation strategies for FL in heterogeneous environments
  • Foundation model enhanced FL knowledge distillation
  • Overcoming data interoperability challenges using foundation models
  • Personalization of FL with foundation models
Federated learning for training and tuning foundation models:
  • Fairness, bias, and interpretability challenges in FL with foundation models
  • Federated transfer learning with foundation models
  • FL-empowered multi-agent foundation model systems
  • FL techniques for training large-scale foundation models
  • Hardware for FL with foundation models
  • Optimization algorithms for federated training of foundation models
  • Privacy-preserving mechanisms in FL with foundation models
  • Resource-efficient FL with foundation models
  • Security and robustness considerations in FL with foundation models
  • Systems and infrastructure for FL with foundation models
  • Vertical federated learning with foundation models
  • Vulnerabilities of FL with foundation models

More information on previous workshops can be found here.


Submission Instructions

The main text of a submitted paper may be between 4 and 9 content pages, including all figures and tables, following the NeurIPS'24 template. Additional pages containing references do not count as content pages. An optional appendix of any length is allowed and should be placed at the end of the paper (after the references). Submissions are double-blind (author identity shall not be revealed to the reviewers), so the submitted PDF file must not include any identifying author information.

Late-breaking papers are papers that were reviewed by NeurIPS'24 but not accepted. Authors who wish to submit such papers can follow the same submission link and do so by 27 September 2024. In your submission, please include the NeurIPS'24 review comments in the appendix of your paper. These papers will not go through another round of peer review; instead, the organizing committee will determine whether they are accepted into the FL@FM-NeurIPS'24 workshop.

Submissions are collected on OpenReview at the following link: https://openreview.net/group?id=NeurIPS.cc/2024/Workshop/Federated_Learning.
Accepted papers and their review comments will be posted publicly on OpenReview. Due to the short timeline, there will be no rebuttal period, but authors are encouraged to interact and discuss with reviewers on OpenReview after the acceptance notifications are sent out. Rejected papers and their reviews will remain private and will not be posted publicly.


Co-Chairs



  • Sai Praneeth Karimireddy (USC)
  • Xiaoxiao Li (UBC)
  • Songtao Lu (IBM)
  • Stacy Patterson (RPI)
  • Pascal Poupart (U Waterloo)
  • Han Yu (NTU)

Program Committee

  • Alysa Ziying Tan (Alibaba-NTU Singapore Joint Research Institute)
  • Anran Li (Yale University)
  • Chun-Yin Huang (The University of British Columbia)
  • FNU Hairi (University of Wisconsin, Whitewater)
  • Haibo Yang (Rochester Institute of Technology)
  • Hongyi Peng (Alibaba-NTU Singapore Joint Research Institute)
  • Huawei Huang (Sun Yat-Sen University)
  • Jiankai Sun (Pinterest)
  • Jiaqi Qin (The Ohio State University)
  • Jinhyun So (Daegu Gyeongbuk Institute of Science and Technology)
  • Kartik Mathur (Microsoft)
  • Kevin Hsieh (Microsoft)
  • Minghong Fang (University of Louisville)
  • Paulo Abelha Ferreira (Dell Technologies)
  • Ruinan Jin (The University of British Columbia)
  • Songze Li (Southeast University)
  • Wei Yang Bryan Lim (Nanyang Technological University)
  • Wenlong Deng (The University of British Columbia)
  • Xiaohu Wu (Beijing University of Posts and Telecommunications)
  • Xinwei Zhang (University of Southern California)
  • Yanci Zhang (Nanyang Technological University)
  • Yaodong Yu (University of California Berkeley)
  • Yuxin Shi (Shopee)
  • Zichen Chen (University of California, Santa Barbara)
  • Zichong Li (University of Texas at Austin)

Sponsored by

Organized by