International Workshop on Trustworthy Federated Learning
in Conjunction with IJCAI 2023 (FL-IJCAI'23)


Submission Due: May 14, 2023 (23:59:59 AoE)
Notification Due: June 15, 2023 (23:59:59 AoE)
Final Version Due: June 30, 2023 (23:59:59 AoE)

Workshop Date: August 21, 2023
Venue: Kokand 6304+6305, Sheraton Grand Macao Hotel, Macau

Workshop Program

  
Time (UTC+8) Activity
  
09:00 – 09:10 Opening Remarks
09:10 – 09:30 Launch of the 2023 Global Federated Learning Research and Application Report
09:30 – 10:00 Invited Talk 1: Privacy Attacks on Large Language Models, by Yangqiu Song
10:00 – 10:30 Invited Talk 2: Evaluating Large-Scale Learning Systems, by Virginia Smith
10:30 – 11:00 Coffee Break
11:00 – 11:30 Invited Talk 3: Trustworthy Federated Learning with Guarantees, by Bo Li
11:30 – 12:30 Oral Presentation Session 1 (10 min per talk, including Q&A)
  1. Ziyao Ren, Yan Kang, Lixin Fan, Linghua Yang, Yongxin Tong and Qiang Yang. SecureBoost Hyperparameter Tuning via Multi-Objective Federated Learning
  2. Best Paper Award: Ljubomir Rokvic, Panayiotis Danassis and Boi Faltings. Privacy-Preserving Data Quality Evaluation in Federated Learning Using Influence Approximation
  3. Best Student Paper Award: Yuchen Liu, Chen Chen and Lingjuan Lyu. Exploit Gradient Skewness to Circumvent Defenses for Federated Learning
  4. Best Student Paper Award: Sahra Ghalebikesabi, Leonard Berrada, Sven Gowal, Ira Ktena, Robert Stanforth, Jamie Hayes, Soham De, Olivia Wiles and Borja Balle. Differentially Private Diffusion Models Generate Useful Synthetic Images
  5. Hao Sun, Xiaoli Tang, Chengyi Yang, Zhenpeng Yu, Xiuli Wang, Qijie Ding, Zengxiang Li and Han Yu. Hierarchical Federated Learning Incentivization for Gas Usage Estimation
  6. Peng Lan, Donglai Chen, Chong Xie, Keshu Chen, Jinyuan He, Juntao Zhang, Yonghong Chen and Yan Xu. Elastically-Constrained Meta-Learner for Federated Learning
12:30 – 14:00 Lunch Break
14:00 – 14:30 Invited Talk 4: Federated Learning in Healthcare: Overcoming Data Heterogeneity Challenges, by Xiaoxiao Li
14:30 – 15:30 Oral Presentation Session 2 (10 min per talk, including Q&A)
  1. Weiming Zhuang and Lingjuan Lyu. Is Batch Normalization Indispensable for Multi-domain Federated Learning?
  2. Jiehuang Zhang and Han Yu. A Design Methodology for Incorporating Privacy Preservation into AI Systems
  3. Zheng Wang, Xiaoliang Fan, Zhaopeng Peng, Xueheng Li, Ziqi Yang, Mingkuan Feng, Zhicheng Yang, Xiao Liu and Cheng Wang. FLGO: A Fully Customizable Federated Learning Platform
  4. Tianchen Zhou, Zhanyi Hu, Bingzhe Wu and Cen Chen. SLPerf: a Unified Framework for Benchmarking Split Learning
  5. Jin Xie, Chenqing Zhu and Songze Li. FedMeS: Personalized Federated Continual Learning Leveraging Local Memory
  6. Zhihao Hao, Guancheng Wang, Chunwei Tian and Bob Zhang. A Distributed Computation Model Based on Federated Learning Integrates Heterogeneous models and Consortium Blockchain for Solving Time-Varying Problems
15:30 – 16:00 Coffee Break
16:00 – 16:30 Invited Talk 5: Personalized Federated Learning, by Guodong Long
16:30 – 17:40 Oral Presentation Session 3 (10 min per talk, including Q&A)
  1. Yiqiang Chen, Teng Zhang, Xinlong Jiang, Qian Chen, Chenlong Gao and Wuliang Huang. FedBone: Towards Large-Scale Federated Multi-Task Learning
  2. Gaolei Li, Jun Wu, Jianhua Li, Yuanyuan Zhao and Longfei Zheng. DSBP: Data-free and Swift Backdoor Purification for Trustworthy Federated Learning via Multi-teacher Adversarial Distillation
  3. Wenjie Li, Qiaolin Xia, Hao Cheng, Kouying Xue and Shu-Tao Xia. Vertical Semi-Federated Learning for Efficient Online Advertising
  4. Mingxuan Fan, Yilun Jin, Liu Yang, Zhenghang Ren and Kai Chen. VERTICES: Efficient Two-Party Vertical Federated Linear Model with TTP-aided Secret Sharing
  5. Kangning Yin, Zhen Ding, Zhihua Dong, Dongsheng Chen, Jie Fu, Xinhui Ji, Guangqiang Yin and Zhiguo Wang. NIPD: A Federated Learning Person Detection Benchmark Based on Real-World Non-IID Data
  6. Quentin Pajon, Swan Serre, Hugo Wissocq, Léo Rabaud, Siba Haidar and Antoun Yaacoub. Balancing Accuracy and Training Time in Federated Learning for Violence Detection in Surveillance Videos: A Study of Neural Network Architectures
  7. Fubao Zhu, Yanhui Tian, Chuang Han, Yanting Li, Jiaofen Nan, Ni Yao and Weihua Zhou. MLA-BIN: Model-level Attention and Batch-instance Style Normalization for Domain Generalization of Federated Learning on Medical Image Segmentation
17:40 – 17:45 Award Ceremony & Closing Remarks
   

Invited Talks

   

Title: Privacy Attacks on Large Language Models

Speaker: Yangqiu Song, Associate Professor, Hong Kong University of Science and Technology (HKUST), Hong Kong

Biography
Dr. Song is an associate professor in the Department of CSE at HKUST and an associate director of the HKUST-WeBank Joint Lab. He was an assistant professor at the Lane Department of CSEE at WVU (2015-2016); a post-doc researcher at UIUC (2013-2015); a post-doc researcher at HKUST and a visiting researcher at Huawei Noah's Ark Lab, Hong Kong (2012-2013); an associate researcher at Microsoft Research Asia (2010-2012); and a staff researcher at IBM Research-China (2009-2010). He received his B.E. and Ph.D. degrees from Tsinghua University, China, in July 2003 and January 2009, respectively. He also worked as an intern at Google in 2007-2008 and at IBM Research-China in 2006-2007. He is now also a visiting academic scholar at the Amazon Search Science and AI Team @ A9 (Jan. 2022 - present).

   

Title: Evaluating Large-Scale Learning Systems

Speaker: Virginia Smith, Assistant Professor, Carnegie Mellon University (CMU), USA

Biography
Virginia Smith is an assistant professor in the Machine Learning Department at Carnegie Mellon University. Her research spans machine learning, optimization, and distributed systems. Virginia’s current work addresses challenges related to optimization, privacy, and robustness in distributed settings to enable trustworthy federated learning at scale. Virginia’s work has been recognized by an NSF CAREER Award, MIT TR35 Innovator Award, Intel Rising Star Award, and faculty awards from Google, Apple, and Meta. Prior to CMU, Virginia was a postdoc at Stanford University and received a Ph.D. in Computer Science from UC Berkeley.

   

Title: Trustworthy Federated Learning with Guarantees

Speaker: Bo Li, Associate Professor, University of Illinois at Urbana–Champaign (UIUC), USA

Biography
Dr. Bo Li is an Associate Professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. She is the recipient of the IJCAI Computers and Thought Award, Alfred P. Sloan Research Fellowship, AI's 10 to Watch, NSF CAREER Award, MIT Technology Review TR-35 Award, Dean's Award for Excellence in Research, C.W. Gear Outstanding Junior Faculty Award, Intel Rising Star Award, Symantec Research Labs Fellowship, Rising Star Award, research awards from technology companies such as Amazon, Meta, Google, Intel, IBM, and eBay, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, which lies at the intersection of machine learning, security, privacy, and game theory. She has designed several scalable frameworks for certifiably robust learning and privacy-preserving data publishing. Her work has been featured by several major publications and media outlets, including Nature, Wired, Fortune, and the New York Times.

   

Title: Federated Learning in Healthcare: Overcoming Data Heterogeneity Challenges

Speaker: Xiaoxiao Li, Assistant Professor, the University of British Columbia (UBC), Canada

Biography
Xiaoxiao Li has been an Assistant Professor in the Department of Electrical and Computer Engineering at the University of British Columbia (UBC) since August 2021. In addition, Dr. Li holds positions as a Faculty Member at the Vector Institute and an adjunct Assistant Professor at Yale University. Before joining UBC, Dr. Li was a Postdoctoral Research Fellow at Princeton University. Dr. Li obtained her Ph.D. degree from Yale University in 2020. Dr. Li's research focuses on developing theoretical and practical solutions for enhancing the trustworthiness of AI systems in healthcare. Specifically, her recent research has been dedicated to advancing federated learning techniques and their applications in the medical field. Dr. Li's work has been published in top-tier machine learning conferences and journals, including NeurIPS, ICML, ICLR, MICCAI, IPMI, ECCV, TMI, TNNLS, and Medical Image Analysis. Her contributions have been further acknowledged with several best paper awards at prestigious international conferences.

   

Title: Personalized Federated Learning

Speaker: Guodong Long, Associate Professor, University of Technology Sydney (UTS), Australia

Biography
Dr. Guodong Long is an Associate Professor in the School of Computer Science, Faculty of Engineering and IT (FEIT), University of Technology Sydney (UTS), Australia. He is one of the core members of the Australian Artificial Intelligence Institute (AAII). He currently leads a research group conducting application-driven research on machine learning and data science. In particular, his research interests focus on several application domains, such as NLP, healthcare, smart home, education and social media. He is dedicated to exploring blue-sky research ideas with real-world value and impact. His group's research is funded by multiple industry grants and ARC grants. He has published more than 100 papers in top-tier conferences including ICLR, ICML, NeurIPS, AAAI, IJCAI, ACL, KDD and WebConf, and in journals including IEEE TPAMI, TKDE and TNNLS. His publications have attracted more than 10k citations. He will serve as a general co-chair of WebConf 2025, to be hosted in Sydney.


Awards

  • Best Paper Award: Ljubomir Rokvic, Panayiotis Danassis and Boi Faltings. Privacy-Preserving Data Quality Evaluation in Federated Learning Using Influence Approximation
  • Best Student Paper Award: Yuchen Liu, Chen Chen and Lingjuan Lyu. Exploit Gradient Skewness to Circumvent Defenses for Federated Learning
  • Best Student Paper Award: Sahra Ghalebikesabi, Leonard Berrada, Sven Gowal, Ira Ktena, Robert Stanforth, Jamie Hayes, Soham De, Olivia Wiles and Borja Balle. Differentially Private Diffusion Models Generate Useful Synthetic Images

Accepted Papers

  1. Yuchen Liu, Chen Chen and Lingjuan Lyu. Exploit Gradient Skewness to Circumvent Defenses for Federated Learning
  2. Hao Sun, Xiaoli Tang, Chengyi Yang, Zhenpeng Yu, Xiuli Wang, Qijie Ding, Zengxiang Li and Han Yu. Hierarchical Federated Learning Incentivization for Gas Usage Estimation
  3. Weiming Zhuang and Lingjuan Lyu. Is Batch Normalization Indispensable for Multi-domain Federated Learning?
  4. Ziyao Ren, Yan Kang, Lixin Fan, Linghua Yang, Yongxin Tong and Qiang Yang. SecureBoost Hyperparameter Tuning via Multi-Objective Federated Learning
  5. Jiehuang Zhang and Han Yu. A Design Methodology for Incorporating Privacy Preservation into AI Systems
  6. Peng Lan, Donglai Chen, Chong Xie, Keshu Chen, Jinyuan He, Juntao Zhang, Yonghong Chen and Yan Xu. Elastically-Constrained Meta-Learner for Federated Learning
  7. Sahra Ghalebikesabi, Leonard Berrada, Sven Gowal, Ira Ktena, Robert Stanforth, Jamie Hayes, Soham De, Olivia Wiles and Borja Balle. Differentially Private Diffusion Models Generate Useful Synthetic Images
  8. Ljubomir Rokvic, Panayiotis Danassis and Boi Faltings. Privacy-Preserving Data Quality Evaluation in Federated Learning Using Influence Approximation
  9. Zheng Wang, Xiaoliang Fan, Zhaopeng Peng, Xueheng Li, Ziqi Yang, Mingkuan Feng, Zhicheng Yang, Xiao Liu and Cheng Wang. FLGO: A Fully Customizable Federated Learning Platform
  10. Tianchen Zhou, Zhanyi Hu, Bingzhe Wu and Cen Chen. SLPerf: a Unified Framework for Benchmarking Split Learning
  11. Jin Xie, Chenqing Zhu and Songze Li. FedMeS: Personalized Federated Continual Learning Leveraging Local Memory
  12. Zhihao Hao, Guancheng Wang, Chunwei Tian and Bob Zhang. A Distributed Computation Model Based on Federated Learning Integrates Heterogeneous models and Consortium Blockchain for Solving Time-Varying Problems
  13. Yiqiang Chen, Teng Zhang, Xinlong Jiang, Qian Chen, Chenlong Gao and Wuliang Huang. FedBone: Towards Large-Scale Federated Multi-Task Learning
  14. Gaolei Li, Jun Wu, Jianhua Li, Yuanyuan Zhao and Longfei Zheng. DSBP: Data-free and Swift Backdoor Purification for Trustworthy Federated Learning via Multi-teacher Adversarial Distillation
  15. Wenjie Li, Qiaolin Xia, Hao Cheng, Kouying Xue and Shu-Tao Xia. Vertical Semi-Federated Learning for Efficient Online Advertising
  16. Mingxuan Fan, Yilun Jin, Liu Yang, Zhenghang Ren and Kai Chen. VERTICES: Efficient Two-Party Vertical Federated Linear Model with TTP-aided Secret Sharing
  17. Kangning Yin, Zhen Ding, Zhihua Dong, Dongsheng Chen, Jie Fu, Xinhui Ji, Guangqiang Yin and Zhiguo Wang. NIPD: A Federated Learning Person Detection Benchmark Based on Real-World Non-IID Data
  18. Quentin Pajon, Swan Serre, Hugo Wissocq, Léo Rabaud, Siba Haidar and Antoun Yaacoub. Balancing Accuracy and Training Time in Federated Learning for Violence Detection in Surveillance Videos: A Study of Neural Network Architectures
  19. Fubao Zhu, Yanhui Tian, Chuang Han, Yanting Li, Jiaofen Nan, Ni Yao and Weihua Zhou. MLA-BIN: Model-level Attention and Batch-instance Style Normalization for Domain Generalization of Federated Learning on Medical Image Segmentation

Call for Papers

Federated Learning (FL) is a learning paradigm that enables collaborative training of machine learning models while the data reside and remain in distributed data silos throughout the training process. FL is a necessary framework for ensuring that AI thrives in the privacy-focused regulatory environment. Because FL allows self-interested data owners to collaboratively train machine learning models, end-users can become co-creators of AI solutions. To enable open collaboration among FL co-creators and enhance the adoption of the federated learning paradigm, we envision that communities of data owners must self-organize during FL model training based on diverse notions of trustworthy federated learning, which include, but are not limited to, security and robustness, privacy preservation, interpretability, fairness, verifiability, transparency, auditability, incremental aggregation of shared learned models, and healthy market mechanisms that enable open dynamic collaboration among data owners under the FL paradigm. This workshop aims to bring together academic researchers and industry practitioners to address open issues in this interdisciplinary research area. For industry participants, we intend to create a forum for communicating problems that are practically relevant. For academic participants, we hope to make it easier to become productive in this area. The workshop will focus on the theme of building trustworthiness into federated learning to enable open dynamic collaboration among data owners under the FL paradigm and to make FL solutions readily applicable to real-world problems.
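
As a concrete illustration of this setting, the minimal sketch below shows federated averaging (FedAvg)-style training in Python: each data owner trains on its own silo, and only model parameters, never raw data, are sent to a coordinating server for weighted aggregation. The toy linear regression task, function names, data and hyperparameters are illustrative assumptions and are not drawn from any system presented at the workshop.

  # Minimal FedAvg-style sketch on a toy linear regression task.
  # All data, dimensions and hyperparameters below are hypothetical.
  import numpy as np

  def local_update(weights, X, y, lr=0.1, epochs=5):
      """One client's local training: plain gradient descent on a linear model."""
      w = weights.copy()
      for _ in range(epochs):
          grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
          w -= lr * grad
      return w

  def fed_avg(client_data, dim, rounds=10):
      """Server loop: broadcast the global model, then average the clients'
      locally trained models, weighted by local data size. Raw data never
      leaves the clients; only model weights are exchanged."""
      global_w = np.zeros(dim)
      for _ in range(rounds):
          sizes, updates = [], []
          for X, y in client_data:  # each (X, y) stays in its own silo
              updates.append(local_update(global_w, X, y))
              sizes.append(len(y))
          global_w = np.average(updates, axis=0, weights=sizes)
      return global_w

  # Three hypothetical silos with differently shifted feature distributions.
  rng = np.random.default_rng(0)
  true_w = np.array([2.0, -1.0])
  clients = []
  for shift in (0.0, 1.0, -1.0):
      X = rng.normal(shift, 1.0, size=(50, 2))
      y = X @ true_w + rng.normal(0.0, 0.1, size=50)
      clients.append((X, y))

  print(fed_avg(clients, dim=2))  # approaches [2, -1] without pooling the data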

Topics of interest include, but are not limited to:
Techniques:
  • Adversarial learning, data poisoning, adversarial examples,
    adversarial robustness, black box attacks
  • Architecture and privacy-preserving learning protocols
  • Auctions in federated learning
  • Auditable federated learning
  • Automated federated learning
  • Explainable federated learning
  • Fairness-aware federated learning
  • Federated learning and distributed privacy-preserving algorithms
  • Federated transfer learning
  • Human-in-the-loop for privacy-aware machine learning
  • Incentive mechanism and game theory for federated learning
  • Interpretable federated learning
  • Model merging and sharing
  • Personalization in federated learning
  • Privacy-aware knowledge driven federated learning
  • Privacy-preserving techniques (secure multi-party computation, homomorphic
    encryption, secret sharing techniques, differential privacy) for machine learning
  • Robustness in federated learning
  • Security for privacy, privacy leakage verification and self-healing etc.
  • Trade-off between privacy, safety, effectiveness and efficiency
  • Transparent federated learning
  • Verifiable federated learning
Applications:
  • Algorithm auditability
  • Approaches to make GDPR-compliant AI
  • Data value and economics of data federation
  • Open-source frameworks for privacy-preserving distributed learning
  • Safety and security assessment of federated learning
  • Solutions to data security and small-data challenges in industries
  • Standards of data privacy and security

More information on previous workshops can be found here.


Post Workshop Publications

There are two options for post-workshop publications. Authors who wish to submit extended versions of their papers elsewhere can opt out of both of these options.
   

Selected high-quality papers will be invited for publication as chapters in an edited book in the Lecture Notes in Artificial Intelligence (LNAI) series published by Springer. More information will be provided at a later time.

 
   

Alternatively, the authors of selected high-quality papers will be invited to submit a journal version of their paper to the Journal of Computer Science & Technology (JCST), Springer. More information will be provided at a later time.


Submission Instructions

Each submission can be up to 7 pages of content plus up to 2 additional pages for references and acknowledgements. Submitted papers must be written in English and in PDF format according to the IJCAI'23 template. All submitted papers will undergo single-blind peer review and will be assessed for novelty, technical quality and impact. Submissions may contain author details. Submissions will be accepted via the EasyChair submission website.

Per the requirements of IJCAI'23, at least one author of each accepted paper must travel to the IJCAI venue to attend in person. In addition, submitting the same paper to more than one IJCAI workshop is forbidden.

Easychair submission site: https://easychair.org/conferences/?conf=fl-ijcai-23

For enquiries, please email to: fl-ijcai-23@easychair.org


Organizing Committee


Program Committee

  • Alysa Ziying Tan (Alibaba-NTU Singapore Joint Research Institute, Singapore)
  • Allen Gu (WeBank, China)
  • Anran Li (Nanyang Technological University, Singapore)
  • Bo Zhao (Nanjing University of Aeronautics and Astronautics, China)
  • Dimitrios Papadopoulos (Hong Kong University of Science and Technology, Hong Kong)
  • Hongyi Peng (Nanyang Technological University, Singapore)
  • Huawei Huang (Sun Yat-Sen University, China)
  • Jiangtian Nie (Nanyang Technological University, Singapore)
  • Jiankai Sun (The Ohio State University, USA)
  • Jianshu Weng (Chubb, Singapore)
  • Jihong Park (Deakin University, Australia)
  • Jinhyun So (University of Southern California, USA)
  • Kevin Hsieh (Microsoft Research, USA)
  • Liping Yi (Nankai University, China)
  • Paulo Ferreira (Dell Technologies, USA)
  • Peng Zhang (Guangzhou University, China)
  • Qin Hu (George Washington University, USA)
  • Rui Liu (Nanyang Technological University, Singapore)
  • Shengchao Chen (University of Technology Sydney, Australia)
  • Shiqiang Wang (IBM Thomas J. Watson Research Center, USA)
  • Siwei Feng (Soochow University, China)
  • Songze Li (Hong Kong University of Science and Technology, Hong Kong)
  • Wei Yang Bryan Lim (Nanyang Technological University, Singapore)
  • Wen Wu (Peng Cheng Laboratory, China)
  • Xianjie Guo (Hefei University of Technology, China)
  • Xiaohu Wu (Beijing University of Posts and Telecommunications, China)
  • Xiaoli Tang (Nanyang Technological University, Singapore)
  • Xu Guo (Nanyang Technological University, Singapore)
  • Yan Kang (WeBank, China)
  • Yanci Zhang (Shandong University, China)
  • Yang Zhang (Nanjing University of Aeronautics and Astronautics, China)
  • Yiqiang Chen (Chinese Academy of Sciences, China)
  • Yuang Jiang (Yale University, USA)
  • Yuanqin He (WeBank, China)
  • Yulan Gao (Nanyang Technological University, Singapore)
  • Yuxin Shi (Nanyang Technological University, Singapore)
  • Zelei Liu (Unicom (Shanghai) Industrial Internet Co. Ltd., China)
  • Zhuan Shi (University of Science and Technology of China, China)
  • Zhuowei Wang (CSIRO, Australia)
  • Zichen Chen (University of California, Santa Barbara, USA)

Organized by