International Workshop on Trustworthy Federated Learning
in Conjunction with IJCAI 2022 (FL-IJCAI'22)


Submission Due: May 23, 2022 (23:59:59 AoE) (extended from May 13, 2022)
Notification Due: June 10, 2022 (23:59:59 AoE) (extended from June 3, 2022)
Final Version Due: June 20, 2022 (23:59:59 AoE) (extended from June 17, 2022)
Workshop Date: July 23-25, 2022
Venue: Vienna, Austria

Call for Papers

Federated Learning (FL) is a learning paradigm that enables collaborative training of machine learning models while the data reside and remain in distributed data silos throughout the training process. FL is a necessary framework for ensuring that AI can thrive in the privacy-focused regulatory environment. Because FL allows self-interested data owners to collaboratively train machine learning models, end-users can become co-creators of AI solutions. To enable open collaboration among FL co-creators and enhance the adoption of the federated learning paradigm, we envision that communities of data owners must self-organize during FL model training based on diverse notions of trustworthy federated learning, which include, but are not limited to, security and robustness, privacy preservation, interpretability, fairness, verifiability, transparency, auditability, incremental aggregation of shared learned models, and healthy market mechanisms that enable open, dynamic collaboration among data owners under the FL paradigm.

This workshop aims to bring together academic researchers and industry practitioners to address open issues in this interdisciplinary research area. For industry participants, we intend to create a forum for communicating problems that are practically relevant. For academic participants, we hope to make it easier to become productive in this area. The workshop will focus on the theme of building trustworthiness into federated learning to enable open, dynamic collaboration among data owners under the FL paradigm, and on making FL solutions readily applicable to real-world problems.
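To make the paradigm concrete (this sketch is illustrative only and not part of the call), the following minimal Python/NumPy example simulates one round-based federated averaging loop: each silo computes a local update on its private data, and a coordinator aggregates only the returned model weights. The names local_update and aggregate and the toy least-squares task are assumptions for illustration and do not refer to any particular FL framework.

    import numpy as np

    def local_update(global_weights, local_data, lr=0.1):
        # Each silo refines the global model on its own data; raw data never leaves the silo.
        # "Training" here is a single gradient step on a toy least-squares objective.
        X, y = local_data
        grad = X.T @ (X @ global_weights - y) / len(y)
        return global_weights - lr * grad

    def aggregate(local_weights, sizes):
        # The coordinator sees only model weights, which it averages
        # weighted by each silo's dataset size (federated averaging).
        total = sum(sizes)
        return sum(w * (n / total) for w, n in zip(local_weights, sizes))

    rng = np.random.default_rng(0)
    # Two simulated data silos, each holding a private (X, y) dataset.
    silos = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]

    weights = np.zeros(3)
    for _ in range(20):  # federated training rounds
        updates = [local_update(weights, data) for data in silos]
        weights = aggregate(updates, [len(y) for _, y in silos])
    print(weights)

Trustworthy FL asks what must be added around this basic loop, for example secure aggregation, differential privacy, robustness to poisoned updates, and incentives for participation, which are exactly the topics listed below.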

Topics of interest include, but are not limited to:
Techniques:
  • Adversarial learning, data poisoning, adversarial examples,
    adversarial robustness, black box attacks
  • Architecture and privacy-preserving learning protocols
  • Auctions in federated learning
  • Auditable federated learning
  • Automated federated learning
  • Explainable federated learning
  • Fairness-aware federated learning
  • Federated learning and distributed privacy-preserving algorithms
  • Federated transfer learning
  • Human-in-the-loop for privacy-aware machine learning
  • Incentive mechanism and game theory for federated learning
  • Interpretable federated learning
  • Model merging and sharing
  • Personalization in federated learning
  • Privacy-aware, knowledge-driven federated learning
  • Privacy-preserving techniques (secure multi-party computation, homomorphic
    encryption, secret sharing techniques, differential privacy) for machine learning
  • Robustness in federated learning
  • Security for privacy, privacy leakage verification, self-healing, etc.
  • Trade-off between privacy, safety, effectiveness and efficiency
  • Transparent federated learning
  • Verifiable federated learning
Applications:
  • Algorithm auditability
  • Approaches to building GDPR-compliant AI
  • Data value and economics of data federation
  • Open-source frameworks for privacy-preserving distributed learning
  • Safety and security assessment of federated learning
  • Solutions to data security and small-data challenges in industries
  • Standards of data privacy and security

More information on previous workshops can be found here.


Submission Instructions

Each submission can be up to 6 pages of content plus up to 2 additional pages of references and acknowledgements. Submitted papers must be written in English and in PDF format according to the IJCAI'22 template. All submitted papers will undergo single-blind peer review for novelty, technical quality and impact; submissions may contain author details. Submissions will be accepted via the EasyChair submission website.

Easychair submission site: https://easychair.org/conferences/?conf=fl-ijcai-22

For enquiries, please email: fl-ijcai-22@easychair.org


Publications


For consideration of a post-workshop LNAI publication, the organizing committee will invite a subset of accepted workshop papers to be extended and re-reviewed. More information regarding publication will be released at a later date.


Invited Talks


Title: TBD

Speaker: Martin Jaggi, EPFL, Switzerland

Biography
Martin Jaggi is a Tenure-Track Assistant Professor at EPFL, heading the Machine Learning and Optimization Laboratory. Before that, he was a post-doctoral researcher at ETH Zurich, at the Simons Institute in Berkeley, and at École Polytechnique in Paris. He earned his PhD in Machine Learning and Optimization from ETH Zurich in 2011, and an MSc in Mathematics, also from ETH Zurich.


Title: MONAI Improves Medical Imaging AI Performance with Federated Learning

Speaker: Yongnan Ji, NVIDIA, China

Biography
Yongnan Ji is the NVIDIA Healthcare Ecosystem Manager in China, supporting the NVIDIA healthcare ecosystem with NVIDIA's latest technology. As an expert in medical imaging and artificial intelligence, he has published core patents and academic papers covering areas such as medical imaging, image analysis and image AI. Dr. Ji graduated from the University of Nottingham, UK. He previously worked at GE Healthcare, Toshiba Medical and Samsung Advanced Research Institute.


Title: TBD

Speaker: Margaret Pan, China Telecom, China

Biography
Margaret Pan is a researcher at the China Telecom Research Institute. Her research focuses on AI for mobile devices, distributed computing, federated machine learning and data security. She has been involved in developing several AI standards within standards organizations including the IEEE and GSMA. She is also the Chair of the IEEE SPFML-WG.


Title: TBD

Speaker: Victoria Wang, CXO & China Strategy Lead, IEEE SA

Biography
Dr. Victoria Wang is CXO of IEEE SA. In this role, she engages the global technology community and enables it to use technology standards for the benefit of humanity, particularly in support of sustainable development goals. She has advised on a range of technology standards and ecosystem-building efforts, including IEEE's standardization of federated learning. Dr. Victoria Wang is also the IEEE Standards Association's China Strategy Lead.


Title: Privacy-Preserving Bayesian Evolutionary Optimization

Speaker: Yaochu Jin, Alexander von Humboldt Professor, Bielefeld University, Germany

Biography
Yaochu Jin is an Alexander von Humboldt Professor for Artificial Intelligence endowed by the German Federal Ministry of Education and Research, with the Faculty of Technology, Bielefeld University, Germany. He is also a Surrey Distinguished Chair Professor in Computational Intelligence, Department of Computer Science, University of Surrey, Guildford, U.K. He was a "Finland Distinguished Professor" at the University of Jyväskylä, Finland, a "Changjiang Distinguished Visiting Professor" at Northeastern University, China, and a "Distinguished Visiting Scholar" at the University of Technology Sydney, Australia. His main research interests include evolutionary optimization and learning, trustworthy machine learning and optimization, and evolutionary developmental AI. Prof. Jin is presently the Editor-in-Chief of Complex & Intelligent Systems. He was the Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems, an IEEE Distinguished Lecturer in 2013-2015 and 2017-2019, and the Vice President for Technical Activities of the IEEE Computational Intelligence Society (2015-2016). He is the recipient of the 2018 and 2021 IEEE Transactions on Evolutionary Computation Outstanding Paper Awards, and the 2015, 2017, and 2020 IEEE Computational Intelligence Magazine Outstanding Paper Awards. He was named a Highly Cited Researcher by the Web of Science consecutively from 2019 to 2021. He is a Member of Academia Europaea and a Fellow of IEEE.


Organizing Committee


Program Committee

  • Alysa Ziying Tan (Alibaba-NTU Singapore Joint Research Institute)
  • Andreas Holzinger (University of Natural Resources and Life Sciences)
  • Adriano Koshiyama (University College London/Holistic AI)
  • Anran Li (University of Science and Technology of China)
  • Bing Luo (City University of Hong Kong, Shenzhen)
  • Dimitrios Papadopoulos (Hong Kong University of Science and Technology)
  • Guojun Zhang (Huawei)
  • Grigory Malinovsky (King Abdullah University of Science and Technology)
  • Hongyi Peng (Alibaba-NTU Singapore Joint Research Institute)
  • Ji Feng (Sinovation Ventures)
  • Jiangtian Nie (Nanyang Technological University)
  • Jiankai Sun (The Ohio State University)
  • Jianshu Weng (Swiss Re)
  • Jianyu Wang (Carnegie Mellon University)
  • Jiawen Kang (Guangdong University of Technology)
  • Jihong Park (Deakin University)
  • Jinhyun So (University of Southern California)
  • Junxue Zhang (Clustar)
  • Kallista (Kaylee) Bonawitz (Google)
  • Kevin Hsieh (Microsoft Research)
  • Margaret Pan (China Telecom)
  • Mehrdad Mahdavi (Pennsylvania State University)
  • Mingyue Ji (University of Utah)
  • Paulo Ferreira (Dell)
  • Peng Zhang (Guangzhou University)
  • Philipp Slusallek (Saarland University)
  • Praneeth Vepakomma (Massachusetts Institute of Technology)
  • Rui Liu (Nanyang Technological University)
  • Rui-Xiao Zhang (Tsinghua University)
  • Shiqiang Wang (IBM)
  • Siwei Feng (Soochow University)
  • Songze Li (Hong Kong University of Science and Technology)
  • Stefan Wrobel (University of Bonn)
  • Theodoros Salonidis (IBM)
  • Victoria Wang (IEEE)
  • Wei Yang Bryan Lim (Alibaba-NTU Singapore Joint Research Institute)
  • Xiaohu Wu (Nanyang Technological University)
  • Xiaoli Tang (Nanyang Technological University)
  • Xi Chen (Huawei)
  • Xu Guo (Nanyang Technological University)
  • Yanci Zhang (Nanyang Technological University)
  • Yiqiang Chen (Chinese Academy of Sciences)
  • Yang Liu (Tsinghua University)
  • Yuan Liu (Northeastern University)
  • Yuang Jiang (Yale University)
  • Yuxin Shi (Alibaba-NTU Singapore Joint Research Institute)
  • Zelei Liu (Nanyang Technological University)
  • Zhuan Shi (University of Science and Technology of China)
  • Zichen Chen (University of California, Santa Barbara)

Sponsored by


FedML, Inc. (https://fedml.ai) aims to provide an end-to-end machine learning operating system that enables people and organizations to transform their data into intelligence with minimal effort. FedML stands for "Fundamental Ecosystem Development/Design for Machine Learning" in a broad scope, and "Federated Machine Learning" in a specific scope. At the current stage, FedML is developing and maintaining a machine learning platform that enables zero-code, lightweight, cross-platform, and provably secure federated learning and analytics. It enables machine learning from decentralized data at various users/silos/edge nodes, without the need to centralize any data to the cloud, hence providing maximum privacy and efficiency. It consists of a lightweight and cross-platform Edge AI SDK that is deployable over edge GPUs, smartphones, and IoT devices. Furthermore, it also provides a user-friendly MLOps platform to simplify decentralized machine learning and real-world deployment. FedML supports vertical solutions across a broad range of industries (healthcare, finance, insurance, smart cities, IoT, etc.) and applications (computer vision, natural language processing, data mining, and time-series forecasting).

Organized by