International Workshop on Federated and Transfer Learning for Data Sparsity and Confidentiality
in Conjunction with IJCAI 2021 (FTL-IJCAI'21)


Submission Due: June 20, 2021 (23:59:59 AoE) (extended from June 05, 2021)
Notification Due: July 20, 2021 (extended from June 25, 2021)
Workshop Date:
Online Venue: Gathertown Room Green 2

Workshop Program

Opening Remarks
Distinguished Keynote Lecture: A Journey from Transfer Learning to Federated Learning, by Qiang Yang, Chief AI Officer (CAIO), WeBank / Chair Professor, Hong Kong University of Science and Technology (HKUST)
Invited Talk 1: Federated Learning in Large Clinical Research Networks, by Fei Wang, Cornell University
Invited Talk 2: Towards Robust and Efficient Federated Learning, by Shiqiang Wang, IBM T. J. Watson Research Center
Invited Talk 3: How to Secure the Generalization of a Pre-trained Model, by Ying Wei, City University of Hong Kong
Break
Contributed Oral Presentation Session 1 (15 minutes per talk including Q&A)
  1. Alysa Ziying Tan, Han Yu, Lizhen Cui and Qiang Yang. Towards Personalized Federated Learning
  2. Best Paper: Ching Pui Wan and Qifeng Chen. Robust Federated Learning with Attack-Adaptive Aggregation
  3. Best Student Paper: Mengmeng Tian, Yuxin Chen, Yuan Liu, Zehui Xiong, Cyril Leung and Chunyan Miao. A Contract Theory based Incentive Mechanism for Federated Learning
Invited Talk 4: Label Leakage and Protection in Two-party Split Learning, by Chong Wang, ByteDance
Invited Talk 5: Federated Optimization under Real-world Constraints, by Zheng Xu, Google
Invited Talk 6: Large Scale Vertical Federated Learning, by Liefeng Bo, JD
Lunch Break
Contributed Oral Presentation Session 2 (15 minutes per talk including Q&A)
  1. Lixin Fan, Bowen Li, Hanlin Gu, Yan Kang, Jie Li and Qiang Yang. FedIPR: Ownership Verification for Federated Deep Neural Network Models
  2. Hao Li, Mingkai Huang, Bing Bai, Chang Wang, Kun Bai, Fei Wang, Xinghua Zhu, Binwen Zhao, Ganggang Liu and Chao Qi. A Federated Multi-View Deep Learning Framework for Privacy-Preserving Recommendations
  3. Yan Kang, Yang Liu, Yuezhou Wu, Guoqiang Ma and Qiang Yang. Privacy-preserving Federated Adversarial Domain adaptation over Feature Groups for Interpretability
  4. Best Application Paper: Cengguang Zhang, Junxue Zhang, Di Chai and Kai Chen. Aegis: A Trusted, Automatic and Accurate Verification Framework for Vertical Federated Learning
Invited Talk 7: Federated Learning for Industrial Video Recommendation, by Hao Li, Tencent
Invited Talk 8: Federated Continual and Semi-Supervised Learning, by Sung Ju Hwang, Korea Advanced Institute of Science and Technology (KAIST)
Invited Talk 9: Transfer Learning: Theory, Algorithms, and Open Library, by Mingsheng Long, Tsinghua University
Contributed Oral Presentation Session 3 (15 minutes per talk including Q&A)
  1. Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan Yao and Qiang Yang. A novel approach to simultaneously improve privacy, efficiency and reliability of federated DNN learning
  2. Lingjuan Lyu and Chen Chen. A Novel Attribute Reconstruction Attack in Federated Learning
  3. Xiaodian Cheng, Wanhang Lu, Xinyang Huang, Shuihai Hu and Kai Chen. HAFLO: GPU-Based Acceleration for Federated Logistic Regression
  4. Nhan Khanh Le, Yang Liu, Quang Minh Nguyen, Qingchen Liu, Fangzhou Liu, Quanwei Cai and Sandra Hirche. FedXGBoost: Privacy-Preserving XGBoost for Federated Learning
Invited Talk 10: Rethinking Importance Weighting for Transfer Learning, by Masashi Sugiyama, The University of Tokyo
Award Ceremony
Poster Session (Gathertown - Green 2)
  1. Zhenheng Tang, Zhikai Hu, Shaohuai Shi, Yiu-Ming Cheung, Yilun Jin, Zhenghang Ren and Xiaowen Chu. Data Resampling for Federated Learning with Non-IID Labels [Poster]
  2. Yiqiang Chen, Wang Lu, Jindong Wang and Xin Qin. FedHealth 2: Weighted Federated Transfer Learning via Batch Normalization for Personalized Healthcare [Poster]
  3. Tsz-Him Cheung, Weihang Dai and Shuhan Li. FedSGC: Federated Simple Graph Convolution for Node Classification [Poster]
  4. Srikanth Chandar, Pravin Chandran, Raghavendra Bhat and Avinash Chakravarthi. Communication Optimization in Large Scale Federated Learning using Autoencoder Compressed Weight Updates [Poster]
  5. Kaiyun Li, Xiaojun Chen, Ye Dong, Dakui Wang, Shuai Zeng and Peng Zhang. Efficient Byzantine-Resilient Stochastic Gradient Descent [Poster]
  6. Zihan Chen, Kai Fong Ernest Chong and Tony Q.S. Quek. Dynamic Attention-based Communication-Efficient Federated Learning [Poster]
  7. Yang Pei, Renxin Mao, Yang Liu, Chaoran Chen, Shifeng Xu, Feng Qiang and Peng Zhang. Decentralized Federated Graph Neural Networks [Poster]
  8. Shuang Luo, Didi Zhu, Zexi Li and Chao Wu. Ensemble Federated Adversarial Training with Non-IID data [Poster]
  

Distinguished Keynote Lecture

   

Title: A Journey from Transfer Learning to Federated Learning

Speaker: Qiang Yang, Chief AI Officer (CAIO), WeBank / Chair Professor, Hong Kong University of Science and Technology (HKUST)

Biography
Qiang Yang is the head of the AI Department at WeBank (Chief AI Officer) and Chair Professor at the Computer Science and Engineering (CSE) Department of the Hong Kong University of Science and Technology (HKUST), where he formerly served as head of the CSE Department and founding director of the Big Data Institute (2015-2018). His research interests include AI, machine learning, and data mining, especially in transfer learning, automated planning, federated learning, and case-based reasoning. He is a fellow of several international societies, including ACM, AAAI, IEEE, IAPR, and AAAS. He received his Ph.D. in Computer Science in 1989 and his M.Sc. in Astrophysics in 1985, both from the University of Maryland, College Park. He obtained his B.Sc. in Astrophysics from Peking University in 1982. He was a faculty member at the University of Waterloo (1989-1995) and Simon Fraser University (1995-2001). He was the founding Editor-in-Chief of the ACM Transactions on Intelligent Systems and Technology (ACM TIST) and IEEE Transactions on Big Data (IEEE TBD). He served as the President of the International Joint Conference on AI (IJCAI, 2017-2019) and as an executive council member of the Association for the Advancement of AI (AAAI, 2016-2020). Qiang Yang is a recipient of several awards, including the 2004/2005 ACM KDDCUP Championship, the ACM SIGKDD Distinguished Service Award (2017), and the AAAI Innovative Applications of AI Award (2018 and 2020). He was the founding director of Huawei's Noah's Ark Lab (2012-2014) and a co-founder of 4Paradigm Corp, an AI platform company. He is the author of several books, including Intelligent Planning (Springer), Crafting Your Research Future (Morgan & Claypool), and Constraint-based Design Recovery for Software Engineering (Springer).


Invited Talks

   

Title: Federated Learning in Large Clinical Research Networks

Speaker: Fei Wang, Cornell University

Biography
Fei Wang is an Associate Professor in the Division of Health Informatics, Department of Population Health Sciences, Weill Cornell Medicine, Cornell University. He received his PhD from the Department of Automation, Tsinghua University, in 2008. His major research interest is machine learning and artificial intelligence in health data science. He has published extensively in top machine learning venues such as ICML, KDD, NeurIPS, CVPR, AAAI, and IJCAI, biomedical informatics venues such as JAMIA and Bioinformatics, as well as clinical medicine venues such as JAMA and the Lancet series. His papers have received over 15,000 citations so far, with an H-index of 61. His (or his students') papers have won 6 best paper (or nomination) awards at top international conferences on data mining and medical informatics. His team won the championship of the NIPS/Kaggle Challenge on Classification of Clinically Actionable Genetic Mutations in 2017 and the Parkinson's Progression Markers Initiative data challenge organized by the Michael J. Fox Foundation in 2016. Dr. Wang received the NSF CAREER Award in 2018, as well as the inaugural research leadership award at the IEEE International Conference on Health Informatics (ICHI) 2019. Dr. Wang's research has been supported by NSF, NIH, ONR, PCORI, MJFF, AHA, Amazon, etc. Dr. Wang is a fellow of AMIA.

   

Title: Towards Robust and Efficient Federated Learning

Speaker: Shiqiang Wang, IBM T. J. Watson Research Center

Biography
Shiqiang Wang received his Ph.D. from the Department of Electrical and Electronic Engineering, Imperial College London, United Kingdom, in 2015. He has been a Research Staff Member at IBM T. J. Watson Research Center, NY, USA, since 2016, where he was also a graduate-level co-op in the summers of 2013 and 2014. In the fall of 2012, he was at NEC Laboratories Europe, Heidelberg, Germany. His current research focuses on the interdisciplinary areas of machine learning, distributed systems, optimization, networking, and signal processing. Dr. Wang has served as a technical program committee (TPC) member of several international conferences, including ICML, ICDCS, AISTATS, IJCAI, WWW, IFIP Networking, IEEE GLOBECOM and IEEE ICC, and as an associate editor of the IEEE Transactions on Mobile Computing (starting in 2021). He received the IBM Outstanding Technical Achievement Award (OTAA) in 2019, multiple Invention Achievement Awards from IBM since 2016, a Best Paper Finalist recognition at the IEEE International Conference on Image Processing (ICIP) 2019, and the Best Student Paper Award of the Network and Information Sciences International Technology Alliance (NIS-ITA) in 2015.

   

Title: How to Secure the Generalization of a Pre-trained Model

Speaker: Ying Wei, City University of Hong Kong

Biography
Ying Wei has been an Assistant Professor in the Department of Computer Science, City University of Hong Kong, since February 2021. Prior to that, she was a Senior Researcher at the Machine Learning Center of Tencent AI Lab from November 2017 to January 2021. She works on machine learning, and is especially interested in solving challenges in transfer and meta learning by pushing the boundaries of both theories and applications. She received her Ph.D. degree from the Hong Kong University of Science and Technology in 2017 with the support of the Hong Kong PhD Fellowship. Before that, she completed her Bachelor's degree at Huazhong University of Science and Technology in 2012 with first-class honors. She has been invited as a senior PC member for AAAI, and as a PC member for many other top-tier conferences, such as ICML, NeurIPS, and ICLR.

   

Title: Label Leakage and Protection in Two-party Split Learning

Speaker: Chong Wang, ByteDance

Biography
Chong Wang is the head of applied machine learning (AML) research at ByteDance. The AML Research team works on fundamental machine learning research and its applications in many of the company's products, such as TikTok and Douyin, among others. Before ByteDance, he worked as a research scientist at Google and Microsoft Research. He received his B.S. from Tsinghua University and his PhD from Princeton University. His research has won several best paper awards at top machine learning conferences, and some of it has gone into widely used products serving users around the globe. His homepage is https://chongw.github.io.

   

Title: Federated Optimization under Real-world Constraints

Speaker: Zheng Xu, Google

Biography
Zheng Xu is a research scientist working on federated learning at Google. He received his Ph.D. in optimization and machine learning from the University of Maryland, College Park. Before that, he received his master's and bachelor's degrees from the University of Science and Technology of China. During his studies, Zheng interned and collaborated with researchers from Apple, Adobe, Honda, Amazon, IBM, MSRA and NTU. His papers have received best (student) paper awards at several workshops and conferences.

   

Title: Large Scale Vertical Federated Learning

Speaker: Liefeng Bo, JD

Biography
Dr. Bo is the Head of the Silicon Valley R&D Center at JD Technology, leading a team to develop advanced AI technologies. He was previously a Principal Scientist at Amazon, building a grab-and-go shopping experience using computer vision, deep learning and sensor fusion technologies. He received his PhD from Xidian University in 2007, and was a postdoctoral researcher at TTIC and the University of Washington. His research interests are in machine learning, deep learning, computer vision, robotics, and natural language processing. He won the National Excellent Doctoral Dissertation Award of China in 2010, and the Best Vision Paper Award at ICRA 2011.

   

Title: Federated Learning for Industrial Video Recommendation

Speaker: Hao Li, Tencent

Biography
Hao Li is the Tech Lead of privacy computing at Tencent WeSee, an app for users to create and share short videos. As a principal researcher and engineer at Tencent since 2018, he has been focusing on data security and privacy through distributed and decentralized cross-silo/cross-device federated learning, and on privacy-preserving machine learning via program analysis and secure multi-party computation. Before Tencent, he worked as a software architect at Intel. He received his PhD from Peking University and completed a postdoc at Columbia University. He has published several papers in top systems conferences such as SOSP, VEE, and CODES+ISSS.

   

Title: Federated Continual and Semi-Supervised Learning

Speaker: Sung Ju Hwang, Korea Advanced Institute of Science and Technology (KAIST)

Biography
Sung Ju Hwang is an associate professor in the Graduate School of Artificial Intelligence and School of Computing at KAIST. Prior to joining KAIST, Sung Ju worked as a postdoctoral research associate at Disney Research (2013-2014), and as an assistant professor at UNIST (2014-2017). Sung Ju received his Ph.D. in Computer Science from the University of Texas at Austin, under the supervision of Professor Kristen Grauman.

   

Title: Transfer Learning: Theory, Algorithms, and Open Library

Speaker: Mingsheng Long, Tsinghua University

Biography
Mingsheng Long received the BE degree in electrical engineering and the PhD degree in computer science from Tsinghua University in 2008 and 2014, respectively. He is an associate professor with the School of Software, Tsinghua University. He was a visiting researcher in the Department of Computer Science, UC Berkeley, from 2014 to 2015. He serves as an Area Chair of major machine learning conferences (ICML/NeurIPS/ICLR). His research is dedicated to the theories and algorithms of machine learning, with special interests in transfer learning, deep learning, and learning with scientific knowledge.

   

Title: Rethinking Importance Weighting for Transfer Learning

Speaker: Masashi Sugiyama, The University of Tokyo

Biography
Masashi Sugiyama received his Doctor of Engineering in Computer Science from Tokyo Institute of Technology, Japan, in 2001. After serving as an Assistant Professor and an Associate Professor at Tokyo Institute of Technology, he became a Professor at the University of Tokyo in 2014. Since 2016, he has concurrently served as Director of the RIKEN Center for Advanced Intelligence Project. He coauthored Machine Learning in Non-Stationary Environments (MIT Press, 2012), Density Ratio Estimation in Machine Learning (Cambridge University Press, 2012), Statistical Reinforcement Learning (Chapman and Hall, 2015), Introduction to Statistical Machine Learning (Morgan Kaufmann, 2015), and Machine Learning from Weak Supervision (MIT Press, to appear).


Awards

  Best Paper: Ching Pui Wan and Qifeng Chen. Robust Federated Learning with Attack-Adaptive Aggregation
  Best Student Paper: Mengmeng Tian, Yuxin Chen, Yuan Liu, Zehui Xiong, Cyril Leung and Chunyan Miao. A Contract Theory based Incentive Mechanism for Federated Learning
  Best Application Paper: Cengguang Zhang, Junxue Zhang, Di Chai and Kai Chen. Aegis: A Trusted, Automatic and Accurate Verification Framework for Vertical Federated Learning


Accepted Papers (Oral Presentation)

  1. Alysa Ziying Tan, Han Yu, Lizhen Cui and Qiang Yang. Towards Personalized Federated Learning
  2. Ching Pui Wan and Qifeng Chen. Robust Federated Learning with Attack-Adaptive Aggregation
  3. Mengmeng Tian, Yuxin Chen, Yuan Liu, Zehui Xiong, Cyril Leung and Chunyan Miao. A Contract Theory based Incentive Mechanism for Federated Learning
  4. Lixin Fan, Bowen Li, Hanlin Gu, Yan Kang, Jie Li and Qiang Yang. FedIPR: Ownership Verification for Federated Deep Neural Network Models
  5. Hao Li, Mingkai Huang, Bing Bai, Chang Wang, Kun Bai, Fei Wang, Xinghua Zhu, Binwen Zhao, Ganggang Liu and Chao Qi. A Federated Multi-View Deep Learning Framework for Privacy-Preserving Recommendations
  6. Yan Kang, Yang Liu, Yuezhou Wu, Guoqiang Ma and Qiang Yang. Privacy-preserving Federated Adversarial Domain adaptation over Feature Groups for Interpretability
  7. Cengguang Zhang, Junxue Zhang, Di Chai and Kai Chen. Aegis: A Trusted, Automatic and Accurate Verification Framework for Vertical Federated Learning
  8. Hanlin Gu, Lixin Fan, Bowen Li, Yan Kang, Yuan Yao and Qiang Yang. A novel approach to simultaneously improve privacy, efficiency and reliability of federated DNN learning
  9. Lingjuan Lyu and Chen Chen. A Novel Attribute Reconstruction Attack in Federated Learning
  10. Xiaodian Cheng, Wanhang Lu, Xinyang Huang, Shuihai Hu and Kai Chen. HAFLO: GPU-Based Acceleration for Federated Logistic Regression
  11. Nhan Khanh Le, Yang Liu, Quang Minh Nguyen, Qingchen Liu, Fangzhou Liu, Quanwei Cai and Sandra Hirche. FedXGBoost: Privacy-Preserving XGBoost for Federated Learning

Accepted Papers (Poster Presentation)

  1. Zhenheng Tang, Zhikai Hu, Shaohuai Shi, Yiu-Ming Cheung, Yilun Jin, Zhenghang Ren and Xiaowen Chu. Data Resampling for Federated Learning with Non-IID Labels
  2. Yiqiang Chen, Wang Lu, Jindong Wang and Xin Qin. FedHealth 2: Weighted Federated Transfer Learning via Batch Normalization for Personalized Healthcare
  3. Tsz-Him Cheung, Weihang Dai and Shuhan Li. FedSGC: Federated Simple Graph Convolution for Node Classification
  4. Srikanth Chandar, Pravin Chandran, Raghavendra Bhat and Avinash Chakravarthi. Communication Optimization in Large Scale Federated Learning using Autoencoder Compressed Weight Updates
  5. Kaiyun Li, Xiaojun Chen, Ye Dong, Dakui Wang, Shuai Zeng and Peng Zhang. Efficient Byzantine-Resilient Stochastic Gradient Descent
  6. Zihan Chen, Kai Fong Ernest Chong and Tony Q.S. Quek. Dynamic Attention-based Communication-Efficient Federated Learning
  7. Yang Pei, Renxin Mao, Yang Liu, Chaoran Chen, Shifeng Xu, Feng Qiang and Peng Zhang. Decentralized Federated Graph Neural Networks
  8. Shuang Luo, Didi Zhu, Zexi Li and Chao Wu. Ensemble Federated Adversarial Training with Non-IID data


Call for Papers

Privacy and security are becoming key concerns in our digital age. Companies and organizations are collecting a wealth of data on a daily basis. Data owners have to be very cautious while exploiting the value in their data, since the most useful data for machine learning often tend to be confidential. Increasingly strict data privacy regulations such as the European Union's General Data Protection Regulation (GDPR) bring new legislative challenges to the big data and artificial intelligence (AI) community. Many operations in the big data domain, such as merging user data from various sources to build an AI model, will be considered illegal under the new regulatory framework if they are performed without explicit user authorization.

To explore how the AI research community can adapt to this new regulatory reality, we are organizing this one-day workshop in conjunction with the 30th International Joint Conference on Artificial Intelligence (IJCAI'21). The workshop will focus on machine learning systems that adhere to privacy-preserving and security principles. Technical issues include, but are not limited to, data collection, integration, training and modelling, in both centralized and distributed settings. The workshop intends to provide a forum to discuss open problems and to share the most recent and ground-breaking work on the study and application of secure and privacy-preserving machine learning. Both theoretical and application-based contributions are welcome. The FL-series workshops seek to explore new ideas, with a particular focus on addressing the challenges reflected in the topics below.

We welcome submissions on recent advances in privacy-preserving, secure machine learning and artificial intelligence systems. All accepted papers will be presented during the workshop. At least one author of each accepted paper is expected to present it at the workshop. Topics include, but are not limited to:

Techniques

  1. Adversarial learning, data poisoning, adversarial examples, adversarial robustness, black box attacks
  2. Architecture and privacy-preserving learning protocols
  3. Automated federated learning
  4. Federated learning and distributed privacy-preserving algorithms
  5. Federated transfer learning
  6. Human-in-the-loop for privacy-aware machine learning
  7. Incentive mechanism and game theory
  8. Privacy aware knowledge driven federated learning
  9. Privacy-preserving techniques (secure multi-party computation, homomorphic encryption, secret sharing techniques, differential privacy) for machine learning
  10. Responsible, explainable and interpretable AI
  11. Security for privacy
  12. Trade-off between privacy and efficiency
  13. Heterogeneous computing systems for federated learning

Applications

  1. Approaches to make AI GDPR-compliant
  2. Crowd intelligence
  3. Data value and economics of data federation
  4. Open-source frameworks for distributed learning
  5. Safety and security assessment of AI solutions
  6. Solutions to data security and small-data challenges in industries
  7. Standards of data privacy and security

Position, perspective, and vision papers are also welcome.

More information on previous workshops can be found here.


Submission Instructions

Submissions should be between 4 and 7 pages, following the IJCAI-21 template. Formatting guidelines, including LaTeX styles and a Word template, can be found at: https://www.ijcai.org/authors_kit. We do not accept submissions of work currently under review. Submissions should include author details, as we do not carry out blind review. Authors of high-quality submissions will be invited to submit an extended version to a journal special issue (to be announced later).

Submission link: https://easychair.org/conferences/?conf=flijcai21

For enquiries, please email to flijcai21@easychair.org.


Publications

Selected high-quality submissions will be invited to contribute chapters to an edited book (see the Call for Book Chapters).


Organizing Committee


Program Committee


Organized by

     

In Collaboration with