Time (UTC+8) | Activity
09:00 – 09:10 | Opening Remarks
09:10 – 09:30 | Launch of the 2023 Global Federated Learning Research and Application Report
09:30 – 10:00 | Invited Talk 1: Privacy Attacks on Large Language Models, by Yangqiu Song
10:00 – 10:30 | Invited Talk 2: Evaluating Large-Scale Learning Systems, by Virginia Smith
10:30 – 11:00 | Coffee Break
11:00 – 11:30 | Invited Talk 3: Trustworthy Federated Learning with Guarantees, by Bo Li
11:30 – 12:30 | Oral Presentation Session 1 (10 min per talk, including Q&A)
12:30 – 14:00 | Lunch Break
14:00 – 14:30 | Invited Talk 4: Federated Learning in Healthcare: Overcoming Data Heterogeneity Challenges, by Xiaoxiao Li
14:30 – 15:30 | Oral Presentation Session 2 (10 min per talk, including Q&A)
15:30 – 16:00 | Coffee Break
16:00 – 16:30 | Invited Talk 5: Personalized Federated Learning, by Guodong Long
16:30 – 17:40 | Oral Presentation Session 3 (10 min per talk, including Q&A)
17:40 – 17:45 | Award Ceremony & Closing Remarks
Title: Privacy Attacks on Large Language Models
Speaker: Yangqiu Song, Associate Professor, Hong Kong University of Science and Technology (HKUST), Hong Kong

Title: Evaluating Large-Scale Learning Systems
Speaker: Virginia Smith, Assistant Professor, Carnegie Mellon University (CMU), USA

Title: Trustworthy Federated Learning with Guarantees
Speaker: Bo Li, Associate Professor, University of Illinois at Urbana–Champaign (UIUC), USA

Title: Federated Learning in Healthcare: Overcoming Data Heterogeneity Challenges
Speaker: Xiaoxiao Li, Assistant Professor, University of British Columbia (UBC), Canada

Title: Personalized Federated Learning
Speaker: Guodong Long, Associate Professor, University of Technology Sydney (UTS), Australia
Federated Learning (FL) is a learning paradigm that enables collaborative training of machine learning models in which data reside and remain in distributed data silos throughout the training process. FL is a necessary framework for AI to thrive in today's privacy-focused regulatory environment. Because FL allows self-interested data owners to collaboratively train machine learning models, end-users can become co-creators of AI solutions. To enable open collaboration among FL co-creators and promote the adoption of the federated learning paradigm, we envision that communities of data owners must self-organize during FL model training based on diverse notions of trustworthy federated learning, including, but not limited to, security and robustness, privacy preservation, interpretability, fairness, verifiability, transparency, auditability, incremental aggregation of shared learned models, and healthy market mechanisms that enable open, dynamic collaboration among data owners under the FL paradigm.

This workshop aims to bring together academic researchers and industry practitioners to address open issues in this interdisciplinary research area. For industry participants, we intend to create a forum for communicating problems that are practically relevant. For academic participants, we hope to make it easier to become productive in this area. The workshop will focus on the theme of building trustworthiness into federated learning to enable open, dynamic collaboration among data owners under the FL paradigm, and to make FL solutions readily applicable to real-world problems.
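As a purely illustrative aside (not part of the workshop materials), the sketch below shows one minimal way the paradigm described above can be realized: a FedAvg-style loop in which each data owner trains locally on its own silo and only model parameters are exchanged with the aggregator. The helper names (make_silo, local_update) and the toy linear-regression task are hypothetical.

```python
# Illustrative sketch only: toy federated averaging over three simulated data
# silos. Raw data never leaves a silo; only the model parameters (w, b) are
# exchanged with the aggregator. The setup below is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def make_silo(n):
    """Create one data owner's private dataset from y = 3x + 1 + noise."""
    x = rng.normal(size=(n, 1))
    y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=(n, 1))
    return x, y

silos = [make_silo(n) for n in (40, 60, 100)]  # silos of different sizes

def local_update(w, b, x, y, lr=0.1, epochs=5):
    """One round of local gradient descent on a single silo's private data."""
    for _ in range(epochs):
        err = w * x + b - y                 # residuals on local data only
        w -= lr * float((x * err).mean())   # gradient step for the weight
        b -= lr * float(err.mean())         # gradient step for the bias
    return w, b

# Federated averaging: the aggregator combines parameters, weighted by silo size.
w, b = 0.0, 0.0
for _ in range(20):
    updates = [local_update(w, b, x, y) for x, y in silos]
    sizes = [len(x) for x, _ in silos]
    total = sum(sizes)
    w = sum(u_w * n for (u_w, _), n in zip(updates, sizes)) / total
    b = sum(u_b * n for (_, u_b), n in zip(updates, sizes)) / total

print(f"Global model after 20 rounds: w = {w:.2f}, b = {b:.2f}")  # roughly w = 3, b = 1
```

The size-weighted average reflects the intuition that larger silos contribute proportionally more to the global model; real deployments would layer on the security, privacy, robustness, and incentive mechanisms discussed above.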
Topics of interest include, but are not limited to:
Techniques:
Adversarial robustness and black-box attacks
Privacy-preserving techniques (encryption, secret sharing, differential privacy) for machine learning
Applications:
More information on previous workshops can be found here.
There are two options for post-workshop publication. Authors who wish to submit an extended version of their papers elsewhere can opt out of both options.
Selected high-quality papers will be invited for publication as chapters in an edited book in the Lecture Notes in Artificial Intelligence (LNAI) series under Springer. More information will be provided at a later time.

Alternatively, selected high-quality papers will be invited to submit a journal version of the paper to the Journal of Computer Science & Technology (JCST), Springer. More information will be provided at a later time.
Each submission may be up to 7 pages of content plus up to 2 additional pages of references and acknowledgements. Submitted papers must be written in English and in PDF format according to the IJCAI'23 template. All submissions will undergo single-blind peer review and be evaluated for novelty, technical quality, and impact; submissions may contain author details. Submissions will be accepted via the EasyChair submission website.
Per IJCAI'23 requirements, at least one author of each accepted paper must travel to the IJCAI venue in person. In addition, submitting the same paper to more than one IJCAI workshop is forbidden.
Easychair submission site: https://easychair.org/conferences/?conf=fl-ijcai-23
For enquiries, please email to: fl-ijcai-23@easychair.org