International Workshop on Secure and Efficient Federated Learning
In Conjunction with ACM AsiaCCS 2026 (FL-AsiaCCS’26)


Submission Due: 22 February, 2026 (23:59:59 AoE)
Notification Due: 22 March, 2026 (23:59:59 AoE)
Final Version Due: 01 April, 2026 (23:59:59 AoE)
Workshop Date: Tuesday, 02 June, 2026
Venue: Chancery Pavilion, Bangalore, India

Plenary Talks


Title: The Ed-Fed Framework: Bridging the Gap from Theoretical Simulations to Real-world Edge Deployment

Speaker: Prabhakar Venkata, Chief Research Scientist, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India

Abstract
Federated Learning and Semantic Communication are technologies that bridge the "digital divide" and are compelling enough to increase worldwide smartphone penetration (currently 60%), particularly when users see benefits in livelihood and healthcare, such as telemedicine and remote robotic surgeries. Federated learning (FL) has emerged as a prominent method for edge devices to cooperatively build a unified prediction model while keeping their sensitive training data local to the device. Despite numerous research frameworks for simulating FL algorithms, they do not enable comprehensive deployment for automatic speech recognition tasks on heterogeneous edge devices. This is where Ed-Fed, a comprehensive and generic FL framework, comes in as a foundation for future practical FL system research. We propose a novel resource-aware client selection algorithm to optimise waiting time in FL settings. We show that our approach can handle straggler devices and dynamically set the training time for the selected devices in a round. Our evaluation shows that the proposed approach significantly reduces waiting time in FL compared to conventional random client selection methods. Since Ed-Fed is model agnostic, we also explore training with cross-modal haptic data in the context of 6G networks.


Title: Byzantine-Robust Federated Learning under Heterogeneity: Personalization and Transfer as a Way Forward?

Speaker: Rafael Pinot, Junior Professor, Department of Mathematics, Sorbonne University, France

Abstract
Federated learning enables collaborative model training across decentralized data sources without requiring raw data sharing, making it especially appealing in privacy-sensitive and regulated domains. However, this distributed paradigm significantly enlarges the attack surface of the training process. In fact, the process can be compromised by Byzantine clients, i.e., participants that behave arbitrarily due to system faults or adversarial manipulation and may actively poison the learning procedure.

In this talk, we explore the fundamental tension between Byzantine robustness and data heterogeneity. We show that heterogeneity across clients not only degrades model performance but also amplifies adversarial influence, weakening the guarantees of many existing robust aggregation methods. This interaction exposes an inherent heterogeneity bottleneck that limits the effectiveness of classical defenses. We then present recent advances that revisit this challenge through the lenses of personalization and transfer learning. By relaxing the assumption of a single global model and enabling selective knowledge sharing among clients, these approaches open new avenues for mitigating adversarial impact while preserving model utility. We highlight key insights, emerging techniques, and open research directions in this evolving area.
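As background for the robust aggregation methods discussed above, the following is a minimal, illustrative sketch (not the speaker's method) of one classical defense, coordinate-wise median aggregation, in Python/NumPy. All names in the snippet are hypothetical:

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by taking the median of each coordinate.

    A classical Byzantine-robust alternative to plain averaging: a minority
    of arbitrarily corrupted updates cannot pull any coordinate far away
    from the values reported by honest clients.
    """
    return np.median(np.stack(updates), axis=0)

# Three honest clients plus one Byzantine client sending a huge update.
honest = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.8, 2.2])]
byzantine = [np.array([1e6, -1e6])]

agg = coordinate_wise_median(honest + byzantine)
# The median stays near the honest values; a plain mean would be
# dominated by the Byzantine update.
```

Note, as the abstract points out, that guarantees of such defenses weaken under data heterogeneity, since honest clients' updates then disagree even without an attacker.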


Title: Photon: Establishing a New SOTA in Decentralized Foundation Model Training

Speaker: Nicholas D. Lane, Professor, University of Cambridge | Flower Labs

Abstract
Current scaling laws indicate that future advances in AI will hinge on access to massive amounts of compute and data. How will we obtain the computing power and data resources required to sustain the AI progress the world has grown accustomed to? I believe all roads lead to federated learning, and approaches of this kind. In the relatively near future, decentralized and federated techniques in machine learning will be how the strongest LLMs (and foundation models more generally) are trained; and in time, how aspirational capabilities like AGI will finally be achieved, in part, due to the adoption of federated methodologies. In this talk, I will describe why the future of AI will be federated, and describe early solutions developed by Flower Labs and CaMLSys that address the underlying technical challenges that the world will face as we shift from a centralized data-center mindset to decentralized alternatives.


Accepted Papers

  1. SLVR: Securely Leveraging Client Validation for Robust Federated Learning
    Jihye Choi (University of Wisconsin - Madison, USA); Sai Rahul Rachuri (Visa Inc, USA); Ke Wang (Nanjing University, China); Somesh Jha (University of Wisconsin - Madison, USA); Yizhen Wang (Visa Inc, USA)
  2. On the Suitability of Federated Learning Algorithms for Intrusion Detection
    Filip Johnsson, Zeeshan Afzal and Mikael Asplund (Linköping University, Sweden)
  3. CQSA: Byzantine-robust Clustered Quantum Secure Aggregation in Federated Learning
    Arnab Nath and Harsh Kasyap (Indian Institute of Technology (BHU), India)
  4. Don't Trust the Hidden Gradients!
    Luke Sperling and Sandeep Kulkarni (Michigan State University, USA)
  5. Safeguarding Knowledge in Federated Transfer Learning with Direction-Aware DP
    Yasas Supeksala Akurudda Liyanage Don (Swinburne University of Technology, Australia); Thilina Ranbaduge (CSIRO, Australia); Ming Ding (Data 61, Australia); Caslon Chua and Jun Zhang (Swinburne University of Technology, Australia)

Call for Papers

Since its inception in 2016, Federated Learning (FL) has become a popular framework for collaboratively training machine learning models across multiple devices while ensuring that user data remains on the devices to enhance privacy. With the exponential growth of data and the increasing diversity of data types, coupled with the limited availability of computational resources, improving the efficiency of FL training is more urgent than ever. This challenge is further amplified by the growing popularity of training and fine-tuning large-scale models, such as Large Language Models (LLMs), which demand significant computational power. In addition, as FL is now deployed in more complex and heterogeneous environments, it is increasingly pressing to strengthen security and ensure data privacy in FL to maintain user trust. This workshop aims to bring together academics and industry experts to discuss future directions of federated learning research, along with practical setups and promising extensions of baseline approaches, with a special focus on enhancing both training efficiency and security in FL. By addressing these critical issues, we aim to pave the way for more sustainable and secure FL implementations that can effectively handle the requirements of modern AI applications.
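For readers new to the area, the FL loop sketched above (local training on-device, with only model updates shared) can be illustrated by a minimal FedAvg-style round in Python/NumPy. This is a toy sketch for intuition, not a reference implementation; all names in it are illustrative:

```python
import numpy as np

def fedavg_round(global_model, client_data, local_step):
    """One round of FedAvg-style training: each client trains locally on
    its own data, then the server averages the resulting models weighted
    by local dataset size. Raw data never leaves the clients; only model
    parameters are communicated."""
    local_models, sizes = [], []
    for data in client_data:
        local_models.append(local_step(global_model.copy(), data))
        sizes.append(len(data))
    weights = np.array(sizes) / sum(sizes)
    return sum(w * m for w, m in zip(weights, local_models))

# Toy "local training": nudge the model halfway toward the client's data mean.
def local_step(model, data):
    return model + 0.5 * (np.mean(data, axis=0) - model)

global_model = np.zeros(2)
clients = [np.array([[1.0, 1.0]] * 4), np.array([[3.0, -1.0]] * 2)]
for _ in range(10):
    global_model = fedavg_round(global_model, clients, local_step)
# The global model converges toward the size-weighted mixture of the
# clients' data means, without the server ever seeing the raw data.
```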

The Workshop on Secure and Efficient Federated Learning aims to provide a platform for discussing the key promises of federated learning and how they can be fulfilled simultaneously. Given the growing concern over data leakage in modern distributed systems and the need to train large-scale models with limited resources, the security and efficiency of federated learning are the central focus of this workshop.

Topics of interest include, but are not limited to:

More information on previous workshops can be found here.


Submission Instructions

We invite submissions of original research papers, case studies, and position papers related to the workshop's themes. Submissions should follow the latest ACM sigconf conference template and will undergo a double-blind review process. All submissions must be appropriately anonymized: author names and affiliations should not appear in the paper, and authors should avoid obvious self-references or blind them appropriately if used. The list of authors cannot be changed (though the order can) after submission unless approved by the Program Chairs. Submissions must not substantially overlap with papers that are published or simultaneously submitted to other venues (including journals, conferences, or workshops). Double submission will result in immediate rejection, and we may report detected violations to other conference chairs and journal editors.

Papers of up to six pages in double-blind ACM format, including all text, figures, and references, can be submitted via EDAS at https://edas.info/newPaper.php?c=34574.

For questions, please contact: asiaccsfl@gmail.com


Workshop Chairs

  • Huaxiong Wang (Nanyang Technological University)
  • Mikael Skoglund (KTH Royal Institute of Technology)
  • Stanislav Kruglik (Technical University of Denmark)

Organizing Committee Members

  • Chengxi Li (KTH Royal Institute of Technology)
  • Rawad Bitar (Technical University of Munich)
  • Han Yu (Nanyang Technological University)

Program Committee

  • Antonia Wachter-Zeh (Technical University of Munich)
  • Christopher G. Brinton (Purdue University)
  • Deniz Gunduz (Imperial College London)
  • Han Mao Kiah (Nanyang Technological University)
  • Han Wu (University of Southampton)
  • Harshan Jagadeesh (IIT Delhi)
  • Jingge Zhu (University of Melbourne)
  • Kok-Seng Wong (VinUniversity)
  • Liang Feng Zhang (ShanghaiTech)
  • Li-Ping Wang (Institute of Information Engineering, Chinese Academy of Sciences)
  • Mang Ye (Wuhan University)
  • Ming Xiao (KTH Royal Institute of Technology)
  • Mingzhe Chen (University of Miami)
  • Nirupam Gupta (University of Copenhagen)
  • Nirvana Meratnia (Eindhoven University of Technology)
  • Pasin Manurangsi (Google Research)
  • Qiongxiu Li (Aalborg University)
  • Ragnar Thobaben (KTH Royal Institute of Technology)
  • Salim El Rouayheb (Rutgers)
  • Samuel Horvath (Mohamed bin Zayed University of Artificial Intelligence)
  • Son Hoang Dau (RMIT)
  • Songze Li (Southeast University)
  • Vadim Safronov (University of Oxford)
  • Willy Susilo (University of Wollongong)
  • Yifei Zhu (Shanghai Jiao Tong University)
  • Ziyue Xu (NVIDIA)
