Title: The Ed-Fed Framework: Bridging the Gap from Theoretical Simulations to Real-world Edge Deployment
Speaker: Prabhakar Venkata, Chief Research Scientist, Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
Abstract
Title: Byzantine-Robust Federated Learning under Heterogeneity: Personalization and Transfer as a Way Forward?
Speaker: Rafael Pinot, Junior Professor, Department of Mathematics, Sorbonne University, France
Abstract
In this talk, we explore the fundamental tension between Byzantine robustness and data heterogeneity. We show that heterogeneity across clients not only degrades model performance but also amplifies adversarial influence, weakening the guarantees of many existing robust aggregation methods. This interaction exposes an inherent heterogeneity bottleneck that limits the effectiveness of classical defenses. We then present recent advances that revisit this challenge through the lenses of personalization and transfer learning. By relaxing the assumption of a single global model and enabling selective knowledge sharing among clients, these approaches open new avenues for mitigating adversarial impact while preserving model utility. We highlight key insights, emerging techniques, and open research directions in this evolving area.
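To make the notion of a robust aggregation rule concrete, here is a minimal sketch of one standard Byzantine-robust aggregator, the coordinate-wise median (chosen as a generic illustration; the abstract does not name the specific methods covered in the talk). Unlike plain averaging, the median of each coordinate cannot be dragged arbitrarily far by a minority of corrupted client updates:

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client model updates by taking the median of each coordinate.

    A classic Byzantine-robust alternative to plain averaging: as long as
    a majority of clients are honest, no coordinate of the aggregate can
    be pushed outside the range of the honest clients' values.
    """
    return np.median(np.stack(updates), axis=0)

# Three honest clients roughly agree on the update; one Byzantine client
# sends an arbitrarily large vector.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = [np.array([1e6, -1e6])]

mean_agg = np.mean(np.stack(honest + byzantine), axis=0)  # wrecked by the outlier
median_agg = coordinate_wise_median(honest + byzantine)   # stays near honest values
```

Note that this robustness guarantee is exactly what degrades under heterogeneity: when honest clients' updates legitimately disagree, the "honest range" widens and an attacker gains more room to hide, which is the bottleneck the talk examines.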
Title: Photon: Establishing a New SOTA in Decentralized Foundation Model Training
Speaker: Nicholas D. Lane, Professor, University of Cambridge | Flower Labs
Abstract
Since its inception in 2016, Federated Learning (FL) has become a popular framework for collaboratively training machine learning models across multiple devices while ensuring that user data remains on the devices to enhance privacy. With the exponential growth of data and the increasing diversity of data types, coupled with the limited availability of computational resources, improving the efficiency of training processes in FL is more urgent than ever. This challenge is further amplified by the growing popularity of training and fine-tuning large-scale models, such as Large Language Models (LLMs), which demand significant computational power. In addition, as FL is now deployed in more complex and heterogeneous environments, strengthening security and ensuring data privacy in FL is increasingly pressing for maintaining user trust. This workshop aims to bring together academics and industry experts to discuss the future directions of federated learning research, along with practical setups and promising extensions of baseline approaches, with a special focus on how to enhance both training efficiency and security in FL. By addressing these critical issues, we aim to pave the way for more sustainable and secure FL implementations that can effectively handle the requirements of modern AI applications.
The Workshop on Secure and Efficient Federated Learning aims to provide a platform for discussing the key promises of federated learning and how they can be addressed simultaneously. Given the growing concern over data leakage in modern distributed systems and the need to train large-scale models with limited resources, the security and efficiency of federated learning are the central focus of this workshop.
Topics of interest include, but are not limited to:
More information on previous workshops can be found here.
We invite submissions of original research papers, case studies, and position papers related to the workshop's themes. Submissions should follow the latest ACM Sigconf style conference format and will undergo a double-blind review process. All submissions should be anonymized appropriately. Author names and affiliations should not appear in the paper. The authors should avoid obvious self-references and should appropriately blind them if used. The list of authors cannot be changed (but the order can be) after the submission is made unless approved by the Program Chairs. Submissions must not substantially overlap with papers that are published or simultaneously submitted to other venues (including journals or conferences/workshops). Double-submission will result in immediate rejection. We may report detected violations to other conference chairs and journal editors.
Papers in double-blind ACM format of up to six pages, including all text, figures, and references, can be submitted via EDAS at https://edas.info/newPaper.php?c=34574.
For questions, please contact: asiaccsfl@gmail.com
Huaxiong Wang (NTU)
Mikael Skoglund (KTH)
Stanislav Kruglik (DTU)
Chengxi Li (KTH)
Rawad Bitar (TUM)
Han Yu (NTU)