International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS’23)


Final Submission Deadline: October 02, 2023 (23:59:59 AoE)
Notification Due: October 27, 2023
Workshop Date: Saturday, December 16, 2023
Venue: Hall D-2, New Orleans Ernest N. Morial Convention Center, New Orleans, LA, USA

Workshop Program

  
Time (New Orleans) Activity
  
08:25 – 08:30 Opening Remarks
08:30 – 09:10 Oral Presentation Session 1 (10 min per talk, including Q&A)
  1. Chen Qiu, Xingyu Li, Chaithanya Kumar Mummadi, Madan Ganesh, Zhenzhen Li, Lu Peng & Wan-Yi Lin. Text-driven Prompt Generation for Vision-Language Models in Federated Learning
  2. Shaunak Halbe, James Smith, Junjiao Tian & Zsolt Kira. HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning
  3. Jianwei Li, Sheng Liu & Qi Lei. Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning
  4. Wenda Chu, Chulin Xie, Boxin Wang, Linyi Li, Lang Yin, Arash Nourian, Han Zhao & Bo Li. FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data
09:10 – 09:35 Invited Talk 1: Cho-Jui Hsieh, Federated Learning by Dataset Distillation
09:35 – 10:00 Invited Talk 2: Zheng Xu, Federated Learning with Public and Private Data: From Small Models to Large, and Back
10:00 – 10:30 Coffee Break
10:30 – 10:55 Invited Talk 3: Lingjuan Lyu, When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
10:55 – 11:15 Oral Presentation Session 2 (10 min per talk, including Q&A)
  1. Gihun Lee, Minchan Jeong, SangMook Kim, Jaehoon Oh & Se-Young Yun. FedSoL: Bridging Global Alignment and Local Generality in Federated Learning
  2. Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, Hugh McMahan & Vinith Suriyakumar. One-shot Empirical Privacy Estimation for Federated Learning
11:15 – 12:00 Panel Discussion
12:00 – 13:30 Lunch Break
13:30 – 13:55 Invited Talk 4: Jayashree Kalpathy-Cramer, Federated Learning in Medical Imaging
13:55 – 14:20 Invited Talk 5: Aiden Chaoyang He, Decentralized LLM Agent Cloud Platform
14:20 – 15:00 Oral Presentation Session 3 (10 min per talk, including Q&A)
  1. Best Paper Award: Liam Collins, Shanshan Wu, Sewoong Oh & Khe Chai Sim. Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
  2. Best Paper Award: Sara Babakniya, Ahmed Elkordy, Yahya Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa EL-Khamy & Salman Avestimehr. SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
  3. Justin Kang, Kannan Ramchandran & Ramtin Pedarsani. The Fair Value of Data Under Heterogeneous Privacy Constraints in Federated Learning
  4. Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Guoyin Wang & Yiran Chen. Towards Building the FederatedGPT: Federated Instruction Tuning
15:00 – 15:30 Coffee Break
15:30 – 15:55 Invited Talk 6: Peter Richtarik, On the 5th Generation of Local Training Methods in Federated Learning
15:55 – 16:15 Oral Presentation Session 4 (10 min per talk, including Q&A)
  1. Sheikh Shams Azam, Martin Pelikan, Vitaly Feldman, Kunal Talwar, Jan Silovsky & Tatiana Likhomanenko. Federated Learning for Speech Recognition: Revisiting Current Trends Towards Large-Scale ASR
  2. Ashok Makkuva, Marco Bondaschi, Thijs Vogels, Martin Jaggi, Hyeji Kim & Michael Gastpar. LASER: Linear Compression in Wireless Distributed Optimization
16:15 – 16:20 Best Paper Award Ceremony
16:20 – 17:30 Poster Session (including all accepted papers)
  1. Chen Qiu, Xingyu Li, Chaithanya Kumar Mummadi, Madan Ganesh, Zhenzhen Li, Lu Peng & Wan-Yi Lin. Text-driven Prompt Generation for Vision-Language Models in Federated Learning
  2. Shaunak Halbe, James Smith, Junjiao Tian & Zsolt Kira. HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning
  3. Jianwei Li, Sheng Liu & Qi Lei. Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning
  4. Wenda Chu, Chulin Xie, Boxin Wang, Linyi Li, Lang Yin, Arash Nourian, Han Zhao & Bo Li. FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data
  5. Gihun Lee, Minchan Jeong, SangMook Kim, Jaehoon Oh & Se-Young Yun. FedSoL: Bridging Global Alignment and Local Generality in Federated Learning
  6. Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, Hugh McMahan & Vinith Suriyakumar. One-shot Empirical Privacy Estimation for Federated Learning
  7. Liam Collins, Shanshan Wu, Sewoong Oh & Khe Chai Sim. Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
  8. Sara Babakniya, Ahmed Elkordy, Yahya Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa EL-Khamy & Salman Avestimehr. SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
  9. Justin Kang, Kannan Ramchandran & Ramtin Pedarsani. The Fair Value of Data Under Heterogeneous Privacy Constraints in Federated Learning
  10. Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Guoyin Wang & Yiran Chen. Towards Building the FederatedGPT: Federated Instruction Tuning
  11. Laurent Condat, Ivan Agarský, Grigory Malinovsky & Peter Richtárik. TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation
  12. Ashok Makkuva, Marco Bondaschi, Thijs Vogels, Martin Jaggi, Hyeji Kim & Michael Gastpar. LASER: Linear Compression in Wireless Distributed Optimization
  13. Hanmin Li, Avetik Karagulyan & Peter Richtárik. MARINA Meets Matrix Stepsizes: Variance Reduced Distributed Non-Convex Optimization
  14. Alekh Agarwal, Hugh McMahan & Zheng Xu. An Empirical Evaluation of Federated Contextual Bandit Algorithms
  15. Marco Bornstein, Amrit Bedi, Anit Kumar Sahu, Furqan Khan & Furong Huang. RealFM: A Realistic Mechanism to Incentivize Data Contribution and Device Participation
  16. Lekang Jiang, Filip Svoboda & Nicholas Lane. FDAPT: Federated Domain-adaptive Pre-training for Language Models
  17. Jike Zhong, Hong-You Chen & Wei-Lun Chao. Making Batch Normalization Great in Federated Deep Learning
  18. Christopher Choquette-Choo, Krishnamurthy Dvijotham, Krishna Pillutla, Arun Ganesh, Thomas Steinke & Abhradeep Guha Thakurta. Correlated Noise Provably Beats Independent Noise for Differentially Private Learning
  19. Woojin Chung, Hyowon Cho, James Thorne & Se-Young Yun. Parameter Averaging Laws for Multitask Language Models
  20. Wanru Zhao, Yihong Chen, Royson Lee, Xinchi Qiu, Yan Gao, Hongxiang Fan & Nicholas Lane. Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
  21. Pol G. Recasens, Jordi Torres, Josep Berral, Søren Hauberg & Pablo Moreno-Muñoz. Beyond Parameter Averaging in Model Aggregation
  22. Xuechen Zhang, Mingchen Li, Xiangyu Chang, Jiasi Chen, Amit Roy-Chowdhury, Ananda Suresh & Samet Oymak. Augmenting Federated Learning with Pretrained Transformers
  23. Heng Zhu & Arya Mazumdar. Consensus Optimization at Representation: Improving Personalized Federated Learning via Data-Centric Regularization
  24. Liang Zhang, Kiran Thekumparampil, Sewoong Oh & Niao He. DPZero: Dimension-Independent and Differentially Private Zeroth-Order Optimization
  25. Xidong Wu, Wan-Yi Lin, Devin Willmott, Filipe Condessa, Yufei Huang, Zhenzhen Li & Madan Ganesh. Leveraging Foundation Models to Improve Lightweight Clients in Federated Learning
  26. Weizhao Jin, Yuhang Yao, Shanshan Han, Carlee Joe-Wong, Srivatsan Ravi, Salman Avestimehr & Chaoyang He. FedHE: An Efficient Homomorphic-Encryption-Based Privacy-Preserving Federated Learning System
  27. Charles-Étienne Joseph, Benjamin Thérien, Abhinav Moudgil, Boris Knyazev & Eugene Belilovsky. Learning Optimizers for Local SGD
  28. Zhuohang Li, Andrew Lowy, Jing Liu, Toshiaki Koike-Akino, Bradley Malin, Kieran Parsons & Ye Wang. Exploring User-level Gradient Inversion with a Diffusion Prior
  29. Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher Choquette-Choo & Zheng Xu. User Inference Attacks on Large Language Models
  30. Connor Mclaughlin & Lili Su. FedLDA: Personalized Federated Learning Through Collaborative Linear Discriminant Analysis
  31. Sheikh Shams Azam, Martin Pelikan, Vitaly Feldman, Kunal Talwar, Jan Silovsky & Tatiana Likhomanenko. Federated Learning for Speech Recognition: Revisiting Current Trends Towards Large-Scale ASR
  32. Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, Matt Barnes & Gauri Joshi. Heterogeneous LoRA for Federated Fine-tuning of On-device Foundation Models
  33. Xi Li, Songhe Wang, Chen Wu, Hao Zhou & Jiaqi Wang. Backdoor Threats from Compromised Foundation Models to Federated Learning
  34. Maria Hartmann, Grégoire Danoy, Mohammed Alswaitti & Pascal Bouvry. MOFL/D: A Federated Multi-objective Learning Framework with Decomposition
  35. Georgios Papadopoulos, Yash Satsangi, Shaltiel Eloul & Marco Pistoia. Absolute Variation Distance: an Inversion Attack Evaluation Metric for Federated Learning
  36. Eros Fanì, Raffaello Camoriano, Barbara Caputo & Marco Ciccone. Fed3R: Recursive Ridge Regression for Federated Learning with strong pre-trained models
  37. Seongyoon Kim, Gihun Lee, Jaehoon Oh & Se-Young Yun. FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning
  38. Amrith Setlur, Vitaly Feldman & Kunal Talwar. Private and Personalized Histogram Estimation in a Federated Setting
17:30 End of Workshop
   

Invited Talks

   

Title: Decentralized LLM Agent Cloud Platform

Speaker: Aiden Chaoyang He, Co-founder and CTO, FedML, Inc., USA

Biography
Aiden Chaoyang He is a Co-founder of FedML, Inc. (which raised over USD 13M in its seed round), a Silicon Valley-based company building machine learning infrastructure to train, serve, and deploy AI models easily, economically, and securely, with holistic support for high-performance ML libraries, user-friendly AIOps, and a well-managed distributed GPU cloud. Previously, he worked closely with researchers and engineers at Google, Facebook, and Amazon. He was an R&D Team Manager and Principal Software Engineer at Tencent (2014-2018), a Team Leader and Senior Software Engineer at Baidu (2012-2014), and a Software Engineer at Huawei (2011-2012). He has received a number of awards in academia and industry, including the Amazon ML Fellowship (2021-2022), the Qualcomm Innovation Fellowship (2021-2022), the Tencent Outstanding Staff Award (2015-2016), the WeChat Special Award for Innovation (2016), the Baidu LBS Group Star Awards (2013), and the Huawei Golden Network Award (2012). His research focuses on machine learning, distributed systems, blockchain, and edge/cloud computing, primarily distributed/federated machine learning and efficient distributed training of large foundation models (LLMs, Vision Transformers). On these topics, he has published papers at ICML, NeurIPS, CVPR, ICLR, AAAI, MLSys, and VLDB, among others. Beyond pure research, he has experience with Internet-scale products and businesses such as Tencent Cloud, Tencent WeChat Automotive / AI in Car, Tencent Games, Tencent Maps, Baidu Maps, and Huawei Smartphone. He received his Ph.D. in Computer Science from the University of Southern California, Los Angeles, USA, advised by Professor Salman Avestimehr (USC), Professor Mahdi Soltanolkotabi (USC), Professor Murali Annavaram (USC), and Professor Tong Zhang (HKUST).

   

Title: Federated Learning by Dataset Distillation

Speaker: Cho-Jui Hsieh, Associate Professor, University of California, Los Angeles, USA

Biography
Cho-Jui Hsieh is an associate professor of Computer Science at UCLA. He received his Ph.D. from UT Austin, where he worked with Prof. Inderjit Dhillon, and his master's degree from National Taiwan University under the supervision of Prof. Chih-Jen Lin. Before joining UCLA, he worked for three years as an Assistant Professor in Computer Science and Statistics at UC Davis and was a visiting scholar at Google beginning in summer 2018. He is interested in developing new algorithms and optimization techniques for large-scale machine learning problems. Currently, he is working on developing new machine learning models as well as improving the model size, training speed, prediction speed, and robustness of popular (deep learning) models.

   

Title: Federated Learning in Medical Imaging

Speaker: Jayashree Kalpathy-Cramer, Professor, University of Colorado, Anschutz, USA

Biography
Jayashree Kalpathy-Cramer, PhD, has been named chief of the new Division of Artificial Medical Intelligence in Ophthalmology at the University of Colorado (CU) School of Medicine. In her new role, Kalpathy-Cramer will translate novel artificial intelligence (AI) methods into effective patient care practices at the Sue Anschutz-Rodgers Eye Center. Kalpathy-Cramer is currently director of the QTIM lab and the Center for Machine Learning at the Athinoula A. Martinos Center for Biomedical Imaging and an associate professor of radiology at Harvard Medical School. Her research lies at the intersection of machine learning, statistics, informatics, and image acquisition and analysis, with a goal of clinical translation. She is an electrical engineer by training, having received a B.Tech in EE from IIT Bombay and a PhD in EE from Rensselaer Polytechnic Institute. Her current projects include quantitative imaging in cancer, image analysis and decision support for retinal imaging, cloud computing, mathematical modeling of drug delivery in cancer, crowdsourcing and challenges, algorithm development, and deep learning.

   

Title: When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions

Speaker: Lingjuan Lyu, Head of Privacy and Security, Sony AI, Japan

Biography
Lingjuan Lyu is the Head of the Privacy and Security team at Sony AI. Her current research interest is trustworthy AI. She has published over 100 papers in top conferences and journals, including NeurIPS, ICML, ICLR, and Nature. Her papers have won a long list of best or outstanding paper awards at top venues, including ICML, ACL, CIKM, and IEEE venues. She was also a winner of the IBM Ph.D. Fellowship Worldwide.

   

Title: On the 5th Generation of Local Training Methods in Federated Learning

Speaker: Peter Richtárik, Professor, King Abdullah University of Science and Technology, Saudi Arabia

Biography
Peter Richtárik is a professor of Computer Science at King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia, where he leads the Optimization and Machine Learning Lab. At KAUST, he has a courtesy affiliation with the Applied Mathematics and Computational Sciences program and the Statistics program, and is a member of the Visual Computing Center and the Extreme Computing Research Center. Prof. Richtárik is a founding member and a Fellow of the Alan Turing Institute (the UK National Institute for Data Science and Artificial Intelligence), and an EPSRC Fellow in Mathematical Sciences. During 2017-2019, he was a Visiting Professor at the Moscow Institute of Physics and Technology. Prior to joining KAUST, he was an Associate Professor of Mathematics at the University of Edinburgh, and held postdoctoral and visiting positions at Université Catholique de Louvain, Belgium, and the University of California, Berkeley, USA, respectively. He received his PhD in 2007 from Cornell University, USA.

   

Title: Federated Learning with Public and Private Data: From Small Models to Large, and Back

Speaker: Zheng Xu, Senior Research Scientist, Google Research, USA

Biography
Zheng Xu is a research scientist at Google working on federated learning. He received his PhD in optimization and machine learning from the University of Maryland, College Park.


Awards

Best Paper Awards:
  • Liam Collins, Shanshan Wu, Sewoong Oh & Khe Chai Sim. Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
  • Sara Babakniya, Ahmed Elkordy, Yahya Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa EL-Khamy & Salman Avestimehr. SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models


Accepted Papers

  1. Chen Qiu, Xingyu Li, Chaithanya Kumar Mummadi, Madan Ganesh, Zhenzhen Li, Lu Peng & Wan-Yi Lin. Text-driven Prompt Generation for Vision-Language Models in Federated Learning
  2. Shaunak Halbe, James Smith, Junjiao Tian & Zsolt Kira. HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning
  3. Jianwei Li, Sheng Liu & Qi Lei. Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning
  4. Wenda Chu, Chulin Xie, Boxin Wang, Linyi Li, Lang Yin, Arash Nourian, Han Zhao & Bo Li. FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data
  5. Gihun Lee, Minchan Jeong, SangMook Kim, Jaehoon Oh & Se-Young Yun. FedSoL: Bridging Global Alignment and Local Generality in Federated Learning
  6. Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, Hugh McMahan & Vinith Suriyakumar. One-shot Empirical Privacy Estimation for Federated Learning
  7. Liam Collins, Shanshan Wu, Sewoong Oh & Khe Chai Sim. Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
  8. Sara Babakniya, Ahmed Elkordy, Yahya Ezzeldin, Qingfeng Liu, Kee-Bong Song, Mostafa EL-Khamy & Salman Avestimehr. SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
  9. Justin Kang, Kannan Ramchandran & Ramtin Pedarsani. The Fair Value of Data Under Heterogeneous Privacy Constraints in Federated Learning
  10. Jianyi Zhang, Saeed Vahidian, Martin Kuo, Chunyuan Li, Ruiyi Zhang, Tong Yu, Guoyin Wang & Yiran Chen. Towards Building the FederatedGPT: Federated Instruction Tuning
  11. Laurent Condat, Ivan Agarský, Grigory Malinovsky & Peter Richtárik. TAMUNA: Doubly Accelerated Federated Learning with Local Training, Compression, and Partial Participation
  12. Ashok Makkuva, Marco Bondaschi, Thijs Vogels, Martin Jaggi, Hyeji Kim & Michael Gastpar. LASER: Linear Compression in Wireless Distributed Optimization
  13. Hanmin Li, Avetik Karagulyan & Peter Richtárik. MARINA Meets Matrix Stepsizes: Variance Reduced Distributed Non-Convex Optimization
  14. Alekh Agarwal, Hugh McMahan & Zheng Xu. An Empirical Evaluation of Federated Contextual Bandit Algorithms
  15. Marco Bornstein, Amrit Bedi, Anit Kumar Sahu, Furqan Khan & Furong Huang. RealFM: A Realistic Mechanism to Incentivize Data Contribution and Device Participation
  16. Lekang Jiang, Filip Svoboda & Nicholas Lane. FDAPT: Federated Domain-adaptive Pre-training for Language Models
  17. Jike Zhong, Hong-You Chen & Wei-Lun Chao. Making Batch Normalization Great in Federated Deep Learning
  18. Christopher Choquette-Choo, Krishnamurthy Dvijotham, Krishna Pillutla, Arun Ganesh, Thomas Steinke & Abhradeep Guha Thakurta. Correlated Noise Provably Beats Independent Noise for Differentially Private Learning
  19. Woojin Chung, Hyowon Cho, James Thorne & Se-Young Yun. Parameter Averaging Laws for Multitask Language Models
  20. Wanru Zhao, Yihong Chen, Royson Lee, Xinchi Qiu, Yan Gao, Hongxiang Fan & Nicholas Lane. Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages
  21. Pol G. Recasens, Jordi Torres, Josep Berral, Søren Hauberg & Pablo Moreno-Muñoz. Beyond Parameter Averaging in Model Aggregation
  22. Xuechen Zhang, Mingchen Li, Xiangyu Chang, Jiasi Chen, Amit Roy-Chowdhury, Ananda Suresh & Samet Oymak. Augmenting Federated Learning with Pretrained Transformers
  23. Heng Zhu & Arya Mazumdar. Consensus Optimization at Representation: Improving Personalized Federated Learning via Data-Centric Regularization
  24. Liang Zhang, Kiran Thekumparampil, Sewoong Oh & Niao He. DPZero: Dimension-Independent and Differentially Private Zeroth-Order Optimization
  25. Xidong Wu, Wan-Yi Lin, Devin Willmott, Filipe Condessa, Yufei Huang, Zhenzhen Li & Madan Ganesh. Leveraging Foundation Models to Improve Lightweight Clients in Federated Learning
  26. Weizhao Jin, Yuhang Yao, Shanshan Han, Carlee Joe-Wong, Srivatsan Ravi, Salman Avestimehr & Chaoyang He. FedHE: An Efficient Homomorphic-Encryption-Based Privacy-Preserving Federated Learning System
  27. Charles-Étienne Joseph, Benjamin Thérien, Abhinav Moudgil, Boris Knyazev & Eugene Belilovsky. Learning Optimizers for Local SGD
  28. Zhuohang Li, Andrew Lowy, Jing Liu, Toshiaki Koike-Akino, Bradley Malin, Kieran Parsons & Ye Wang. Exploring User-level Gradient Inversion with a Diffusion Prior
  29. Nikhil Kandpal, Krishna Pillutla, Alina Oprea, Peter Kairouz, Christopher Choquette-Choo & Zheng Xu. User Inference Attacks on Large Language Models
  30. Connor Mclaughlin & Lili Su. FedLDA: Personalized Federated Learning Through Collaborative Linear Discriminant Analysis
  31. Sheikh Shams Azam, Martin Pelikan, Vitaly Feldman, Kunal Talwar, Jan Silovsky & Tatiana Likhomanenko. Federated Learning for Speech Recognition: Revisiting Current Trends Towards Large-Scale ASR
  32. Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, Matt Barnes & Gauri Joshi. Heterogeneous LoRA for Federated Fine-tuning of On-device Foundation Models
  33. Xi Li, Songhe Wang, Chen Wu, Hao Zhou & Jiaqi Wang. Backdoor Threats from Compromised Foundation Models to Federated Learning
  34. Maria Hartmann, Grégoire Danoy, Mohammed Alswaitti & Pascal Bouvry. MOFL/D: A Federated Multi-objective Learning Framework with Decomposition
  35. Georgios Papadopoulos, Yash Satsangi, Shaltiel Eloul & Marco Pistoia. Absolute Variation Distance: an Inversion Attack Evaluation Metric for Federated Learning
  36. Eros Fanì, Raffaello Camoriano, Barbara Caputo & Marco Ciccone. Fed3R: Recursive Ridge Regression for Federated Learning with strong pre-trained models
  37. Seongyoon Kim, Gihun Lee, Jaehoon Oh & Se-Young Yun. FedFN: Feature Normalization for Alleviating Data Heterogeneity Problem in Federated Learning
  38. Amrith Setlur, Vitaly Feldman & Kunal Talwar. Private and Personalized Histogram Estimation in a Federated Setting

Registration Instructions

All workshop attendees need to register for NeurIPS workshops at https://neurips.cc/Register/view-registration. For pricing information, please visit https://neurips.cc/Conferences/2023/Pricing.


Call for Papers

Training machine learning models in a centralized fashion often faces significant challenges due to regulatory and privacy concerns in real-world use cases. These challenges include training data that is distributed across many sources, the computational resources required to create and maintain a central data repository, and regulatory guidelines (e.g., GDPR, HIPAA) that restrict the sharing of sensitive data. Federated learning (FL) is a new paradigm in machine learning that can mitigate these challenges by training a global model over distributed data, without the need for data sharing. The extensive application of machine learning to analyze and draw insight from real-world, distributed, and sensitive data necessitates familiarization with and adoption of this relevant and timely topic among the scientific community.

Recently, foundation models such as ChatGPT have revolutionized the field of machine learning by demonstrating remarkable capabilities across a wide range of tasks. These models have democratized machine learning development, empowering developers to focus on tuning a foundation model to their specific task rather than building complex models from scratch. This paradigm shift has the potential to remove the barriers to entry for machine learning development and to enable a broader community of developers to create high-quality models.

However, as the model development process itself becomes increasingly accessible, new bottlenecks emerge: computational power and data access. While foundation models have the potential to perform exceptionally well across a variety of tasks, they pose two challenges: 1) training them requires vast amounts of training data and compute power, and 2) fine-tuning them to specific applications requires specialized and potentially sensitive data. Acquiring and centralizing datasets for both training and fine-tuning therefore poses several challenges, including data privacy concerns, legal constraints (such as GDPR and HIPAA), and computational burdens.

FL is a promising solution to address these challenges in the era of foundation models. The fundamental goal of federated learning is to train models collaboratively across decentralized devices or data silos while keeping the data securely on those devices or within specific organizations. By adopting federated learning approaches, we can leverage the vast amounts of distributed data and compute available across different sources while respecting privacy regulations and data ownership.
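
To make the federated training pattern described above concrete, here is a minimal Python sketch of one FedAvg-style communication round. It is purely illustrative, not the method of any accepted paper or of the workshop organizers; the toy least-squares objective, the synthetic client data, and the helper names local_sgd and fedavg_round are hypothetical stand-ins.

```python
import numpy as np

def local_sgd(global_weights, data, lr=0.1, epochs=1):
    """Hypothetical local update: a few gradient steps on one client's
    private data, starting from the current global model."""
    w = global_weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient as a stand-in
        w -= lr * grad
    return w

def fedavg_round(global_weights, client_datasets):
    """One communication round: each client trains locally, then the server
    averages the returned parameters, weighted by local dataset size.
    Only parameters are exchanged; raw data never leaves the clients."""
    updates, sizes = [], []
    for data in client_datasets:
        updates.append(local_sgd(global_weights, data))
        sizes.append(len(data[1]))
    total = float(sum(sizes))
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy usage: three clients, each holding its own private regression data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
global_w = np.zeros(5)
for _ in range(10):
    global_w = fedavg_round(global_w, clients)
```

The point of the sketch is the communication pattern: clients send back model parameters rather than data, and the server only aggregates them.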

The rise of foundation models amplifies the importance and relevance of FL as a crucial research direction. With foundation models becoming the norm in machine learning development, the focus shifts from model architecture design to tackling the issues surrounding privacy-preserving and distributed learning. Advancements in FL methods have the potential to unlock the full potential of foundation models, enabling efficient and scalable training while safeguarding sensitive data.

With this in mind, we invite original research contributions, position papers, and work-in-progress reports on various aspects of federated learning in the age of foundation models. Since the emergence of foundation models has been a relatively recent phenomenon, their full impact on federated learning has not yet been well explored or understood. We hope to provide a platform to facilitate interaction among students, scholars, and industry professionals from around the world to discuss the latest advancements, share insights, and identify future directions in this exciting field. The workshop topics include but are not limited to the following.
Theory and algorithmic foundations:
  • Impact of heterogeneity in FL of large models
  • Multi-stage model training (e.g., base model + fine tuning)
  • Optimization advances in FL (e.g., beyond first-order and local methods)
  • Prompt tuning in federated settings
  • Self-supervised learning in federated settings
Leveraging foundation models to improve federated learning:
  • Adaptive aggregation strategies for FL in heterogeneous environments
  • Foundation model enhanced FL knowledge distillation
  • Overcoming data interoperability challenges using foundation models
  • Personalization of FL with foundation models
Federated learning for training and tuning foundation models:
  • Fairness, bias, and interpretability challenges in FL with foundation models
  • Federated transfer learning with foundation models
  • FL techniques for training large-scale foundation models
  • Hardware for FL with foundation models
  • Optimization algorithms for federated training of foundation models
  • Privacy-preserving mechanisms in FL with foundation models
  • Resource-efficient FL with foundation models
  • Security and robustness considerations in FL with foundation models
  • Systems and infrastructure for FL with foundation models
  • Vertical federated learning with foundation models
  • Vulnerabilities of FL with foundation models

More information on previous workshops can be found here.


Submission Instructions

Submissions should be no more than 6 pages long, excluding references, and should follow the NeurIPS'23 template. Submissions are double-blind (author identities shall not be revealed to the reviewers), so the submitted PDF file should not include any identifying author information. An optional appendix of any length is allowed and should be placed at the end of the paper (after the references).

Submissions are collected on OpenReview at the following link: https://openreview.net/group?id=NeurIPS.cc/2023/Workshop/Federated_Learning.
Accepted papers and their review comments will be posted publicly on OpenReview. Due to the short timeline, we will not have a rebuttal period, but authors are encouraged to interact and discuss with reviewers on OpenReview after the acceptance notifications are sent out. Rejected papers and their reviews will remain private and will not be posted publicly.

For questions, please contact: flfm-neurips-2023@googlegroups.com


Proceedings and Dual Submission Policy

Our workshop does not have formal proceedings, i.e., it is non-archival. Accepted papers will be publicly available on OpenReview together with the reviewers' comments. Revisions to accepted papers will be allowed until shortly before the workshop date.

We welcome submissions of unpublished papers, including those submitted to other venues, provided the other venue allows it. However, papers that have been accepted to an archival venue as of Sept. 28, 2023 should not be resubmitted to this workshop, because the goal of the workshop is to share recent results and discuss open problems. In particular, papers that have been accepted to the NeurIPS'23 main conference should not be resubmitted to this workshop.


Presentation Format

The workshop will primarily take place in person. For presenters who cannot attend in person, we plan to make it possible to connect remotely over Zoom for the oral talks. However, the poster sessions will be in-person only. Depending on the situation, we may include a lightning-talk session for accepted poster presentations whose presenters cannot attend physically, or organize a separate virtual session after the official workshop date. If a paper is accepted as an oral talk, the NeurIPS organizers require a pre-recording of the presentation by early November, which will be made available for virtual participants to view. All accepted papers will be posted on OpenReview and linked on our webpage.


Organizing Committee


Program Committee

  • Alp Yurtsever (Umeå University)
  • Ambrish Rawat (International Business Machines)
  • Anastasios Kyrillidis (Rice University)
  • Ang Li (University of Maryland, College Park)
  • Anirban Das (Capital One)
  • Anran Li (Nanyang Technological University)
  • Aurélien Bellet (INRIA)
  • Berivan Isik (Google)
  • Bing Luo (Duke Kunshan University)
  • Bingsheng He (National University of Singapore)
  • Bo Zhao (Nanyang Technological University)
  • Chao Ren (Nanyang Technological University)
  • Charles Lu (Massachusetts Institute of Technology)
  • Christian Makaya (École Polytechnique de Montréal, Université de Montréal)
  • Chuizheng Meng (University of Southern California)
  • Chulin Xie (University of Illinois, Urbana Champaign)
  • Dimitrios Dimitriadis (Amazon)
  • Divyansh Jhunjhunwala (Carnegie Mellon University)
  • Egor Shulgin (KAUST)
  • Farzin Haddadpour (Biogen)
  • Feng Yan (University of Houston)
  • Giulio Zizzo (International Business Machines)
  • Grigory Malinovsky (King Abdullah University of Science and Technology)
  • Guojun Xiong (State University of New York at Stony Brook)
  • Haibo Yang (Rochester Institute of Technology)
  • Herbert Woisetschläger (Technische Universität München)
  • Hongyi Wang (Carnegie Mellon University)
  • Hongyuan Zhan (Meta)
  • Javier Fernandez-Marques (Samsung AI)
  • Jayanth Regatti (Ohio State University)
  • Jianyu Wang (Apple)
  • Jiayi Wang (University of Utah)
  • Jihong Park (Deakin University)
  • Jinhyun So (Samsung)
  • Junyi Li (University of Pittsburgh)
  • Kallista Bonawitz (Google)
  • Karthik Prasad (Facebook AI)
  • Kevin Hsieh (Microsoft)
  • Konstantin Mishchenko (Samsung)
  • Lie He (Swiss Federal Institute of Technology Lausanne)
  • Liping Yi (Nankai University)
  • Matthias Reisser (Qualcomm)
  • Michael Kamp (Institute for AI in Medicine IKIM)
  • Minghong Fang (Duke University)
  • Minhao Cheng (Hong Kong University of Science and Technology)
  • Narasimha Raghavan Veeraragavan (Cancer Registry of Norway)
  • Paulo Ferreira (Dell Technologies)
  • Pengchao Han (The Chinese University of Hong Kong, Shenzhen)
  • Pengfei Li (University of California, Riverside)
  • Pranay Sharma (Carnegie Mellon University)
  • Prashant Khanduri (Wayne State University)
  • Radu Marculescu (University of Texas, Austin)
  • Samuel Horváth (Mohamed bin Zayed University of Artificial Intelligence)
  • Se-Young Yun (KAIST)
  • Sebastian Stich (CISPA Helmholtz Center for Information Security)
  • Shangwei Guo (Chongqing University)
  • Siyao Zhou (McMaster University)
  • Songze Li (Southeast University)
  • Stefanos Laskaridis (Brave Software)
  • Taha Toghani (Rice University)
  • Tahseen Rabbani (University of Maryland, College Park)
  • Virendra Marathe (Oracle)
  • Wenshuo Guo (University of California Berkeley)
  • Xianjie Guo (Hefei University of Technology)
  • Xiaoliang Fan (Xiamen University)
  • Yae Jee Cho (Carnegie Mellon University)
  • Yang Liu (Tsinghua University)
  • Yaosen Lin (Apple)
  • Yi Zhou (International Business Machines)
  • Yuanpu Cao (Pennsylvania State University)
  • Yujia Wang (Pennsylvania State University)
  • Zhanhong Jiang (Johnson Controls Inc.)
  • Zhaozhuo Xu (Rice University)
  • Zheng Xu (Google)

Organized by