AAAI 2021 Workshop

Towards Robust, Secure and Efficient Machine Learning

Venue: Online (sessions held via RocketChat; livestreamed on YouTube)

February 8, 2021 (16:00 - 21:00 PST)


Overview

Machine learning technology continues to improve rapidly and has been applied to nearly every corner of society, offering substantial benefits to our daily lives. However, machine learning models face a variety of threats. Most notably, they are vulnerable to adversarial examples: small, deliberately crafted input perturbations that can easily fool a model. This reveals that current machine learning models are fragile, raising serious security concerns for deployed systems such as autonomous vehicles and face recognition.
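
Adversarial examples are also cheap to construct. For illustration, here is a minimal PyTorch sketch of the classic Fast Gradient Sign Method (FGSM); the model, inputs, labels, and step size are assumed placeholders, not taken from any workshop paper:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss w.r.t. the true label y
    loss.backward()                      # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()  # one signed-gradient step of size epsilon
    return x_adv.clamp(0, 1).detach()    # keep inputs in the valid [0, 1] range
```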

More recently, driven both by data privacy requirements such as the European Union's General Data Protection Regulation (GDPR) and by limits on computational power, the training of machine learning models has shifted from centralized to decentralized settings (i.e., distributed or federated learning), where models face even more threats. For example, in federated learning every client has direct access to the global model, so any client can mount attacks such as backdoor attacks against it. Preventing privacy leakage during the information exchange that decentralized training requires is also a critical issue.
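
The vulnerability is easy to see in standard federated averaging (FedAvg), the common aggregation rule. The sketch below is illustrative; the function and variable names are hypothetical:

```python
import torch

def fed_avg(client_states, weights):
    """FedAvg: weighted average of client model state_dicts.

    Each client's update enters the average directly, so a malicious
    client that controls its own entry can scale a backdoored update
    to disproportionately shift the global model.
    """
    keys = client_states[0].keys()
    return {
        k: sum(w * s[k] for w, s in zip(weights, client_states))
        for k in keys
    }
```

Here `weights` would typically be proportional to each client's local data size and sum to one.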

At the same time, computational efficiency is a major concern for modern deep learning, in both inference and training. For inference, running models on edge devices is preferable for privacy, but edge devices have very limited computational resources. For training, decentralized methods must exchange gradients or weights, and this communication can be slow. Furthermore, models that are robust to adversarial attacks typically require longer training and orders of magnitude more FLOPs than standard networks.
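
One common remedy for the communication bottleneck is gradient compression, for example top-k sparsification, where each worker transmits only the k largest-magnitude gradient entries. A minimal sketch (the helper name is hypothetical):

```python
import torch

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient tensor,
    so only k (index, value) pairs need to be communicated."""
    flat = grad.flatten()
    _, idx = flat.abs().topk(k)     # indices of the k largest |g_i|
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]         # all other coordinates are dropped
    return sparse.view_as(grad)
```

Practical schemes usually accumulate the dropped coordinates locally (error feedback), so the information is delayed rather than lost.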

This one-day workshop brings together experts from the machine learning, security, and federated learning communities to address these concerns more closely. Specifically, we seek to study threats to, and defenses for, machine learning not only in single-node settings but also in distributed settings. In short, we seek a holistic solution for robust, secure, and efficient machine learning.


Accepted Papers

  1. Alberto Matachana, Kenneth Co, Luis Muñoz-González, David Martinez and Emil Lupu. Robustness and Transferability of Universal Attacks on Compressed Models
  2. Arnaud Van Looveren and Giovanni Vacanti. Adversarial Detection and Correction by Matching Prediction Distributions
  3. Arno Blaas and Stephen Roberts. The Effect of Prior Lipschitz Continuity on the Adversarial Robustness of Bayesian Neural Networks
  4. Chang Song, Elias Fallon and Hai Li. Improving Adversarial Robustness in Weight-quantized Neural Networks
  5. Dashan Gao, Ben Tan, Ce Ju, Vincent W. Zheng and Qiang Yang. Federated Factorization Machine for Secure Recommendation with Sparse Data
  6. Hang Chen, Syed Ali Asif, Jihong Park, Chien-Chung Shen and Mehdi Bennis. Robust Blockchained Federated Learning with Model Validation and Proof-of-Stake Inspired Consensus
  7. Jin-woo Lee, Jaehoon Oh, Sungsu Lim, Se-Young Yun and Jae-Gil Lee. TornadoAggregate: Accurate and Scalable Federated Learning via the Ring-Based Architecture
  8. Marissa Dotter, Keith Manville, Josh Harguess, Colin Busho and Mikel Rodriguez. Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks
  9. Mengting Xu, Tao Zhang, Zhongnian Li, Wei Shao and Daoqiang Zhang. Improving the Certified Robustness of Neural Networks via Consistency Regularization
  10. Mohammadreza Ebrahimi, Ning Zhang, James Hu, Muhammad Taqi Raza and Hsinchun Chen. Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model
  11. Nasser Aldaghri, Hessam Mahdavifar and Ahmad Beirami. Coded Machine Unlearning
  12. Ruixuan Luo, Wei Li, Zhiyuan Zhang, Ruihan Bao, Keiko Harimoto and Xu Sun. Learning Robust Representation for Clustering through Locality Preserving Variational Discriminative Network
  13. Sheng Jia, Ehsan Nezhadarya, Yuhuai Wu and Jimmy Ba. Efficient Outlier Detection and Statistical Tests: A Neural Tangent Kernel Approach
  14. Shuhao Fu, Chulin Xie, Bo Li and Qifeng Chen. Attack-Resistant Federated Learning with Residual-based Reweighting
  15. Sungkwon An, Jeonghoon Kim, Myungjoo Kang, Shahbaz Razaei and Xin Liu. OAAE: Adversarial Autoencoders for Novelty Detection in Multi-modal Normality Case via Orthogonalized Latent Space
  16. Tomohiro Hayase, Suguru Yasutomi and Takashi Kato. Selective Forgetting of Deep Networks at a Finer Level than Samples
  17. Utkarsh Uppal and Bharat Giddwani. Normalized Label Distribution: Towards Learning Calibrated, Adaptable and Efficient Activation Maps
  18. Xiaoyang Wang, Bo Li, Jacky Zhang, Bhavya Kailkhura and Klara Nahrstedt. Robusta: Robust AutoML for Feature Selection via Reinforcement Learning
  19. Yi Zhu, Yiwei Zhou and Menglin Xia. Generating Semantically Valid Adversarial Questions for TableQA
  20. Yuting Liang and Reza Samavi. Towards Robust Deep Learning With Ensemble Networks and Noisy Layers

Program Schedule

Time (PST) Activity
15:45 – 16:00 Presenters to connect and test the system
16:00 – 16:05 Opening Remarks by Prof. Qiang Yang [Video]
16:05 – 16:35 Keynote Session 1: Efficiency is the Key to Privacy (and Security) by Prof. Kurt Keutzer [PDF] [Video]
16:35 – 17:15 Technical Talks Session 1 (2 talks, 20 mins each including Q&A)
  1. Alberto Matachana, Kenneth Co, Luis Muñoz-González, David Martinez and Emil Lupu. Robustness and Transferability of Universal Attacks on Compressed Models [PDF] [Video]
  2. Yi Zhu, Yiwei Zhou and Menglin Xia. Generating Semantically Valid Adversarial Questions for TableQA [PDF] [Video]
17:15 – 17:20 Break (Presenters should connect and test the system)
17:20 – 17:50 Keynote Session 2: Vertical Federated Kernel Learning by Prof. Heng Huang [PDF] [Video]
17:50 – 18:30 Technical Talks Session 2 (2 talks, 20 mins each including Q&A)
  1. Shuhao Fu, Chulin Xie, Bo Li and Qifeng Chen. Attack-Resistant Federated Learning with Residual-based Reweighting [PDF] [Video]
  2. Chang Song, Elias Fallon and Hai Li. Improving Adversarial Robustness in Weight-quantized Neural Networks [PDF] [Video]
18:30 – 18:35 Break (Presenters should connect and test the system)
18:35 – 19:05 Keynote Session 3: On Private Prediction and Certified Removal by Dr. Laurens van der Maaten [Video]
19:05 – 19:45 Technical Talks Session 3 (2 talks, 20 mins each including Q&A)
  1. Xiaoyang Wang, Bo Li, Jacky Zhang, Bhavya Kailkhura and Klara Nahrstedt. Robusta: Robust AutoML for Feature Selection via Reinforcement Learning [PDF] [Video]
  2. Nasser Aldaghri, Hessam Mahdavifar and Ahmad Beirami. Coded Machine Unlearning [Video]
19:45 – 21:00 Poster Session
21:00 End of Workshop

Invited Speakers

  1. Prof. Kurt Keutzer (Keynote: Efficiency is the Key to Privacy (and Security))
  2. Prof. Heng Huang (Keynote: Vertical Federated Kernel Learning)
  3. Dr. Laurens van der Maaten (Keynote: On Private Prediction and Certified Removal)


Topics of Interest

Topics include (but are not limited to) the themes outlined in the overview:

  1. Adversarial attacks on, and defenses for, machine learning models
  2. Security of distributed and federated learning (e.g., backdoor attacks)
  3. Privacy-preserving machine learning and privacy leakage in decentralized training
  4. Efficient inference on edge devices
  5. Communication-efficient decentralized training
  6. Efficient training of adversarially robust models

Submission Guidelines

Submissions may be full technical papers (up to 8 pages) or short papers (up to 4 pages), excluding references and supplementary materials.

Authors should rely on the supplementary material only for minor details that do not fit in the main paper.

Submissions must be anonymized for double-blind review.

The workshop will not have formal proceedings.

Please follow the AAAI 2021 LaTeX style for paper formatting.

The final submission must be in PDF format. Please submit your paper at https://easychair.org/conferences/?conf=rseml2021


Organizing Committee

General Chair

Program Chair

Program Committee

Industrial Chair

Publicity Chair


Sponsored by


Organized by


Please do not hesitate to contact the organizers or Kam Woh if you have questions. This website is also available at http://federated-learning.org/rseml2021/.

The webpage template is courtesy of the ICCV 2019 Tutorial on Interpretable Machine Learning for Computer Vision.