TY - GEN
T1 - Resilient Machine Learning (rML) Ensemble Against Adversarial Machine Learning Attacks
AU - Yao, Likai
AU - Tunc, Cihan
AU - Satam, Pratik
AU - Hariri, Salim
N1 - Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Machine Learning (ML) algorithms have been widely used in many critical applications, including Dynamic Data Driven Applications Systems (DDDAS) applications, automated financial trading applications, autonomous vehicles, and intrusion detection systems, to support the decision-making process of users or automated systems. However, malicious adversaries have a strong interest in manipulating the operations of machine learning algorithms to achieve their objectives, such as financial gain or causing injury or disasters. Adversaries against ML can be classified based on their capabilities and goals into two types: an adversary who has full knowledge of the ML models and parameters (white-box scenario) and one who has no such knowledge and uses guessing techniques to infer the ML model and its parameters (black-box scenario). In both scenarios, the adversaries will attempt to maliciously manipulate the model either during training or testing. Defending against these attacks can succeed by following one of three methods: 1) making the ML model robust to adversaries, 2) validating and verifying the input, or 3) changing the ML architecture. In this paper, we present a resilient machine learning (rML) ensemble against adversarial attacks that dynamically changes the ML architecture and the ML models to be used, so that adversaries have no knowledge of the ML model currently in use and their attempts to manipulate the ML operations at the testing phase are thwarted. We evaluate the effectiveness of our rML ensemble using the zero-query benchmark dataset “DAmageNet”, which contains both clean and adversarial image samples. Our ensemble uses three main neural networks: VGG16, ResNet-50, and ResNet-101. The experimental results show that our rML ensemble can tolerate adversarial samples and achieve high classification accuracy with only a small degradation in execution time.
AB - Machine Learning (ML) algorithms have been widely used in many critical applications, including Dynamic Data Driven Applications Systems (DDDAS) applications, automated financial trading applications, autonomous vehicles, and intrusion detection systems, to support the decision-making process of users or automated systems. However, malicious adversaries have a strong interest in manipulating the operations of machine learning algorithms to achieve their objectives, such as financial gain or causing injury or disasters. Adversaries against ML can be classified based on their capabilities and goals into two types: an adversary who has full knowledge of the ML models and parameters (white-box scenario) and one who has no such knowledge and uses guessing techniques to infer the ML model and its parameters (black-box scenario). In both scenarios, the adversaries will attempt to maliciously manipulate the model either during training or testing. Defending against these attacks can succeed by following one of three methods: 1) making the ML model robust to adversaries, 2) validating and verifying the input, or 3) changing the ML architecture. In this paper, we present a resilient machine learning (rML) ensemble against adversarial attacks that dynamically changes the ML architecture and the ML models to be used, so that adversaries have no knowledge of the ML model currently in use and their attempts to manipulate the ML operations at the testing phase are thwarted. We evaluate the effectiveness of our rML ensemble using the zero-query benchmark dataset “DAmageNet”, which contains both clean and adversarial image samples. Our ensemble uses three main neural networks: VGG16, ResNet-50, and ResNet-101. The experimental results show that our rML ensemble can tolerate adversarial samples and achieve high classification accuracy with only a small degradation in execution time.
KW - Adversarial machine learning
KW - Dynamic data driven applications systems
KW - Moving target defense
KW - Resiliency
KW - Resilient decision support
UR - http://www.scopus.com/inward/record.url?scp=85097371254&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85097371254&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-61725-7_32
DO - 10.1007/978-3-030-61725-7_32
M3 - Conference contribution
AN - SCOPUS:85097371254
SN - 9783030617240
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 274
EP - 282
BT - Dynamic Data Driven Application Systems - Third International Conference, DDDAS 2020, Proceedings
A2 - Darema, Frederica
A2 - Blasch, Erik
A2 - Ravela, Sai
A2 - Aved, Alex
PB - Springer Science and Business Media Deutschland GmbH
T2 - 3rd International Conference on Dynamic Data Driven Application Systems, DDDAS 2020
Y2 - 2 October 2020 through 4 October 2020
ER -