Resilient Machine Learning (rML) Ensemble Against Adversarial Machine Learning Attacks

Likai Yao, Cihan Tunc, Pratik Satam, Salim Hariri

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Scopus citations

Abstract

Machine Learning (ML) algorithms are widely used in many critical applications, including Dynamic Data Driven Applications Systems (DDDAS) applications, automated financial trading, autonomous vehicles, and intrusion detection systems, to support the decision making of users or automated systems. However, malicious adversaries have strong interests in manipulating the operations of ML algorithms to achieve their objectives, such as gaining financially or inflicting injury or disaster. Adversaries against ML can be classified by their capabilities and goals into two types: an adversary who has full knowledge of the ML models and parameters (white-box scenario), and one who has no such knowledge and uses guessing techniques to infer the ML model and its parameters (black-box scenario). In both scenarios, the adversaries attempt to maliciously manipulate the model either during training or during testing. Defending against these attacks can succeed by following three methods: 1) making the ML model robust to adversaries, 2) validating and verifying the input, or 3) changing the ML architecture. In this paper, we present a resilient machine learning (rML) ensemble against adversarial attacks that dynamically changes the ML architecture and the ML models to be used, so that adversaries have no knowledge of the current ML model and consequently cannot manipulate the ML operations at the testing phase. We evaluate the effectiveness of our rML ensemble using the benchmark zero-query dataset "DAmageNet," which contains both clean and adversarial image samples. Our ensemble uses three main neural networks: VGG16, ResNet-50, and ResNet-101. The experimental results show that our rML can tolerate the adversarial samples and achieve high classification accuracy with small execution-time degradation.
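The core idea described in the abstract — randomizing which model serves each query so an adversary cannot target the active model — can be illustrated with a minimal sketch. This is not the authors' implementation; the `ResilientEnsemble` class and the stand-in classifiers (placeholders for VGG16, ResNet-50, and ResNet-101) are hypothetical names introduced here for illustration only.

```python
import random

class ResilientEnsemble:
    """Moving-target ensemble: a different member model may answer each
    query, so an attacker cannot know which model is currently in use."""

    def __init__(self, models):
        # models: list of callables mapping an input to a predicted label.
        # In the paper's setting these would be trained VGG16, ResNet-50,
        # and ResNet-101 classifiers; here they are toy stand-ins.
        self.models = models

    def predict(self, x):
        # Randomly pick the active model per query (moving target defense).
        model = random.choice(self.models)
        return model(x)

# Toy stand-in classifiers for illustration only.
vgg16_stub = lambda x: "cat"
resnet50_stub = lambda x: "cat"
resnet101_stub = lambda x: "cat"

ensemble = ResilientEnsemble([vgg16_stub, resnet50_stub, resnet101_stub])
print(ensemble.predict("image sample"))
```

Because the active model is re-drawn on every call, a white-box attack crafted against one member need not transfer to the model that actually handles the query, and a black-box adversary's probing reveals no stable target.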

Original language: English (US)
Title of host publication: Dynamic Data Driven Application Systems - Third International Conference, DDDAS 2020, Proceedings
Editors: Frederica Darema, Erik Blasch, Sai Ravela, Alex Aved
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 274-282
Number of pages: 9
ISBN (Print): 9783030617240
DOIs
State: Published - 2020
Externally published: Yes
Event: 3rd International Conference on Dynamic Data Driven Application Systems, DDDAS 2020 - Boston, United States
Duration: Oct 2, 2020 to Oct 4, 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12312 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 3rd International Conference on Dynamic Data Driven Application Systems, DDDAS 2020
Country/Territory: United States
City: Boston
Period: 10/2/20 to 10/4/20

Keywords

  • Adversarial machine learning
  • Dynamic data driven applications systems
  • Moving target defense
  • Resiliency
  • Resilient decision support

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
