Resilient Machine Learning (rML) Against Adversarial Attacks on Industrial Control Systems

Likai Yao, Sicong Shao, Salim Hariri

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Scopus citations

Abstract

Machine learning (ML) algorithms are widely used in critical automated systems, including Dynamic Data Driven Applications Systems (DDDAS)-based methods and application areas such as financial trading, autonomous vehicles, and intrusion detection systems. However, malicious adversaries have strong incentives to manipulate the operation of ML algorithms for financial, social, or political gain. Adversaries in adversarial ML (AML) can be classified by their capabilities and goals into three types: those with full knowledge of the ML models and parameters (white-box scenario), those with partial knowledge of the ML models (gray-box scenario), and those with no knowledge who use guessing techniques to infer the ML model and its parameters (black-box scenario). In all these scenarios, adversaries attempt to maliciously manipulate the model or data during either training or testing. Defenses against these AML attacks include making the ML model robust, validating and verifying inputs and outputs, and changing the ML architecture. This paper presents a resilient machine learning (rML) approach against adversarial attacks that dynamically performs feature space anonymization and model randomization in ML services, so that adversaries lack knowledge of the feature space and of the model in use and are consequently prevented from maliciously manipulating ML operations at runtime. In our approach, rML utilizes autoencoders as an anonymization technique for encoding the feature space, minimizing the effect of adversarial samples. The rML method is evaluated using benchmark Industrial Control Systems (ICS) data and corresponding adversarial data generated with the Jacobian-based Saliency Map Attack (JSMA). The experiments demonstrate that the proposed approach can detect attacks targeting ICS and prevent adversarial attacks from compromising the ML models used to secure ICS.
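The abstract combines two mechanisms: autoencoder-based feature space anonymization and runtime model randomization (a moving-target defense). The following minimal sketch shows how these two pieces could fit together; it uses scikit-learn on toy data, and every name, dataset, and parameter choice here is an illustrative assumption, not the paper's actual implementation.

```python
# Minimal sketch of the rML idea from the abstract: an autoencoder re-encodes
# (anonymizes) the feature space, and inference randomly selects one model
# from a pool so an adversary cannot target a fixed model. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in for ICS sensor features (the paper uses a benchmark ICS dataset).
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# 1) Feature space anonymization: train an autoencoder (input -> bottleneck ->
#    input) and keep only the encoder half. Downstream models then operate on
#    latent codes rather than the raw, adversary-visible features.
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000).fit(X, X)

def encode(x):
    # Apply the first (encoder) layer of the trained autoencoder manually;
    # MLPRegressor uses ReLU activations by default.
    return np.maximum(x @ ae.coefs_[0] + ae.intercepts_[0], 0.0)

Z = encode(X)

# 2) Model randomization (moving-target defense): maintain a pool of
#    classifiers trained on the encoded features and pick one per query.
pool = [
    RandomForestClassifier(n_estimators=50).fit(Z, y),
    LogisticRegression(max_iter=1000).fit(Z, y),
]

def predict(x):
    model = pool[rng.integers(len(pool))]  # a different model may answer each call
    return model.predict(encode(x))

print(predict(X[:5]))
```

Randomizing the responding model per query means a gradient-based attack such as JSMA, crafted against one assumed model, may not transfer to the model that actually answers; the hidden encoder likewise denies the adversary gradients with respect to the raw features. For generating JSMA samples in an evaluation like the one described, existing AML libraries (for example, the Adversarial Robustness Toolbox) provide saliency-map attack implementations.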

Original language: English (US)
Title of host publication: 2023 20th ACS/IEEE International Conference on Computer Systems and Applications, AICCSA 2023 - Proceedings
Publisher: IEEE Computer Society
ISBN (Electronic): 9798350319439
DOIs
State: Published - 2023
Externally published: Yes
Event: 20th ACS/IEEE International Conference on Computer Systems and Applications, AICCSA 2023 - Giza, Egypt
Duration: Dec 4, 2023 – Dec 7, 2023

Publication series

Name: Proceedings of IEEE/ACS International Conference on Computer Systems and Applications, AICCSA
ISSN (Print): 2161-5322
ISSN (Electronic): 2161-5330

Conference

Conference: 20th ACS/IEEE International Conference on Computer Systems and Applications, AICCSA 2023
Country/Territory: Egypt
City: Giza
Period: 12/4/23 – 12/7/23

Keywords

  • adversarial machine learning
  • anonymization
  • Dynamic Data Driven Application Systems
  • moving target defense
  • resiliency
  • resilient decision support

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Computer Science Applications
  • Hardware and Architecture
  • Signal Processing
  • Control and Systems Engineering
  • Electrical and Electronic Engineering
