TY - GEN
T1 - Resilient Machine Learning (rML) Against Adversarial Attacks on Industrial Control Systems
AU - Yao, Likai
AU - Shao, Sicong
AU - Hariri, Salim
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Machine learning (ML) algorithms have been widely used in many critical automated systems, including as a technique in Dynamic Data Driven Applications Systems (DDDAS)-based methods and in areas such as financial trading, autonomous vehicles, and intrusion detection systems. However, malicious adversaries have strong interests in manipulating the operations of ML algorithms to achieve their objectives of gaining financial, social, or political influence. Adversarial ML (AML) attackers can be classified by their capabilities and goals into three types: an adversary with full knowledge of the ML model and its parameters (white-box scenario), one with partial knowledge of the ML model (gray-box scenario), and one with no knowledge who uses guessing techniques to infer the ML model and its parameters (black-box scenario). In these scenarios, the adversaries attempt to maliciously manipulate the model or data during either training or testing. Defending against these AML attacks can succeed through methods such as making the ML model robust, validating and verifying inputs and outputs, and changing the ML architecture. This paper presents a resilient machine learning (rML) approach against adversarial attacks that dynamically conducts feature space anonymization and model randomization in ML services so that adversaries lack knowledge of the feature space and the model in use, which prevents them from maliciously manipulating ML operations at runtime. In our approach, rML utilizes autoencoders as an anonymization technique for encoding the feature space to minimize the effect of adversarial samples. The rML method is evaluated using benchmark Industrial Control Systems (ICS) data and the corresponding adversarial data generated with the Jacobian-based Saliency Map Attack (JSMA) method. The experiments demonstrate that the proposed approach can detect attacks targeting ICS and prevent adversarial attacks from compromising the ML models used to secure ICS.
AB - Machine learning (ML) algorithms have been widely used in many critical automated systems, including as a technique in Dynamic Data Driven Applications Systems (DDDAS)-based methods and in areas such as financial trading, autonomous vehicles, and intrusion detection systems. However, malicious adversaries have strong interests in manipulating the operations of ML algorithms to achieve their objectives of gaining financial, social, or political influence. Adversarial ML (AML) attackers can be classified by their capabilities and goals into three types: an adversary with full knowledge of the ML model and its parameters (white-box scenario), one with partial knowledge of the ML model (gray-box scenario), and one with no knowledge who uses guessing techniques to infer the ML model and its parameters (black-box scenario). In these scenarios, the adversaries attempt to maliciously manipulate the model or data during either training or testing. Defending against these AML attacks can succeed through methods such as making the ML model robust, validating and verifying inputs and outputs, and changing the ML architecture. This paper presents a resilient machine learning (rML) approach against adversarial attacks that dynamically conducts feature space anonymization and model randomization in ML services so that adversaries lack knowledge of the feature space and the model in use, which prevents them from maliciously manipulating ML operations at runtime. In our approach, rML utilizes autoencoders as an anonymization technique for encoding the feature space to minimize the effect of adversarial samples. The rML method is evaluated using benchmark Industrial Control Systems (ICS) data and the corresponding adversarial data generated with the Jacobian-based Saliency Map Attack (JSMA) method. The experiments demonstrate that the proposed approach can detect attacks targeting ICS and prevent adversarial attacks from compromising the ML models used to secure ICS.
KW - adversarial machine learning
KW - anonymization
KW - Dynamic Data Driven Applications Systems
KW - moving target defense
KW - resiliency
KW - resilient decision support
UR - https://www.scopus.com/pages/publications/85190147221
UR - https://www.scopus.com/inward/citedby.url?scp=85190147221&partnerID=8YFLogxK
U2 - 10.1109/AICCSA59173.2023.10479279
DO - 10.1109/AICCSA59173.2023.10479279
M3 - Conference contribution
AN - SCOPUS:85190147221
T3 - Proceedings of IEEE/ACS International Conference on Computer Systems and Applications, AICCSA
BT - 2023 20th ACS/IEEE International Conference on Computer Systems and Applications, AICCSA 2023 - Proceedings
PB - IEEE Computer Society
T2 - 20th ACS/IEEE International Conference on Computer Systems and Applications, AICCSA 2023
Y2 - 4 December 2023 through 7 December 2023
ER -