TY - JOUR
T1 - Attack Transferability Against Information-Theoretic Feature Selection
AU - Gupta, Srishti
AU - Golota, Roman
AU - Ditzler, Gregory
N1 - Publisher Copyright:
© 2021 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
PY - 2021
Y1 - 2021
N2 - Machine learning (ML) is vital to many application-driven fields, such as image and signal classification, cyber-security, and health sciences. Unfortunately, many of these fields can easily have their training data tampered with by an adversary to thwart an ML algorithm’s objective. Further, the adversary can impact any stage in an ML pipeline (e.g., preprocessing, learning, and classification). Recent work has shown that many models can be attacked by poisoning the training data, and the impact of the poisoned data can be quite significant. Prior works on adversarial feature selection have shown that such attacks can damage feature selection (FS). Filter FS algorithms, a class of FS methods, are widely used for their ability to model nonlinear relationships, their classifier independence, and their lower computational requirements. An important question from the security perspective of these widely used approaches is whether filter FS algorithms are robust against attacks crafted for other FS methods. In this work, we focus on information-theoretic filter FS algorithms such as MIM, MIFS, and mRMR, and the impact that a gradient-based attack can have on their selections. Experiments on five benchmark datasets demonstrate that the stability of different information-theoretic algorithms can be significantly degraded by injecting poisonous data into the training dataset.
AB - Machine learning (ML) is vital to many application-driven fields, such as image and signal classification, cyber-security, and health sciences. Unfortunately, many of these fields can easily have their training data tampered with by an adversary to thwart an ML algorithm’s objective. Further, the adversary can impact any stage in an ML pipeline (e.g., preprocessing, learning, and classification). Recent work has shown that many models can be attacked by poisoning the training data, and the impact of the poisoned data can be quite significant. Prior works on adversarial feature selection have shown that such attacks can damage feature selection (FS). Filter FS algorithms, a class of FS methods, are widely used for their ability to model nonlinear relationships, their classifier independence, and their lower computational requirements. An important question from the security perspective of these widely used approaches is whether filter FS algorithms are robust against attacks crafted for other FS methods. In this work, we focus on information-theoretic filter FS algorithms such as MIM, MIFS, and mRMR, and the impact that a gradient-based attack can have on their selections. Experiments on five benchmark datasets demonstrate that the stability of different information-theoretic algorithms can be significantly degraded by injecting poisonous data into the training dataset.
KW - Adversarial machine learning
KW - feature selection
KW - information theory
UR - http://www.scopus.com/inward/record.url?scp=85113200708&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85113200708&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2021.3105555
DO - 10.1109/ACCESS.2021.3105555
M3 - Article
AN - SCOPUS:85113200708
SN - 2169-3536
VL - 9
SP - 115885
EP - 115894
JO - IEEE Access
JF - IEEE Access
ER -