TY - GEN
T1 - Fight Fire with Fire: Towards Robust Recommender Systems via Adversarial Poisoning Training
T2 - 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2021
AU - Wu, Chenwang
AU - Lian, Defu
AU - Ge, Yong
AU - Zhu, Zhihao
AU - Chen, Enhong
AU - Yuan, Senchao
N1 - Funding Information:
The work was supported by grants from the National Key R&D Program of China under Grant No. 2020AAA0103800, the National Natural Science Foundation of China (No. 61976198 and 62022077), JD AI Research and the Fundamental Research Funds for the Central Universities (No. WK2150110017).
Publisher Copyright:
© 2021 ACM.
PY - 2021/7/11
Y1 - 2021/7/11
AB - Recent studies have shown that recommender systems are vulnerable: attackers can easily inject well-designed malicious profiles into the system, leading to biased recommendations. Because such injected profiles cannot simply be dismissed as irrational, it is imperative to build a robust recommender system. Adversarial training has been extensively studied for robust recommendation. However, traditional adversarial training adds small perturbations to the parameters (inputs), which does not comply with the poisoning mechanism of recommender systems. It therefore performs poorly on practical models that already fit the existing data well. To address these limitations, we propose adversarial poisoning training (APT). It simulates the poisoning process by injecting fake users (ERM users) dedicated to minimizing empirical risk, thereby building a robust system. Moreover, to generate ERM users, we explore an approximation approach to estimate each fake user's influence on the empirical risk. Although the strategy of "fighting fire with fire" seems counterintuitive, we theoretically prove that the proposed APT can boost the upper bound of poisoning robustness. We also deliver the first theoretical proof that adversarial training has a positive effect on recommendation robustness. Extensive experiments with five poisoning attacks on four real-world datasets show that the robustness improvement of APT significantly outperforms the baselines. Notably, APT also improves model generalization in most cases.
KW - adversarial training
KW - poisoning attacks
KW - robust recommender systems
UR - http://www.scopus.com/inward/record.url?scp=85111678960&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85111678960&partnerID=8YFLogxK
U2 - 10.1145/3404835.3462914
DO - 10.1145/3404835.3462914
M3 - Conference contribution
AN - SCOPUS:85111678960
T3 - SIGIR 2021 - Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
SP - 1074
EP - 1083
BT - SIGIR 2021 - Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
PB - Association for Computing Machinery, Inc
Y2 - 11 July 2021 through 15 July 2021
ER -