Abstract
In vision-based object recognition systems, imaging sensors perceive the environment and objects are then detected and classified for decision-making purposes, e.g., to maneuver an automated vehicle around an obstacle or to raise alarms about intruders in surveillance settings. In this work we demonstrate how camera-based perception can be unobtrusively manipulated to enable an attacker to create spurious objects or alter an existing object, by remotely projecting adversarial patterns into cameras and exploiting two common effects in optical imaging systems, viz., lens flare/ghost effects and auto-exposure control. To improve the robustness of the attack, we generate optimal patterns by integrating adversarial machine learning techniques with a trained end-to-end channel model. We experimentally demonstrate our attacks using a low-cost projector, on three different cameras, and in different environments. Results show that, depending on the attack distance, attack success rates can reach as high as 100%, including for targeted attacks. We also develop a countermeasure that reduces the problem of detecting ghost-based attacks to verifying whether a ghost overlaps with a detected object, and we leverage spatiotemporal consistency to eliminate false positives. Evaluation on experimental data yields a worst-case equal error rate of 5%.
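The abstract's attack-generation step, optimizing a projected pattern through a trained end-to-end channel model, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the `ChannelModel` stand-in, the victim classifier, the image size, and the step sizes are all assumptions, and it only shows the general shape of a PGD-style optimization through a differentiable channel model toward a targeted misclassification.

```python
# Minimal sketch (assumptions, not the authors' code): optimize a projector
# pattern through a differentiable channel model so that the composited
# camera frame is misclassified as a chosen target class.
import torch
import torch.nn as nn


class ChannelModel(nn.Module):
    """Stand-in for a trained end-to-end channel model: maps a projected
    pattern to its in-camera appearance (a blurred, attenuated ghost)
    composited onto a background frame."""

    def __init__(self):
        super().__init__()
        self.blur = nn.Conv2d(3, 3, kernel_size=5, padding=2, groups=3, bias=False)
        nn.init.constant_(self.blur.weight, 1.0 / 25)  # simple averaging blur

    def forward(self, pattern, background):
        ghost = self.blur(pattern) * 0.5          # attenuated ghost artifact
        return torch.clamp(background + ghost, 0.0, 1.0)


def optimize_pattern(channel, victim, background, target_class,
                     steps=200, step_size=1e-2):
    """Gradient-based search for a projector pattern that makes the victim
    classifier predict `target_class` after passing through the channel."""
    pattern = torch.zeros(1, 3, 224, 224, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    target = torch.tensor([target_class])
    for _ in range(steps):
        frame = channel(pattern, background)      # simulate projection + capture
        loss = loss_fn(victim(frame), target)     # targeted attack objective
        loss.backward()
        with torch.no_grad():
            pattern -= step_size * pattern.grad.sign()  # PGD-style signed step
            pattern.clamp_(0.0, 1.0)                    # keep pattern projectable
        pattern.grad = None
    return pattern.detach()
```

As a usage illustration (hypothetical names), one could call `optimize_pattern(ChannelModel(), victim_model, background_frame, target_class=5)`, where `victim_model` is any image classifier returning logits and `background_frame` is a captured scene tensor of shape (1, 3, 224, 224). In practice, the channel model would be trained on projector-to-camera measurements so that gradients reflect how the pattern actually appears in captured frames.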
Original language | English (US)
---|---
Article number | 14
Journal | ACM Transactions on Cyber-Physical Systems
Volume | 8
Issue number | 2
DOIs |
State | Published - May 14 2024
Keywords
- adversarial examples
- autonomous systems
- sensor attacks
ASJC Scopus subject areas
- Human-Computer Interaction
- Hardware and Architecture
- Computer Networks and Communications
- Control and Optimization
- Artificial Intelligence