Abstract
This chapter presents methods to address the problem of deception detection in videos. Current approaches are limited in that they (1) operate on short videos focusing on a single act of deception; (2) are hard to interpret; and (3) do not make use of any human model or insights that could aid the detection task. To address these limitations, a novel framework based on the Dynamic Data-Driven Applications Systems (DDDAS) paradigm is proposed that takes as input one-dimensional Facial Action Unit (FAU) and gaze signals, together with model enhancements. By using facial features rather than raw video as input, we are able to train a conceptually simple, modular, and powerful model that achieves state-of-the-art performance in video-based deception detection. The proposed DDDAS methodology makes the model's predictions interpretable by computing the attention of the neural network in the time domain and identifying key frames. This can (a) enable domain scientists to perform retrospective analysis of deceptive behavior and (b) identify informative data for model re-training.
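The time-domain attention described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed details: the function `temporal_attention`, the feature dimensionality, and the scoring vector `w` are hypothetical stand-ins, not the chapter's actual architecture. Each frame's 1D FAU/gaze feature vector is scored, the scores are normalized into attention weights over time, and the most-attended frame serves as a "key frame" for interpretation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1D array of frame scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(features, w):
    """Attention pooling over time.

    features: (T, D) array, one D-dim FAU/gaze feature vector per frame.
    w: (D,) scoring vector (a stand-in for a learned parameter).
    Returns the attention-pooled summary (D,) and weights (T,).
    """
    scores = features @ w       # (T,) one relevance score per frame
    alpha = softmax(scores)     # attention weights sum to 1 over time
    summary = alpha @ features  # (D,) weighted average of frame features
    return summary, alpha

# Toy example: T = 5 frames, D = 3 facial-feature channels.
rng = np.random.default_rng(0)
features = rng.normal(size=(5, 3))
w = rng.normal(size=3)
summary, alpha = temporal_attention(features, w)
key_frame = int(alpha.argmax())  # frame the model attends to most
```

In this sketch, inspecting `alpha` over time is what lets a domain scientist see which frames drove the prediction; the actual chapter's model and attention computation may differ.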
Original language | English (US) |
---|---|
Title of host publication | Handbook of Dynamic Data Driven Applications Systems |
Subtitle of host publication | Volume 2 |
Publisher | Springer International Publishing |
Pages | 725-741 |
Number of pages | 17 |
Volume | 2 |
ISBN (Electronic) | 9783031279867 |
ISBN (Print) | 9783031279850 |
State | Published - Jan 1 2023 |
Keywords
- DDDAS
- Deception detection
- Explainable AI
- Video classification
ASJC Scopus subject areas
- General Computer Science
- General Mathematics
- General Social Sciences
- General Engineering