Deception Detection in Videos Using Robust Facial Features with Attention Feedback

Anastasis Stathopoulos, Ligong Han, Norah Dunbar, Judee K. Burgoon, Dimitris Metaxas

Research output: Chapter in Book/Report/Conference proceeding › Chapter

1 Scopus citation

Abstract

This chapter presents methods for deception detection in videos. Current approaches are limited in that they (1) are applied to short videos focusing on a single act of deception; (2) are hard to interpret; and (3) do not make use of any human model or insights that could aid the detection task. To address these limitations, a novel framework based on the Dynamic Data-Driven Applications Systems (DDDAS) paradigm is proposed that takes as input one-dimensional Facial Action Unit (FAU) and gaze signals, together with model enhancements. By using facial features rather than raw video as input, a conceptually simple, modular, and powerful model can be trained that achieves state-of-the-art performance in video-based deception detection. The proposed DDDAS methodology allows the model's predictions to be interpreted by computing the neural network's attention in the time domain, thereby identifying key frames. This, in turn, can (a) enable domain scientists to perform retrospective analysis of deceptive behavior and (b) identify informative data for model re-training.
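The core interpretability idea in the abstract — computing attention weights over time on per-frame facial-feature vectors to identify key frames — can be sketched as follows. This is not the authors' code; the feature dimensions, the linear scoring function, and the variable names are illustrative assumptions.

```python
# Hedged sketch (not the chapter's implementation): temporal attention
# pooling over per-frame facial features (e.g., FAU intensities and gaze
# angles). The attention weights over time indicate which frames the
# model relies on most, i.e., candidate "key frames".
import numpy as np

def temporal_attention_pool(features, w):
    """features: (T, D) array of per-frame features; w: (D,) scoring vector.
    Returns (pooled feature vector (D,), attention weights over time (T,))."""
    scores = features @ w                        # one scalar score per frame
    scores = scores - scores.max()               # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum() # softmax over the T frames
    pooled = attn @ features                     # attention-weighted average
    return pooled, attn

# Toy example: 120 frames, 20 facial features per frame (hypothetical sizes).
rng = np.random.default_rng(0)
feats = rng.normal(size=(120, 20))
w = rng.normal(size=20)
pooled, attn = temporal_attention_pool(feats, w)

# Frames with the largest attention weights serve as key frames for
# retrospective analysis or for selecting data to re-train on.
key_frames = np.argsort(attn)[-5:]
```

A learned scoring vector (or a small MLP) would replace `w` in practice; the point is that the softmax weights live in the time domain and are directly inspectable.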

Original language: English (US)
Title of host publication: Handbook of Dynamic Data Driven Applications Systems
Subtitle of host publication: Volume 2
Publisher: Springer International Publishing
Pages: 725-741
Number of pages: 17
Volume: 2
ISBN (Electronic): 9783031279867
ISBN (Print): 9783031279850
DOIs
State: Published - Jan 1 2023

Keywords

  • DDDAS
  • Deception detection
  • Explainable AI
  • Video classification

ASJC Scopus subject areas

  • General Computer Science
  • General Mathematics
  • General Social Sciences
  • General Engineering
