Abstract
Detecting deception in interpersonal dialog is challenging because deceivers exploit the give-and-take of interaction, adapting to any sign of skepticism in an interlocutor's verbal and nonverbal feedback. Human detection accuracy is poor, often no better than chance. In this investigation, we consider whether automated methods can produce better results and whether disruptions in interactional synchrony can signal whether an interactant is truthful or deceptive. We propose a data-driven, unobtrusive framework based on visual cues that consists of face tracking, head movement detection, facial expression recognition, and interactional synchrony estimation. Analyses were conducted on 242 video samples from an experiment in which deceivers and truth-tellers interacted with professional interviewers either face-to-face or through computer mediation. Results revealed that the framework automatically tracks the head movements and expressions of both interlocutors, extracts normalized, meaningful synchrony features, and learns classification models for deception recognition. Further experiments show that these features reliably capture interactional synchrony and effectively discriminate deception from truth.
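The abstract describes the pipeline only at a high level, so the sketch below is a minimal, assumption-laden illustration of how synchrony features of this kind might be computed and fed to a classifier: it assumes head-movement magnitude time series for interviewer and interviewee (hypothetical arguments `interviewer_motion`, `interviewee_motion`), quantifies synchrony by windowed cross-correlation with a lag search, and uses an off-the-shelf SVM for truth/deception classification. The function name, window and lag parameters, and the choice of cross-correlation and SVM are all illustrative, not the paper's actual method.

```python
import numpy as np
from sklearn.svm import SVC

def synchrony_features(interviewer_motion, interviewee_motion, window=60, max_lag=15):
    """Windowed cross-correlation between two head-movement time series.

    For each window, find the peak correlation over a range of lags and the
    lag at which it occurs; summarize these per interaction. This is one
    common way to quantify interactional synchrony (assumed here, not taken
    from the paper).
    """
    feats = []
    n = min(len(interviewer_motion), len(interviewee_motion))
    for start in range(0, n - window, window):
        a = np.asarray(interviewer_motion[start:start + window], dtype=float)
        b = np.asarray(interviewee_motion[start:start + window], dtype=float)
        # z-normalize each window so correlations are comparable across clips
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        corrs = [np.corrcoef(a[max_lag:window - max_lag],
                             np.roll(b, lag)[max_lag:window - max_lag])[0, 1]
                 for lag in range(-max_lag, max_lag + 1)]
        peak = int(np.argmax(np.abs(corrs)))
        feats.append([corrs[peak], peak - max_lag])
    feats = np.asarray(feats)
    # Per-interaction summary: mean and spread of peak correlation and lag.
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# Hypothetical usage: X holds one feature vector per video, y the truth/deception labels.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# predictions = clf.predict(X_test)
```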
| Original language | English (US) |
| --- | --- |
| Article number | 06845335 |
| Pages (from-to) | 492-506 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Cybernetics |
| Volume | 45 |
| Issue number | 3 |
| DOIs | |
| State | Published - Mar 1 2015 |
Keywords
- Deception detection
- expression recognition
- face tracking
- gesture detection
- interactional synchrony
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Information Systems
- Human-Computer Interaction
- Computer Science Applications
- Electrical and Electronic Engineering