Is interactional dissynchrony a clue to deception? Insights from automated analysis of nonverbal visual cues

Xiang Yu, Shaoting Zhang, Zhennan Yan, Fei Yang, Junzhou Huang, Norah E. Dunbar, Matthew L. Jensen, Judee K. Burgoon, Dimitris N. Metaxas

Research output: Contribution to journal › Article

12 Citations (Scopus)

Abstract

Detecting deception in interpersonal dialog is challenging since deceivers take advantage of the give-and-take of interaction to adapt to any sign of skepticism in an interlocutor's verbal and nonverbal feedback. Human detection accuracy is poor, often no better than chance. In this investigation, we consider whether automated methods can produce better results and whether emphasizing the possible disruption in interactional synchrony can signal whether an interactant is truthful or deceptive. We propose a data-driven and unobtrusive framework using visual cues that consists of face tracking, head movement detection, facial expression recognition, and interactional synchrony estimation. Analyses were conducted on 242 video samples from an experiment in which deceivers and truth-tellers interacted with professional interviewers either face-to-face or through computer mediation. Results revealed that the framework is able to automatically track head movements and expressions of both interlocutors, extract normalized, meaningful synchrony features, and learn classification models for deception recognition. Further experiments show that these features reliably capture interactional synchrony and efficiently discriminate deception from truth.
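To make the synchrony-estimation step of such a pipeline concrete, the sketch below computes windowed, normalized cross-correlation between two per-frame head-motion signals (one per interlocutor) and feeds the resulting features to a linear SVM. This is a minimal illustration under stated assumptions, not the authors' implementation: the window length, lag range, choice of classifier, and all variable names (synchrony_features, interviewer_traces, interviewee_traces, labels) are hypothetical.

# Minimal sketch (assumed, not the paper's method): estimate interactional
# synchrony from two per-frame head-motion signals via windowed, normalized
# cross-correlation, then classify with a linear SVM.
import numpy as np
from sklearn.svm import SVC

def synchrony_features(interviewer, interviewee, win=150, max_lag=30):
    """Per-window peak normalized cross-correlation between two 1-D signals."""
    feats = []
    n = min(len(interviewer), len(interviewee))
    for start in range(0, n - win, win):
        a = interviewer[start:start + win]
        b = interviewee[start:start + win]
        # z-normalize each window so the correlation is scale-invariant
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        # circularly shift one signal over a range of lags (a simplification)
        # and keep the strongest alignment as the window's synchrony score
        corrs = [np.dot(np.roll(a, lag), b) / win
                 for lag in range(-max_lag, max_lag + 1)]
        feats.append(max(corrs))
    return np.asarray(feats)

# Hypothetical usage: interviewer_traces / interviewee_traces are per-frame
# head-rotation signals from a face tracker; labels mark each interaction as
# deceptive (1) or truthful (0).
# X = np.vstack([synchrony_features(p, q)
#                for p, q in zip(interviewer_traces, interviewee_traces)])
# clf = SVC(kernel="linear").fit(X, labels)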

Original language: English (US)
Article number: 06845335
Pages (from-to): 506-520
Number of pages: 15
Journal: IEEE Transactions on Cybernetics
Volume: 45
Issue number: 3
DOI: 10.1109/TCYB.2014.2329673
State: Published - Mar 1, 2015

Keywords

  • Deception detection
  • expression recognition
  • face tracking
  • gesture detection
  • interactional synchrony

ASJC Scopus subject areas

  • Computer Science Applications
  • Human-Computer Interaction
  • Information Systems
  • Software
  • Control and Systems Engineering
  • Electrical and Electronic Engineering

Cite this

Yu, X., Zhang, S., Yan, Z., Yang, F., Huang, J., Dunbar, N. E., ... Metaxas, D. N. (2015). Is interactional dissynchrony a clue to deception? Insights from automated analysis of nonverbal visual cues. IEEE Transactions on Cybernetics, 45(3), 506-520. [06845335]. https://doi.org/10.1109/TCYB.2014.2329673
