Explainable Deep Learning for False Information Identification: An Argumentation Theory Approach

Kyuhan Lee, Sudha Ram

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

In today’s world, where online information is proliferating in an unprecedented way, a significant challenge is whether to believe the information we encounter. Ironically, this flood of information provides us with an opportunity to combat false claims by understanding their nature. That is, with the help of machine learning, it is now possible to effectively capture the characteristics of false information by analyzing massive amounts of false claims published online. These methods, however, have neglected the nature of human argumentation, delegating the process of inferring the truth to the black box of neural networks. This has created several challenges (namely, latent text representations containing entangled syntactic and semantic information, irrelevant parts of text being considered when abstracting text as a latent vector, and counterintuitive model explanations). To resolve these issues, based on Toulmin’s model of argumentation, we propose a computational framework that helps machine learning for false information identification (FII) understand the connection between a claim (whose veracity needs to be verified) and evidence (which contains information to support or refute the claim). Specifically, we first build a word network of a claim and evidence reflecting their syntax and convert it into a signed word network using their semantics. The structural balance of this word network is then calculated as a proxy metric to determine the consistency between a claim and evidence. The consistency level is fed into machine learning as input, providing information for verifying claim veracity and explaining the model’s decision making. Two experiments testing model performance and explainability reveal that our framework achieves stronger performance and better explainability, outperforming cutting-edge methods and showing positive effects on human task performance, trust in algorithms, and confidence in decision making. Our results shed new light on the growing field of automated FII.
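To make the structural-balance idea concrete, below is a minimal, hypothetical sketch (not the paper's implementation) of the consistency proxy described in the abstract: given a signed word network where edges carry +1 (semantic agreement) or -1 (semantic opposition), a triangle is balanced when the product of its edge signs is positive, and the fraction of balanced triangles can serve as a claim-evidence consistency score. The example words and edge signs are invented for illustration.

```python
from itertools import combinations

def balance_score(nodes, signed_edges):
    """Fraction of balanced triangles in an undirected signed graph.

    signed_edges: dict mapping frozenset({u, v}) -> +1 or -1.
    A triangle is balanced if the product of its three edge signs is +1
    (structural balance theory).
    """
    balanced = total = 0
    for a, b, c in combinations(sorted(nodes), 3):
        e1 = signed_edges.get(frozenset((a, b)))
        e2 = signed_edges.get(frozenset((b, c)))
        e3 = signed_edges.get(frozenset((a, c)))
        if None in (e1, e2, e3):
            continue  # these three words do not form a triangle
        total += 1
        if e1 * e2 * e3 > 0:
            balanced += 1
    return balanced / total if total else 0.0

# Toy claim/evidence word network (signs invented for illustration)
nodes = ["vaccine", "safe", "risky", "approved"]
edges = {
    frozenset(("vaccine", "safe")): +1,
    frozenset(("vaccine", "risky")): -1,
    frozenset(("safe", "risky")): -1,
    frozenset(("vaccine", "approved")): +1,
    frozenset(("safe", "approved")): +1,
    frozenset(("risky", "approved")): +1,
}
print(balance_score(nodes, edges))  # 0.5: 2 of 4 triangles are balanced
```

A higher score would indicate that the claim's and evidence's word relations are mutually consistent; a low score flags semantic tension between them, which the framework feeds to the classifier as an interpretable input feature.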

Original language: English (US)
Pages (from-to): 890-907
Number of pages: 18
Journal: Information Systems Research
Volume: 35
Issue number: 2
DOIs
State: Published - Jun 2024

Keywords

  • argumentation theory
  • explainable deep learning
  • fake news
  • false information
  • machine learning
  • natural language processing
  • structural balance theory

ASJC Scopus subject areas

  • Management Information Systems
  • Information Systems
  • Computer Networks and Communications
  • Information Systems and Management
  • Library and Information Sciences

