When Machine and Bandwagon Heuristics Compete: Understanding Users' Response to Conflicting AI and Crowdsourced Fact-Checking

John A. Banas, Nicholas A. Palomares, Adam S. Richards, David M. Keating, Nick Joyce, Stephen A. Rains

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

Three experiments tested whether the machine and bandwagon heuristics moderate beliefs in fact-checked claims under different conditions of human/machine (dis)agreement and of transparency of the fact-checking system. Across experiments, people were more likely to align their belief in the claim when artificial intelligence (AI) and crowdsourcing agents' fact-checks were congruent rather than incongruent. The heuristics added further nuance to these processes, especially when a particular agent offered a truth verdict. That is, people with stronger belief in the machine heuristic were more likely to judge the claim as true when an AI agent's fact-check suggested the claim was likely true but not false; likewise, people with stronger belief in the bandwagon heuristic were more likely to judge the claim as true when the crowdsourcing agent's fact-check suggested the claim was likely true but not false. Making the system more transparent to users did not appear to change the results.

Original language: English (US)
Pages (from-to): 430-461
Number of pages: 32
Journal: Human Communication Research
Volume: 48
Issue number: 3
DOIs
State: Published - Jul 1 2022

Keywords

  • Agency affordances
  • AI
  • Crowdsource
  • Fact-checking
  • Heuristics
  • Social cognition

ASJC Scopus subject areas

  • Communication
  • Developmental and Educational Psychology
  • Anthropology
  • Linguistics and Language
