Abstract
Three experiments tested whether the machine and bandwagon heuristics moderate beliefs in fact-checked claims under different conditions of human/machine (dis)agreement and of fact-checking system transparency. Across experiments, people were more likely to align their beliefs with the fact-checks when the artificial intelligence (AI) and crowdsourcing agents' verdicts were congruent rather than incongruent. The heuristics further nuanced these processes, particularly when an agent delivered a truth verdict. That is, people with a stronger belief in the machine heuristic were more likely to judge a claim as true when an AI agent's fact-check suggested the claim was likely true, but not when it suggested the claim was false; likewise, people with a stronger belief in the bandwagon heuristic were more likely to judge a claim as true when the crowdsourcing agent fact-checked the claim as true, but not false. Making the system more transparent to users did not appear to change the results.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 430-461 |
| Number of pages | 32 |
| Journal | Human Communication Research |
| Volume | 48 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jul 1 2022 |
Keywords
- Agency affordances
- AI
- Crowdsource
- Fact-checking
- Heuristics
- Social cognition
ASJC Scopus subject areas
- Communication
- Developmental and Educational Psychology
- Anthropology
- Linguistics and Language