Criterion noise in ratings-based recognition: Evidence from the effects of response scale length on recognition accuracy

Aaron S. Benjamin, Jonathan G. Tullis, Ji Hae Lee

Research output: Contribution to journal › Article › peer-review

35 Scopus citations

Abstract

Rating scales are a standard measurement tool in psychological research. However, research has suggested that the cognitive burden involved in maintaining the criteria used to parcel subjective evidence into ratings introduces decision noise and affects estimates of performance in the underlying task. There has been debate over whether such decision noise is evident in recognition, with some authors arguing that it is substantial and others arguing that it is trivial or nonexistent. Here we directly assess the presence of decision noise by evaluating whether the length of a rating scale on which recognition judgments are provided is inversely related to performance on the recognition task. That prediction was confirmed: Rating scales with more options led to lower estimates of recognition than did scales with fewer options. This result supports the claim that decision noise contributes to recognition judgments and additionally suggests that caution is warranted when using rating scales more generally.
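The abstract's prediction can be illustrated with a minimal equal-variance signal-detection simulation. This is not the paper's fitted model; the noise structure (criterion jitter whose standard deviation grows with the number of criteria that must be maintained) and all parameter values are illustrative assumptions chosen to show the qualitative effect.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
D_PRIME = 1.5       # true sensitivity (equal-variance SDT), assumed
N_TRIALS = 200_000
Z = NormalDist().inv_cdf

def estimated_dprime(n_options, noise_per_criterion=0.15, spacing=0.5):
    """Estimated d' when responses are made on an n_options-point scale.

    Illustrative assumption: the trial-to-trial jitter on every criterion
    has sd (n_options - 1) * noise_per_criterion, i.e. maintaining more
    criteria makes each one noisier.
    """
    crit_sd = noise_per_criterion * (n_options - 1)
    midpoint = D_PRIME / 2.0
    # evenly spaced criteria centred between the two evidence distributions
    base = midpoint + (np.arange(1, n_options) - n_options / 2.0) * spacing
    new = rng.normal(0.0, 1.0, N_TRIALS)      # evidence for unstudied items
    old = rng.normal(D_PRIME, 1.0, N_TRIALS)  # evidence for studied items

    def rate(evidence):
        jitter = rng.normal(0.0, crit_sd, (N_TRIALS, n_options - 1))
        crits = np.sort(base + jitter, axis=1)  # keep criteria ordered
        # rating = number of criteria the evidence exceeds
        return (evidence[:, None] > crits).sum(axis=1)

    # binarize: the upper half of the scale counts as an "old" call
    hits = float((rate(old) >= n_options / 2).mean())
    fas = float((rate(new) >= n_options / 2).mean())
    return Z(hits) - Z(fas)

# More response options -> noisier criteria -> lower estimated accuracy,
# even though true sensitivity (D_PRIME) is identical in both conditions.
print(estimated_dprime(2), estimated_dprime(8))
```

Under these assumptions the 8-point scale yields a visibly lower d' estimate than the 2-point scale, mirroring the direction of the effect the abstract reports.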

Original language: English (US)
Pages (from-to): 1601-1608
Number of pages: 8
Journal: Journal of Experimental Psychology: Learning, Memory, and Cognition
Volume: 39
Issue number: 5
DOIs
State: Published - Sep 2013
Externally published: Yes

Keywords

  • Criteria
  • Criterion noise
  • Decision noise
  • Ratings
  • Recognition

ASJC Scopus subject areas

  • Language and Linguistics
  • Experimental and Cognitive Psychology
  • Linguistics and Language
