Abstract
In conversational speech, phones and entire syllables are often missing. This can make “he’s” and “he was” homophonous, both realized, for example, as [ɨz]. Similarly, “you’re” and “you were” can both be realized as [jɚ], [ɨ], etc. We investigated what types of information native listeners use to perceive such verb tenses. Possible types included acoustic cues in the phrase (e.g., in “he was”), the rate of the surrounding speech, and syntactic and semantic information in the utterance, such as the presence of time adverbs like “yesterday” or of other tensed verbs. We extracted utterances such as “So they’re gonna have like a random roommate” and “And he was like, ‘What’s wrong?!’” from recordings of spontaneous conversations. We presented parts of these utterances to listeners, in either a written or an auditory modality, to determine which types of information facilitated listeners’ comprehension. We found that listeners rely primarily on acoustic cues in or near the target words rather than on meaning and syntactic information in the context. While contextual information also improves comprehension in some conditions, the acoustic cues in the target itself are strong enough to reverse the percept that listeners gain from all other information together. Thus, acoustic cues override other information in the comprehension of reduced productions in conversational speech.
| Original language | English (US) |
| --- | --- |
| Article number | 930 |
| Journal | Brain Sciences |
| Volume | 12 |
| Issue number | 7 |
| DOIs | |
| State | Published - Jul 2022 |
Keywords
- acoustic cues
- comprehension
- context
- conversation
- reduced speech
ASJC Scopus subject areas
- General Neuroscience