Abstract
A model is described in which the effects of the articulatory movements that produce speech are generated by specifying relative acoustic events along a time axis. These events consist of directional changes of the vocal tract resonance frequencies that, when associated with a temporal event function, are transformed via acoustic sensitivity functions into time-varying modulations of the vocal tract shape. Because the events may overlap considerably in time, coarticulatory effects are generated automatically. Production of sentence-level speech with the model is demonstrated with audio samples and vocal tract animations.
Field | Value
---|---
Original language | English (US)
Pages (from-to) | 2522-2528
Number of pages | 7
Journal | Journal of the Acoustical Society of America
Volume | 146
Issue number | 4
DOIs |
State | Published - Oct 1 2019
ASJC Scopus subject areas
- Arts and Humanities (miscellaneous)
- Acoustics and Ultrasonics