TY - JOUR
T1 - Time-Varying Quasi-Closed-Phase Analysis for Accurate Formant Tracking in Speech Signals
AU - Gowda, Dhananjaya
AU - Kadiri, Sudarsana Reddy
AU - Story, Brad
AU - Alku, Paavo
N1 - Funding Information:
Manuscript received March 4, 2019; revised November 1, 2019 and May 4, 2020; accepted May 21, 2020. Date of publication June 4, 2020; date of current version June 29, 2020. This work was supported by the Academy of Finland (Projects 284671 and 312490). The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Lei Xie. (Corresponding author: Sudarsana Reddy Kadiri.) Dhananjaya Gowda was with Aalto University, 02150 Espoo, Finland. He is now with Samsung Research, Seoul R&D Campus, Seoul 06765, Republic of Korea (e-mail: njaygowda@gmail.com).
Publisher Copyright:
© 2014 IEEE.
PY - 2020
Y1 - 2020
N2 - In this paper, we propose a new method for the accurate estimation and tracking of formants in speech signals using time-varying quasi-closed-phase (TVQCP) analysis. Conventional formant tracking methods typically adopt a two-stage estimate-and-track strategy wherein an initial set of formant candidates is estimated using short-time analysis (e.g., 10-50 ms), followed by a tracking stage based on dynamic programming or a linear state-space model. One of the main disadvantages of these approaches is that the tracking stage, however good it may be, cannot improve upon the formant estimation accuracy of the first stage. The proposed TVQCP method provides single-stage formant tracking that combines the estimation and tracking stages into one. TVQCP analysis combines three approaches to improve formant estimation and tracking: (1) it uses temporally weighted quasi-closed-phase analysis to derive closed-phase estimates of the vocal tract with reduced interference from the excitation source, (2) it increases residual sparsity by using L1 optimization, and (3) it uses time-varying linear prediction analysis over long time windows (e.g., 100-200 ms) to impose a continuity constraint on the vocal tract model and hence on the formant trajectories. Formant tracking experiments with a wide variety of synthetic and natural speech signals show that the proposed TVQCP method performs better than conventional and popular formant tracking tools, such as Wavesurfer and Praat (based on dynamic programming), the KARMA algorithm (based on Kalman filtering), and DeepFormants (based on deep neural networks trained in a supervised manner). Matlab scripts for the proposed method can be found at: https://github.com/njaygowda/ftrack
AB - In this paper, we propose a new method for the accurate estimation and tracking of formants in speech signals using time-varying quasi-closed-phase (TVQCP) analysis. Conventional formant tracking methods typically adopt a two-stage estimate-and-track strategy wherein an initial set of formant candidates is estimated using short-time analysis (e.g., 10-50 ms), followed by a tracking stage based on dynamic programming or a linear state-space model. One of the main disadvantages of these approaches is that the tracking stage, however good it may be, cannot improve upon the formant estimation accuracy of the first stage. The proposed TVQCP method provides single-stage formant tracking that combines the estimation and tracking stages into one. TVQCP analysis combines three approaches to improve formant estimation and tracking: (1) it uses temporally weighted quasi-closed-phase analysis to derive closed-phase estimates of the vocal tract with reduced interference from the excitation source, (2) it increases residual sparsity by using L1 optimization, and (3) it uses time-varying linear prediction analysis over long time windows (e.g., 100-200 ms) to impose a continuity constraint on the vocal tract model and hence on the formant trajectories. Formant tracking experiments with a wide variety of synthetic and natural speech signals show that the proposed TVQCP method performs better than conventional and popular formant tracking tools, such as Wavesurfer and Praat (based on dynamic programming), the KARMA algorithm (based on Kalman filtering), and DeepFormants (based on deep neural networks trained in a supervised manner). Matlab scripts for the proposed method can be found at: https://github.com/njaygowda/ftrack
KW - Time-varying linear prediction
KW - formant tracking
KW - quasi-closed-phase analysis
KW - weighted linear prediction
UR - http://www.scopus.com/inward/record.url?scp=85087774959&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85087774959&partnerID=8YFLogxK
U2 - 10.1109/TASLP.2020.3000037
DO - 10.1109/TASLP.2020.3000037
M3 - Article
AN - SCOPUS:85087774959
SN - 2329-9290
VL - 28
SP - 1901
EP - 1914
JO - IEEE/ACM Transactions on Audio, Speech, and Language Processing
JF - IEEE/ACM Transactions on Audio, Speech, and Language Processing
M1 - 9108548
ER -