TY - JOUR
T1 - Personalized alertness prediction using video-based ocular and facial features
AU - Subramaniyan, Manivannan
AU - Vital-Lopez, Francisco G.
AU - Doty, Tracy J.
AU - Anlap, Ian
AU - Killgore, William D.S.
AU - Reifman, Jaques
N1 - Publisher Copyright:
© Published by Oxford University Press on behalf of Sleep Research Society (SRS) 2025.
PY - 2025/11/1
Y1 - 2025/11/1
N2 - Study Objectives: Alertness impairment is generally assessed by the psychomotor vigilance test (PVT). However, performing a PVT in the real world is not practical because it is time-consuming and interrupts everyday activities. Here, we aimed to replace the PVT with passively recorded facial videos and use these measurements to make personalized alertness-impairment predictions. Methods: We retrospectively analyzed data from a 62-hour total sleep deprivation (TSD) challenge involving 26 healthy young adults (14 men), where every 3 hours they performed a 5-minute PVT followed by a 3-minute video recording of the face. We then extracted ocular and facial features from the first 1 minute of the videos, used the features to train linear mixed-effects models that predicted PVT mean reaction times, and used the predicted PVT to customize the unified model of performance (UMP) and make personalized alertness-impairment predictions for each participant. Results: For the mixed-effects models, the average root mean square error (RMSE) between the measured and predicted PVT data was 39 ms (standard deviation, 9 ms). For the personalized UMP predictions based on PVT predicted from the videos, the average RMSE between the measured PVT data and the model-predicted alertness impairment was 36 ms (standard error, 5 ms), which is nearly indistinguishable from the within-participant variability of 30 ms for PVT mean reaction time under rested conditions. Conclusions: As a proof of principle, we developed a practical approach for predicting an individual’s alertness impairment using passively recorded facial videos.
AB - Study Objectives: Alertness impairment is generally assessed by the psychomotor vigilance test (PVT). However, performing a PVT in the real world is not practical because it is time-consuming and interrupts everyday activities. Here, we aimed to replace the PVT with passively recorded facial videos and use these measurements to make personalized alertness-impairment predictions. Methods: We retrospectively analyzed data from a 62-hour total sleep deprivation (TSD) challenge involving 26 healthy young adults (14 men), where every 3 hours they performed a 5-minute PVT followed by a 3-minute video recording of the face. We then extracted ocular and facial features from the first 1 minute of the videos, used the features to train linear mixed-effects models that predicted PVT mean reaction times, and used the predicted PVT to customize the unified model of performance (UMP) and make personalized alertness-impairment predictions for each participant. Results: For the mixed-effects models, the average root mean square error (RMSE) between the measured and predicted PVT data was 39 ms (standard deviation, 9 ms). For the personalized UMP predictions based on PVT predicted from the videos, the average RMSE between the measured PVT data and the model-predicted alertness impairment was 36 ms (standard error, 5 ms), which is nearly indistinguishable from the within-participant variability of 30 ms for PVT mean reaction time under rested conditions. Conclusions: As a proof of principle, we developed a practical approach for predicting an individual’s alertness impairment using passively recorded facial videos.
KW - alertness
KW - eye blinks
KW - mathematical model
KW - psychomotor vigilance test
KW - sleep loss
KW - video recordings
UR - https://www.scopus.com/pages/publications/105021269995
U2 - 10.1093/sleep/zsaf149
DO - 10.1093/sleep/zsaf149
M3 - Article
C2 - 40457721
AN - SCOPUS:105021269995
SN - 0161-8105
VL - 48
JO - Sleep
JF - Sleep
IS - 11
M1 - zsaf149
ER -