TY - GEN
T1 - BigEAR
T2 - 1st IEEE International Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2016
AU - Dubey, Harishchandra
AU - Mehl, Matthias R.
AU - Mankodiya, Kunal
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/8/16
Y1 - 2016/8/16
N2 - This paper presents BigEAR, a novel big data framework that employs a psychological audio processing chain (PAPC) to process smartphone-based acoustic big data collected while the user engages in social conversations in naturalistic scenarios. The overarching goal of BigEAR is to identify the moods of the wearer from various activities such as laughing, singing, crying, arguing, and sighing. These annotations are based on ground truth relevant for psychologists who intend to monitor or infer the social context of individuals coping with breast cancer. We pursued a case study on couples coping with breast cancer to understand how their conversations affect emotional and social well-being. In state-of-the-art methods, psychologists and their teams have to listen to the audio recordings and make these inferences through subjective evaluations that are not only time-consuming and costly but also demand manual data coding for thousands of audio files. The BigEAR framework automates this audio analysis. We computed the accuracy of BigEAR with respect to ground truth obtained from a human rater. Our approach yielded an overall average accuracy of 88.76% on real-world data from couples coping with breast cancer.
AB - This paper presents BigEAR, a novel big data framework that employs a psychological audio processing chain (PAPC) to process smartphone-based acoustic big data collected while the user engages in social conversations in naturalistic scenarios. The overarching goal of BigEAR is to identify the moods of the wearer from various activities such as laughing, singing, crying, arguing, and sighing. These annotations are based on ground truth relevant for psychologists who intend to monitor or infer the social context of individuals coping with breast cancer. We pursued a case study on couples coping with breast cancer to understand how their conversations affect emotional and social well-being. In state-of-the-art methods, psychologists and their teams have to listen to the audio recordings and make these inferences through subjective evaluations that are not only time-consuming and costly but also demand manual data coding for thousands of audio files. The BigEAR framework automates this audio analysis. We computed the accuracy of BigEAR with respect to ground truth obtained from a human rater. Our approach yielded an overall average accuracy of 88.76% on real-world data from couples coping with breast cancer.
UR - http://www.scopus.com/inward/record.url?scp=84987623787&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84987623787&partnerID=8YFLogxK
U2 - 10.1109/CHASE.2016.46
DO - 10.1109/CHASE.2016.46
M3 - Conference contribution
AN - SCOPUS:84987623787
T3 - Proceedings - 2016 IEEE 1st International Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2016
SP - 78
EP - 83
BT - Proceedings - 2016 IEEE 1st International Conference on Connected Health: Applications, Systems and Engineering Technologies, CHASE 2016
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 27 June 2016 through 29 June 2016
ER -