TY - GEN
T1 - Quantized Transformer Language Model Implementations on Edge Devices
AU - Ur Rahman, Mohammad Wali
AU - Abrar, Murad Mehrab
AU - Copening, Hunter Gibbons
AU - Hariri, Salim
AU - Shao, Sicong
AU - Satam, Pratik
AU - Salehi, Soheil
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Large-scale transformer-based models like the Bidirectional Encoder Representations from Transformers (BERT) are widely used for Natural Language Processing (NLP) applications, wherein these models are first pre-trained on a large corpus with millions of parameters and then fine-tuned for a downstream NLP task. A major limitation of these large-scale models is that they cannot be deployed on resource-constrained devices due to their large model size and increased inference latency. To overcome these limitations, such large-scale models can be converted to an optimized FlatBuffer format tailored for deployment on resource-constrained edge devices. Herein, we evaluate the performance of such FlatBuffer-transformed MobileBERT models on three different edge devices, fine-tuned for reputation analysis of English-language tweets in the RepLab 2013 dataset. In addition, we evaluate the deployed models in terms of their latency, performance, and resource efficiency. Our experimental results show that, compared to the original BERT Large model, the converted and quantized MobileBERT models have 160x smaller footprints for a 4.1% drop in accuracy, while analyzing at least one tweet per second on edge devices. Furthermore, our study highlights the privacy-preserving aspect of TinyML systems, as all data is processed locally within a serverless environment.
AB - Large-scale transformer-based models like the Bidirectional Encoder Representations from Transformers (BERT) are widely used for Natural Language Processing (NLP) applications, wherein these models are first pre-trained on a large corpus with millions of parameters and then fine-tuned for a downstream NLP task. A major limitation of these large-scale models is that they cannot be deployed on resource-constrained devices due to their large model size and increased inference latency. To overcome these limitations, such large-scale models can be converted to an optimized FlatBuffer format tailored for deployment on resource-constrained edge devices. Herein, we evaluate the performance of such FlatBuffer-transformed MobileBERT models on three different edge devices, fine-tuned for reputation analysis of English-language tweets in the RepLab 2013 dataset. In addition, we evaluate the deployed models in terms of their latency, performance, and resource efficiency. Our experimental results show that, compared to the original BERT Large model, the converted and quantized MobileBERT models have 160x smaller footprints for a 4.1% drop in accuracy, while analyzing at least one tweet per second on edge devices. Furthermore, our study highlights the privacy-preserving aspect of TinyML systems, as all data is processed locally within a serverless environment.
KW - BERT
KW - Embedded Systems
KW - Machine Learning
KW - Natural Language Processing
KW - Privacy
KW - Reputation Polarity
KW - Social Media
KW - TinyML
KW - IoT
UR - http://www.scopus.com/inward/record.url?scp=85190123323&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85190123323&partnerID=8YFLogxK
U2 - 10.1109/ICMLA58977.2023.00104
DO - 10.1109/ICMLA58977.2023.00104
M3 - Conference contribution
AN - SCOPUS:85190123323
T3 - Proceedings - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
SP - 709
EP - 716
BT - Proceedings - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
A2 - Arif Wani, M.
A2 - Boicu, Mihai
A2 - Sayed-Mouchaweh, Moamar
A2 - Abreu, Pedro Henriques
A2 - Gama, Joao
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
Y2 - 15 December 2023 through 17 December 2023
ER -
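
The abstract above describes converting a fine-tuned MobileBERT model into a quantized FlatBuffer (TensorFlow Lite) format for inference on edge devices. The following is a minimal sketch of that kind of conversion and a single test inference, assuming a TensorFlow SavedModel of a fine-tuned MobileBERT tweet classifier at a hypothetical path; the paper's actual conversion pipeline, quantization scheme, and model paths are not specified in this record.

# Sketch: convert a fine-tuned MobileBERT SavedModel to a quantized
# TensorFlow Lite FlatBuffer (post-training dynamic-range quantization),
# then run one inference with the TFLite interpreter. Paths, tensor
# shapes, and names are illustrative assumptions, not from the paper.
import numpy as np
import tensorflow as tf

SAVED_MODEL_DIR = "mobilebert_replab_classifier"   # hypothetical path
TFLITE_PATH = "mobilebert_replab_quant.tflite"     # hypothetical output file

# Convert to the FlatBuffer format with dynamic-range quantization.
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open(TFLITE_PATH, "wb") as f:
    f.write(tflite_model)

# Load the quantized model and run a single dummy input through it.
interpreter = tf.lite.Interpreter(model_path=TFLITE_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed zeroed tensors matching each declared input (assumes fixed input
# shapes, e.g. [1, 128] token IDs/masks); a real pipeline would tokenize
# the tweet with the MobileBERT tokenizer first.
for detail in input_details:
    dummy = np.zeros(detail["shape"], dtype=detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)

interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]["index"])
print("Predicted reputation-polarity class:", int(np.argmax(logits, axis=-1)[0]))

Dynamic-range quantization is only one option; full integer quantization with a representative dataset would shrink the model further, but this record does not state which scheme the authors used.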