Quantized Transformer Language Model Implementations on Edge Devices

Mohammad Wali Ur Rahman, Murad Mehrab Abrar, Hunter Gibbons Copening, Salim Hariri, Sicong Shao, Pratik Satam, Soheil Salehi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Large-scale transformer-based models like the Bidirectional Encoder Representations from Transformers (BERT) are widely used for Natural Language Processing (NLP) applications, wherein these models, with millions of parameters, are first pre-trained on a large corpus and then fine-tuned for a downstream NLP task. One of the major limitations of these large-scale models is that they cannot be deployed on resource-constrained devices due to their large model size and high inference latency. To overcome these limitations, such large-scale models can be converted to an optimized FlatBuffer format tailored for deployment on resource-constrained edge devices. Herein, we evaluate the performance of such FlatBuffer-converted MobileBERT models on three different edge devices, fine-tuned for reputation analysis of English-language tweets in the RepLab 2013 dataset. In addition, we assess the latency, performance, and resource efficiency of the deployed models. Our experimental results show that, compared to the original BERT Large model, the converted and quantized MobileBERT models have 160x smaller footprints for a 4.1% drop in accuracy, while analyzing at least one tweet per second on edge devices. Furthermore, our study highlights the privacy-preserving aspect of TinyML systems, as all data is processed locally within a serverless environment.
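
The FlatBuffer conversion and quantization described above corresponds to TensorFlow Lite's export path with post-training quantization. The sketch below is a minimal, hypothetical illustration of that kind of pipeline, not the authors' exact code; the SavedModel path, sequence length, label handling, and input layout are assumptions made for illustration.

    # Hypothetical sketch: convert a fine-tuned MobileBERT classifier to a
    # quantized TFLite FlatBuffer and run it with the TFLite interpreter.
    # Paths and shapes are illustrative assumptions, not the paper's setup.
    import numpy as np
    import tensorflow as tf

    SAVED_MODEL_DIR = "mobilebert_replab_finetuned"  # assumed fine-tuned SavedModel
    MAX_SEQ_LEN = 128                                # assumed maximum sequence length

    # Convert to a FlatBuffer with post-training dynamic-range quantization.
    converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
    tflite_model = converter.convert()
    with open("mobilebert_replab_quant.tflite", "wb") as f:
        f.write(tflite_model)

    # Run inference with the TFLite interpreter (on-device one would typically
    # use the lighter tflite_runtime package with the same Interpreter API).
    interpreter = tf.lite.Interpreter(model_path="mobilebert_replab_quant.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Dummy token IDs / attention mask / segment IDs standing in for a tokenized tweet.
    dummy = np.zeros((1, MAX_SEQ_LEN), dtype=np.int32)
    for detail in input_details:
        interpreter.set_tensor(detail["index"], dummy)

    interpreter.invoke()
    logits = interpreter.get_tensor(output_details[0]["index"])
    print("predicted reputation polarity class:", int(np.argmax(logits)))

With tf.lite.Optimize.DEFAULT and no representative dataset, the converter applies dynamic-range quantization of the weights, which is one common way to obtain the kind of footprint reduction reported in the abstract; full integer quantization would additionally require calibration data.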

Original language: English (US)
Title of host publication: Proceedings - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
Editors: M. Arif Wani, Mihai Boicu, Moamar Sayed-Mouchaweh, Pedro Henriques Abreu, Joao Gama
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 709-716
Number of pages: 8
ISBN (Electronic): 9798350345346
DOIs
State: Published - 2023
Externally published: Yes
Event: 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023 - Jacksonville, United States
Duration: Dec 15, 2023 – Dec 17, 2023

Publication series

Name: Proceedings - 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023

Conference

Conference: 22nd IEEE International Conference on Machine Learning and Applications, ICMLA 2023
Country/Territory: United States
City: Jacksonville
Period: 12/15/23 – 12/17/23

Keywords

  • BERT
  • Embedded Systems
  • Machine Learning
  • Natural Language Processing
  • Privacy
  • Reputation Polarity
  • Social Media
  • TinyML
  • IoT

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
