Are Transformers a Useful Tool for Tiny Devices in Human Activity Recognition?

Emanuele Lattanzi; Lorenzo Calisti; Chiara Contoli
2024

Abstract

Human Activity Recognition using tiny wearable devices presents unique challenges due to limited computational resources and battery life. Transformers have recently emerged as highly effective tools for time-series classification, thanks to their capacity to capture long-range dependencies and complex temporal dynamics, combined with relatively low computational complexity, which allows them to surpass more traditional architectures such as convolutional or recurrent neural networks. Unlike recurrent models, transformers do not process data sequentially; instead, they leverage an attention mechanism that weighs the importance of different parts of the input differently. This study experimentally investigates the applicability of transformer models on tiny devices by porting them to a low-power ESP32 device. Comprehensive evaluations conducted on several benchmark datasets demonstrate that transformer-based models are not viable solutions for tiny devices compared to convolutional and recurrent neural networks, achieving up to 14% lower accuracy depending on the dataset used.
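The attention mechanism the abstract refers to can be illustrated with a minimal self-attention sketch. This is an illustrative example only, not the paper's implementation: the function name, toy sensor window, and use of NumPy are assumptions for demonstration.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weigh each time step of V by the similarity between queries and keys.

    This is the core operation that lets a transformer attend to all time
    steps of an input window at once, instead of processing them sequentially.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # pairwise similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over time steps
    return weights @ V                                    # attention-weighted sum

# Toy sensor window: 4 time steps of 3-axis accelerometer-like features
x = np.random.default_rng(0).normal(size=(4, 3))
out = scaled_dot_product_attention(x, x, x)               # self-attention
print(out.shape)                                          # (4, 3)
```

Each output time step is a weighted mixture of every input time step, which is what allows transformers to capture long-range dependencies in a sensor window without recurrence.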
ISBN: 979-8-4007-1801-4

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11576/2746892