
Inference-Driven Window Sizing for Enhanced Human Activity Recognition in IoT Devices

Nicholas Kania, Chiara Contoli, Valerio Freschi, Emanuele Lattanzi
2025

Abstract

Human activity recognition on Internet of Things (IoT) devices has become a pivotal area of research in health monitoring, fitness tracking, and smart homes, especially with the increasing use of wearable and smart devices equipped with inertial sensors. Choosing the length of the signal window used to segment the sensor data stream is challenging. Conventional fixed-time interval methods may not guarantee the level of generalization needed to handle the heterogeneous nature of human activities and their inter- and intra-personal variability at runtime, which may result in suboptimal recognition accuracy. While the fixed-window strategy has been extensively investigated, the variable-window approach remains largely unexplored. In this paper, we propose an inference-driven method, targeting low-power devices, that varies the window size dynamically. Unlike traditional methods, the proposed technique adjusts the window length at runtime based on the classification confidence derived from model inference. We also propose an innovative validation methodology, namely leave-clustered-subjects-out, for generalization assessment. This paper describes how window-size adaptability improves the ability to recognize activities whose characteristics exhibit temporal or subject-specific variations. Our approach is validated on four publicly available datasets: MotionSense, MobiAct, and UCI-HAR (all three using smartphones), and WISDM (using smartphone and smartwatch data). Experiments show an improvement in classification accuracy of up to 9% compared to conventional fixed-window approaches, with an energy consumption overhead remaining within a factor of 2. Results also show how our approach balances the tradeoff between recognition accuracy and computational efficiency on a target low-power device.
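The core idea described in the abstract — growing the segmentation window until the classifier is confident enough in its prediction — can be illustrated with a minimal sketch. The function, its parameters (window bounds, growth step, confidence threshold), and the classifier interface below are illustrative assumptions, not the paper's exact algorithm:

```python
# Hypothetical sketch of confidence-driven adaptive window sizing.
# All names, defaults, and the classifier interface are assumptions
# for illustration; the paper's actual method may differ.
from typing import Callable, List, Tuple


def adaptive_window_classify(
    stream: List[float],
    classify: Callable[[List[float]], Tuple[str, float]],
    min_win: int = 64,
    max_win: int = 256,
    step: int = 64,
    conf_threshold: float = 0.9,
) -> Tuple[str, int]:
    """Grow the window until the model is confident or the cap is hit.

    `classify` takes a signal segment and returns (label, confidence).
    Returns the predicted label and the window size actually used.
    """
    win = min(min_win, len(stream))
    while True:
        label, conf = classify(stream[:win])
        if conf >= conf_threshold or win >= max_win or win >= len(stream):
            return label, win
        win = min(win + step, max_win, len(stream))


# Toy classifier whose confidence grows with window length,
# just to exercise the control flow.
def dummy_classify(segment: List[float]) -> Tuple[str, float]:
    return "walking", min(1.0, len(segment) / 200.0)


label, used = adaptive_window_classify(list(range(300)), dummy_classify)
# With these defaults: conf is 0.32 at 64 samples, 0.64 at 128,
# and 0.96 at 192, so the window stops growing at 192 samples.
```

The energy overhead noted in the abstract follows from this structure: a low-confidence segment triggers extra inferences on progressively longer windows, so the worst case performs several model evaluations instead of one.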
Files in this item:
Inference-Driven_Window_Sizing_for_Enhanced_Human_Activity_Recognition_in_IoT_Devices.pdf

Open access

Type: Publisher's version
License: Creative Commons
Size: 1.74 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11576/2766031
Citations
  • PMC: n/a
  • Scopus: n/a
  • Web of Science: n/a