An empirical evaluation of tinyML architectures for Class-Incremental Continual Learning

Dalle Pezze D.; Susto G. A.
2024

Abstract

Neural networks excel at real-world tasks, yet their computational demands often confine them to cloud platforms. Recent literature has responded with compute-efficient neural architectures for edge devices (e.g., microcontroller units). Nonetheless, the proliferation of edge devices makes it increasingly important to address the dynamic nature of real-world environments, where models must adapt to shifting data distributions and integrate new information without forgetting old knowledge. Continual Learning (CL) addresses this issue by enabling models to learn new tasks sequentially while retaining knowledge from previous ones. In this paper, we study how efficient neural networks perform on the task of Class-Incremental Continual Learning. In particular, we evaluate the PhiNets architecture family on the well-established CORe50 and CIFAR-10 benchmarks and present a feasibility study of Latent Replay on edge devices. In terms of performance, PhiNet models outperform MobileNet architectures on the CIFAR-10 dataset, achieving 4.47% higher accuracy. Remarkably, PhiNet reaches this accuracy while using only 0.012% of the computation required by MobileNet. This attests not only to its superior performance but also to its substantial computational efficiency, affirming the feasibility of deploying PhiNet models in real-world applications.
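
The Latent Replay feasibility study mentioned in the abstract builds on a known idea from the CL literature: instead of storing raw past images, the network keeps a small buffer of intermediate activations ("latents"), freezes the layers below a chosen replay layer, and retrains only the layers above it, mixing fresh and replayed latents in each experience. The following is a minimal PyTorch-style sketch of that mechanism; the frontend/head split, the fill-to-capacity buffer policy, and the buffer_cap value are illustrative assumptions, not the configuration used in the paper.

    # Minimal sketch of Latent Replay for class-incremental learning.
    # The layer split and buffer policy are hypothetical placeholders.
    import random
    import torch
    import torch.nn as nn

    class LatentReplayNet(nn.Module):
        """Backbone split into a frozen feature extractor and a trainable head."""
        def __init__(self, frontend: nn.Module, head: nn.Module):
            super().__init__()
            self.frontend = frontend  # layers below the replay layer (frozen)
            self.head = head          # layers above it (updated each experience)
            for p in self.frontend.parameters():
                p.requires_grad = False

        def forward(self, x):
            with torch.no_grad():
                z = self.frontend(x)  # latent activations at the replay layer
            return self.head(z), z

    def train_experience(model, loader, buffer, optimizer, buffer_cap=1500):
        """Train on one experience while replaying stored latents of old classes."""
        criterion = nn.CrossEntropyLoss()
        for x, y in loader:
            logits, z = model(x)
            loss = criterion(logits, y)
            if buffer:  # mix replayed latents of past classes into the loss
                zs, ys = zip(*random.sample(buffer, min(len(buffer), len(y))))
                loss = loss + criterion(model.head(torch.stack(zs)),
                                        torch.tensor(ys))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Store new latents for future replay (simple fill-to-capacity policy).
            for zi, yi in zip(z, y):
                if len(buffer) < buffer_cap:
                    buffer.append((zi, int(yi)))

In this sketch the optimizer would be built on model.head.parameters() only, since the frontend is frozen; storing latents rather than images is what makes replay plausible on memory-constrained edge devices, which is the scenario the paper's feasibility study targets.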
2024 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events, PerCom Workshops 2024

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3531207
Citations
  • PMC: n/a
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: n/a