vEEGNet: learning latent representations to reconstruct EEG raw data via variational autoencoders
Alberto Zancanaro;
2024
Abstract
Electroencephalographic (EEG) data are complex multi-dimensional time-series that are useful in many different applications, ranging from the diagnosis of epilepsy to driving brain-computer interface systems. Their classification is still a challenging task, due to the inherent within- and between-subject variability as well as their low signal-to-noise ratio. The reconstruction of raw EEG data is even more difficult because of the high temporal resolution of these signals. Recent literature has proposed numerous machine and deep learning models that can classify, e.g., different types of movements with an accuracy in the range of 70% to 80% (with 4 classes). In contrast, only a limited number of works have targeted the reconstruction problem, with very limited results. In this work, we propose vEEGNet, a deep learning (DL) architecture with two modules, i.e., an unsupervised module based on a variational autoencoder (VAE) to extract a latent representation of the multi-channel EEG data, and a supervised module based on a feed-forward neural network to classify different movements. Furthermore, to build the encoder and the decoder of the VAE we exploited the well-known EEGNet network, introduced in 2016 and specifically designed to process EEG data. We implemented two slightly different architectures of vEEGNet, showing state-of-the-art classification performance and the ability to reconstruct both low-frequency and middle-range components of the raw EEG. Although preliminary, this work is promising, as we found that the reconstructed low-frequency signals are consistent with the so-called motor-related cortical potentials, i.e., very specific and well-known motor-related EEG patterns, and we improve over previous literature by also reconstructing faster EEG components. Further investigations are needed to explore the potential of vEEGNet in reconstructing the full EEG data, generating new samples, and studying the relationship between classification and reconstruction performance.
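To make the two-module design described above concrete, the following is a minimal, hypothetical PyTorch sketch of a vEEGNet-like model: an EEGNet-style convolutional encoder producing the mean and log-variance of a latent Gaussian, a decoder reconstructing the multi-channel EEG, and a small feed-forward classifier operating on the latent vector. All layer sizes, kernel shapes, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn

class VEEGNetSketch(nn.Module):
    def __init__(self, n_channels=22, n_samples=1000, latent_dim=64, n_classes=4):
        super().__init__()
        # EEGNet-style blocks: temporal convolution followed by a spatial convolution
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Flatten(),
        )
        # Infer the flattened encoder output size with a dummy forward pass
        with torch.no_grad():
            enc_out = self.encoder(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.to_mu = nn.Linear(enc_out, latent_dim)
        self.to_logvar = nn.Linear(enc_out, latent_dim)
        # Decoder: maps the latent vector back to the raw EEG shape (illustrative only)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, n_channels * n_samples),
            nn.Unflatten(1, (1, n_channels, n_samples)),
        )
        # Supervised module: feed-forward classifier on the latent representation
        self.classifier = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ELU(), nn.Linear(32, n_classes)
        )

    def forward(self, x):  # x: (batch, 1, channels, samples)
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), self.classifier(z), mu, logvar

In practice, such a model would be trained with a weighted sum of a reconstruction loss, the KL-divergence term of the VAE, and a cross-entropy loss on the classifier output; the exact loss weighting used by vEEGNet is not specified here.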