Online Learning for Wearable EEG-Based Emotion Classification

Sidratul Moontaha et al. Sensors (Basel). 2023.

Abstract

Giving emotional intelligence to machines can facilitate the early detection and prediction of mental diseases and their symptoms. Electroencephalography (EEG)-based emotion recognition is widely applied because it measures electrical correlates directly from the brain rather than indirect measurements of other physiological responses initiated by the brain. Therefore, we used non-invasive and portable EEG sensors to develop a real-time emotion classification pipeline. The pipeline trains separate binary classifiers for the Valence and Arousal dimensions from an incoming EEG data stream, achieving a 23.9% (Arousal) and 25.8% (Valence) higher F1-Score on the state-of-the-art AMIGOS dataset than previous work. Afterward, the pipeline was applied to a dataset curated from 15 participants who watched 16 short emotional videos in a controlled environment while wearing one of two consumer-grade EEG devices. Mean F1-Scores of 87% (Arousal) and 82% (Valence) were achieved in an immediate label setting. Additionally, the pipeline proved fast enough to deliver predictions in real time in a live scenario with delayed labels while being continuously updated. The significant discrepancy between these scores and those obtained with readily available labels motivates future work that includes more data. Thereafter, the pipeline is ready to be used for real-time applications of emotion classification.
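
As a rough illustration (not code from the paper), such a streaming setup could be sketched with the river online-learning library, which implements the Adaptive Random Forest (ARF) classifier referenced in the figures below; the feature dictionaries, boolean labels (high = True), and all names here are placeholder assumptions.

    # Minimal sketch: two independent online binary classifiers, one per
    # affect dimension, updated sample by sample from a feature stream.
    from river import forest, metrics

    models = {
        "valence": forest.ARFClassifier(n_models=4, seed=1),  # Adaptive Random Forest
        "arousal": forest.ARFClassifier(n_models=4, seed=1),
    }
    f1 = {dim: metrics.F1() for dim in models}

    def process(features: dict, labels: dict) -> dict:
        """Test-then-train: predict first, then update with the true label."""
        preds = {}
        for dim, model in models.items():
            y_pred = model.predict_one(features)
            if y_pred is not None:            # no prediction before the first update
                f1[dim].update(labels[dim], y_pred)
            model.learn_one(features, labels[dim])
            preds[dim] = y_pred
        return preds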

Keywords: AMIGOS dataset; emotion classification; online learning; PsychoPy experiments; real-time; wearable EEG (Muse and Neurosity Crown).

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1

Different electrode positions according to the international 10–20 system for the EEG devices used in Dataset I (a) and in Datasets II and III (b,c). Sensor locations are marked in blue; references are in orange.

Figure 2

Two consumer-grade EEG devices with integrated electrodes used in the experiments.

Figure 3

Screenshots from the PsychoPy [48] setup of self-assessment questions. (a) Partial PANAS questionnaire with five different levels represented by clickable radio buttons (in red), with the levels’ explanation on top; (b) AS for valence displayed on top and the slider for arousal on the bottom.
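
As a hypothetical illustration of the self-assessment screen in (a), not the authors' actual script, a five-level radio-button scale could be built with PsychoPy's Slider; the window size, item text, and anchor labels below are invented for the example.

    # Sketch of a PANAS-style radio-button question in PsychoPy.
    from psychopy import visual

    win = visual.Window(size=(1024, 768), color="white")
    item = visual.TextStim(win, text="Interested", color="black", pos=(0, 0.3))
    scale = visual.Slider(
        win, ticks=(1, 2, 3, 4, 5), granularity=1, style="radio",
        labels=("very slightly", "a little", "moderately", "quite a bit", "extremely"),
        pos=(0, -0.2), color="red",  # clickable radio buttons in red, as in (a)
    )
    while scale.getRating() is None:  # redraw until a level is clicked
        item.draw()
        scale.draw()
        win.flip()
    rating = scale.getRating()        # selected level, 1-5
    win.close()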

Figure 4

Experimental setup for curating Dataset II. The participants first watched a relaxation video and then eight videos, two from each dimension category, while wearing one of the two devices. Between the eight videos, they answered the AS sliders and rated their familiarity with the video.

Figure 5

Experimental setup for curating Dataset III. In the first session, the participants first watched a relaxation video and then eight videos, two from each dimension category, while wearing one of the two devices. Between the eight videos, they answered the AS sliders, rated their familiarity with the video, and were then shown the actual AS label. In the second session, they watched the same set of videos while the prediction was made available to the experimenter before the delayed label arrived.

Figure 6

Overview of pipeline steps for affect classification. The top gray rectangle shows the pipeline steps employed in an immediate label setting with prerecorded data. For each extracted feature vector, the model (1) first predicts its label and is then (2) updated with the true label for that sample. In the live setting, the model is not updated after every prediction, because the true label of a video only becomes available after the stimulus has ended. The timestamp of the video is matched against the samples’ timestamps to find all samples that fall into the corresponding time frame and to update the model with their true labels (shown in dotted lines).
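
A minimal sketch of this delayed-label flow, assuming a model with river-style predict_one/learn_one methods; the buffering and timestamp matching below are illustrative, not the paper's implementation.

    # Predict on every sample immediately; update only once a video's label arrives.
    buffer = []  # (timestamp, feature_vector) pairs awaiting their true label

    def on_sample(t, x, model):
        y_pred = model.predict_one(x)  # (1) classify the incoming sample
        buffer.append((t, x))          # keep it until its label becomes available
        return y_pred

    def on_label(t_start, t_end, y_true, model):
        # (2) after the stimulus ends: match timestamps against the video's
        # time frame and update the model with the true label of those samples.
        global buffer
        for _, x in [(t, x) for (t, x) in buffer if t_start <= t <= t_end]:
            model.learn_one(x, y_true)
        buffer = [(t, x) for (t, x) in buffer if not (t_start <= t <= t_end)]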

Figure 7

The incoming data stream is processed in tumbling windows (gray rectangles). One window includes all samples x_i, x_{i+1}, … arriving during a specified time period, e.g., 1 s. The pipeline extracts one feature vector, F_i, per window. Windows during a stimulus (video) are marked in dark gray. Participants rated each video with one label per affect dimension, Y_j. All feature vectors extracted from windows that fall into the time frame of a video (between t_start and t_end of that video) receive a label y_i corresponding to the reported label, Y_j, of that video. If possible, the windows are aligned with the end of the stimulus; otherwise, all windows that lie completely inside a video’s time range are considered.
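
The window-to-label matching described here is simple to express in code; the following is a sketch under the caption's assumptions (tumbling windows, labels only for windows completely inside the stimulus), with invented names.

    def label_windows(window_starts, window_len, t_start, t_end, video_label):
        """Assign the video's reported label Y_j to every window that lies
        completely inside the video's [t_start, t_end] time range."""
        return {
            w: video_label
            for w in window_starts
            if w >= t_start and w + window_len <= t_end
        }

    # e.g., 1 s tumbling windows over a stream, video playing from t=10 s to t=40 s:
    labeled = label_windows(range(0, 60), 1, t_start=10, t_end=40, video_label="high")
    # windows starting at 10 s ... 39 s receive the label "high"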

Figure 8

(a) Progressive validation incorporated into the basic flow of the training process (‘test-then-train’) of an online classifier in an immediate label setting. (x_i, y_i) represents an input feature vector and its corresponding label. (b) Evaluation incorporated into the basic flow of the training process of an online classifier when labels arrive delayed (i ≥ j).
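
In code, the ‘test-then-train’ loop of (a) is only a few lines; this sketch again assumes the river interface and uses placeholder data.

    from river import forest, metrics

    # Placeholder stream of (feature vector, label) pairs; True stands for "high".
    stream = [
        ({"alpha_power": 0.7, "beta_power": 0.2}, True),
        ({"alpha_power": 0.1, "beta_power": 0.9}, False),
    ]

    model = forest.ARFClassifier(n_models=4, seed=1)
    f1 = metrics.F1()
    for x_i, y_i in stream:
        y_pred = model.predict_one(x_i)  # test on the sample first...
        if y_pred is not None:
            f1.update(y_i, y_pred)
        model.learn_one(x_i, y_i)        # ...then train on it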

Figure 9

F1-Score for Valence and Arousal classification achieved by ARF and SRP per subject from Dataset I.

Figure 10

Mean F1-Score achieved by ARF, SRP, and LR over the whole dataset for both affect dimensions with respect to window length.

Figure 11

Confusion matrices for the live affect classification (Dataset III, part 2). Employed model: ARF (four trees), window length = 1 s. Recall was calculated only for the low class for both models.
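
For reference, per-class recall can be read off a confusion matrix as TP / (TP + FN) for that class; the counts below are made-up numbers, not results from the paper.

    def recall_for_class(cm, cls):
        """cm[true][pred] holds counts; recall = TP / (TP + FN) for `cls`."""
        tp = cm[cls][cls]
        fn = sum(n for pred, n in cm[cls].items() if pred != cls)
        return tp / (tp + fn) if (tp + fn) else 0.0

    cm = {"low": {"low": 40, "high": 10}, "high": {"low": 15, "high": 35}}
    print(recall_for_class(cm, "low"))  # 0.8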

References

    1. Picard R.W. Affective Computing. The MIT Press; Cambridge, MA, USA: 2000. - DOI
    2. Cowie R., Douglas-Cowie E., Tsapatsoulis N., Votsis G., Kollias S., Fellenz W., Taylor J. Emotion recognition in human–computer interaction. IEEE Signal Process. Mag. 2001;18:32–80. doi: 10.1109/79.911197. - DOI
    3. Haut S.R., Hall C.B., Borkowski T., Tennen H., Lipton R.B. Clinical features of the pre-ictal state: Mood changes and premonitory symptoms. Epilepsy Behav. 2012;23:415–421. doi: 10.1016/j.yebeh.2012.02.007. - DOI - PubMed
    4. Kocielnik R., Sidorova N., Maggi F.M., Ouwerkerk M., Westerink J.H.D.M. Smart technologies for long-term stress monitoring at work. Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems (CBMS); Porto, Portugal, 20–22 June 2013; pp. 53–58. - DOI
    5. Schulze-Bonhage A., Kurth C., Carius A., Steinhoff B.J., Mayer T. Seizure anticipation by patients with focal and generalized epilepsy: A multicentre assessment of premonitory symptoms. Epilepsy Res. 2006;70:83–88. doi: 10.1016/j.eplepsyres.2006.02.001. - DOI - PubMed

Grants and funding

This research was (partially) funded by the Hasso-Plattner Institute Research School on Data Science and Engineering. The publication costs were covered by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project number 491466077.
