01/2021 – 07/2021
Brain-computer interfaces (BCIs) strive to improve the quality of life of people with disabilities. In this area, electroencephalography (EEG) has become established as a key communication channel because of its non-invasive and easy setup [1–3]. Sensors placed on the user's head measure the activity of several brain regions. Based on this, applications can be designed that analyze these signals for patterns to infer the user's intent. Persons with a prosthesis, for example, can move their artificial body parts using a certain trained mental command. While the successful execution of mental commands requires high cognitive skill as well as continuous, long training, it is also possible to exploit the brain's subconscious reactions for communication purposes. When different visual (or auditory) stimuli are presented to the subject, e.g. with a monitor displaying several colored rectangles, the subject's brainwaves change depending on which element is focused. The specific effects of each color [4–6] allow the application to conclude which option was focused on and to execute the action associated with it.
While most healthy people could achieve the same result with eye tracking, some of them preferred the EEG version because re-calibration is needed less often. Furthermore, an eye tracker's accuracy is insufficient for individuals with limited control of eye gaze. Even if no eye movement is possible, a BCI keeps working, albeit with lower accuracy, through shifts of attention alone, making it suitable for persons with muscle diseases or locked-in syndrome [1, 2].
Color is not the only visual stimulus a decision can be based on. Many researchers have focused on steady-state visual evoked potentials (SSVEPs) because of their high classification accuracy. In an SSVEP paradigm, each element flickers with its own frequency or phase. When one element is focused, the brainwaves correlate strongly with that element's frequency and phase. The main disadvantage of this concept is the fatigue and discomfort caused by looking at flashing stimuli, which results in lower performance [7].
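The frequency-matching idea behind SSVEP detection can be illustrated with a minimal sketch: project the recorded signal onto sine/cosine references at each candidate flicker frequency and pick the frequency that explains the most variance. This is only a simplified stand-in for the canonical-correlation pipelines typically used in the cited work; the sampling rate and candidate frequencies below are arbitrary assumptions.

```python
import numpy as np

def detect_ssvep_frequency(eeg, fs, candidate_freqs, harmonics=2):
    """Score each candidate flicker frequency by how much of the signal's
    variance its sine/cosine references (plus harmonics) explain."""
    t = np.arange(len(eeg)) / fs
    scores = {}
    for f in candidate_freqs:
        # Reference matrix: sin/cos at the fundamental and its harmonics.
        refs = []
        for h in range(1, harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        X = np.column_stack(refs)
        # Least-squares projection of the EEG onto the references.
        coef, *_ = np.linalg.lstsq(X, eeg, rcond=None)
        fitted = X @ coef
        scores[f] = np.var(fitted) / np.var(eeg)  # fraction of variance explained
    return max(scores, key=scores.get)

# Synthetic check: a noisy 12 Hz oscillation should be assigned to 12 Hz.
fs = 256
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
print(detect_ssvep_frequency(eeg, fs, [8, 10, 12, 15]))  # -> 12
```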
Once a reasonable accuracy is reached, color stimuli could serve as a more user-friendly alternative to SSVEP. The results could also feed into a hybrid version that utilizes both the frequency and the color effect to maximize performance.
Although there exists work that classifies primary colors presented successively on a screen [5, 6, 8, 9], the suitability of an interaction using color as a stimulus is insufficiently studied. Torres-García et al. presented the advantages of such a system in practical use and encourage further research [10, 11].
The goal of this thesis is to implement an advanced version of the interaction concept developed by Yang et al. [3]. The authors presented a red and a blue rectangle to the subject while analyzing band powers of the EEG data to detect which rectangle was focused. We want to increase usability by improving the response time of the classification and by allowing asynchronous interaction instead of imposing a time limit within which the user must make a decision. Different layouts with varying colors and positions of the options will be compared.
Furthermore, additional tests will be performed in order to assess the effect of the color stimulus on the EEG without interference from eye muscles. The first setup measures the system's classification accuracy for single, centrally shown colors, similar to existing work. In a second test, the system tries to classify which position the user looks at on a plainly colored background. The results will be reviewed to rate the suitability of the color stimulus in the presence of eye movement.
For the proposed application, EMOTIV's EEG headsets are utilized to receive band powers. HTML is used for the application's interface since graphical elements can be added with low effort, and JavaScript can interact with the HTML elements easily as well as with EMOTIV's API.
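Receiving band powers then amounts to decoding a stream of per-sample messages into labeled channel/band values. The JSON layout below (field names, channel/band labels) is purely an assumption for illustration and does not reproduce EMOTIV's actual message format:

```python
import json

# Hypothetical shape of one band-power message; the real field names and
# label format in EMOTIV's API may differ.
sample_msg = '{"pow": [1.2, 3.4], "labels": ["AF3/alpha", "AF4/alpha"], "time": 0.0}'

def parse_band_powers(raw):
    """Turn one JSON band-power message into a {"channel/band": value} dict."""
    msg = json.loads(raw)
    return dict(zip(msg["labels"], msg["pow"]))

print(parse_band_powers(sample_msg))  # {'AF3/alpha': 1.2, 'AF4/alpha': 3.4}
```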
For the color classification, the first step is feature extraction [3, 8, 9]. Here the feature vector consists of selected frequency bands and sensor locations, each averaged in a moving window similar to [3, 12]. A user-specific model is used to overcome individual differences, as Sara Åsly's thesis advises [13]. Therefore, the feature vector for a test layout is generated for each user individually, depending on statistical metrics gathered during training.
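As a minimal sketch of this extraction step: select the per-user subset of channels and bands from the band-power history, then average over a moving window. The channel names, band names, and window size below are illustrative assumptions, not the thesis' actual per-user selection:

```python
import numpy as np

# Hypothetical channel/band layout of one band-power sample; the actual
# subset is chosen per user from the training statistics.
CHANNELS = ["AF3", "AF4", "O1", "O2"]
BANDS = ["theta", "alpha", "betaL", "betaH", "gamma"]

def extract_features(samples, use_channels, use_bands, window=4):
    """samples: array of shape (n, len(CHANNELS), len(BANDS)) holding the
    band-power history.  Select the requested channels and bands, then
    average over a moving window of the most recent `window` samples."""
    ch_idx = [CHANNELS.index(c) for c in use_channels]
    bd_idx = [BANDS.index(b) for b in use_bands]
    win = samples[-window:]             # moving window
    sel = win[:, ch_idx][:, :, bd_idx]  # channel and band selection
    return sel.mean(axis=0).ravel()     # averaged feature vector
```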
Afterwards, the classification accuracy achieved with each variant will be evaluated by conducting a user study with 20 participants. The dependent variables of the study are the classification accuracy and the classification time. Moreover, the chosen features will be logged for comparison.
[1] Ivo Käthner, Andrea Kübler, and Sebastian Halder. Comparison of eye tracking, electrooculography and an auditory brain-computer interface for binary communication: A case study with a participant in the locked-in state. Journal of NeuroEngineering and Rehabilitation, 12(1):76, sep 2015.
[2] Hooman Nezamfar, Seyed Sadegh Mohseni Salehi, Matt Higger, and Deniz Erdogmus. CodeVEP vs. eye tracking: A comparison study. Brain Sciences, 8(7), jul 2018.
[3] Lingling Yang and Howard Leung. An online BCI game based on the decoding of users' attention to color stimulus. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages 5267–5270, 2013.
[4] Ai Yoto, Tetsuo Katsuura, Koichi Iwanaga, and Yoshihiro Shimomura. Effects of object color stimuli on human brain activities in perception and attention referred to EEG alpha band response. Journal of Physiological Anthropology, 26(3):373–379, 2007.
[5] Saim Rasheed and Daniele Marini. Classification of EEG Signals Produced by RGB Colour Stimuli. Journal of Biomedical Engineering and Medical Imaging, 2(5), 2015.
[6] Huiran Zhang and Zheng Tang. To judge what color the subject watched by color effect on brain activity. Technical Report 2, 2011.
[7] Teng Cao, Feng Wan, Peng Un Mak, Pui In Mak, Mang I. Vai, and Yong Hu. Flashing color on the performance of SSVEP-based brain-computer interfaces. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages 1819–1822, 2012.
[8] Eman Alharbi, Saim Rasheed, and Seyed Buhari. Single Trial Classification of Evoked EEG Signals Due to RGB Colors. BRAIN. Broad Research in Artificial Intelligence and Neuroscience.
[9] Eman T. Alharbi, Saim Rasheed, and Seyed M. Buhari. Feature selection algorithm for evoked EEG signal due to RGB colors. In Proceedings – 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, CISP-BMEI 2016, pages 1503–1520. Institute of Electrical and Electronics Engineers Inc., feb 2017.
[10] Alejandro A. Torres-García, Luis Alfredo Moctezuma, and Marta Molinas. Assessing the impact of idle state type on the identification of RGB color exposure for BCI. BIOSIGNALS 2020 – 13th International Conference on Bio-Inspired Systems and Signal Processing, Proceedings; Part of 13th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2020, (March):187–194, 2020.
[11] Alejandro A. Torres-García, Luis Alfredo Moctezuma, Sara Åsly, and Marta Molinas. Discriminating between color exposure and idle state using EEG signals for BCI application. 2019 7th E-Health and Bioengineering Conference, EHB 2019, (January 2020), 2019.
[12] Alexander Ya Kaplan, Jong Jin Lim, Kyung Soo Jin, Byoung Woo Park, Jong Gil Byeon, and Sofia U. Tarasova. Unconscious operant conditioning in the paradigm of brain-computer interface based on color perception. International Journal of Neuroscience, 115(6):781–802.
[13] Sara Åsly. Supervised learning for classification of EEG signals evoked by visual exposure to RGB colors. MSc thesis, 2019.