EEG-based Brain-Computer Interfaces (BCIs) are a non-invasive technology for translating brain activity into commands that control an effector (such as a computer keyboard, mouse, etc). Many patients who cannot communicate effectively, such as those living with the effects of stroke, locked-in syndrome, or neurodegenerative disease, rely on BCIs to stay connected. A few of the most common BCI modalities are the P300, SSVEP, slow cortical potentials, and sensorimotor rhythms. With Wearable Sensing’s revolutionary dry EEG technology, nearly any type of BCI is possible with our research-grade signal quality. Because DSI systems are extremely easy to use and comfortable, they have opened the door to translating a wide range of BCI applications to the real and virtual worlds.
The P300 is an event-related potential (ERP) in which the brain produces a characteristic response roughly 300 ms after a rare, or “odd,” stimulus is presented, which is why it is commonly elicited with the so-called oddball paradigm. This response can be decoded and classified in real time for a variety of applications.
One such use case is the P300 speller, in which a series of letters flashes on a screen; when the “target” letter appears, the brain produces a P300 response, which can then be translated into a letter selection.
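For readers curious what the classification step can look like, here is a minimal sketch that separates target from non-target epochs with linear discriminant analysis (LDA), a common choice for ERP spellers. The array shapes, 300 Hz sampling rate, and synthetic data are illustrative assumptions, not the pipeline of any particular product or study.

```python
# Hedged sketch: P300 target vs. non-target classification with LDA.
# Shapes, sampling rate, and data are assumptions for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Stand-in data: 200 epochs x 8 channels x 150 samples
# (0-500 ms post-stimulus at an assumed 300 Hz).
n_epochs, n_channels, n_samples = 200, 8, 150
epochs = rng.standard_normal((n_epochs, n_channels, n_samples))
labels = rng.integers(0, 2, n_epochs)  # 1 = "odd" (target) stimulus

# Simulate the P300: a positive deflection peaking ~300 ms after target onset.
p300 = np.exp(-0.5 * ((np.arange(n_samples) - 90) / 12) ** 2)  # peak at sample 90
epochs[labels == 1] += 2.0 * p300  # broadcast over channels

# Flatten each epoch into a feature vector; LDA is a common ERP classifier.
X = epochs.reshape(n_epochs, -1)
clf = LinearDiscriminantAnalysis().fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```

In a real speller, classifier scores would be accumulated across repeated flashes of each row and column, and the letter whose row and column carry the strongest P300 evidence would be selected.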
Betts Peters, Dr. Melanie Fried-Oken, and their team at Oregon Health & Science University have developed a P300 speller using the DSI-24 and have validated its functionality in subjects with locked-in syndrome.
Steady State Visually Evoked Potentials (SSVEP) are natural responses to visual stimuli flickering at specific frequencies. In a typical SSVEP paradigm, targets flash at differing frequencies, anywhere from 3.5 Hz to 75 Hz, and the brain produces a characterizable response at the specific frequency of whichever target the subject is attending to.
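For the technically curious, here is a minimal sketch of how that response can be detected: canonical correlation analysis (CCA) compares the EEG against sine/cosine references at each candidate frequency and picks the best match (CCA is also the core of the FBCCA method that appears in the publications below). The sampling rate, candidate frequencies, and synthetic signal are assumptions for illustration only.

```python
# Hedged sketch: SSVEP frequency detection with canonical correlation
# analysis (CCA). Sampling rate, frequencies, and the synthetic "EEG"
# are illustrative assumptions, not a specific product's algorithm.
import numpy as np
from sklearn.cross_decomposition import CCA

fs = 300                                # assumed sampling rate (Hz)
stim_freqs = [8.0, 10.0, 12.0, 15.0]    # candidate flicker frequencies
t = np.arange(int(fs * 2.0)) / fs       # 2 s analysis window

# Stand-in EEG: 8 channels of noise plus an attended 12 Hz target.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((len(t), 8))
eeg += np.outer(np.sin(2 * np.pi * 12.0 * t), np.ones(8))

def cca_score(eeg, t, freq, n_harmonics=2):
    """Canonical correlation between EEG and sin/cos references at freq."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    Y = np.column_stack(refs)
    Xc, Yc = CCA(n_components=1).fit_transform(eeg, Y)
    return np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]

scores = {f: cca_score(eeg, t, f) for f in stim_freqs}
print("attended target:", max(scores, key=scores.get), "Hz")
```

Filter-bank variants (FBCCA) repeat this analysis across several frequency sub-bands and combine the scores for higher accuracy.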
As shown in the video, a 12-target numbered keyboard is set up, and the subject counts upward through the targets. No training is required, and in some cases the algorithm can classify correctly in under one second.
This specific SSVEP software was developed by Wearable Sensing’s collaborator in China, Neuracle, and is available for purchase for all DSI systems. The software comes ready to use, with customizable 12-target and 40-target keyboards designed for ultra-rapid, high-accuracy classification.
Motor Imagery is a BCI technique in which the subject imagines performing a movement with a particular limb. This imagery alters the rhythmic activity in the regions of the sensorimotor cortex that correspond to the imagined limb. The BCI can decode these signals and translate the imagined movement into feedback in the form of cursor movements or other computer commands.
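As a rough illustration of what such a decoder looks for, the sketch below measures mu-band (8-13 Hz) power over left and right sensorimotor channels; suppression of this rhythm contralateral to the imagined limb (event-related desynchronization, ERD) indicates which movement was imagined. The channel names, sampling rate, and decision rule are simplifying assumptions, not any vendor's algorithm.

```python
# Hedged sketch: motor-imagery laterality from mu-band (8-13 Hz) power.
# Channels, sampling rate, and threshold are assumptions for illustration.
import numpy as np
from scipy.signal import welch

fs = 300  # assumed sampling rate (Hz)

def mu_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the mu band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Stand-in 2-second windows from C3 (left hemisphere) and C4 (right).
rng = np.random.default_rng(2)
t = np.arange(2 * fs) / fs
c3 = rng.standard_normal(len(t)) + 0.2 * np.sin(2 * np.pi * 10 * t)  # mu suppressed
c4 = rng.standard_normal(len(t)) + 1.0 * np.sin(2 * np.pi * 10 * t)  # mu intact

# Lower mu power over C3 (contralateral ERD) suggests imagined right-hand movement.
laterality = np.log(mu_power(c4, fs) / mu_power(c3, fs))
print("imagined movement:", "right hand" if laterality > 0 else "left hand")
```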
The DSI-24 was featured in an interactive art installation, “Mental Work,” at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. During the exhibit, subjects were presented with a wheel that they could drive by imagining moving one of their arms.
Neurolutions is a medical device company developing neuro-rehabilitation solutions that seek to restore function to patients who are disabled as a result of neurological injury. The Neurolutions IpsiHand system provides upper extremity rehabilitation for chronic stroke patients, leveraging brain-computer interface and advanced wearable robotics technology.
By utilizing the DSI-7, Neurolutions is able to use Motor Imagery techniques to decode a patient’s intent to move their finger, which then instructs the exoskeleton to physically move the finger. With repeated sessions, patients can regain control of their impaired limbs.
What fires together, wires together!
Kim, Soram; Lee, Seungyun; Kang, Hyunsuk; Kim, Sion; Ahn, Minkyu
P300 Brain–Computer Interface-Based Drone Control in Virtual and Augmented Reality Journal Article
In: Sensors, vol. 21, no. 17, pp. 5765, 2021.
@article{kim2021p300,
title = {P300 Brain--Computer Interface-Based Drone Control in Virtual and Augmented Reality},
author = {Soram Kim and Seungyun Lee and Hyunsuk Kang and Sion Kim and Minkyu Ahn},
doi = {https://doi.org/10.3390/s21175765},
year = {2021},
date = {2021-08-27},
journal = {Sensors},
volume = {21},
number = {17},
pages = {5765},
publisher = {Multidisciplinary Digital Publishing Institute},
abstract = {Since the emergence of head-mounted displays (HMDs), researchers have attempted to introduce virtual and augmented reality (VR, AR) in brain–computer interface (BCI) studies. However, there is a lack of studies that incorporate both AR and VR to compare the performance in the two environments. Therefore, it is necessary to develop a BCI application that can be used in both VR and AR to allow BCI performance to be compared in the two environments. In this study, we developed an opensource-based drone control application using P300-based BCI, which can be used in both VR and AR. Twenty healthy subjects participated in the experiment with this application. They were asked to control the drone in two environments and filled out questionnaires before and after the experiment. We found no significant (p > 0.05) difference in online performance (classification accuracy and amplitude/latency of P300 component) and user experience (satisfaction about time length, program, environment, interest, difficulty, immersion, and feeling of self-control) between VR and AR. This indicates that the P300 BCI paradigm is relatively reliable and may work well in various situations},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Lim, Hyunmi; Ku, Jeonghun
Superior Facilitation of an Action Observation Network by Congruent Character Movements in Brain–Computer Interface Action-Observation Games Journal Article
In: Cyberpsychology, Behavior, and Social Networking, vol. 24, no. 8, pp. 566–572, 2021.
@article{lim2021superior,
title = {Superior Facilitation of an Action Observation Network by Congruent Character Movements in Brain--Computer Interface Action-Observation Games},
author = {Hyunmi Lim and Jeonghun Ku},
doi = {https://doi.org/10.1089/cyber.2020.0231},
year = {2021},
date = {2021-08-04},
journal = {Cyberpsychology, Behavior, and Social Networking},
volume = {24},
number = {8},
pages = {566--572},
publisher = {Mary Ann Liebert, Inc.},
abstract = {Action observation (AO) is a promising strategy for promoting motor function in neural rehabilitation. Recently, brain–computer interface (BCI)-AO game rehabilitation, which combines AO therapy with BCI technology, has been introduced to improve the effectiveness of rehabilitation. This approach can improve motor learning by providing feedback, which can be interactive in an observation task, and the game contents of the BCI-AO game paradigm can affect rehabilitation. In this study, the effects of congruent rather than incongruent feedback in a BCI-AO game on mirror neurons were investigated. Specifically, the mu suppression with congruent and incongruent BCI-AO games was measured in 17 healthy adults. The mu suppression in the central motor cortex was significantly higher with the congruent BCI-AO game than with the incongruent one. In addition, the satisfaction evaluation results were excellent for the congruent case. These results support the fact that providing feedback congruent with the motion of an action video facilitates mirror neuron activity and can offer useful guidelines for the design of BCI-AO games for rehabilitation},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Zhang, Dong
Brain-Controlled Robotic Arm Based on Adaptive FBCCA Conference
Human Brain and Artificial Intelligence: Second International Workshop, HBAI 2020, Held in Conjunction with IJCAI-PRICAI 2020, Yokohama, Japan, January 7, 2021, Revised Selected Papers, Springer Nature 2021.
@conference{zhang2021brain,
title = {Brain-Controlled Robotic Arm Based on Adaptive FBCCA},
author = {Dong Zhang},
url = {https://link.springer.com/chapter/10.1007/978-981-16-1288-6_7},
year = {2021},
date = {2021-04-08},
booktitle = {Human Brain and Artificial Intelligence: Second International Workshop, HBAI 2020, Held in Conjunction with IJCAI-PRICAI 2020, Yokohama, Japan, January 7, 2021, Revised Selected Papers},
pages = {102},
organization = {Springer Nature},
abstract = {The SSVEP-BCI system usually uses a fixed calculation time and a static window stop method to decode the EEG signal, which reduces the efficiency of the system. In response to this problem, this paper uses an adaptive FBCCA algorithm, which uses Bayesian estimation to dynamically find the optimal data length for result prediction, adapts to the differences between different trials and different individuals, and effectively improves system operation effectiveness. At the same time, through this method, this paper constructs a brain-controlled robotic arm grasping life assistance system based on adaptive FBCCA. In this paper, we selected 20 subjects and conducted a total of 400 experiments. A large number of experiments have verified that the system is available and the average recognition success rate is 95.5%. This also proves that the system can be applied to actual scenarios. Help the handicapped to use the brain to control the mechanical arm to grab the needed items to assist in daily life and improve the quality of life. In the future, SSVEP’s adaptive FBCCA decoding algorithm can be combined with the motor imaging brain-computer interface decoding algorithm to build a corresponding system to help patients with upper or lower limb movement disorders caused by stroke diseases to recover, and reshape the brain and Control connection of limbs.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Eldeeb, Safaa; Susam, Busra T; Akcakaya, Murat; Conner, Caitlin M; White, Susan W; Mazefsky, Carla A
Trial by trial EEG based BCI for distress versus non distress classification in individuals with ASD Journal Article
In: Scientific Reports, vol. 11, no. 1, pp. 1–13, 2021.
@article{eldeeb2021trial,
title = {Trial by trial EEG based BCI for distress versus non distress classification in individuals with ASD},
author = {Safaa Eldeeb and Busra T Susam and Murat Akcakaya and Caitlin M Conner and Susan W White and Carla A Mazefsky},
url = {https://www.nature.com/articles/s41598-021-85362-8},
year = {2021},
date = {2021-03-16},
journal = {Scientific Reports},
volume = {11},
number = {1},
pages = {1--13},
publisher = {Nature Publishing Group},
abstract = {Autism spectrum disorder (ASD) is a neurodevelopmental disorder that is often accompanied by impaired emotion regulation (ER). There has been increasing emphasis on developing evidence-based approaches to improve ER in ASD. Electroencephalography (EEG) has shown success in reducing ASD symptoms when used in neurofeedback-based interventions. Also, certain EEG components are associated with ER. Our overarching goal is to develop a technology that will use EEG to monitor real-time changes in ER and perform intervention based on these changes. As a first step, an EEG-based brain computer interface that is based on an Affective Posner task was developed to identify patterns associated with ER on a single trial basis, and EEG data collected from 21 individuals with ASD. Accordingly, our aim in this study is to investigate EEG features that could differentiate between distress and non-distress conditions. Specifically, we investigate if the EEG time-locked to the visual feedback presentation could be used to classify between WIN (non-distress) and LOSE (distress) conditions in a game with deception. Results showed that the extracted EEG features could differentiate between WIN and LOSE conditions (average accuracy of 81%), LOSE and rest-EEG conditions (average accuracy 94.8%), and WIN and rest-EEG conditions (average accuracy 94.9%).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Memmott, Tab; Koçanaoğullari, Aziz; Lawhead, Matthew; Klee, Daniel; Dudy, Shiran; Fried-Oken, Melanie; Oken, Barry
BciPy: brain–computer interface software in Python Journal Article
In: Brain-Computer Interfaces, pp. 1-18, 2021.
@article{memmott2021bcipy,
title = {BciPy: brain--computer interface software in Python},
author = {Tab Memmott and Aziz Koçanaoğullari and Matthew Lawhead and Daniel Klee and Shiran Dudy and Melanie Fried-Oken and Barry Oken},
doi = {https://doi.org/10.1080/2326263X.2021.1878727},
year = {2021},
date = {2021-02-02},
journal = {Brain-Computer Interfaces},
pages = {1-18},
publisher = {Taylor & Francis},
abstract = {There are high technological and software demands associated with conducting Brain–Computer Interface (BCI) research. In order to accelerate the development and accessibility of BCIs, it is worthwhile to focus on open-source and community desired tooling. Python, a prominent computer language, has emerged as a language of choice for many research and engineering purposes. In this article, BciPy, an open-source, Python-based software for conducting BCI research is presented. It was developed with a focus on restoring communication using Event-Related Potential (ERP) spelling interfaces; however, it may be used for other non-spelling and non-ERP BCI paradigms. Major modules in this system include support for data acquisition, data queries, stimuli presentation, signal processing, signal viewing and modeling, language modeling, task building, and a simple Graphical User Interface (GUI).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Son, Ji Eun; Choi, Hyoseon; Lim, Hyunmi; Ku, Jeonghun
Development of a flickering action video based steady state visual evoked potential triggered brain computer interface-functional electrical stimulation for a rehabilitative action observation game Journal Article
In: Technology and Health Care, vol. 28, no. S1, pp. 509-519, 2020.
@article{son2020development,
title = {Development of a flickering action video based steady state visual evoked potential triggered brain computer interface-functional electrical stimulation for a rehabilitative action observation game},
author = {Ji Eun Son and Hyoseon Choi and Hyunmi Lim and Jeonghun Ku},
editor = {Severin P. Schwarzacher and Carlos Gómez},
doi = {10.3233/THC-209051},
year = {2020},
date = {2020-06-04},
journal = {Technology and Health Care},
volume = {28},
number = {S1},
pages = {509-519},
publisher = {IOS Press},
abstract = {BACKGROUND:
This study focused on developing an upper limb rehabilitation program. In this regard, a steady state visual evoked potential (SSVEP) triggered brain computer interface (BCI)-functional electrical stimulation (FES) based action observation game featuring a flickering action video was designed.
OBJECTIVE:
In particular, the synergetic effect of the game was investigated by combining the action observation paradigm with BCI based FES.
METHODS:
The BCI-FES system was contrasted under two conditions: with flickering action video and flickering noise video. In this regard, 11 right-handed subjects aged between 22–27 years were recruited. The differences in brain activation in response to the two conditions were examined.
RESULTS:
The results indicate that T3 and P3 channels exhibited greater Mu suppression in 8–13 Hz for the action video than the noise video. Furthermore, T4, C4, and P4 channels indicated augmented high beta (21–30 Hz) for the action in contrast to the noise video. Finally, T4 indicated suppressed low beta (14–20 Hz) for the action video in contrast to the noise video.
CONCLUSION:
The flickering action video based BCI-FES system induced a more synergetic effect on cortical activation than the flickering noise based system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Choi, Hyoseon; Lim, Hyunmi; Kim, Joon Woo; Kang, Youn Joo; Ku, Jeonghun
Brain computer interface-based action observation game enhances mu suppression in patients with stroke Journal Article
In: Electronics, vol. 8, no. 12, pp. 1466, 2019.
@article{choi2019brain,
title = {Brain computer interface-based action observation game enhances mu suppression in patients with stroke},
author = {Hyoseon Choi and Hyunmi Lim and Joon Woo Kim and Youn Joo Kang and Jeonghun Ku},
doi = {https://doi.org/10.3390/electronics8121466},
year = {2019},
date = {2019-12-02},
journal = {Electronics},
volume = {8},
number = {12},
pages = {1466},
publisher = {Multidisciplinary Digital Publishing Institute},
abstract = {Action observation (AO), based on the mirror neuron theory, is a promising strategy to promote motor cortical activation in neurorehabilitation. Brain computer interface (BCI) can detect a user’s intention and provide them with brain state-dependent feedback to assist with patient rehabilitation. We investigated the effects of a combined BCI-AO game on power of mu band attenuation in stroke patients. Nineteen patients with subacute stroke were recruited. A BCI-AO game provided real-time feedback to participants regarding their attention to a flickering action video using steady-state visual-evoked potentials. All participants watched a video of repetitive grasping actions under two conditions: (1) BCI-AO game and (2) conventional AO, in random order. In the BCI-AO game, feedback on participants’ observation scores and observation time was provided. In conventional AO, a non-flickering video and no feedback were provided. The magnitude of mu suppression in the central motor, temporal, parietal, and occipital areas was significantly higher in the BCI-AO game than in the conventional AO. The magnitude of mu suppression was significantly higher in the BCI-AO game than in the conventional AO both in the affected and unaffected hemispheres. These results support the facilitatory effects of the BCI-AO game on mu suppression over conventional AO},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Van Goethem, Sander; Adema, Kimberly; van Bergen, Britt; Viaene, Emilia; Wenborn, Eva; Verwulgen, Stijn
A Test Setting to Compare Spatial Awareness on Paper and in Virtual Reality Using EEG Signals Conference
International Conference on Applied Human Factors and Ergonomics, Springer 2019.
@conference{van2019test,
title = {A Test Setting to Compare Spatial Awareness on Paper and in Virtual Reality Using EEG Signals},
author = {Sander Van Goethem and Kimberly Adema and Britt van Bergen and Emilia Viaene and Eva Wenborn and Stijn Verwulgen},
url = {https://link.springer.com/chapter/10.1007/978-3-030-20473-0_20},
year = {2019},
date = {2019-06-12},
booktitle = {International Conference on Applied Human Factors and Ergonomics},
pages = {199--208},
organization = {Springer},
abstract = {Spatial awareness and the ability to analyze spatial objects, manipulate them and assess the effect thereof, is a key competence for industrial designers. Skills are gradually built up throughout most educational design programs, starting with exercises on technical drawings and reconstruction or classification of spatial objects from isometric projections and CAD practice. The accuracy in which spatial assignments are conducted and the amount of effort required to fulfill them, highly depend on individual insight, interests and persistence. Thus each individual has its own struggles and learning curve to master the structure of spatial objects in aesthetic and functional design. Virtual reality (VR) is a promising tool to expose subjects to objects with complex spatial structure, and even manipulate and design spatial characteristics of such objects. The advantage of displaying spatial objects in VR, compared to representations by projecting them on a screen or paper, could be that subjects could more accurately assess spatial properties of an object and its full geometrical and/or mechanical complexity, when exposed to that object in VR. Immersive experience of spatial objects could not only result in faster acquiring of spatial insights, but also potentially with less effort. We propose that acquiring spatial insight in VR could leverage individual differences in skills and talents and that under this proposition VR can be used as a promising tool in design education. A first step in underpinning this hypothesis is acquisition of cognitive workload that can be used and compared both in VR and in a classical teaching context. We use electroencephalography (EEG) to assess brain activity through a wearable plug and play headset (Wearable Sensing DSI-7). This equipment is combined with VR (Oculus). We use QStates classification software to compare brain waves when conducting spatial assessments on paper and in VR. This gives us a measure of cognitive workload, as a ratio relative to subject records with a presumed ‘high’ workload. A total of eight subject records were suited for comparison. No significant difference was found between EEG signals (paired t-test, p = 0.57). However, the assessment of cognitive workload was successfully validated through a questionnaire. The method could be used to set up reliable constructs for learning techniques for spatial insights.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Lim, Hyunmi; Ku, Jeonghun
Multiple-command single-frequency SSVEP-based BCI system using flickering action video Journal Article
In: Journal of Neuroscience Methods, vol. 314, pp. 21-27, 2019.
@article{lim2019multiple,
title = {Multiple-command single-frequency SSVEP-based BCI system using flickering action video},
author = {Hyunmi Lim and Jeonghun Ku},
doi = {https://doi.org/10.1016/j.jneumeth.2019.01.005},
year = {2019},
date = {2019-01-16},
journal = {Journal of Neuroscience Methods},
volume = {314},
pages = {21-27},
publisher = {Elsevier},
abstract = {Background
The number of commands in a brain–computer interface (BCI) system is important. This study proposes a new BCI technique to increase the number of commands in a single BCI system without loss of accuracy.
New method
We expected that a flickering action video with left and right elbow movements could simultaneously activate the different pattern of event-related desynchronization (ERD) according to the video contents (e.g., left or right) and steady-state visually evoked potential (SSVEP). The classification accuracy to discriminate left, right, and rest states was compared under the three following feature combinations: SSVEP power (19–21 Hz), Mu power (8–13 Hz), and simultaneous SSVEP and Mu power.
Results
The SSVEP feature could discriminate the stimulus condition, regardless of left or right, from the rest condition, while the Mu feature discriminated left or right, but was relatively poor in discriminating stimulus from rest. However, combining the SSVEP and Mu features, which were evoked by the stimulus with a single frequency, showed superior performance for discriminating all the stimuli among rest, left, or right.
Comparison with the existing method
The video contents could activate the ERD differently, and the flickering component increased its accuracy, such that it revealed a better performance to discriminate when considering together.
Conclusions
This paradigm showed possibility of increasing performance in terms of accuracy and number of commands with a single frequency by applying flickering action video paradigm and applicability to rehabilitation systems used by patients to facilitate their mirror neuron systems while training.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Pereira, Arnaldo; Padden, Dereck; Jantz, Jay; Lin, Kate; Alcaide-Aguirre, Ramses
Cross-Subject EEG Event-Related Potential Classification for Brain-Computer Interfaces Using Residual Networks Journal Article
In: 2018.
@article{pereira2018cross,
title = {Cross-Subject EEG Event-Related Potential Classification for Brain-Computer Interfaces Using Residual Networks},
author = {Arnaldo Pereira and Dereck Padden and Jay Jantz and Kate Lin and Ramses Alcaide-Aguirre},
doi = {10.13140/RG.2.2.16257.10086},
year = {2018},
date = {2018-09-20},
urldate = {2018-01-01},
abstract = {EEG event-related potentials, and the P300 signal in particular, are promising modalities for brain-computer interfaces (BCI). But the nonstationarity of EEG signals and their differences across individuals have made it difficult to implement classifiers that can determine user intent without having to be retrained or calibrated for each new user and sometimes even each session. This is a major impediment to the development of consumer BCI. Recently, the EEG BCI literature has begun to apply convolutional neural networks (CNNs) for classification, but experiments have largely been limited to training and testing on single subjects. In this paper, we report a study in which EEG data were recorded from 66 subjects in a visual oddball task in virtual reality. Using wide residual networks (WideResNets), we obtain state-of-the-art performance on a test set composed of data from all 66 subjects together. Additionally, a minimal preprocessing stream to convert EEG data into square images for CNN input while adding regularization is presented and shown to be viable. This study also provides some guidance on network architecture parameters based on experiments with different models. Our results show that it may be possible with enough data to train a classifier for EEG-based BCIs that can generalize across individuals without the need for individual training or calibration.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Please fill out the form and provide a brief description of your application so we can help match you with products that will meet your specific needs.