EEG-based brain-computer interfaces (BCIs) are a non-invasive technology for translating brain activity into commands that control an effector (such as a computer keyboard, mouse, or prosthetic). Many patients who cannot communicate effectively, such as those living with stroke, locked-in syndrome, or other neurodegenerative diseases, rely on BCIs to stay connected. A few of the most common BCI modalities are the P300, steady-state visually evoked potentials (SSVEP), slow cortical potentials, and sensorimotor rhythms. With Wearable Sensing’s revolutionary dry EEG technology and research-grade signal quality, nearly any type of BCI is possible. And because DSI systems are easy to use and comfortable, they have opened the door to translating a wide range of BCI applications to the real and virtual worlds.
The P300 is an event-related potential (ERP) elicited by the “oddball” paradigm: roughly 300 ms after a rare or “odd” stimulus is presented, the brain produces a characteristic response. This response can be decoded and classified in real time for a variety of applications.
One such use case is the P300 speller, in which a series of letters flashes on a screen; when the “target” letter appears, the brain produces a P300 response, which can then be translated into a letter selection.
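To make the decoding step concrete, here is a minimal Python sketch of how stimulus-locked epochs might be scored for a P300 and aggregated into a letter selection. This is an illustration only, not any particular product’s pipeline: the sampling rate, window bounds, and function names are assumptions, and a production speller (such as BciPy, listed in the publications below) trains a classifier rather than thresholding a fixed window.

```python
import numpy as np

# Illustrative parameters -- adjust to your amplifier and montage (assumed values).
FS = 300                  # assumed sampling rate (Hz)
EPOCH_SEC = 0.8           # epoch length after stimulus onset (s)
P300_WIN = (0.25, 0.45)   # window (s) where the P300 peak is expected

def extract_epoch(eeg, onset):
    """Slice one stimulus-locked epoch from continuous EEG (channels x samples),
    baseline-corrected against the 100 ms preceding the stimulus."""
    epoch = eeg[:, onset:onset + int(EPOCH_SEC * FS)]
    baseline = eeg[:, onset - int(0.1 * FS):onset].mean(axis=1, keepdims=True)
    return epoch - baseline

def p300_score(epoch):
    """Mean amplitude in the P300 window, averaged over channels.
    (A real system would weight parietal channels such as Pz.)"""
    lo, hi = (int(t * FS) for t in P300_WIN)
    return epoch[:, lo:hi].mean()

def pick_letter(eeg, onsets_by_letter):
    """Choose the letter whose repeated flashes carry the strongest average P300."""
    scores = {
        letter: np.mean([p300_score(extract_epoch(eeg, s)) for s in onsets])
        for letter, onsets in onsets_by_letter.items()
    }
    return max(scores, key=scores.get)
```

Averaging over repeated flashes of the same letter raises the signal-to-noise ratio, which is why spellers typically flash each item several times before committing to a selection.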
Betts Peters, Dr. Melanie Fried-Oken, and their team at Oregon Health & Science University have developed a P300 speller using the DSI-24 and have validated its functionality with subjects with locked-in syndrome.
Steady-state visually evoked potentials (SSVEP) are natural responses to visual stimuli flickering at specific frequencies. In a typical SSVEP paradigm, targets flash at distinct frequencies, anywhere from 3.5 Hz to 75 Hz, and the brain produces a characterizable response at the frequency of whichever target the subject is attending to.
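One widely used way to decode SSVEP is canonical correlation analysis (CCA) against sine/cosine templates at each candidate frequency. The sketch below is a hypothetical minimal example of that method, not the commercial implementation described below; the sampling rate, harmonic count, and function names are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 300  # assumed sampling rate (Hz)

def reference_signals(freq, n_samples, n_harmonics=2):
    """Sine/cosine templates at the flicker frequency and its harmonics."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_ssvep(segment, candidate_freqs):
    """Return the candidate flicker frequency whose templates correlate best
    with the EEG segment (n_samples x n_channels, ideally occipital channels)."""
    best_freq, best_r = None, -1.0
    for f in candidate_freqs:
        refs = reference_signals(f, segment.shape[0])
        x_scores, y_scores = CCA(n_components=1).fit_transform(segment, refs)
        r = np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]
        if r > best_r:
            best_freq, best_r = f, r
    return best_freq, best_r
```

Harmonics are included because SSVEP responses typically carry power at integer multiples of the flicker frequency, which is part of why even short analysis windows can classify reliably.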
As shown in the video, a 12-target numbered keyboard is set up, and the subject counts upward through the targets. No training is required, and in some cases the algorithm can classify correctly in under one second.
This specific SSVEP software was developed by Wearable Sensing’s collaborator in China, Neuracle, and is available for purchase for all DSI systems. It comes ready to use, with customizable 12-target and 40-target keyboards designed for ultra-rapid, high-accuracy classification.
Motor imagery is a BCI technique in which the subject imagines performing a movement with a particular limb. The imagined movement alters rhythmic activity over the regions of sensorimotor cortex that correspond to that limb. The BCI decodes these signals and translates the imagined movement into feedback in the form of cursor movements or other computer commands.
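The canonical signature here is event-related desynchronization (ERD): a drop in mu-band (8–13 Hz) power over the sensorimotor cortex contralateral to the imagined limb. Below is a deliberately simplified two-class sketch based on band power at electrodes C3 and C4; real systems typically add spatial filtering (e.g., common spatial patterns) and a trained classifier, and the sampling rate and function names here are assumptions.

```python
import numpy as np
from scipy.signal import welch

FS = 300            # assumed sampling rate (Hz)
MU_BAND = (8, 13)   # mu rhythm, suppressed contralateral to the imagined limb

def mu_power(signal):
    """Mean mu-band power of a single-channel window (>= 1 s of data),
    estimated via Welch's method."""
    freqs, psd = welch(signal, fs=FS, nperseg=FS)
    band = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return psd[band].mean()

def classify_imagery(c3, c4, rest_c3, rest_c4):
    """Crude left/right decision from relative mu suppression (ERD) at C3 vs C4.

    c3, c4: 1-D arrays for the imagery window; rest_c3, rest_c4: baseline mu
    power estimated from resting data. Ratios below 1 indicate suppression.
    """
    erd_c3 = mu_power(c3) / rest_c3  # right-hand imagery suppresses C3 (left hemisphere)
    erd_c4 = mu_power(c4) / rest_c4  # left-hand imagery suppresses C4 (right hemisphere)
    return "right hand" if erd_c3 < erd_c4 else "left hand"
```

In a closed-loop system, a decision like this would update the cursor or device on every analysis window, giving the user continuous feedback.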
The DSI-24 was featured in the interactive art installation “Mental Work” at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. During the exhibit, subjects controlled a wheel by imagining moving one of their arms.
Neurolutions is a medical device company developing neuro-rehabilitation solutions that seek to restore function to patients who are disabled as a result of neurological injury. The Neurolutions IpsiHand system provides upper extremity rehabilitation for chronic stroke patients leveraging brain-computer interface and advanced wearable robotics technology.
By utilizing the DSI-7, Neurolutions is able to use motor imagery techniques to decode a patient’s intent to move their finger, which then instructs the exoskeleton to physically move it. With repeated sessions, patients can regain control of the affected limb.
Neurons that fire together, wire together!
Memmott, Tab; Koçanaoğullari, Aziz; Lawhead, Matthew; Klee, Daniel; Dudy, Shiran; Fried-Oken, Melanie; Oken, Barry
BciPy: brain–computer interface software in Python Journal Article
In: Brain-Computer Interfaces, pp. 1-18, 2021.
@article{memmott2021bcipy,
title = {BciPy: brain--computer interface software in Python},
author = {Tab Memmott and Aziz Koçanaoğullari and Matthew Lawhead and Daniel Klee and Shiran Dudy and Melanie Fried-Oken and Barry Oken},
doi = {10.1080/2326263X.2021.1878727},
year = {2021},
date = {2021-02-02},
journal = {Brain-Computer Interfaces},
pages = {1-18},
publisher = {Taylor & Francis},
abstract = {There are high technological and software demands associated with conducting Brain–Computer Interface (BCI) research. In order to accelerate the development and accessibility of BCIs, it is worthwhile to focus on open-source and community desired tooling. Python, a prominent computer language, has emerged as a language of choice for many research and engineering purposes. In this article, BciPy, an open-source, Python-based software for conducting BCI research is presented. It was developed with a focus on restoring communication using Event-Related Potential (ERP) spelling interfaces; however, it may be used for other non-spelling and non-ERP BCI paradigms. Major modules in this system include support for data acquisition, data queries, stimuli presentation, signal processing, signal viewing and modeling, language modeling, task building, and a simple Graphical User Interface (GUI).},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Son, Ji Eun; Choi, Hyoseon; Lim, Hyunmi; Ku, Jeonghun
Development of a flickering action video based steady state visual evoked potential triggered brain computer interface-functional electrical stimulation for a rehabilitative action observation game Journal Article
In: Technology and Health Care, vol. 28, no. S1, pp. 509-519, 2020.
@article{son2020development,
title = {Development of a flickering action video based steady state visual evoked potential triggered brain computer interface-functional electrical stimulation for a rehabilitative action observation game},
author = {Ji Eun Son and Hyoseon Choi and Hyunmi Lim and Jeonghun Ku},
editor = {Severin P. Schwarzacher and Carlos Gómez},
doi = {10.3233/THC-209051},
year = {2020},
date = {2020-06-04},
journal = {Technology and Health Care},
volume = {28},
number = {S1},
pages = {509-519},
publisher = {IOS Press},
abstract = {BACKGROUND:
This study focused on developing an upper limb rehabilitation program. In this regard, a steady state visual evoked potential (SSVEP) triggered brain computer interface (BCI)-functional electrical stimulation (FES) based action observation game featuring a flickering action video was designed.
OBJECTIVE:
In particular, the synergetic effect of the game was investigated by combining the action observation paradigm with BCI based FES.
METHODS:
The BCI-FES system was contrasted under two conditions: with flickering action video and flickering noise video. In this regard, 11 right-handed subjects aged between 22–27 years were recruited. The differences in brain activation in response to the two conditions were examined.
RESULTS:
The results indicate that T3 and P3 channels exhibited greater Mu suppression in 8–13 Hz for the action video than the noise video. Furthermore, T4, C4, and P4 channels indicated augmented high beta (21–30 Hz) for the action in contrast to the noise video. Finally, T4 indicated suppressed low beta (14–20 Hz) for the action video in contrast to the noise video.
CONCLUSION:
The flickering action video based BCI-FES system induced a more synergetic effect on cortical activation than the flickering noise based system.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Choi, Hyoseon; Lim, Hyunmi; Kim, Joon Woo; Kang, Youn Joo; Ku, Jeonghun
Brain computer interface-based action observation game enhances mu suppression in patients with stroke Journal Article
In: Electronics, vol. 8, no. 12, pp. 1466, 2019.
@article{choi2019brain,
title = {Brain computer interface-based action observation game enhances mu suppression in patients with stroke},
author = {Hyoseon Choi and Hyunmi Lim and Joon Woo Kim and Youn Joo Kang and Jeonghun Ku},
doi = {10.3390/electronics8121466},
year = {2019},
date = {2019-12-02},
journal = {Electronics},
volume = {8},
number = {12},
pages = {1466},
publisher = {Multidisciplinary Digital Publishing Institute},
abstract = {Action observation (AO), based on the mirror neuron theory, is a promising strategy to promote motor cortical activation in neurorehabilitation. Brain computer interface (BCI) can detect a user’s intention and provide them with brain state-dependent feedback to assist with patient rehabilitation. We investigated the effects of a combined BCI-AO game on power of mu band attenuation in stroke patients. Nineteen patients with subacute stroke were recruited. A BCI-AO game provided real-time feedback to participants regarding their attention to a flickering action video using steady-state visual-evoked potentials. All participants watched a video of repetitive grasping actions under two conditions: (1) BCI-AO game and (2) conventional AO, in random order. In the BCI-AO game, feedback on participants’ observation scores and observation time was provided. In conventional AO, a non-flickering video and no feedback were provided. The magnitude of mu suppression in the central motor, temporal, parietal, and occipital areas was significantly higher in the BCI-AO game than in the conventional AO. The magnitude of mu suppression was significantly higher in the BCI-AO game than in the conventional AO both in the affected and unaffected hemispheres. These results support the facilitatory effects of the BCI-AO game on mu suppression over conventional AO},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Goethem, Sander Van; Adema, Kimberly; van Bergen, Britt; Viaene, Emilia; Wenborn, Eva; Verwulgen, Stijn
A Test Setting to Compare Spatial Awareness on Paper and in Virtual Reality Using EEG Signals Conference
International Conference on Applied Human Factors and Ergonomics, Springer 2019.
@conference{van2019test,
title = {A Test Setting to Compare Spatial Awareness on Paper and in Virtual Reality Using EEG Signals},
author = {Sander Van Goethem and Kimberly Adema and Britt van Bergen and Emilia Viaene and Eva Wenborn and Stijn Verwulgen},
url = {https://link.springer.com/chapter/10.1007/978-3-030-20473-0_20},
year = {2019},
date = {2019-06-12},
booktitle = {International Conference on Applied Human Factors and Ergonomics},
pages = {199--208},
organization = {Springer},
abstract = {Spatial awareness and the ability to analyze spatial objects, manipulate them and assess the effect thereof, is a key competence for industrial designers. Skills are gradually built up throughout most educational design programs, starting with exercises on technical drawings and reconstruction or classification of spatial objects from isometric projections and CAD practice. The accuracy in which spatial assignments are conducted and the amount of effort required to fulfill them, highly depend on individual insight, interests and persistence. Thus each individual has its own struggles and learning curve to master the structure of spatial objects in aesthetic and functional design. Virtual reality (VR) is a promising tool to expose subjects to objects with complex spatial structure, and even manipulate and design spatial characteristics of such objects. The advantage of displaying spatial objects in VR, compared to representations by projecting them on a screen or paper, could be that subjects could more accurately assess spatial properties of an object and its full geometrical and/or mechanical complexity, when exposed to that object in VR. Immersive experience of spatial objects, could not only result in faster acquiring spatial insights, but also potentially with less effort. We propose that acquiring spatial insight in VR could leverage individual differences in skills and talents and that under this proposition VR can be used as a promising tool in design education. A first step in underpinning this hypothesis, is acquisition of cognitive workload that can be used and compared both in VR and in a classical teaching context. We use electroencephalography (EEG) to assess brain activity through a wearable plug-and-play headset (Wearable Sensing-DSI 7). This equipment is combined with VR (Oculus). We use QStates classification software to compare brain waves when conducting spatial assessments on paper and in VR. This gives us a measure of cognitive workload, as a ratio of that resulting from subject records with a presumed ‘high’ workload. A total number of eight records of subjects were suited for comparison. No significant difference was found between EEG signals (paired t-test, p = 0.57). However, the assessment of cognitive workload was successfully validated through a questionnaire. The method could be used to set up reliable constructs for learning techniques for spatial insights.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Lim, Hyunmi; Ku, Jeonghun
Multiple-command single-frequency SSVEP-based BCI system using flickering action video Journal Article
In: Journal of Neuroscience Methods, vol. 314, pp. 21-27, 2019.
@article{lim2019multiple,
title = {Multiple-command single-frequency SSVEP-based BCI system using flickering action video},
author = {Hyunmi Lim and Jeonghun Ku},
doi = {10.1016/j.jneumeth.2019.01.005},
year = {2019},
date = {2019-01-16},
journal = {Journal of Neuroscience Methods},
volume = {314},
pages = {21-27},
publisher = {Elsevier},
abstract = {Background
The number of commands in a brain–computer interface (BCI) system is important. This study proposes a new BCI technique to increase the number of commands in a single BCI system without loss of accuracy.
New method
We expected that a flickering action video with left and right elbow movements could simultaneously activate the different pattern of event-related desynchronization (ERD) according to the video contents (e.g., left or right) and steady-state visually evoked potential (SSVEP). The classification accuracy to discriminate left, right, and rest states was compared under the three following feature combinations: SSVEP power (19–21 Hz), Mu power (8–13 Hz), and simultaneous SSVEP and Mu power.
Results
The SSVEP feature could discriminate the stimulus condition, regardless of left or right, from the rest condition, while the Mu feature discriminated left or right, but was relatively poor in discriminating stimulus from rest. However, combining the SSVEP and Mu features, which were evoked by the stimulus with a single frequency, showed superior performance for discriminating all the stimuli among rest, left, or right.
Comparison with the existing method
The video contents could activate the ERD differently, and the flickering component increased its accuracy, such that it revealed a better performance to discriminate when considered together.
Conclusions
This paradigm showed the possibility of increasing performance in terms of accuracy and number of commands with a single frequency by applying a flickering action video paradigm, and applicability to rehabilitation systems used by patients to facilitate their mirror neuron systems while training.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Pereira, Arnaldo; Padden, Dereck; Jantz, Jay; Lin, Kate; Alcaide-Aguirre, Ramses
Cross-Subject EEG Event-Related Potential Classification for Brain-Computer Interfaces Using Residual Networks Journal Article
In: 2018.
@article{pereira2018cross,
title = {Cross-Subject EEG Event-Related Potential Classification for Brain-Computer Interfaces Using Residual Networks},
author = {Arnaldo Pereira and Dereck Padden and Jay Jantz and Kate Lin and Ramses Alcaide-Aguirre},
doi = {10.13140/RG.2.2.16257.10086},
year = {2018},
date = {2018-09-20},
urldate = {2018-01-01},
abstract = {EEG event-related potentials, and the P300 signal in particular, are promising modalities for brain-computer interfaces (BCI). But the nonstationarity of EEG signals and their differences across individuals have made it difficult to implement classifiers that can determine user intent without having to be retrained or calibrated for each new user and sometimes even each session. This is a major impediment to the development of consumer BCI. Recently, the EEG BCI literature has begun to apply convolutional neural networks (CNNs) for classification, but experiments have largely been limited to training and testing on single subjects. In this paper, we report a study in which EEG data were recorded from 66 subjects in a visual oddball task in virtual reality. Using wide residual networks (WideResNets), we obtain state-of-the-art performance on a test set composed of data from all 66 subjects together. Additionally, a minimal preprocessing stream to convert EEG data into square images for CNN input while adding regularization is presented and shown to be viable. This study also provides some guidance on network architecture parameters based on experiments with different models. Our results show that it may be possible with enough data to train a classifier for EEG-based BCIs that can generalize across individuals without the need for individual training or calibration.
},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Memmott, Tab; Eddy, Brandon; Dabiri, Sina; Erdogmus, Deniz; Fried-Oken, Melanie; Oken, Barry
Automated and self-report measures of drowsiness over successive calibrations in a brain-computer interface for communication Journal Article
In: Clinical Neurophysiology, vol. 129, pp. e61–e62, 2018.
@article{memmott2018t154,
title = {Automated and self-report measures of drowsiness over successive calibrations in a brain-computer interface for communication},
author = {Tab Memmott and Brandon Eddy and Sina Dabiri and Deniz Erdogmus and Melanie Fried-Oken and Barry Oken},
doi = {10.1016/j.clinph.2018.04.155},
year = {2018},
date = {2018-05-01},
journal = {Clinical Neurophysiology},
volume = {129},
pages = {e61--e62},
publisher = {Elsevier},
abstract = {Brain computer interfaces (BCI) generally require the user to maintain an attentive state. Potential end-users with severe speech and physical impairments may have limited communication abilities to report their current state, thus an automatic calculation of state may improve performance. It’s not yet known if an effective automatic calculation of drowsiness can be detected reliably in end-user populations or healthy controls.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kang, Dayoon; Kim, Jinsoo; Jang, Dong-Pyo; Cho, Yang Seok; Kim, Sung-Phil
Investigation of engagement of viewers in movie trailers using electroencephalography Journal Article
In: Brain-Computer Interfaces, vol. 2, no. 4, pp. 193–201, 2015.
@article{kang2015investigation,
title = {Investigation of engagement of viewers in movie trailers using electroencephalography},
author = {Dayoon Kang and Jinsoo Kim and Dong-Pyo Jang and Yang Seok Cho and Sung-Phil Kim},
doi = {10.1080/2326263X.2015.1103591},
year = {2015},
date = {2015-11-10},
journal = {Brain-Computer Interfaces},
volume = {2},
number = {4},
pages = {193--201},
publisher = {Taylor & Francis},
abstract = {Brain-computer interfaces (BCIs) have been focused on providing direct communications to the disabled. Recently, BCI researchers have expanded BCI applications to non-medical uses and categorized them as active BCI, reactive BCI, and passive BCI. Neurocinematics, a new application of reactive BCIs, aims to understand viewers’ cognitive and affective responses to movies from neural activity, providing more objective information than traditional subjective self-reports. However, studies on analytical indices for neurocinematics have verified their indices by comparisons with self-reports. To overcome this contradictory issue, we proposed using an independent psychophysical index to evaluate a neural engagement index (NEI). We made use of the secondary task reaction time (STRT), which measures participants’ engagement in a primary task by their reaction time to a secondary task; here, responding to a tactile stimulus was the secondary task and watching a movie trailer was the primary task. NEI was developed as changes in the difference between frontal beta and alpha activity of EEG. We evaluated movie trailers using NEI, STRT, and self-reports and found a significant correlation between STRT and NEI across trailers but no correlation between any of the self-report results and STRT or NEI. Our results suggest that NEI developed for neurocinematics may conform well with more objective psychophysical assessments but not with subjective self-reports.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sellers, Eric W; Turner, Peter; Sarnacki, William A; McManus, Tobin; Vaughan, Theresa M; Matthews, Robert
A Novel Dry Electrode for Brain-Computer Interface Conference
International Conference on Human-Computer Interaction, Springer 2009.
@conference{sellers2009novel,
title = {A Novel Dry Electrode for Brain-Computer Interface},
author = {Eric W Sellers and Peter Turner and William A Sarnacki and Tobin McManus and Theresa M Vaughan and Robert Matthews},
url = {https://link.springer.com/chapter/10.1007/978-3-642-02577-8_68},
year = {2009},
date = {2009-01-01},
booktitle = {International Conference on Human-Computer Interaction},
pages = {623--631},
organization = {Springer},
abstract = {A brain-computer interface is a device that uses signals recorded from the brain to directly control a computer. In the last few years, P300-based brain-computer interfaces (BCIs) have proven an effective and reliable means of communication for people with severe motor disabilities such as amyotrophic lateral sclerosis (ALS). Despite this fact, relatively few individuals have benefited from currently available BCI technology. Independent BCI use requires easily acquired, good-quality electroencephalographic (EEG) signals maintained over long periods in less-than-ideal electrical environments. Conventional, wet-sensor, electrodes require careful application. Faulty or inadequate preparation, noisy environments, or gel evaporation can result in poor signal quality. Poor signal quality produces poor user performance, system downtime, and user and caregiver frustration. This study demonstrates that a hybrid dry electrode sensor array (HESA) performs as well as traditional wet electrodes and may help propel BCI technology to a widely accepted alternative mode of communication.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Please fill out the form and provide a brief description of your application so we can help match you with products that will meet your specific needs.