EEG-based Brain-Computer Interfaces (BCIs) are non-invasive systems that translate brain activity into commands that control an effector (such as a computer keyboard, mouse, or cursor). Many patients who cannot communicate effectively, such as those affected by stroke, locked-in syndrome, or neurodegenerative disease, rely on BCIs to stay connected. Among the most common BCI modalities are P300, SSVEP, slow cortical potentials, and sensorimotor rhythms. With Wearable Sensing’s revolutionary dry EEG technology, nearly any type of BCI is possible with research-grade signal quality. Because DSI systems are easy to use and comfortable, they have opened the door to translating a wide range of BCI applications to the real and virtual worlds.
P300 is an event-related potential (ERP) elicited by the “oddball” paradigm: the brain produces a characteristic response roughly 300 ms after an infrequent (“odd”) stimulus is presented. This response can be decoded and classified in real time for a variety of applications.
One such use case is the P300 speller, in which letters are flashed on a screen; when the “target” letter appears, the brain produces a P300 response, which can then be translated into a letter selection.
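To make the pipeline concrete, here is a minimal sketch of how a P300 speller might score flashes, assuming a fixed post-stimulus window, a shrinkage LDA classifier, and a nominal sampling rate. It illustrates the general approach only, not the OHSU or Wearable Sensing implementation.

```python
# Hypothetical sketch of P300 speller scoring: epoch EEG around each flash,
# classify each epoch as target/non-target, and pick the letter whose flashes
# collectively score highest. Window, sampling rate, and classifier are
# illustrative assumptions, not a vendor implementation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 300             # sampling rate in Hz (assumed; set to the headset's actual rate)
WINDOW = (0.0, 0.8)  # seconds after each flash to keep

def epoch(eeg, flash_samples):
    """Cut one fixed-length window per flash. eeg: (n_channels, n_samples)."""
    start, stop = (int(t * FS) for t in WINDOW)
    return np.stack([eeg[:, s + start:s + stop] for s in flash_samples])

def fit_p300_classifier(epochs, is_target):
    """epochs: (n_flashes, n_channels, n_times); is_target: 0/1 labels."""
    X = epochs.reshape(len(epochs), -1)          # flatten channels x time
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    return clf.fit(X, is_target)

def select_letter(clf, epochs, flashed_letters):
    """Average the target-class score over all flashes of each letter."""
    scores = clf.decision_function(epochs.reshape(len(epochs), -1))
    mean_score = {letter: np.mean([s for s, f in zip(scores, flashed_letters) if f == letter])
                  for letter in set(flashed_letters)}
    return max(mean_score, key=mean_score.get)
```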
Dr. Betts Peters, Dr. Melanie Fried-Oken, and their team at Oregon Health & Science University have developed a P300 speller using the DSI-24 and validated its functionality with participants with locked-in syndrome.
Steady-state visually evoked potentials (SSVEPs) are natural responses to visual stimuli flickering at specific frequencies. In a typical SSVEP paradigm, targets flash at different frequencies, anywhere from 3.5 Hz to 75 Hz; depending on which target the subject attends to, the brain produces a characteristic response at that target’s frequency and its harmonics.
As shown in the video, a 12-target numbered keyboard is set up, and the subject counts upward through the targets. No training is required, and in some cases the algorithm can correctly classify a selection in under one second.
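A common way to achieve this kind of training-free classification is canonical correlation analysis (CCA) against sinusoidal references at each candidate frequency and its harmonics. The sketch below illustrates that general idea; the frequencies, window length, and channel selection are assumptions, and this is not Neuracle’s algorithm.

```python
# Illustrative training-free SSVEP classification with canonical correlation
# analysis (CCA): correlate the EEG window with sine/cosine references at each
# candidate frequency (plus harmonics) and pick the best-matching target.
# Frequencies, sampling rate, and channel choices are assumptions for the sketch.
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 300                                 # sampling rate in Hz (assumed)
TARGET_FREQS = [8.0, 9.0, 10.0, 11.0]    # one flicker frequency per target (assumed)
N_HARMONICS = 3

def reference_signals(freq, n_samples):
    """Sine/cosine references at the fundamental frequency and its harmonics."""
    t = np.arange(n_samples) / FS
    refs = []
    for h in range(1, N_HARMONICS + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.array(refs).T              # (n_samples, 2 * N_HARMONICS)

def classify_ssvep(eeg_window):
    """eeg_window: (n_samples, n_channels), e.g. from occipital channels O1/O2."""
    best_freq, best_corr = None, -1.0
    for freq in TARGET_FREQS:
        refs = reference_signals(freq, len(eeg_window))
        x_c, y_c = CCA(n_components=1).fit_transform(eeg_window, refs)
        corr = np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]
        if corr > best_corr:
            best_freq, best_corr = freq, corr
    return best_freq, best_corr
```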
This SSVEP software was developed by Wearable Sensing’s collaborator in China, Neuracle, and is available for purchase for all DSI systems. It comes ready to use, with customizable 12-target and 40-target keyboards designed for ultra-rapid, high-accuracy classification.
Motor imagery is a BCI technique in which the subject imagines performing a movement with a particular limb. This imagery modulates rhythmic activity (the mu and beta rhythms) in the regions of sensorimotor cortex that correspond to the imagined limb. The BCI decodes these signals and translates the imagined movement into feedback in the form of cursor movements or other computer commands.
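As a rough illustration of how such decoding can work, the sketch below band-passes the signal around the mu/beta rhythms, uses per-channel log-variance as a proxy for event-related desynchronization, and classifies left- versus right-hand imagery with LDA. The band edges, sampling rate, and epoch shapes are assumptions rather than any specific product’s implementation.

```python
# Minimal sketch of a two-class motor imagery decoder: band-pass around the
# mu/beta rhythms, take log-variance of each channel as a feature (a proxy for
# event-related desynchronization), and classify left vs. right hand imagery
# with LDA. Band edges, sampling rate, and epoch length are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 300              # sampling rate in Hz (assumed)
BAND = (8.0, 30.0)    # mu + beta band

def bandpass(epochs):
    """epochs: (n_trials, n_channels, n_samples)."""
    b, a = butter(4, [f / (FS / 2) for f in BAND], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def log_variance_features(epochs):
    """Log-variance per channel reflects mu/beta power (lower during imagery)."""
    return np.log(np.var(bandpass(epochs), axis=-1))

def train_mi_decoder(train_epochs, labels):
    """labels: 0 = left-hand imagery, 1 = right-hand imagery."""
    clf = LinearDiscriminantAnalysis()
    return clf.fit(log_variance_features(train_epochs), labels)

def decode(clf, epoch):
    """epoch: (n_channels, n_samples) -> predicted class for one trial."""
    return int(clf.predict(log_variance_features(epoch[np.newaxis]))[0])
```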
The DSI-24 was featured in the interactive art installation “Mental Work” at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. During the exhibit, participants controlled a wheel by imagining moving one of their arms.
Neurolutions is a medical device company developing neuro-rehabilitation solutions that seek to restore function to patients who are disabled as a result of neurological injury. The Neurolutions IpsiHand system provides upper extremity rehabilitation for chronic stroke patients leveraging brain-computer interface and advanced wearable robotics technology.
Using the DSI-7, Neurolutions applies motor imagery techniques to decode a patient’s intent to move their fingers, which then instructs the exoskeleton to physically move the hand. With repeated sessions, patients can regain control of their affected limbs.
Neurons that fire together, wire together!
Gupta, Disha; Brangaccio, Jodi Ann; Mojtabavi, Helia; Wolpaw, Jonathan R; Hill, NJ
Extracting Robust Single-Trial Somatosensory Evoked Potentials for Non-Invasive Brain Computer Interfaces Journal Article
In: Journal of Neural Engineering, 2025.
@article{gupta2025extracting,
title = {Extracting Robust Single-Trial Somatosensory Evoked Potentials for Non-Invasive Brain Computer Interfaces},
author = {Disha Gupta and Jodi Ann Brangaccio and Helia Mojtabavi and Jonathan R Wolpaw and NJ Hill},
doi = {10.1088/1741-2552/adfd8a},
year = {2025},
date = {2025-09-03},
urldate = {2025-01-01},
journal = {Journal of Neural Engineering},
abstract = {Objective. Reliable extraction of single-trial somatosensory evoked potentials (SEPs) is essential for developing brain-computer interface (BCI) applications to support rehabilitation after brain injury. For real-time feedback, these responses must be extracted prospectively on every trial, with minimal post-processing and artifact correction. However, noninvasive SEPs elicited by electrical stimulation at recommended parameter settings (0.1–0.2 msec pulse width, stimulation at or below motor threshold, 2–5 Hz frequency) are typically small and variable, often requiring averaging across multiple trials or extensive processing. Here, we describe and evaluate ways to optimize the stimulation setup to enhance the signal-to-noise ratio (SNR) of noninvasive single-trial SEPs, enabling more reliable extraction. Approach. SEPs were recorded with scalp electroencephalography in tibial nerve stimulation in thirteen healthy people, and two people with CNS injuries. Three stimulation frequencies (lower than recommended: 0.2 Hz, 1 Hz, 2 Hz) with a pulse width longer than recommended (1 msec), at a stimulation intensity based on H-reflex and M-wave at Soleus muscle were evaluated. Detectability of single-trial SEPs relative to background noise was tested offline and in a pseudo-online analysis, followed by a real-time demonstration. Main results. SEP N70 was observed predominantly at the central scalp regions. Online decoding performance was significantly higher with Laplacian filter. Generalization performance showed an expected degradation, at all frequencies, with an average decrease of 5.9% (multivariate) and 6.5% (univariate), with an AUC score ranging from 0.78–0.90. The difference across stimulation frequencies was not significant. In individuals with injuries, AUC of 0.86 (incomplete spinal cord injury) and 0.81 (stroke) was feasible. Real-time demonstration showed SEP detection with AUC of 0.89. Significance. This study describes and evaluates a system for extracting single-trial SEPs in real-time, suitable for a BCI-based operant conditioning. It enhances SNR of individual SEPs by alternate electrical stimulation parameters, dry headset, and optimized signal processing.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Țenea, Sabin-Andrei; Berceanu, Alexandru; Nisioi, Sergiu; Robu-Movilă, Andreea; Pistol, Constantin; Burloiu, Grigore
The co-created city: neuroadaptive design for healthy environments Journal Article
In: Intelligent Buildings International, pp. 1–17, 2025.
@article{țenea2025co,
title = {The co-created city: neuroadaptive design for healthy environments},
author = {Sabin-Andrei Țenea and Alexandru Berceanu and Sergiu Nisioi and Andreea Robu-Movilă and Constantin Pistol and Grigore Burloiu},
doi = {https://doi.org/10.1080/17508975.2025.2542804},
year = {2025},
date = {2025-08-21},
urldate = {2025-01-01},
journal = {Intelligent Buildings International},
pages = {1–17},
publisher = {Taylor & Francis},
abstract = {Urban environments profoundly influence human well-being and behavior, underscoring the need for design paradigms that seamlessly integrate ecological principles with stakeholder requirements. This study investigates the convergence of affective computing and generative design to develop adaptive, health-promoting urban environments. Initially, a genetic-generative algorithm was created to generate an extensive database of high-rise tower designs optimized for solar exposure and surface-to-volume ratio. Subsequently, an EEG-based Brain–Computer Interface (BCI) system was implemented to capture architects’ subconscious emotional responses to selected designs using Event-Related Potentials (ERPs) and Self-Assessment Manikin (SAM) scales. EEG data from 24 participants were analyzed to extract ERP markers of valence-based preference, revealing significant neural responses in early (250–350 ms) and late (600–800 ms) time windows. A random forest model, complemented by SHAP analysis, demonstrated nonlinear influences of critical design parameters on subjective preference. EEG-derived preference scores were then integrated into a multi-objective optimization workflow, facilitating a data-driven, user-centric design selection process. The findings support the potential for real-time, neuroadaptive architectural design that mitigates decision fatigue while harmonizing objective performance metrics with affective insights. Moreover, this work contributes to research-informed public consultation by providing an evidence-based, inclusive approach for incorporating subconscious user preferences into urban planning and architectural workflows.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Celik, Basak; Memmott, Tab; Stratis, Georgios; Lawhead, Matthew; Peters, Betts; Klee, Daniel; Fried-Oken, Melanie; Erdogmus, Deniz
Multimodal Sensor Fusion for EEG-Based BCI Typing Systems Conference
International Brain-Computer Interface Meeting 2025, 2025.
@conference{inproceedings,
title = {Multimodal Sensor Fusion for EEG-Based BCI Typing Systems},
author = {Basak Celik and Tab Memmott and Georgios Stratis and Matthew Lawhead and Betts Peters and Daniel Klee and Melanie Fried-Oken and Deniz Erdogmus},
url = {https://www.researchgate.net/publication/393680550_Multimodal_Sensor_Fusion_for_EEG-Based_BCI_Typing_Systems},
doi = {10.3217/978-3-99161-050-2-196},
year = {2025},
date = {2025-06-02},
urldate = {2025-01-01},
organization = {International Brain-Computer Interface Meeting 2025},
abstract = {For people with severe speech and physical impairments (SSPI), a robust communication interface is often a necessity to improve quality of life. Non-implantable electroencephalography (EEG)-based BCI typing systems are one option in the field to restore communication. In an EEG-based typing interface, a sequence of symbols are presented consecutively on a screen, and the intended symbol is probabilistically inferred by the resulting event-related potentials (ERPs) [1]. Selecting the intended symbol often takes multiple attempts due to a subset of all symbols being presented in each sequence, and a decision cannot be made if the EEG evidence does not strongly support the intended symbol. },
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Peters, Betts; Celik, Basak; Gaines, Dylan; Galvin-McLaughlin, Deirdre; Imbiriba, Tales; Kinsella, Michelle; Klee, Daniel; Lawhead, Matthew; Memmott, Tab; Smedemark-Margulies, Niklas; others,
RSVP Keyboard with Inquiry Preview: mixed performance and user experience with an adaptive, multimodal typing interface combining EEG and switch input Journal Article
In: Journal of Neural Engineering, 2025.
@article{peters2025rsvp,
title = {RSVP Keyboard with Inquiry Preview: mixed performance and user experience with an adaptive, multimodal typing interface combining EEG and switch input},
author = {Betts Peters and Basak Celik and Dylan Gaines and Deirdre Galvin-McLaughlin and Tales Imbiriba and Michelle Kinsella and Daniel Klee and Matthew Lawhead and Tab Memmott and Niklas Smedemark-Margulies and others},
doi = {10.1088/1741-2552/ada8e0},
year = {2025},
date = {2025-01-10},
urldate = {2025-01-01},
journal = {Journal of Neural Engineering},
abstract = {Objective. The RSVP Keyboard is a non-implantable, event-related potential-based brain-computer interface (BCI) system designed to support communication access for people with severe speech and physical impairments. Here we introduce Inquiry Preview, a new RSVP Keyboard interface incorporating switch input for users with some voluntary motor function, and describe its effects on typing performance and other outcomes. Approach. Four individuals with disabilities participated in the collaborative design of possible switch input applications for the RSVP Keyboard, leading to the development of Inquiry Preview and a method of fusing switch input with language model and electroencephalography (EEG) evidence for typing. Twenty-four participants without disabilities and one potential end user with incomplete locked-in syndrome took part in two experiments investigating the effects of Inquiry Preview and two modes of switch input on typing accuracy and speed during a copy-spelling task. Main results. For participants without disabilities, Inquiry Preview and switch input tended to worsen typing performance compared to the standard RSVP Keyboard condition, with more consistent effects across participants for speed than for accuracy. However, there was considerable variability, with some participants demonstrating improved typing performance and better user experience with Inquiry Preview and switch input. Typing performance for the potential end user was comparable to that of participants without disabilities. He typed most quickly and accurately with Inquiry Preview and switch input and gave favorable user experience ratings to those conditions, but preferred standard RSVP Keyboard. Significance. Inquiry Preview is a novel multimodal interface for the RSVP Keyboard BCI, incorporating switch input as an additional control signal. Typing performance and user experience and preference varied widely across participants, reinforcing the need for flexible, customizable BCI systems that can adapt to individual users. ClinicalTrials.gov Identifier: NCT04468919.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Sultana, Mushfika; Jain, Osheen; Halder, Sebastian; Matran-Fernandez, Ana; Nawaz, Rab; Scherer, Reinhold; Chavarriaga, Ricardo; del R Millán, José; Perdikis, Serafeim
Evaluating Dry EEG Technology Out of the Lab Conference
2024 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), IEEE 2024.
@conference{sultana2024evaluating,
title = {Evaluating Dry EEG Technology Out of the Lab},
author = {Mushfika Sultana and Osheen Jain and Sebastian Halder and Ana Matran-Fernandez and Rab Nawaz and Reinhold Scherer and Ricardo Chavarriaga and José del R Millán and Serafeim Perdikis},
doi = {10.1109/MetroXRAINE62247.2024.10797021},
year = {2024},
date = {2024-12-24},
urldate = {2024-01-01},
booktitle = {2024 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE)},
pages = {752–757},
organization = {IEEE},
abstract = {Dry electroencephalography (EEG) electrodes have emerged as a promising solution for enhancing the usability of non-invasive Brain-Computer Interface (BCI) systems. Recent advancements in dry helmets have demonstrated competitiveness in enabling EEG and BCI applications with state-of-the-art gel-based systems. In this study, we evaluate the performance and signal quality of a dry EEG cap for BCI applications in a large population. The assessment was conducted under extremely noisy conditions at a public exhibition, surrounded by numerous attendees and multiple sources of electromagnetic interference. Our analysis includes 100 participants from the Mental Work exhibition and assesses the acquired signal's integrity through, Motor Imagery (MI) classification accuracy, the kurtosis of the raw signals and the deviation of the Power Spectral Density (PSD) spectra from the ideal curve. Results show that 56 participants achieved above chance-level single-sample accuracy, with even higher accuracy observed for peak performance. Frontal channels exhibited higher artifact presence, yet the overall signal quality was comparable to gel-based systems used in the authors' previous studies. These findings confirm the potential of dry EEG technology to enable BCI applications beyond the lab, facilitating everyday use in homes, clinics, and public spaces.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}
Gwon, Daeun; Ahn, Minkyu
Motor task-to-task transfer learning for motor imagery brain-computer interfaces Journal Article
In: NeuroImage, pp. 120906, 2024.
@article{gwon2024motor,
title = {Motor task-to-task transfer learning for motor imagery brain-computer interfaces},
author = {Daeun Gwon and Minkyu Ahn},
doi = {https://doi.org/10.1016/j.neuroimage.2024.120906},
year = {2024},
date = {2024-10-28},
urldate = {2024-01-01},
journal = {NeuroImage},
pages = {120906},
publisher = {Elsevier},
abstract = {Motor imagery (MI) is one of the popular control paradigms in the non-invasive brain-computer interface (BCI) field. MI-BCI generally requires users to conduct the imagination of movement (e.g., left or right hand) to collect training data for generating a classification model during the calibration phase. However, this calibration phase is generally time-consuming and tedious, as users conduct the imagination of hand movement several times without being given feedback for an extended period. This obstacle makes MI-BCI non user-friendly and hinders its use. On the other hand, motor execution (ME) and motor observation (MO) are relatively easier tasks, yield lower fatigue than MI, and share similar neural mechanisms to MI. However, few studies have integrated these three tasks into BCIs. In this study, we propose a new task-to-task transfer learning approach of 3-motor tasks (ME, MO, and MI) for building a better user-friendly MI-BCI. For this study, 28 subjects participated in 3-motor tasks experiment, and electroencephalography (EEG) was acquired. User opinions regarding the 3-motor tasks were also collected through questionnaire survey. The 3-motor tasks showed a power decrease in the alpha rhythm, known as event-related desynchronization, but with slight differences in the temporal patterns. In the classification analysis, the cross-validated accuracy (within-task) was 67.05 % for ME, 65.93 % for MI, and 73.16 % for MO on average. Consistently with the results, the subjects scored MI (3.16) as the most difficult task compared with MO (1.42) and ME (1.41), with p < 0.05. In the analysis of task-to-task transfer learning, where training and testing are performed using different task datasets, the ME–trained model yielded an accuracy of 65.93 % (MI test), which is statistically similar to the within-task accuracy (p > 0.05). The MO–trained model achieved an accuracy of 60.82 % (MI test). On the other hand, combining two datasets yielded interesting results. ME and 50 % of the MI–trained model (50-shot) classified MI with a 69.21 % accuracy, which outperformed the within-task accuracy (p < 0.05), and MO and 50 % of the MI–trained model showed an accuracy of 66.75 %. Of the low performers with a within-task accuracy of 70 % or less, 90 % (n = 21) of the subjects improved in training with ME, and 76.2 % (n = 16) improved in training with MO on the MI test at 50-shot. These results demonstrate that task-to-task transfer learning is possible and could be a promising approach to building a user-friendly training protocol in MI-BCI.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Cha, Seungwoo; Kim, Kyoung Tae; Chang, Won Kee; Paik, Nam-Jong; Choi, Ji Soo; Lim, Hyunmi; Kim, Won-Seok; Ku, Jeonghun
Effect of Electroencephalography-based Motor Imagery Neurofeedback on Mu Suppression During Motor Attempt in Patients with Stroke Journal Article
In: Journal of NeuroEngineering and Rehabilitation, 2024.
@article{cha2024effect,
title = {Effect of Electroencephalography-based Motor Imagery Neurofeedback on Mu Suppression During Motor Attempt in Patients with Stroke},
author = {Seungwoo Cha and Kyoung Tae Kim and Won Kee Chang and Nam-Jong Paik and Ji Soo Choi and Hyunmi Lim and Won-Seok Kim and Jeonghun Ku},
doi = {https://doi.org/10.21203/rs.3.rs-5106561/v1},
year = {2024},
date = {2024-09-26},
urldate = {2024-01-01},
journal = {Journal of NeuroEngineering and Rehabilitation },
abstract = {Objective
The primary aims of this study were to explore the neurophysiological effects of motor imagery neurofeedback using electroencephalography (EEG), specifically focusing on mu suppression during serial motor attempts and assessing its potential benefits in patients with subacute stroke.
Methods
A total of 15 patients with hemiplegia following subacute ischemic stroke were prospectively enrolled in this randomized cross-over study. This study comprised two experiments: neurofeedback and sham. Each experiment included four blocks: three blocks of resting, grasp, resting, and intervention, followed by one block of resting and grasp. During the resting sessions, the participants fixated on a white cross on a black background for 2 minutes without moving their upper extremities. In the grasp sessions, the participants were instructed to grasp and release their paretic hand at a frequency of about 1 Hz for 3 minutes while fixating on the same white cross. During the intervention sessions, neurofeedback involved presenting a punching image with the affected upper limb corresponding to the mu suppression induced by imagined movement, while the sham involved mu suppression of other randomly selected participants 3 minutes. EEG data were recorded during the experiment, and data from C3/C4 and P3/P4 were used for analyses to compare the degree of mu suppression between the neurofeedback and sham conditions.
Results
Significant mu suppression was observed in the bilateral motor and parietal cortices during the neurofeedback intervention compared with the sham condition across serial sessions (p < 0.001). Following neurofeedback, the real grasping sessions showed progressive strengthening of mu suppression in the ipsilesional motor cortex and bilateral parietal cortices compared to those following sham (p < 0.05), an effect not observed in the contralesional motor cortex.
Conclusion
Motor imagery neurofeedback significantly enhances mu suppression in the ipsilesional motor and bilateral parietal cortices during motor attempts in patients with subacute stroke. These findings suggest that motor imagery neurofeedback could serve as a promising adjunctive therapy to enhance motor-related cortical activity and support motor rehabilitation in patients with stroke.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Harel, Asaf; Shriki, Oren
Task-guided attention increases non-linearity of steady-state visually evoked potentials Journal Article
In: Journal of Neural Engineering, 2024.
@article{harel2024task,
title = {Task-guided attention increases non-linearity of steady-state visually evoked potentials},
author = {Asaf Harel and Oren Shriki},
doi = {https://doi.org/10.1088/1741-2552/ad8032},
year = {2024},
date = {2024-09-26},
urldate = {2024-01-01},
journal = {Journal of Neural Engineering},
abstract = {Attention is a multifaceted cognitive process, with nonlinear dynamics playing a crucial role. In this study, we investigated the involvement of nonlinear processes in top-down visual attention by employing a contrast-modulated sequence of letters and numerals, encircled by a consistently flickering white square on a black background - a setup that generated steady-state visually evoked potentials. Nonlinear processes are recognized for eliciting and modulating the harmonics of constant frequencies. We examined the fundamental and harmonic frequencies of each stimulus to evaluate the underlying nonlinear dynamics during stimulus processing. In line with prior research, our findings indicate that the power spectrum density of EEG responses is influenced by both task presence and stimulus contrast. By utilizing the Rhythmic Entrainment Source Separation (RESS) technique, we discovered that actively searching for a target within a letter stream heightened the amplitude of the fundamental frequency and harmonics related to the background flickering stimulus. While the fundamental frequency amplitude remained unaffected by stimulus contrast, a lower contrast led to an increase in the second harmonic's amplitude. We assessed the relationship between the contrast response function and the nonlinear-based harmonic responses. Our findings contribute to a more nuanced understanding of the nonlinear processes impacting top-down visual attention while also providing insights into optimizing brain-computer interfaces.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Xu, Jihong; Chen, Tianran; Yan, Lirong
Improvement Of An Untrained Brain-computer Interface System Combined With Target Recognition Proceedings Article
In: 2024 16th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), pp. 1–6, IEEE 2024.
@inproceedings{xu2024improvement,
title = {Improvement Of An Untrained Brain-computer Interface System Combined With Target Recognition},
author = {Jihong Xu and Tianran Chen and Lirong Yan},
doi = {10.1109/ECAI61503.2024.10607572},
year = {2024},
date = {2024-07-30},
urldate = {2024-01-01},
booktitle = {2024 16th International Conference on Electronics, Computers and Artificial Intelligence (ECAI)},
pages = {1–6},
organization = {IEEE},
abstract = {In the current commonly used Steady State Visual Evoked Potential (SSVEP) paradigm, the stimuli are mostly white flashing blocks superimposed on a black background, which is monotonous and easy to cause subject fatigue with prolonged flashing stimuli. The stimulus paradigm is mostly divorced from the actual control environment, and lacks a direct connection with the control task. The mainstream classification algorithms usually analyze the data with a fixed window length, which is lack of generalizability to different subjects, and the classification performance index needs to be further improved. In this study, the SSVEP stimulus paradigm was improved by combining the YOLOv5 algorithm, which changed from the traditional black background to the actual control environment. It superimposed SSVEP stimulus blocks of different frequencies at each recognized target location. The stimulus paradigm was not stripped from the control scene, and the Filter Bank Criterion Correlation Analysis (FBCCA) algorithm was chosen to analyze it. The FBCCA algorithm was further improved by using a dynamic window strategy, which automatically adjusts the window length of each experiment according to the characteristics of each subject. This improves the versatility of the algorithm and increases the recognition accuracy and Information Transfer Rate (ITR). After the improvement, the offline experimental data were analyzed. The improved algorithm achieved an average accuracy of 87.08%, which was 17.29% higher than the original algorithm. Additionally, the average ITR was 74.28 bits/min, which was 36.51 bits/min higher than the original algorithm.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Jeong, Chang Hyeon; Lim, Hyunmi; Lee, Jiye; Lee, Hye Sun; Ku, Jeonghun; Kang, Youn Joo
Attentional state-synchronous peripheral electrical stimulation during action observation induced distinct modulation of corticospinal plasticity after stroke Journal Article
In: Frontiers in Neuroscience, vol. 18, pp. 1373589, 2024.
@article{jeong2024attentional,
title = {Attentional state-synchronous peripheral electrical stimulation during action observation induced distinct modulation of corticospinal plasticity after stroke},
author = {Chang Hyeon Jeong and Hyunmi Lim and Jiye Lee and Hye Sun Lee and Jeonghun Ku and Youn Joo Kang},
doi = {10.3389/fnins.2024.1373589},
year = {2024},
date = {2024-03-18},
urldate = {2024-03-18},
journal = {Frontiers in Neuroscience},
volume = {18},
pages = {1373589},
publisher = {Frontiers},
abstract = {Introduction: Brain computer interface-based action observation (BCI-AO) is a promising technique in detecting the user's cortical state of visual attention and providing feedback to assist rehabilitation. Peripheral nerve electrical stimulation (PES) is a conventional method used to enhance outcomes in upper extremity function by increasing activation in the motor cortex. In this study, we examined the effects of different pairings of peripheral nerve electrical stimulation (PES) during BCI-AO tasks and their impact on corticospinal plasticity. Materials and methods: Our innovative BCI-AO interventions decoded user's attentive watching during task completion. This process involved providing rewarding visual cues while simultaneously activating afferent pathways through PES. Fifteen stroke patients were included in the analysis. All patients underwent a 15 min BCI-AO program under four different experimental conditions: BCI-AO without PES, BCI-AO with continuous PES, BCI-AO with triggered PES, and BCI-AO with reverse PES application. PES was applied at the ulnar nerve of the wrist at an intensity equivalent to 120% of the sensory threshold and a frequency of 50 Hz. The experiment was conducted randomly at least 3 days apart. To assess corticospinal and peripheral nerve excitability, we compared pre and post-task (post 0, post 20 min) parameters of motor evoked potential and F waves under the four conditions in the muscle of the affected hand.The findings indicated that corticospinal excitability in the affected hemisphere was higher when PES was synchronously applied with AO training, using BCI during a state of attentive watching. In contrast, there was no effect on corticospinal activation when PES was applied continuously or in the reverse manner. This paradigm promoted corticospinal plasticity for up to 20 min after task completion. Importantly, the effect was more evident in patients over 65 years of age.The results showed that task-driven corticospinal plasticity was higher when PES was applied synchronously with a highly attentive brain state during the action observation task, compared to continuous or asynchronous application. This study provides insight into how optimized BCI technologies dependent on brain state used in conjunction with other rehabilitation training could enhance treatment-induced neural plasticity.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}