Wearable Sensing’s wireless DSI-24 is the leading dry-electrode EEG system in terms of signal quality and comfort. The DSI-24 takes on average less than 3 minutes to set up, making it an ideal solution for scientists who need a simple, easy-to-use EEG system. Our patented sensor technology not only delivers uncompromised signal quality but also makes the system virtually immune to motion and electrical artifacts. As a result, the DSI-24 can be used in virtual or augmented reality, and lets researchers take their experiments out of the lab and into the real world.
The DSI-24 provides full head coverage with 19 electrodes plus 2 ear-clip sensors, and has 3 built-in auxiliary inputs for acquiring data from up to 3 auxiliary sensors. It also has an 8-bit trigger input for synchronizing with other devices such as eye trackers, motion sensors (IMUs), and more.
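Since the 8-bit trigger input arrives as one byte whose individual bits can each be wired to a different device, unpacking it is a simple bitmask operation. A minimal sketch (the example line assignments are hypothetical, not part of the DSI-24 documentation):

```python
def decode_trigger(byte_value):
    """Split an 8-bit trigger byte into its individual line states.

    Returns a list of 8 booleans, least-significant bit first.
    """
    if not 0 <= byte_value <= 255:
        raise ValueError("trigger byte must be 0-255")
    return [bool(byte_value & (1 << bit)) for bit in range(8)]

# Hypothetical mapping: bit 0 = stimulus onset, bit 3 = eye-tracker sync pulse
lines = decode_trigger(0b00001001)
# lines[0] and lines[3] are True; all other lines are False
```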
Used around the world by leaders in Research, Neurofeedback, Neuromarketing, Brain-Computer Interfaces, & Neuroergonomics.
With over 90% correlation to research-grade wet EEG systems, the dry sensor interface (DSI) offers unparalleled quality and performance.
Multiple adjustment points and a foam-pad-lined interior enable the system to be worn for up to 8 hours on any head shape or size.
All DSI systems include free, unlimited licenses of DSI-Streamer, our data acquisition software, which can record raw data in .csv and .edf file formats.
Faraday cages, spring-loaded electrodes, and our patented common-mode follower technology provide near immunity to electrical and motion artifacts.
Using 70% isopropyl alcohol and a cleaning brush, the DSI-24 takes only a minute to clean and 3 minutes to dry, and can be up and running on the next subject in minutes.
All DSI systems include our free C-based DLL API, which enables users to pull raw data directly from the headset for custom software on Windows, macOS, Linux, and ARM.
The DSI-24 was designed for ultra-rapid setup, taking on average less than 3 minutes to don, and works on any type of hair, including long hair, thick hair, afros, and more.
DSI headsets integrate active sensors, amplifiers, digitizers, batteries, onboard storage, and wireless transmission, making them complete, mobile, wearable EEG systems.
DSI systems are exclusively compatible with QStates, a machine learning algorithm for classifying cognitive states such as mental workload, engagement, and fatigue.
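As a worked example of the digitization figures in the specifications below (16-bit amplifier, A/D resolution of 0.317 μV per count), converting raw ADC counts to microvolts is a single multiplication; treating counts as signed 16-bit values is our assumption here for illustration:

```python
LSB_UV = 0.317  # A/D resolution in microvolts per count, from the spec table

def counts_to_uv(counts):
    """Convert a raw ADC reading (in counts) to microvolts."""
    return counts * LSB_UV

# Under a signed-16-bit assumption, a full-scale swing of +/-32768 counts
# corresponds to roughly +/-10.4 mV referred to input.
print(round(counts_to_uv(100), 3))  # -> 31.7
```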
Our Wireless Trigger Hub simplifies the synchronization of DSI headsets with other devices.
An additional benefit of the Trigger Hub design is that it allows synchronization across multiple data sources distributed across multiple systems, each running at its own clock rate. A common case in EEG experiments is synchronizing EEG with eye-tracking measurements, where the clock drift that inevitably accumulates between the two systems during extended recordings makes it difficult to align data to events across them.
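The alignment problem described above is commonly handled by fitting a linear mapping between the two clocks from trigger events that both systems timestamped. The sketch below is a generic least-squares version of that idea, illustrating the drift-correction principle rather than Wearable Sensing's implementation:

```python
# Map eye-tracker timestamps onto the EEG clock using shared trigger events.
# Fits t_eeg ~= a * t_eye + b by least squares: 'a' absorbs clock-rate drift,
# 'b' the constant offset between the two clocks.

def fit_clock_map(t_eye, t_eeg):
    n = len(t_eye)
    mean_x = sum(t_eye) / n
    mean_y = sum(t_eeg) / n
    sxx = sum((x - mean_x) ** 2 for x in t_eye)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(t_eye, t_eeg))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Simulated example: the EEG clock runs 50 ppm fast and starts 5 s later
eye = [0.0, 10.0, 20.0, 30.0]
eeg = [5.0 + t * 1.00005 for t in eye]
a, b = fit_clock_map(eye, eeg)
# a is close to 1.00005 and b close to 5.0; any eye-tracker event time t
# then maps onto the EEG clock as a * t + b
```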
The DSI-24 has 3 auxiliary inputs on the headset, which allow automatic synchronization of Wearable Sensing’s auxiliary sensors with the EEG. Available sensors include ECG, EMG, EOG, GSR, RESP, and TEMP. The sensor data is collected and recorded in our data acquisition software, DSI-Streamer, where you can view the EEG and auxiliary channels in real time.
EEG Channels: Fp1, Fp2, Fz, F3, F4, F7, F8, Cz, C3, C4, T7/T3, T8/T4, Pz, P3, P4, P7/T5, P8/T6, O1, O2, A1, A2
Reference / Ground: Common Mode Follower / Fpz
Head Size Range: Adult: 52 cm – 62 cm circumference; Child: 48 cm – 54 cm circumference
Sampling Rate: 300 Hz (600 Hz upgrade available)
Bandwidth: 0.003 – 150 Hz
A/D Resolution: 0.317 μV referred to input
Input Impedance (1 Hz): 47 GΩ
CMRR: > 120 dB
Amplifier / Digitizer: 16 bits / 24 channels
Wireless: Bluetooth
Wireless Range: 10 m
Run-time: > 24 hours, hot-swappable batteries
Onboard Storage: ~68 hours (available option)
Data Acquisition: Real-time, evoked potentials
Signal Quality Monitoring: Continuous impedance, baseline offset, noise (1–50 Hz)
Data Type: Raw and filtered data available
File Type: .CSV and .EDF
Data Output Streaming: TCP/IP socket, API (C-based), LSL
Cognitive State Classification: QStates
Brain-Computer Interface: SSVEP BCI algorithms; BCI2000; OpenViBE; PsychoPy; BCILAB
Data Integration / Analysis: CAPTIV; Lab Streaming Layer; NeuroPype; BrainStorm; NeuroVIS
Neurofeedback: Applied Neuroscience NeuroGuide; BrainMaster BrainAvatar; EEGer
Neuromarketing: CAPTIV Neurolab
Presentation: Presentation; E-Prime
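The TCP/IP socket output listed under Data Output Streaming can be consumed from any language that can open a socket. The loopback sketch below illustrates the read-a-framed-packet pattern only; the framing used here (a 2-byte big-endian sample count followed by float32 samples) is an assumption for the demo, not the actual DSI-Streamer packet format:

```python
import socket
import struct
import threading

def start_demo_server(samples):
    """Serve one framed packet on an ephemeral local port; returns (port, thread)."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()
        # Frame: 2-byte big-endian sample count, then float32 samples (demo framing)
        conn.sendall(struct.pack(">H", len(samples)) +
                     struct.pack(f">{len(samples)}f", *samples))
        conn.close()
        srv.close()

    t = threading.Thread(target=handle)
    t.start()
    return port, t

def read_packet(port):
    """Connect and read one framed packet of float32 samples."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        count = struct.unpack(">H", sock.recv(2))[0]
        need = 4 * count
        data = b""
        while len(data) < need:
            chunk = sock.recv(need - len(data))
            if not chunk:
                raise ConnectionError("stream closed early")
            data += chunk
        return list(struct.unpack(f">{count}f", data))

port, t = start_demo_server([1.5, -2.25, 3.0])
samples = read_packet(port)
t.join()
# samples == [1.5, -2.25, 3.0]
```

In practice you would connect to the host and port configured in DSI-Streamer and parse packets according to its documented format.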
Gupta, Disha; Brangaccio, Jodi Ann; Mojtabavi, Helia; Wolpaw, Jonathan R; Hill, NJ
Extracting Robust Single-Trial Somatosensory Evoked Potentials for Non-Invasive Brain Computer Interfaces Journal Article
In: Journal of Neural Engineering, 2025.
DOI: 10.1088/1741-2552/adfd8a
Abstract: Objective. Reliable extraction of single-trial somatosensory evoked potentials (SEPs) is essential for developing brain-computer interface (BCI) applications to support rehabilitation after brain injury. For real-time feedback, these responses must be extracted prospectively on every trial, with minimal post-processing and artifact correction. However, noninvasive SEPs elicited by electrical stimulation at recommended parameter settings (0.1–0.2 msec pulse width, stimulation at or below motor threshold, 2–5 Hz frequency) are typically small and variable, often requiring averaging across multiple trials or extensive processing. Here, we describe and evaluate ways to optimize the stimulation setup to enhance the signal-to-noise ratio (SNR) of noninvasive single-trial SEPs, enabling more reliable extraction. Approach. SEPs were recorded with scalp electroencephalography during tibial nerve stimulation in thirteen healthy people and two people with CNS injuries. Three stimulation frequencies (lower than recommended: 0.2 Hz, 1 Hz, 2 Hz) with a pulse width longer than recommended (1 msec), at a stimulation intensity based on the H-reflex and M-wave at the Soleus muscle, were evaluated. Detectability of single-trial SEPs relative to background noise was tested offline and in a pseudo-online analysis, followed by a real-time demonstration. Main results. SEP N70 was observed predominantly at the central scalp regions. Online decoding performance was significantly higher with a Laplacian filter. Generalization performance showed an expected degradation, at all frequencies, with an average decrease of 5.9% (multivariate) and 6.5% (univariate), with an AUC score ranging from 0.78–0.90. The difference across stimulation frequencies was not significant. In individuals with injuries, AUC of 0.86 (incomplete spinal cord injury) and 0.81 (stroke) was feasible. Real-time demonstration showed SEP detection with AUC of 0.89. Significance. This study describes and evaluates a system for extracting single-trial SEPs in real time, suitable for BCI-based operant conditioning. It enhances the SNR of individual SEPs through alternate electrical stimulation parameters, a dry headset, and optimized signal processing.
Colombo, Samuele; Kim, Nayeon; Gero, John
Brain-derived neural networks distinguish design representations in different media Journal Article
In: Proceedings of the Design Society, vol. 5, pp. 751–760, 2025.
DOI: 10.1017/pds.2025.10089
Abstract: Design activities rely on external representations to offload cognitive effort and communicate ideas. These representations, ranging from sketches to virtual reality (VR), influence cognitive processes and perceptual outcomes. This study investigates the impact of different media representations on brain activity by comparing neural responses to design representations in VR and desktop monitor conditions. Utilizing brain network analyses derived from EEG signals in alpha, beta, gamma, and theta bands, results demonstrate that VR elicits greater cognitive integration and sensory engagement. These patterns suggest that VR facilitates holistic evaluations, while desktop representations support precision-focused tasks. These findings provide actionable guidance for optimizing design media selection based on cognitive objectives and contribute to the emerging design neurocognition field.
Țenea, Sabin-Andrei; Berceanu, Alexandru; Nisioi, Sergiu; Robu-Movilă, Andreea; Pistol, Constantin; Burloiu, Grigore
The co-created city: neuroadaptive design for healthy environments Journal Article
In: Intelligent Buildings International, pp. 1–17, 2025.
DOI: 10.1080/17508975.2025.2542804
Abstract: Urban environments profoundly influence human well-being and behavior, underscoring the need for design paradigms that seamlessly integrate ecological principles with stakeholder requirements. This study investigates the convergence of affective computing and generative design to develop adaptive, health-promoting urban environments. Initially, a genetic-generative algorithm was created to generate an extensive database of high-rise tower designs optimized for solar exposure and surface-to-volume ratio. Subsequently, an EEG-based Brain–Computer Interface (BCI) system was implemented to capture architects’ subconscious emotional responses to selected designs using Event-Related Potentials (ERPs) and Self-Assessment Manikin (SAM) scales. EEG data from 24 participants were analyzed to extract ERP markers of valence-based preference, revealing significant neural responses in early (250–350 ms) and late (600–800 ms) time windows. A random forest model, complemented by SHAP analysis, demonstrated nonlinear influences of critical design parameters on subjective preference. EEG-derived preference scores were then integrated into a multi-objective optimization workflow, facilitating a data-driven, user-centric design selection process. The findings support the potential for real-time, neuroadaptive architectural design that mitigates decision fatigue while harmonizing objective performance metrics with affective insights. Moreover, this work contributes to research-informed public consultation by providing an evidence-based, inclusive approach for incorporating subconscious user preferences into urban planning and architectural workflows.
Shahrokhi, Eren M; Ahmed, S Nizam; Lee, Gaang; Lee, SangHyun
Feasibility of an EEG-based dynamic suboptimal cognitive monitoring for field neuroergonomics Conference
ISARC. Proceedings of the International Symposium on Automation and Robotics in Construction, vol. 42, pp. 73–80, IAARC Publications 2025.
URL: https://wearablesensing.com/wp-content/uploads/2025/09/Feasibility_of_an_EEG-based_dy.pdf
Abstract: Suboptimal cognitive states among construction workers significantly impact safety and productivity, with mental workload playing a key role in triggering these states. Determining if a mental workload fluctuation is leading to an error is challenging, as the relationship between mental workload and suboptimal cognitive states is complex and non-linear, with traditional theories failing to map their fluctuations effectively. Recently, a two-dimensional space has been introduced to theoretically map mental workload fluctuations and suboptimal cognitive states using task engagement and arousal. However, there is currently no framework in place to continuously apply this theoretical knowledge in practical settings. To address this gap, this study investigates the feasibility of EEG-based frameworks for classifying four different cognitive states, namely comfort zone, mind wandering, effort withdrawal, and inattentional blindness, based on mental workload fluctuations. EEG signals were collected from 10 participants using a headset with dry electrodes, processed to extract relevant features, and classified using Support Vector Machine (SVM) and Artificial Neural Network (ANN) models. The ANN achieved superior performance in k-fold and leave-one-period-out validation, though accuracy declined in leave-one-subject-out validation. These findings underscore the potential of EEG-based differentiation of cognitive suboptimalities to enhance safety and productivity in construction by providing crucial information about when construction workers are most likely to make cognitive errors, which is essential for timely and appropriate interventions. Also, the low subject-independent accuracy emphasizes the need to address individual differences in EEG signals for broader applicability.
Maksimenko, Vladimir; Li, Xinwei; Kim, Eui-Jin; Bansal, Prateek
Video-Based experiments better unveil societal biases towards ethical decisions of autonomous vehicles Journal Article
In: Transportation Research Part C: Emerging Technologies, vol. 179, pp. 105284, 2025.
DOI: 10.1016/j.trc.2025.105284
Abstract: Autonomous vehicles (AVs) encounter moral dilemmas when determining whom to sacrifice in unavoidable crashes. To increase the trustworthiness of AVs, policymakers need to understand public judgment on how AVs should act in such ethically complex situations. Previous studies have evaluated public perception about these ethical matters using picture-based surveys and reported societal biases, i.e., systematic variations in ethical decisions based on the socioeconomic characteristics (e.g., gender) of the individuals involved. For instance, females may prioritise saving a female pedestrian in AV-pedestrian incidents. We investigate if these biases stem from personal beliefs or emerge during experiment engagement, and if the presentation format affects bias manifestation. Analysing neural responses in moral experiments measured using electroencephalography (EEG) and behaviour model parameters, we find that video-based scenes better unveil societal biases than picture-based scenes. These biases emerge when the subject interacts with experimental information rather than being solely dictated by initial preferences. The findings support the use of realistic video-based scenes in moral experiments. These insights can inform data collection standards to shape socially acceptable ethical AI policies.
Zazon, Dor; Nissim, Nir
Can your brain signals reveal your romantic emotions? Journal Article
In: Computers in Biology and Medicine, vol. 196, pp. 110754, 2025.
DOI: 10.1016/j.compbiomed.2025.110754
Abstract: The process of partner selection may result in emotions of romantic attraction when one expresses interest towards a potential partner, and rejection when one receives negative feedback from a potential partner. Previous EEG studies have found distinct neural correlates for both emotions in the context of dating apps. However, to the best of our knowledge, no study has demonstrated the ability to predict the associated intra-subject romantic emotions based on a single-trial analysis of event-related potentials (ERPs). In this study, 61 participants (31 females and 30 males) agreed to use our simulated dating app, and their EEG brain activity was recorded during their engagement with the app. Based on each participant's EEG signals, we induced multiple machine and deep learning models aimed at predicting single-trial romantic attraction and rejection for each participant. Our results show that the best model obtained 71.38% and 81.31% average ROC-AUC scores across the participants for romantic attraction and rejection, respectively. We also found that our learning models were able to predict romantic emotions more accurately for picky participants than for those who were less fussy, which might suggest that picky people have stronger brain activity signals when it comes to romantic preference.
Ahmed, Mohammad Haroon; Panchookian, John; Grillo, Michael; Weerasinghe, Yasith; Taebi, Amirtaha; Qadri, Fadil; Gamage, Peshala; Kaya, Mehmet
Stress Classification Through Simultaneous EEG, Heart Rate Variability, and EMG Monitoring Proceedings Article
In: 2025 IEEE Medical Measurements & Applications (MeMeA), pp. 1–6, IEEE 2025.
DOI: 10.1109/MeMeA65319.2025.11068019
Abstract: Stress has significant effects on health, yet there is limited research on effective methods for quantifying stress detection. Monitoring physiological changes presents a promising approach to stress management. This study compares the effectiveness of electroencephalography (EEG), electrocardiography (ECG)-derived heart rate variability (HRV), and trapezius muscle electromyography (EMG) in stress classification. Sixteen healthy participants (ages 18–46) completed three sessions in a controlled environment. Baseline activity was compared to stress-induced changes during a Stroop color word test and mental arithmetic task. EEG, HRV, and EMG features were analyzed in 30-second intervals to assess their ability to detect stress. EEG features were found to be the most effective, followed by HRV and EMG. Machine learning techniques (SVM, KNN, neural network, and random forest) were applied for subject-specific classification. EEG achieved the highest accuracy (86.45 ± 7.22%), while HRV and EMG yielded similar accuracies (77.36 ± 9.10% and 81.84 ± 6.13%, respectively). When combining HRV and EMG features, an accuracy of 87.51 ± 7.18% was achieved, comparable to EEG. These findings suggest that wearable sensors utilizing EMG and HRV could effectively detect stress without the need for EEG. This approach could open up new avenues for stress management in real-world settings. Future studies with larger sample sizes will work towards developing a universal stress classification model.
Kyrou, Maria; Laskaris, Nikos A; Petrantonakis, Panagiotis C; Kalaganis, Fotis P; Georgiadis, Kostas; Nikolopoulos, Spiros; Kompatsiaris, Ioannis
Decoding Visual Art Preferences from EEG Signals Using Wavelet Scattering Transform Conference
2025 25th International Conference on Digital Signal Processing 2025.
URL: https://2025.ic-dsp.org/wp-content/uploads/2025/05/Decoding-Visual-Art-Preferences-from-EEG-signals-Using-Wavelet-Scattering-Transform.pdf
Abstract: Understanding art preferences through neural signals can enhance artistic experiences and provide valuable insights into aesthetic perception. In this study, we propose a novel EEG-based framework for visual art preference classification, leveraging the Wavelet Scattering Transform (WST) for feature extraction and Support Vector Machines (SVM) for classification. Unlike deep learning approaches that require large-scale datasets and extensive training, a wavelet scattering network provides low-variance, translation-invariant features without the need for learnable parameters, making it well-suited for regular-sized EEG datasets. Experimental results demonstrate that the proposed method effectively differentiates between "like" and "dislike" ratings based on EEG responses to visual art stimuli. The findings highlight the potential of wavelet scattering-based feature extraction in decoding aesthetic preferences.
Kalaganis, Fotis P; Georgiadis, Kostas; Nousias, Georgios; Oikonomou, Vangelis P; Laskaris, Nikos A; Nikolopoulos, Spiros; Kompatsiaris, Ioannis
Enhancing EEG-Based Neuromarketing with Attention mechanism and Riemannian Features Conference
2025 25th International Conference on Digital Signal Processing 2025.
URL: https://2025.ic-dsp.org/wp-content/uploads/2025/05/IEEE_DSP_Attention_and_SCMs_for_Neuromarketing_final.pdf
Abstract: This paper presents a novel EEG-based decoding framework that integrates Riemannian geometry features with deep learning to enhance neuromarketing classification tasks. The proposed approach leverages Spatial Covariance Matrices (SCMs) computed across multiple frequency bands and employs a self-attention mechanism to improve feature selection and classification performance. To address the challenges of class imbalance and inter-subject variability, Riemannian alignment and data augmentation techniques are incorporated, ensuring robust feature representations. The framework is evaluated on the NeuMa dataset, where participants engaged in a realistic shopping scenario. Experimental results demonstrate that the proposed method achieves a balanced accuracy of 77.80%, outperforming traditional classifiers, including Support Vector Machines (SVM), k-Nearest Neighbors (kNN), Riemannian-based models, and SPDNet-based approaches. These findings highlight the effectiveness of combining functional covariation with deep learning architectures, paving the way for more advanced EEG-based consumer behavior analysis in neuromarketing applications.
Parra, Sebastian Rueda; Hardesty, Russell Lee; Gemoets, Darren Ethan; Hill, Jeremy; Gupta, Disha
Test-Retest Reliability of Kinematic and EEG Low-beta spectral features in a robot-based arm movement task Journal Article
In: Biomedical Physics & Engineering Express, 2025.
DOI: 10.1088/2057-1976/ade317
Abstract: Objective: Low-beta (Lβ, 13–20 Hz) power plays a key role in upper-limb motor control and afferent processing, making it a strong candidate for a neurophysiological biomarker. We investigate the test-retest reliability of Lβ power and kinematic features from a robotic task over extended intervals between sessions to assess its potential for tracking longitudinal changes in sensorimotor function.
Approach: We designed and optimized a testing protocol to evaluate Lβ power and kinematic features (maximal and mean speed, reaction time, and movement duration) in ten right-handed healthy individuals who performed a planar center-out task using a robotic device and EEG for data collection. The task was performed with both hands, and the experiment was repeated approximately 40 days later under similar conditions, to resemble real-life intervention periods. We first characterized the selected features within the task context for each session, then assessed intersession agreement and test-retest reliability (Intraclass Correlation Coefficient, ICC), and established threshold values for meaningful changes in Lβ power using Bland-Altman plots and repeatability coefficients.
Main results: Lβ power showed the expected contralateral reduction during movement preparation and onset. Both Lβ power and kinematic features exhibited good to excellent test-retest reliability (ICC > 0.8), displaying no significant intersession differences. Kinematic results align with prior literature, reinforcing the robustness of these measures in tracking motor performance over time. Changes in Lβ power between sessions exceeding 11.4% for right-arm and 16.5% for left-arm movements reflect meaningful intersession differences.
Significance: This study provides evidence that Lβ power remains stable over extended intersession intervals comparable to rehabilitation timelines. The strong reliability of both Lβ power and kinematic features supports their use in monitoring upper-extremity sensorimotor function longitudinally, with Lβ power emerging as a promising biomarker for tracking therapeutic outcomes and as a reliable feature for long-term applications.