Publications
Publications in reverse chronological order.
2024
- [JCIM] UNIQUE: A Framework for Uncertainty Quantification Benchmarking
  Jessica Lanini, Minh Tam Davide Huynh, Gaetano Scebba, and 2 more authors
  Journal of Chemical Information and Modeling, 2024
Machine learning (ML) models have become key in decision-making for many disciplines, including drug discovery and medicinal chemistry. Therefore, uncertainty quantification (UQ) in ML predictions has gained importance in recent years. Many investigations have focused on developing methodologies that provide accurate uncertainty estimates for ML-based predictions. Unfortunately, no UQ strategy consistently provides robust estimates of a model’s applicability to new samples. Depending on the dataset, prediction task, and algorithm, accurate uncertainty estimates may be infeasible to obtain. The UNIQUE (UNcertaInty QUantification bEnchmarking) framework is introduced to facilitate the comparison of UQ strategies in ML-based predictions. This Python library unifies the benchmarking of multiple UQ metrics, including the calculation of nonstandard UQ metrics (combining information from the dataset and model), and provides a comprehensive evaluation.
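As an illustration of the kind of metric such a benchmark unifies (this is not UNIQUE’s actual API): a common sanity check for UQ estimates is whether predicted uncertainties rank-correlate with the observed absolute errors. A minimal sketch with hypothetical toy data:

```python
import numpy as np

def spearman_rank_corr(x, y):
    """Spearman correlation = Pearson correlation of the ranks (no ties here)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical toy data: targets, predictions, and per-sample uncertainties.
y_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([1.1, 1.8, 3.5, 3.85, 6.0])
sigma  = np.array([0.2, 0.3, 0.6, 0.1, 1.2])

# A useful UQ estimate should rank samples by how wrong the model is.
rank_corr = spearman_rank_corr(sigma, np.abs(y_true - y_pred))  # -> 0.9 here
```

A rank correlation near 1 means the model “knows when it does not know”; near 0 means the uncertainty estimate carries little information about actual error.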
@article{unique_jcim_2024,
  title   = {{UNIQUE}: A Framework for Uncertainty Quantification Benchmarking},
  author  = {Lanini, Jessica and Huynh, Minh Tam Davide and Scebba, Gaetano and Schneider, Nadine and Rodr{\'i}guez-P{\'e}rez, Raquel},
  journal = {Journal of Chemical Information and Modeling},
  volume  = {64},
  number  = {22},
  pages   = {8379--8386},
  year    = {2024},
  doi     = {10.1021/acs.jcim.4c01578},
}
2023
- [AAIC] Motor-cognitive dual tasking in the clinical setting: a sensitive measure of functional impairment in early Alzheimer’s disease
  Anna-Katrine Brem, Gaetano Scebba, Jelena Curcic, and 17 more authors
  In Alzheimer’s Association International Conference 2023, 2023
Gait is a complex everyday activity that depends upon supraspinal activity and a host of cognitive functions such as attention and executive functions. As cognition declines in neurodegenerative diseases, the interaction and competition for neuronal resources during motor-cognitive dual-tasking (e.g., walking while talking) might be a sensitive measure of subtle functional impairments in early Alzheimer’s disease (AD). Here, we aim to identify gait deficits due to neuronal competition across the AD spectrum.
@inproceedings{aaic_23,
  title     = {Motor-cognitive dual tasking in the clinical setting: a sensitive measure of functional impairment in early {Alzheimer's} disease},
  author    = {Brem, Anna-Katrine and Scebba, Gaetano and Curcic, Jelena and Muurling, Marijn and de Boer, Casper and Coello, Neva and Atreya, Alankar and Conde, Pauline and Fr{\"o}hlich, Holger and Grammatikopoulou, Margarita and Hinds, Chris and Lazarou, Ioulietta and Lentzen, Manuel and Narayan, Vaibhav A and Kozak, Rouba and Nikolopoulos, Spiros and Vairavan, Srinivasan and Visser, Pieter Jelle and Wittenberg, Gayle and Aarsland, Dag},
  booktitle = {Alzheimer's Association International Conference 2023},
  year      = {2023},
}
2022
- [Patent] A system for recording a high-quality wound image and for real-time image quality assessment
  Jia Zhang, Gaetano Scebba, and Walter Karlen
  European Patent EP4092620A1, 2022
The patent describes a system for recording high-quality wound images with real-time image quality assessment. The system consists of a computerized mobile device with a camera, processor, non-transitory storage medium, and display, plus a reference marker for color and sharpness evaluation. The storage medium contains a computer program that, when executed on the processor, causes the camera to record an image of the wound and reference marker, evaluates the image’s sharpness, and displays feedback information on the display.
@misc{patent_eth,
  title  = {A system for recording a high-quality wound image and for real-time image quality assessment},
  author = {Zhang, Jia and Scebba, Gaetano and Karlen, Walter},
  year   = {2022},
  note   = {European Patent EP4092620A1},
}

- [IMU] Detect-and-Segment: A Deep Learning Approach to Automate Wound Image Segmentation
  Gaetano Scebba, Jia Zhang, Sabrina Catanzaro, and 4 more authors
  Informatics in Medicine Unlocked, 2022
Chronic wounds significantly impact quality of life. They can rapidly deteriorate and require close monitoring of healing progress. Image-based wound analysis is a way of objectively assessing the wound status by quantifying important features that are related to healing. However, the high heterogeneity of wound types and imaging conditions challenges the robust segmentation of wound images. We present Detect-and-Segment (DS), a deep learning approach to produce wound segmentation maps with high generalization capabilities. In our approach, dedicated deep neural networks detected the wound position, isolated the wound from the perturbing background, and computed a wound segmentation map. We tested this approach on a diabetic foot ulcers data set and compared it to a segmentation method based on the full image. The Matthews correlation coefficient (MCC) improved from 0.29 (full image) to 0.85 (DS) on the diabetic foot ulcer data set. When DS was tested on independent data sets, the mean MCC increased from 0.17 to 0.85. Furthermore, DS enabled the training of segmentation models with up to 90% less training data without impacting the segmentation performance. The proposed DS approach is a step towards automating wound analysis and reducing efforts to manage chronic wounds.
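The Matthews correlation coefficient reported above is a standard balanced measure for binary (here, pixel-wise) classification, computed from confusion-matrix counts. A minimal sketch with hypothetical pixel counts, not the paper’s data:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect);
    it stays informative even when classes (wound vs. background) are imbalanced.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical pixel counts for one predicted vs. reference wound mask.
score = mcc(tp=850, tn=9000, fp=100, fn=50)  # roughly 0.91
```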
@article{wou_imu,
  title   = {Detect-and-Segment: A Deep Learning Approach to Automate Wound Image Segmentation},
  author  = {Scebba, Gaetano and Zhang, Jia and Catanzaro, Sabrina and Mihai, Carina and Distler, Oliver and Berli, Martin and Karlen, Walter},
  journal = {Informatics in Medicine Unlocked},
  pages   = {100884},
  year    = {2022},
  issn    = {2352-9148},
  doi     = {10.1016/j.imu.2022.100884},
}
2021
- [JMIR] Wound Image Quality From a Mobile Health Tool for Home-Based Chronic Wound Management With Real-Time Quality Feedback: Randomized Feasibility Study
  Jia Zhang, Carina Mihai, Laura Tüshaus, and 3 more authors
  JMIR mHealth and uHealth, 2021
Background: Travel to clinics for chronic wound management is burdensome to patients. Remote assessment and management of wounds using mobile and telehealth approaches can reduce this burden and improve patient outcomes. An essential step in wound documentation is the capture of wound images, but poor image quality can have a negative influence on the reliability of the assessment.
Objective: Our goal was to develop a mobile health (mHealth) tool for the remote self-assessment of digital ulcers (DUs) in patients with systemic sclerosis (SSc).
Methods: We developed an mHealth tool composed of a wound imaging and management app, a custom color reference sticker, and a smartphone holder.
Results: A total of 21 patients were enrolled, of whom 15 were included in the image quality analysis.
Conclusions: We developed an mHealth tool that enables patients with SSc to acquire good-quality DU images and demonstrated that it is feasible to deploy such an app in this patient group.
@article{JMIR2021_scleroderma,
  title     = {Wound Image Quality From a Mobile Health Tool for Home-Based Chronic Wound Management With Real-Time Quality Feedback: Randomized Feasibility Study},
  author    = {Zhang, Jia and Mihai, Carina and T{\"u}shaus, Laura and Scebba, Gaetano and Distler, Oliver and Karlen, Walter},
  journal   = {JMIR mHealth and uHealth},
  volume    = {9},
  number    = {7},
  pages     = {e26149},
  year      = {2021},
  publisher = {JMIR Publications Inc., Toronto, Canada},
}
2020
- [EMBC] Covariance Intersection to Improve the Robustness of the Photoplethysmogram Derived Respiratory Rate
  Jia Zhang, Gaetano Scebba, and Walter Karlen
  In 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2020
Respiratory rate (RR) can be estimated from the photoplethysmogram (PPG) recorded by optical sensors in wearable devices. The fusion of estimates from different PPG features has led to an increase in accuracy, but also reduced the number of available final estimates due to the discarding of unreliable data. We propose a novel, tunable fusion algorithm using covariance intersection to estimate the RR from PPG (CIF). The algorithm is adaptive to the number of available feature estimates and takes each estimate’s trustworthiness into account. In a benchmarking experiment using the CapnoBase dataset with reference RR from capnography, we compared the CIF against the state-of-the-art Smart Fusion (SF) algorithm. The median root mean square error was 1.4 breaths/min for the CIF and 1.8 breaths/min for the SF. The CIF significantly increased the retention rate distribution of all recordings from 0.46 to 0.90 (p < 0.001). The agreement with the reference RR was high, with a Pearson’s correlation coefficient of 0.94, a bias of 0.3 breaths/min, and limits of agreement of -4.6 and 5.2 breaths/min. In addition, the algorithm was computationally efficient. Therefore, CIF could contribute to a more robust RR estimation from wearable PPG recordings.
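For scalar estimates, covariance intersection fuses two estimates by weighting their inverse variances with a tunable parameter omega, without assuming the estimates are statistically independent. A minimal sketch with illustrative numbers only; the paper’s CIF additionally adapts to however many feature estimates are available:

```python
def ci_fuse(x1, p1, x2, p2, omega):
    """Covariance intersection of two scalar estimates (x, variance p).

    omega in [0, 1] weights the inverse variances; unlike a Kalman-style
    update, no independence between the two estimates is assumed.
    """
    p_inv = omega / p1 + (1.0 - omega) / p2          # fused inverse variance
    p = 1.0 / p_inv
    x = p * (omega * x1 / p1 + (1.0 - omega) * x2 / p2)
    return x, p

# Illustrative numbers: two RR estimates (breaths/min) with their variances.
x_fused, p_fused = ci_fuse(x1=14.0, p1=1.0, x2=18.0, p2=4.0, omega=0.5)
# x_fused = 14.8 (pulled towards the lower-variance estimate), p_fused = 1.6
```

At omega = 1 the fusion returns the first estimate unchanged; tuning omega trades off how strongly each source influences the final RR.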
@inproceedings{EMBC2020_rrCovariance,
  title     = {Covariance Intersection to Improve the Robustness of the Photoplethysmogram Derived Respiratory Rate},
  author    = {Zhang, Jia and Scebba, Gaetano and Karlen, Walter},
  booktitle = {2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
  year      = {2020},
}

- [IEEE TBME] Multispectral Video Fusion for Non-contact Monitoring of Respiratory Rate and Apnea
  Gaetano Scebba, Giulia Da Poian, and Walter Karlen
  IEEE Transactions on Biomedical Engineering, 2020
Continuous monitoring of respiratory activity is desirable in many clinical applications to detect respiratory events. Non-contact monitoring of respiration can be achieved with near- and far-infrared spectrum cameras. However, current technologies are not sufficiently robust to be used in clinical applications. For example, they fail to estimate an accurate respiratory rate (RR) during apnea. We present a novel algorithm based on multispectral data fusion that aims to estimate RR even during apnea. The algorithm independently addresses the RR estimation and apnea detection tasks. Respiratory information is extracted from multiple sources and fed into an RR estimator and an apnea detector whose results are fused into a final respiratory activity estimation. We evaluated the system retrospectively using data from 30 healthy adults who performed diverse controlled breathing tasks while lying supine in a dark room and reproduced central and obstructive apneic events. Combining multiple sources of respiratory information from multispectral cameras improved the root mean square error (RMSE) of the RR estimation from up to 4.64 breaths/min (monospectral data) down to 1.60 breaths/min. The median F1 scores for classifying obstructive (0.75 to 0.86) and central apnea (0.75 to 0.93) also improved. Furthermore, the independent consideration of apnea detection led to a more robust system (RMSE of 4.44 vs. 7.96 breaths/min). Our findings may represent a step towards the use of cameras for vital sign monitoring in medical applications.
@article{ieeeTBME20_RRfusion,
  title     = {Multispectral Video Fusion for Non-contact Monitoring of Respiratory Rate and Apnea},
  author    = {Scebba, Gaetano and Da Poian, Giulia and Karlen, Walter},
  journal   = {IEEE Transactions on Biomedical Engineering},
  volume    = {68},
  number    = {1},
  pages     = {350--359},
  year      = {2020},
  publisher = {IEEE},
}
2018
- [EMBC] Multispectral camera fusion increases robustness of ROI detection for biosignal estimation with nearables in real-world scenarios
  Gaetano Scebba, Laura Tüshaus, and Walter Karlen
  In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2018
Thermal cameras enable non-contact estimation of the respiratory rate (RR). Accurate estimation of RR is highly dependent on the reliable detection of the region of interest (ROI), especially when using cameras with low pixel resolution. We present a novel approach for the automatic detection of the human nose ROI, based on facial landmark detection from an RGB camera that is fused with the thermal image after tracking. We evaluated the detection rate and spatial accuracy of the novel algorithm on recordings obtained from 16 subjects under challenging detection scenarios. Results show a high detection rate (median: 100%, 5th-95th percentile: 92%-100%) and very good spatial accuracy with an average root mean square error of 2 pixels in the detected ROI center when compared to manual labeling. Therefore, the implementation of a multispectral camera fusion algorithm is a valid strategy to improve the reliability of non-contact RR estimation with nearable devices featuring thermal cameras.
@inproceedings{EMBC2018_multispectralROI,
  title        = {Multispectral camera fusion increases robustness of {ROI} detection for biosignal estimation with nearables in real-world scenarios},
  author       = {Scebba, Gaetano and T{\"u}shaus, Laura and Karlen, Walter},
  booktitle    = {2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
  pages        = {5672--5675},
  year         = {2018},
  organization = {IEEE},
}
2017
- [CinC] Beat by Beat: Classifying Cardiac Arrhythmias with Recurrent Neural Networks
  Patrick Schwab, Gaetano Scebba, Jia Zhang, and 2 more authors
  In Computing in Cardiology, 2017
With tens of thousands of electrocardiogram (ECG) records processed by mobile cardiac event recorders every day, heart rhythm classification algorithms are an important tool for the continuous monitoring of patients at risk. We utilise an annotated dataset of 12,186 single-lead ECG recordings to build a diverse ensemble of recurrent neural networks (RNNs) that is able to distinguish between normal sinus rhythms, atrial fibrillation, other types of arrhythmia and signals that are too noisy to interpret. In order to ease learning over the temporal dimension, we introduce a novel task formulation that harnesses the natural segmentation of ECG signals into heartbeats to drastically reduce the number of time steps per sequence. Additionally, we extend our RNNs with an attention mechanism that enables us to reason about which heartbeats our RNNs focus on to make their decisions. Through the use of attention, our model maintains a high degree of interpretability, while also achieving state-of-the-art classification performance with an average F1 score of 0.79 on an unseen test set (n=3658).
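The attention mechanism described above assigns each heartbeat a weight and pools the per-heartbeat representations into a single vector; the weights can then be inspected to see which beats drove the decision. A minimal NumPy sketch of such attention pooling, using random toy features rather than the paper’s RNN outputs:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_pool(h, w):
    """Pool per-heartbeat feature vectors h (T x D) into one vector.

    w (D,) stands in for learned scoring parameters; alpha exposes
    which heartbeats the pooled representation attends to.
    """
    scores = h @ w            # one relevance score per heartbeat
    alpha = softmax(scores)   # attention weights, sum to 1
    return alpha @ h, alpha   # weighted sum + inspectable weights

rng = rng = np.random.default_rng(0)
T, D = 6, 4                   # 6 heartbeats, 4 features each (toy sizes)
h = rng.standard_normal((T, D))
w = rng.standard_normal(D)
pooled, alpha = attention_pool(h, w)
```

Because alpha is an explicit probability distribution over heartbeats, this kind of pooling keeps the classifier interpretable in the sense the abstract describes.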
@inproceedings{cinc2017_BeatByBeat,
  title     = {Beat by Beat: Classifying Cardiac Arrhythmias with Recurrent Neural Networks},
  author    = {Schwab, Patrick and Scebba, Gaetano and Zhang, Jia and Delai, Marco and Karlen, Walter},
  booktitle = {Computing in Cardiology},
  year      = {2017},
}

- [EMBC] Improving ROI detection in photoplethysmographic imaging with thermal cameras
  Gaetano Scebba, Jelena Dragas, Suyi Hu, and 1 more author
  In 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2017
Photoplethysmographic imaging (PPGi) enables the estimation of heart rate without body contact by analyzing temporal skin color changes from video recordings. Motion artifacts and atypical facial characteristics cause poor signals and currently limit the applicability of PPGi. We have developed a novel algorithm for locating cheek and forehead regions of interest (ROIs) with the aim of improving PPGi during challenging situations. The proposed approach is based on the fusion of RGB and far-infrared (FIR) video streams, where the FIR ROI is used as a fall-back when RGB alone fails. We validated and compared the algorithm against detection based on single sources, using videos from 8 subjects with distinctively different facial characteristics. The subjects performed three scenarios with incremental motion artifact content (head at rest, intensive head movements, speaking). The results showed that combining the two imaging sources increased the detection rate of cheeks from 75% (RGB) to 92% (RGB+FIR) in the challenging intensive head movement scenario. This work demonstrates that FIR imaging is complementary to simple RGB imaging and, when combined, adds robustness to the detection of ROIs in PPGi applications.
@inproceedings{EMBC2017_ppgROI,
  title        = {Improving {ROI} detection in photoplethysmographic imaging with thermal cameras},
  author       = {Scebba, Gaetano and Dragas, Jelena and Hu, Suyi and Karlen, Walter},
  booktitle    = {2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
  pages        = {4285--4288},
  year         = {2017},
  organization = {IEEE},
}