Güldenring, Daniel
Linear ECG-lead transformations (LELTs) are used to estimate unrecorded target leads by applying a number of recorded basis leads to a LELT matrix. Such LELT matrices are commonly developed using training datasets that are composed of ECGs belonging to different diagnostic classes (DCs). The aim of our research was to assess the influence of the training set composition on the estimation performance of LELTs that estimate target leads V1, V3, V4 and V6 from basis leads I, II, V2 and V5 of the 12-lead ECG. Our assessment was performed using ECGs from three DCs: left ventricular hypertrophy, right bundle branch block and normal (ECGs without abnormalities). Training sets with different DC compositions were used for the development of LELT matrices. These matrices were used to estimate the target leads of different test sets. The estimation performance of the developed matrices was quantified using root mean square error values calculated between derived and recorded target leads. Our findings indicate that unbalanced training sets can lead to LELTs that show large variability in estimation performance across different DCs. Balanced training sets were found to produce LELTs that perform well across multiple DCs. We therefore recommend balanced training sets for the development of LELTs.
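The study's implementation is not published; the following minimal sketch, using synthetic stand-in data, illustrates the general procedure: a LELT matrix is fitted by ordinary least squares on a class-balanced training set and then scored per diagnostic class with RMSE. All array shapes, sample counts and the balanced-sampling step are illustrative assumptions.

```python
# Minimal sketch of deriving and applying a LELT matrix via ordinary least
# squares on a class-balanced training set. Synthetic stand-in data; the
# study's actual data and code are not published.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data per diagnostic class: (samples, leads) matrices.
# Basis leads I, II, V2, V5 -> 4 columns; target leads V1, V3, V4, V6 -> 4 columns.
def fake_class_data(n_samples):
    basis = rng.standard_normal((n_samples, 4))
    true_T = rng.standard_normal((4, 4))               # hidden "true" transform
    targets = basis @ true_T + 0.05 * rng.standard_normal((n_samples, 4))
    return basis, targets

classes = {"LVH": fake_class_data(3000),
           "RBBB": fake_class_data(3000),
           "normal": fake_class_data(3000)}

# Balanced training set: draw the same number of samples from each class.
n_per_class = 1000
B = np.vstack([b[:n_per_class] for b, t in classes.values()])
T = np.vstack([t[:n_per_class] for b, t in classes.values()])

# LELT matrix: least-squares solution of B @ M ~ T.
M, *_ = np.linalg.lstsq(B, T, rcond=None)

# Apply to held-out data and score with per-lead RMSE for each class.
for name, (b, t) in classes.items():
    est = b[n_per_class:] @ M
    rmse = np.sqrt(np.mean((est - t[n_per_class:]) ** 2, axis=0))
    print(name, np.round(rmse, 3))
```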
Background: Electrocardiogram (ECG) signals are often contaminated by noise. Manual review of large ECG databases to identify noisy signals is time-consuming, and traditional signal quality assessment algorithms often generalize poorly or are computationally expensive. This study developed a Temporal Convolutional Neural Network (TCNN) to estimate the signal-to-noise ratio (SNR) of ECG signals. Method: We trained a TCNN on a proprietary database of 134,019 12-lead ECGs without any machine- or human-added noise labels. Assuming that these data had high SNR, we randomly selected a single lead from each ECG, added random Gaussian noise, and scaled signal and noise so that the true SNR values followed a negatively skewed normal distribution. We trained the TCNN to regress low- and high-frequency pseudo-SNR values from the raw noisy input signals. Results: On the testing dataset, the TCNN achieved a mean error of 0.31±1.80 dB and a Pearson correlation coefficient of 0.96 for low-frequency pseudo-SNR. Similarly, for high-frequency pseudo-SNR, the mean error was 0.29±1.63 dB and the Pearson correlation coefficient was 0.97. Conclusion: A Temporal Convolutional Neural Network can accurately estimate the SNR of unseen ECGs.
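The noise-injection step described in the Method can be sketched as follows: white Gaussian noise is scaled so that the noisy signal has a prescribed SNR in dB. The signal surrogate and the 10 dB target are illustrative assumptions, not values from the paper.

```python
# Sketch of adding Gaussian noise to a clean signal at a prescribed SNR,
# mirroring the training-data construction described above.
import numpy as np

rng = np.random.default_rng(1)

def add_noise_at_snr(clean, snr_db, rng):
    """Scale white Gaussian noise so the result has the requested SNR (dB)."""
    signal_power = np.mean(clean ** 2)
    noise = rng.standard_normal(clean.shape)
    target_noise_power = signal_power / (10 ** (snr_db / 10))
    noise *= np.sqrt(target_noise_power / np.mean(noise ** 2))
    return clean + noise

# Example: a stand-in "clean" ECG lead and a target SNR of 10 dB.
t = np.linspace(0, 10, 5000)
clean = np.sin(2 * np.pi * 1.2 * t)          # crude 72-bpm surrogate
noisy = add_noise_at_snr(clean, snr_db=10.0, rng=rng)

# Verify the realised SNR.
realised = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
print(f"realised SNR: {realised:.2f} dB")    # ~10 dB
```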
We perform a novel comparative analysis between optically and mechanically derived pulse transit time (PTT), a technique universally employed for cuffless blood pressure (BP) estimation. For data collection, two inline photoplethysmogram (PPG) sensors were mounted at the distal and proximal phalanges of the index finger of each subject, and on top of each PPG sensor fixture a finger ballistocardiogram (BCG) sensor was clamped. The clamped stacking of the BCG sensors over the PPG sensors provided vertically aligned acquisition of the blood flow waveform through the radial artery for both sensor types. An analysis of variance (ANOVA) between the PTT values derived from the BCG and PPG sensors showed a statistically significant difference at p<0.05. The PTT derived from the BCG sensors was higher, by 17.8 milliseconds on average, than the PTT derived from the PPG sensors. More accurate PTT values will improve cuffless BP estimation and thus have the potential to revolutionize the technology.
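The abstract does not state which fiducial point was used to time the pulse arrival; the sketch below uses the lag of maximum cross-correlation between a proximal and a distal waveform as one common, illustrative choice (foot-to-foot detection is a frequent alternative). The sampling rate and surrogate waveforms are assumptions.

```python
# Minimal sketch of estimating pulse transit time (PTT) between a proximal
# and a distal pulse waveform via cross-correlation. Illustrative only.
import numpy as np

fs = 1000                                            # assumed sampling rate, Hz
rng = np.random.default_rng(2)

t = np.arange(0, 5, 1 / fs)
proximal = np.maximum(np.sin(2 * np.pi * 1.2 * t), 0.0) ** 8  # peaky pulse surrogate
true_delay = 0.018                                   # 18 ms, cf. reported offset
distal = np.interp(t - true_delay, t, proximal)      # delayed copy
distal += 0.01 * rng.standard_normal(distal.shape)   # measurement noise

# Cross-correlate and take the lag with maximum correlation as the PTT.
xcorr = np.correlate(distal - distal.mean(), proximal - proximal.mean(), "full")
lag = np.argmax(xcorr) - (len(t) - 1)
print(f"estimated PTT: {1000 * lag / fs:.1f} ms")    # ~18 ms
```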
Linear ECG-lead transformations (LELTs) are used to estimate unrecorded leads by applying a number of recorded leads to a LELT matrix. Such LELT matrices are commonly developed using a training dataset and linear regression analysis. An important performance metric of LELTs is the subject-to-subject variability (SSV) of their estimation performance. In this research, we assess the relationship between an increasing training set size (from n=10 to n=370 subjects) and the SSV of LELTs. A total of 200 LELT matrices were developed for each training set size. The developed LELT matrices and 12-lead ECG data of a testing dataset (n=123 subjects) were used for the estimation of Frank VCGs. Root-mean-squared-error (RMSE) values between recorded and estimated Frank VCG leads were used for the quantification of the estimation performance. The SSV associated with each LELT matrix was quantified as the standard deviation of the corresponding RMSE values. This was followed by an analysis of the relationship between the training set size and the associated SSV values. Increasing the training set size from 10 subjects up to 180, 160 and 200 subjects for Frank VCG leads X, Y and Z, respectively, was associated with a reduction of the observed SSV. Further increases in training set size were found to have only a marginal effect on the observed SSV.
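The SSV quantification can be sketched as follows: for each training set size, LELT matrices are repeatedly fitted on random subject subsets, and the SSV of each matrix is the standard deviation of its per-subject test RMSE values. The data are synthetic stand-ins, the RMSE is averaged over the three Frank leads for brevity (the study reports leads X, Y and Z separately), and the repeat count is reduced for run time.

```python
# Sketch of quantifying subject-to-subject variability (SSV) as the standard
# deviation of per-subject RMSE, repeated over random training subsets.
import numpy as np

rng = np.random.default_rng(3)

n_subjects, n_samples = 500, 600
true_T = rng.standard_normal((8, 3))          # hidden 12-lead -> Frank XYZ map
ecg12 = rng.standard_normal((n_subjects, n_samples, 8))
frank = ecg12 @ true_T + 0.1 * rng.standard_normal((n_subjects, n_samples, 3))

test_idx = np.arange(400, 500)                # fixed test subjects

def mean_ssv(n_train, n_repeats=20):
    ssv = []
    for _ in range(n_repeats):
        train = rng.choice(400, size=n_train, replace=False)
        B = ecg12[train].reshape(-1, 8)
        T = frank[train].reshape(-1, 3)
        M, *_ = np.linalg.lstsq(B, T, rcond=None)
        est = ecg12[test_idx] @ M
        # per-subject RMSE, averaged here over the three Frank leads
        rmse = np.sqrt(np.mean((est - frank[test_idx]) ** 2, axis=(1, 2)))
        ssv.append(rmse.std())
    return np.mean(ssv)

for n in (10, 50, 200):
    print(n, round(mean_ssv(n), 4))           # SSV shrinks as n grows
```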
This study assessed the performance of a deep neural network (PulseAI, Belfast, United Kingdom) used in conjunction with a dry-electrode ECG sensor device (RhythmPad, D&FT, United Kingdom) to detect AF automatically. Simultaneous pairs of 12-lead ECGs and single-lead dry-electrode ECGs were collected from 622 patients. The 12-lead ECGs were manually overread and used as reference diagnoses. Twenty-two patients were confirmed with AF and had an interpretable 12-lead and single-lead dry-electrode ECG recording. The deep neural network analysed the dry-electrode ECGs, and performance was compared to the 12-lead interpretation. Overall, the deep neural network algorithm yielded a sensitivity of 96% (95% CI, 87%-100%), specificity of 99% (95% CI, 98%-100%) and positive predictive value of 81% (95% CI, 66%-96%) for detection of AF episodes. When coupled with dry-electrode ECG sensors, the PulseAI neural network allows for large-scale and low-cost screening for AF. Widespread implementation of this technology may allow for earlier detection, treatment, and management of patients with AF.
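The screening metrics reported above can be reproduced from a confusion matrix as sketched below. The counts are hypothetical (chosen only to be consistent in scale with the study), and the Wald normal-approximation interval is an assumption, as the abstract does not state which CI method was used.

```python
# Sketch of sensitivity, specificity and PPV with approximate 95% CIs.
# Confusion counts are hypothetical, not the study's data.
import numpy as np

def proportion_ci(k, n, z=1.96):
    """Point estimate and Wald 95% CI for a binomial proportion k/n."""
    p = k / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

tp, fn, fp, tn = 21, 1, 5, 595               # hypothetical confusion counts

for name, k, n in (("sensitivity", tp, tp + fn),
                   ("specificity", tn, tn + fp),
                   ("PPV",         tp, tp + fp)):
    p, lo, hi = proportion_ci(k, n)
    print(f"{name}: {100*p:.0f}% (95% CI {100*lo:.0f}%-{100*hi:.0f}%)")
```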
Background
The application of artificial intelligence to interpret the electrocardiogram (ECG) has predominantly involved knowledge-engineered, rule-based algorithms, which have become widely used in clinical practice today. However, over recent decades there has been a steady increase in the number of research studies that use machine learning (ML) to read or interrogate ECG data.
Objective
The aim of this study is to review the use of ML with ECG data using a time series approach.
Methods
Papers that address the subject of ML and the ECG were identified by systematically searching databases that archive papers from January 1995 to October 2019. Time series analysis was used to study the changing popularity of the different types of ML algorithms that have been used with ECG data over the past two decades. Finally, a meta-analysis of how various ML techniques performed for various diagnostic classifications was also undertaken.
Results
A total of 757 papers were identified. The use of ML with ECG data started to increase sharply (p < 0.001) from 2012 onwards. Healthcare applications, especially heart abnormality classification, were the most common application of ML with ECG data (p < 0.001). However, new applications are emerging, including the use of ML and the ECG for biometrics and driver-drowsiness detection. The support vector machine was the technique of choice for a decade. However, since 2018, deep learning has been trending upwards and is likely to be the leading technique in the coming years. Despite the accuracy paradox, accuracy was the most frequently used metric in the studies reviewed, followed by sensitivity, specificity, F1 score and then AUC.
Conclusion
Applying ML using ECG data has shown promise. Data scientists and physicians should collaborate to ensure that clinical knowledge is being applied appropriately and is informing the design of ML algorithms. Data scientists also need to consider knowledge-guided feature engineering and the explicability of the ML algorithm, as well as being transparent about the algorithm's performance, to appropriately calibrate human-AI trust. Future work is required to enhance ML performance in ECG classification.
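To illustrate the accuracy paradox noted in the Results: on an imbalanced dataset, a trivial classifier can score high accuracy while being clinically useless. The class balance below is a hypothetical example, not data from the review.

```python
# Worked illustration of the accuracy paradox on an imbalanced ECG dataset.
n_abnormal, n_normal = 50, 950               # hypothetical 5% prevalence

# A "classifier" that always predicts "normal":
tp, fn = 0, n_abnormal                       # misses every abnormal ECG
tn, fp = n_normal, 0

accuracy = (tp + tn) / (n_abnormal + n_normal)
sensitivity = tp / (tp + fn)

print(f"accuracy:    {accuracy:.2%}")        # 95% - looks impressive
print(f"sensitivity: {sensitivity:.2%}")     # 0%  - detects nothing
```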
Deep Convolutional Neural Networks (DCNNs) have been shown to provide improved performance over traditional heuristic algorithms for the detection of arrhythmias from ambulatory ECG recordings. However, these DCNNs have primarily been trained and tested on device-specific databases with standardized electrode positions and uniform sampling frequencies. This work explores the possibility of training a DCNN for Atrial Fibrillation (AF) detection on a database of single‑lead ECG rhythm strips extracted from resting 12‑lead ECGs. We then test the performance of the DCNN on recordings from ambulatory ECG devices with different recording leads and sampling frequencies.
We developed an extensive proprietary resting 12-lead ECG dataset of 549,211 patients. This dataset was randomly split into a training set of 494,289 patients and a testing set of the remaining 54,922 patients. We trained a 34-layer convolutional DCNN to detect AF and other arrhythmias on this dataset. The DCNN was then validated on two PhysioNet databases commonly used to benchmark automated ECG algorithms: (1) the MIT-BIH Arrhythmia Database and (2) the MIT-BIH Atrial Fibrillation Database. Validation was performed following the EC57 guidelines, with performance assessed by gross episode and duration sensitivity and positive predictive value (PPV). Finally, validation was performed on a selection of rhythm strips from an ambulatory ECG patch that a committee of board-certified cardiologists annotated.
On the MIT-BIH Arrhythmia Database, the DCNN achieved 100% sensitivity and 84% PPV in detecting episodes of AF, and 100% sensitivity and 94% PPV in quantifying AF episode duration. On the MIT-BIH Atrial Fibrillation Database, the DCNN achieved 94% sensitivity and 98% PPV in detecting episodes of AF, and 98% sensitivity and 100% PPV in quantifying AF episode duration. On the patch database, the DCNN demonstrated performance closely comparable to that of a cardiologist.
The results indicate that DCNN models can learn features that generalize between resting 12‑lead and ambulatory ECG recordings, allowing DCNNs to be device agnostic for detecting arrhythmias from single‑lead ECG recordings and enabling a range of clinical applications.
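One preprocessing step implied by the device-agnostic claim is resampling strips from devices with differing sampling frequencies to the rate the network expects. The sketch below uses polyphase resampling; the 250 Hz target rate is an assumption, as the paper's pipeline is not described in the abstract.

```python
# Sketch of resampling single-lead ECG strips from different devices to a
# common sampling frequency before DCNN inference. Target rate is assumed.
import numpy as np
from math import gcd
from scipy.signal import resample_poly

TARGET_FS = 250                               # assumed training sampling rate

def to_target_fs(strip, fs):
    """Polyphase-resample a 1-D ECG strip from fs to TARGET_FS."""
    g = gcd(int(fs), TARGET_FS)
    return resample_poly(strip, up=TARGET_FS // g, down=int(fs) // g)

# Example: strips recorded at 360 Hz (MITDB) and 128 Hz (a patch device).
for fs in (360, 128):
    strip = np.random.randn(fs * 30)          # 30-second stand-in recording
    out = to_target_fs(strip, fs)
    print(fs, "->", len(out) / 30, "Hz")      # both 250.0
```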
Linear electrocardiographic lead transformations (LELTs) are used to estimate unrecorded ECG leads by applying a number of recorded leads to a LELT matrix. Such matrices are commonly developed using a training dataset. The size of the training dataset has an influence on the estimation performance of a LELT matrix. However, an estimate of the minimal size required for the development of LELTs has previously not been reported.
The aim of this research was to determine such an estimate. We generated LELT matrices from differently sized (from n = 10 to n = 540 subjects in steps of 10 subjects) training datasets. The LELT matrices and the 12-lead ECG data of a testing dataset (n = 186 subjects) were used for the estimation of Frank VCGs. Root-mean-squared-error values between recorded and estimated Frank leads of the testing dataset were used for the quantification of the estimation performance associated with a given size of the training dataset.
The performance of the LELTs was, after an initial phase of improvement, found to only marginally improve with additional increases in the size of the training dataset. Our findings suggest that the training dataset should have a minimal size of 170 subjects when developing LELTs that utilise the 12-lead ECG for the estimation of unrecorded ECG leads.
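The learning-curve experiment behind this estimate can be sketched as follows: LELT matrices are fitted on increasingly large training subsets and the test RMSE is tracked until it levels off. The data are synthetic stand-ins; only the procedure mirrors the study design.

```python
# Sketch of a learning curve over training set size for LELT development:
# test RMSE falls quickly for small n, then plateaus. Synthetic data only.
import numpy as np

rng = np.random.default_rng(4)

true_T = rng.standard_normal((8, 3))          # hidden 12-lead -> Frank map

def subjects(n):
    X = rng.standard_normal((n, 400, 8))
    Y = X @ true_T + 0.1 * rng.standard_normal((n, 400, 3))
    return X, Y

X_test, Y_test = subjects(186)                # test set size as in the study
X_pool, Y_pool = subjects(540)                # training pool

for n in range(10, 541, 10):
    B = X_pool[:n].reshape(-1, 8)
    T = Y_pool[:n].reshape(-1, 3)
    M, *_ = np.linalg.lstsq(B, T, rcond=None)
    rmse = np.sqrt(np.mean((X_test @ M - Y_test) ** 2))
    if n % 100 == 10:                         # print a thinned-out curve
        print(f"n={n:3d}  RMSE={rmse:.4f}")
```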
There are limited datasets available to facilitate the evaluation of patch-based lead systems, so the leads must be derived from existing data, mainly the 12-lead ECG. We have previously introduced a short spaced lead (SSL) system consisting of two leads with the largest ST segment changes during ischaemic-type episodes. In this study, we aim to evaluate the derivation of this patch-based lead system from the 12-lead ECG.
Method: Thoracic body surface potential maps (BSPMs) were recorded from n=734 patients. Using Laplacian interpolation, each recording was expanded to the 352-node Dalhousie torso. The eight independent channels of the 12-lead ECG (I, II, V1-V6) were extracted along with the two leads of the SSL patch. Coefficients for deriving the SSL patch from the 12-lead ECG were obtained using linear regression.
Results: The median Pearson correlation coefficients (CC) and root mean square error (RMSE) for each lead were calculated as follows (CC/RMSE): 0.986/74.3 µV (ST monitoring lead); 0.976/65.3 µV (spatially orthogonal lead).
Conclusion: We have developed coefficients that allow the derivation of a patch-based lead system from the 12-lead ECG. Given the high correlation, it is possible to generate short spaced lead systems from existing diagnostic lead systems; however, amplitude errors are introduced in the process.
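The evaluation metrics reported above can be sketched as below: per-recording Pearson correlation coefficient and RMSE between a derived and a recorded lead, summarised by the median over patients. The signals are illustrative stand-ins, not BSPM data.

```python
# Sketch of the per-recording CC and RMSE evaluation used above.
import numpy as np

rng = np.random.default_rng(5)

def cc_and_rmse(recorded, derived):
    cc = np.corrcoef(recorded, derived)[0, 1]
    rmse = np.sqrt(np.mean((recorded - derived) ** 2))
    return cc, rmse

ccs, rmses = [], []
for _ in range(734):                          # one lead pair per patient
    recorded = rng.standard_normal(600)
    derived = recorded + 0.07 * rng.standard_normal(600)  # near-perfect fit
    cc, rmse = cc_and_rmse(recorded, derived)
    ccs.append(cc)
    rmses.append(rmse)

print(f"median CC:   {np.median(ccs):.3f}")
print(f"median RMSE: {np.median(rmses):.3f} (arbitrary units)")
```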
Automated interpretation of the 12-lead ECG has remained an underpinning interest in decades of research that has seen a diversity of computing applications in cardiology.
The application of computers in cardiology began in the 1960s, with early research focusing on the conversion of analogue ECG signals (voltages) to digital samples. Alongside this, software techniques that automated the extraction of wave measurements and provided basic diagnostic statements began to emerge. In the years since, there have been many significant milestones, including the widespread commercialisation of 12-lead ECG interpretation software, its associated clinical utility, and the development of related regulatory frameworks to promote standardised development.
In the past few years, the research community has seen a significant rejuvenation in the development of ECG interpretation programs. This is evident in the research literature, where a large number of studies have emerged tackling a variety of automated ECG interpretation problems. This is largely due to two factors: the technical advances, in both software and hardware, that have facilitated the broad adoption of modern artificial intelligence (AI) techniques, and the increasing availability of large datasets that support modern AI approaches.
In this article we provide a high-level overview of how early 12-lead ECG interpretation programs operated and were developed, and we contrast this with the approaches now seen in emerging AI-based programs. Our overview focuses mainly on highlighting differences in how input data are handled prior to the generation of the diagnostic statement.