Background
The application of artificial intelligence to interpret the electrocardiogram (ECG) has predominantly involved knowledge-engineered rule-based algorithms, which are now widely used in clinical practice. However, over recent decades there has been a steady increase in the number of research studies that use machine learning (ML) to read or interrogate ECG data.
Objective
The aim of this study is to review the use of ML with ECG data using a time series approach.
Methods
Papers that address the subject of ML and the ECG were identified by systematically searching databases that archive papers from January 1995 to October 2019. Time series analysis was used to study the changing popularity of the different types of ML algorithms that have been used with ECG data over the past two decades. Finally, a meta-analysis of how different ML techniques performed across various diagnostic classifications was also undertaken.
Results
A total of 757 papers were identified. The use of ML with ECG data started to increase sharply (p < 0.001) from 2012. Healthcare applications, especially heart abnormality classification, were the most common application of ML to ECG data (p < 0.001); however, emerging applications include using ML and the ECG for biometrics and driver drowsiness detection. The support vector machine was the technique of choice for a decade. Since 2018, however, deep learning has been trending upwards and is likely to be the leading technique in the coming few years. Despite the accuracy paradox, accuracy was the most frequently used metric in the studies reviewed, followed by sensitivity, specificity, F1 score and then AUC.
Conclusion
Applying ML to ECG data has shown promise. Data scientists and physicians should collaborate to ensure that clinical knowledge is applied appropriately and informs the design of ML algorithms. Data scientists also need to consider knowledge-guided feature engineering and the explainability of the ML algorithm, as well as being transparent about the algorithm's performance, to appropriately calibrate human-AI trust. Future work is required to enhance ML performance in ECG classification.
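The accuracy paradox noted above is easy to reproduce: on an imbalanced dataset, a classifier that always predicts the majority class attains high accuracy while being clinically useless. A minimal sketch in Python (the class ratio and labels are hypothetical, not data from the review):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, recall_score

# Hypothetical screening set: 950 normal ECGs (0) and 50 abnormal ECGs (1).
y_true = np.array([0] * 950 + [1] * 50)

# A degenerate "classifier" that always predicts the majority class.
y_pred = np.zeros_like(y_true)

print(accuracy_score(y_true, y_pred))                 # 0.95: looks excellent
print(recall_score(y_true, y_pred, zero_division=0))  # 0.0: misses every abnormal ECG
print(f1_score(y_true, y_pred, zero_division=0))      # 0.0
```

This is why sensitivity, specificity, F1 score and AUC are reported alongside (or instead of) accuracy.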
Linear electrocardiographic lead transformations (LELTs) are used to estimate unrecorded ECG leads by applying a number of recorded leads to a LELT matrix. Such matrices are commonly developed using a training dataset. The size of the training dataset influences the estimation performance of a LELT matrix; however, an estimate of the minimal size required for the development of LELTs has not previously been reported.
The aim of this research was to determine such an estimate. We generated LELT matrices from differently sized (from n = 10 to n = 540 subjects in steps of 10 subjects) training datasets. The LELT matrices and the 12-lead ECG data of a testing dataset (n = 186 subjects) were used for the estimation of Frank VCGs. Root-mean-squared-error values between recorded and estimated Frank leads of the testing dataset were used for the quantification of the estimation performance associated with a given size of the training dataset.
The performance of the LELTs was, after an initial phase of improvement, found to improve only marginally with additional increases in the size of the training dataset. Our findings suggest that the training dataset should have a minimal size of 170 subjects when developing LELTs that utilise the 12-lead ECG for the estimation of unrecorded ECG leads.
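A LELT matrix of the kind described above can be derived from a training dataset by ordinary least squares: solve for the matrix that best maps the recorded basis leads onto the target leads, then score the estimates with RMSE. The sketch below is illustrative only; the lead counts, random data and least-squares fit are assumptions, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 8 basis leads from the 12-lead ECG mapped to
# the 3 Frank leads (X, Y, Z); 5000 pooled time samples.
basis = rng.standard_normal((5000, 8))
true_T = rng.standard_normal((8, 3))
target = basis @ true_T + 0.05 * rng.standard_normal((5000, 3))

# Least-squares fit of the LELT matrix T such that target ≈ basis @ T.
T, *_ = np.linalg.lstsq(basis, target, rcond=None)

# Apply the matrix to basis leads and quantify performance per target lead
# with the root-mean-squared error, as in the study.
estimated = basis @ T
rmse = np.sqrt(np.mean((target - estimated) ** 2, axis=0))
print(rmse)
```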
This is an exploratory paper that discusses the use of artificial intelligence (AI) in ECG interpretation and opportunities for improving the explainability of the AI (XAI) when reading 12-lead ECGs. To develop AI systems, many principles (human rights, well-being, data agency, effectiveness, transparency, accountability, awareness of misuse and competence) must be considered to ensure that the AI is trustworthy and applicable. Current computerised ECG interpretation algorithms can detect different types of heart disease. However, some challenges and shortcomings need to be addressed, such as the explainability issue and the interaction between the human and the AI in clinical decision making. These challenges create opportunities to develop a trustworthy XAI for automated ECG interpretation with high performance and a high confidence level. This study reports a proposed XAI interface design for automatic ECG interpretation based on suggestions from previous studies and on standard guidelines developed by the human-computer interaction (HCI) community. New XAI interfaces should be developed in the future that facilitate more transparency of the decision logic of the algorithm, which may allow users to calibrate their trust in and use of the AI system.
Linear ECG-lead transformations estimate or derive unrecorded target leads by applying a number of recorded basis leads to a so-called linear ECG-lead transformation matrix. The inverse transform of such a linear ECG-lead transformation performs a transformation in the opposite direction (from the target leads to the basis leads). The pseudo-inverse of a given transformation matrix can be used to perform such an inverse transformation. Linear regression based inverse transformation matrices are, provided that sufficient training data for their development are available, an alternative to pseudo-inverse matrices. The aim of this research was to compare the estimation performance of pseudo-inverse and linear regression based inverse transformations. This comparison was performed for two example inverse transformations. The performance of the different transformations was assessed using root-mean-squared-error (RMSE) values between the QRS-T complexes of recorded and derived leads. Typical mean RMSE values associated with the regression based approach were found to be approximately one-half to two-thirds of the mean RMSE values achieved by the approach based upon the pseudo-inverse. Provided that sufficient data are available, linear regression should be used for the development of inverse ECG-lead transformation matrices.
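The two inverse-transformation strategies compared here can be sketched in a few lines: the pseudo-inverse is computed directly from the forward matrix, while the regression-based inverse is fitted on training data. Matrix shapes and the synthetic data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward transformation: 8 basis leads -> 3 target leads.
T_forward = rng.standard_normal((8, 3))

# Hypothetical training recordings of the basis leads, with noisy targets.
basis = rng.standard_normal((5000, 8))
target = basis @ T_forward + 0.05 * rng.standard_normal((5000, 3))

# Option 1: pseudo-inverse of the forward matrix (needs no training data).
basis_est_pinv = target @ np.linalg.pinv(T_forward)

# Option 2: regression-based inverse matrix fitted on training data
# (target leads as predictors, basis leads as responses).
T_inverse, *_ = np.linalg.lstsq(target, basis, rcond=None)
basis_est_reg = target @ T_inverse

# Compare estimation performance via RMSE per basis lead.
for name, est in [("pseudo-inverse", basis_est_pinv), ("regression", basis_est_reg)]:
    rmse = np.sqrt(np.mean((basis - est) ** 2, axis=0))
    print(name, rmse.round(3))
```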
Background:
A 12-lead electrocardiogram (ECG) is the most commonly used method to diagnose patients with cardiovascular diseases. However, there are a number of possible misinterpretations of the ECG that can be caused by several different factors, such as the misplacement of chest electrodes.
Objective:
The aim of this study is to build advanced algorithms to detect precordial (chest) electrode misplacement.
Methods:
In this study, we used traditional machine learning (ML) and deep learning (DL) to automatically detect the misplacement of electrodes V1 and V2 using features from the resultant ECG. The algorithms were trained using data extracted from high-resolution body surface potential maps of patients who were diagnosed with myocardial infarction or left ventricular hypertrophy, or who had a normal ECG.
Results:
DL achieved the highest accuracy in this study for detecting V1 and V2 electrode misplacement, with an accuracy of 93.0% (95% CI 91.46-94.53) for misplacement in the second intercostal space. The performance of DL in the second intercostal space was benchmarked against physicians (n=11; mean age 47.3 years, SD 15.5) who were experienced in reading ECGs (mean number of ECGs read in the past year 436.54, SD 397.9). Physicians were poor at recognizing chest electrode misplacement on the ECG and achieved a mean accuracy of 60% (95% CI 56.09-63.90), which was significantly poorer than that of DL (P<.001).
Conclusions:
DL provides better performance in detecting chest electrode misplacement than experienced physicians. DL and ML could be used to help flag ECGs that have been incorrectly recorded, indicating that the data may be flawed, which could reduce the number of erroneous diagnoses.
Background: Body surface potential mapping (BSPM) provides additional electrophysiological information that can be useful for the detection of cardiac diseases. Moreover, BSPMs are currently utilized in electrocardiographic imaging (ECGI) systems within clinical practice. Missing information due to noisy recordings or poor electrode contact is inevitable. In this study, we present an interpolation method that combines Laplacian minimization and principal component analysis (PCA) techniques for interpolating this missing information.
Method: The dataset consisted of 117-lead BSPMs recorded from 744 subjects (a training set of 384 subjects and a test set of 360). This dataset is a mixture of normal, old myocardial infarction, and left ventricular hypertrophy subjects. The missing data were simulated by ignoring data recorded from 7 regions: the first region represents three rows of five electrodes on the anterior torso surface (a high potential gradient region), and the other six regions were realistic patterns drawn from clinical data that represent the most likely regions of broken electrodes. Three interpolation methods, PCA-based interpolation, Laplacian interpolation, and a hybrid Laplacian-PCA interpolation method, were used to interpolate the missing data from the remaining electrodes. In the simulated region of missing data, the potentials calculated by each interpolation method were compared with the measured potentials using relative error (RE) and correlation coefficient (CC) over time. In the hybrid Laplacian-PCA interpolation method, the missing data are first interpolated using Laplacian interpolation; the resulting BSPM of 117 potentials is then multiplied by the (117 × 117) coefficient matrix calculated from the training set to obtain the principal components. Out of 117 principal components (PCs), the first 15 PCs were utilized for the second stage of interpolation; these were chosen because they gave the best interpolation performance.
Results: The differences in the median relative error (RE) between the Laplacian and Hybrid methods ranged from 0.01 to 0.35 (p < 0.001), while the differences in the median correlation between them ranged from 0.0006 to 0.034 (p < 0.001). The PCA interpolation method performed poorly, especially in scenarios where the number of missing electrodes was 12 or higher, causing a large region of missing data. The median RE for the PCA method was between 0.05 and 0.6 higher than that for the Hybrid method (p < 0.001), and the median correlation was between 0.0002 and 0.26 lower than the figure for the Hybrid method (p < 0.001).
Conclusion: Comparison of the three interpolation methods (Laplacian, PCA, Hybrid) in reconstructing missing data in BSPMs showed that the Hybrid method was always better than the other methods in all scenarios, whether the number of missing electrodes was high or low, and irrespective of the location of the missing electrodes.
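The two-stage hybrid scheme can be sketched as follows: a Laplacian-style step first fills the broken electrodes, and the filled map is then projected onto the first 15 principal components learned from the training set and reconstructed. The neighbour-averaging stand-in for true Laplacian minimisation, the toy adjacency and the random data are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

N_LEADS, N_PCS = 117, 15  # lead and PC counts taken from the abstract

def laplacian_fill(frame, missing, neighbours, n_iter=200):
    """Crude stand-in for Laplacian-minimisation interpolation: iteratively
    replace each missing electrode by the mean of its torso neighbours."""
    x = frame.copy()
    x[missing] = x[~missing].mean()  # rough initialisation
    for _ in range(n_iter):
        for i in np.where(missing)[0]:
            x[i] = x[neighbours[i]].mean()
    return x

# Learn the PCA basis from complete training BSPM frames (hypothetical data).
rng = np.random.default_rng(0)
train = rng.standard_normal((384, N_LEADS))
pca = PCA(n_components=N_PCS).fit(train)

def hybrid_interpolate(frame, missing, neighbours):
    # Stage 1: Laplacian-style fill of the broken electrodes.
    filled = laplacian_fill(frame, missing, neighbours)
    # Stage 2: project onto the first 15 PCs and reconstruct, keeping the
    # reconstruction only at the missing electrode sites.
    recon = pca.inverse_transform(pca.transform(filled[None, :]))[0]
    out = frame.copy()
    out[missing] = recon[missing]
    return out

# Toy usage: a contiguous patch of 12 broken leads on a chain adjacency.
neighbours = {i: np.array([max(i - 1, 0), min(i + 1, N_LEADS - 1)])
              for i in range(N_LEADS)}
frame = rng.standard_normal(N_LEADS)
missing = np.zeros(N_LEADS, dtype=bool)
missing[30:42] = True
restored = hybrid_interpolate(frame, missing, neighbours)
```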
Electrode misplacement during 12-lead electrocardiogram (ECG) acquisition can cause false ECG diagnoses and subsequent incorrect clinical treatment. A common misplacement error is the superior placement of the V1 and V2 electrodes. The aim of the current research was to detect lead V1 and V2 misplacement using machine learning, to enhance ECG data quality and improve clinical decision making. In this study, we assume that V1 and V2 are superiorly misplaced together. ECGs for 450 patients were extracted from body surface potential maps. Sixteen features were extracted, including morphological, statistical and time-frequency features. Two feature selection approaches (a filter method and a wrapper method) were applied to find an optimal set of features that provides high accuracy. To ensure optimal model selection, six classifiers were applied and compared: fine tree, coarse tree, bagged tree, linear support vector machine (LSVM), quadratic support vector machine (QSVM) and logistic regression. The accuracy of V1 and V2 misplacement detection was 94.3% in the first intercostal space (ICS), 92.7% in the second ICS and 70% in the third ICS. Bagged tree was the best classifier for detecting V1 and V2 misplacement in the first, second and third ICS.
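The classifier-comparison step might look like the sketch below, which cross-validates the six listed models on a pre-computed 16-feature matrix; the random feature values and the scikit-learn estimator choices (e.g. a polynomial-kernel SVC as the quadratic SVM, a depth-limited tree as the coarse tree) are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical feature matrix: 450 ECGs x 16 features, with binary labels
# (1 = V1/V2 superiorly misplaced, 0 = correctly placed).
rng = np.random.default_rng(0)
X = rng.standard_normal((450, 16))
y = rng.integers(0, 2, size=450)

models = {
    "fine tree": DecisionTreeClassifier(max_depth=None),
    "coarse tree": DecisionTreeClassifier(max_depth=4),
    "bagged tree": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100),
    "linear SVM": SVC(kernel="linear"),
    "quadratic SVM": SVC(kernel="poly", degree=2),
    "logistic regression": LogisticRegression(max_iter=1000),
}

# Five-fold cross-validated accuracy for each classifier.
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```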
Background: Electrocardiogram (ECG) lead misplacement can adversely affect ECG diagnosis and subsequent clinical decisions. V1 and V2 are commonly placed superior to their correct position. The aim of the current study was to use machine learning approaches to detect V1 and V2 lead misplacement to enhance ECG data quality. Method: ECGs for 453 patients (normal n = 151, left ventricular hypertrophy (LVH) n = 151, myocardial infarction n = 151) were extracted from body surface potential maps. These were used to extract both correctly and incorrectly placed V1 and V2 leads. The prevalence of correct and incorrect leads was 50% each. Sixteen features were extracted in three different domains: time-based, statistical and time-frequency features using a wavelet transform. A hybrid feature selection approach was applied to select an optimal set of features. To ensure optimal model selection, five classifiers were used and compared. The aforementioned feature selection approach and classifiers were applied for V1 and V2 misplacement in three different positions: the first, second and third intercostal spaces (ICS). Results: The accuracy for V1 misplacement detection was 93.9%, 89.3% and 72.8% in the first, second and third ICS respectively. For V2, the accuracy was 93.6%, 86.6% and 68.1% in the first, second and third ICS respectively. There is a noticeable decline in accuracy when detecting misplacement in the third ICS, which is expected.
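A hybrid feature-selection pipeline of the kind described (a fast filter stage followed by a wrapper stage tied to the classifier) might be sketched as follows; the mutual-information filter, the greedy forward wrapper and the feature counts are illustrative assumptions rather than the authors' exact pipeline:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import (SelectKBest, SequentialFeatureSelector,
                                       mutual_info_classif)
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: 453 ECGs x 16 features, binary misplacement labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((453, 16))
y = rng.integers(0, 2, size=453)

# Filter stage: rank features by mutual information, keep the top 10.
filt = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_filtered = filt.transform(X)

# Wrapper stage: greedy forward selection driven by the classifier's
# cross-validated score, keeping the best 5 of the remaining features.
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)
wrapper = SequentialFeatureSelector(clf, n_features_to_select=5, cv=5).fit(X_filtered, y)
X_selected = wrapper.transform(X_filtered)
print(X_selected.shape)  # (453, 5)
```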
Introduction: Electrode misplacement and interchange errors are known problems when recording the 12-lead electrocardiogram (ECG). Automatic detection of these errors could play an important role in improving clinical decision making and outcomes in cardiac care. The objectives of this systematic review and meta-analysis are to
1) study the impact of electrode misplacement on ECG signals and ECG interpretation,
2) determine the most challenging electrode misplacements to detect using machine learning (ML),
3) analyse the performance of ML algorithms that detect electrode misplacement or interchange according to sensitivity and specificity, and
4) identify the most commonly used ML technique for detecting electrode misplacement/interchange. This review analysed the current literature regarding electrode misplacement/interchange recognition accuracy using machine learning techniques.
Method: A search of three online databases (IEEE, PubMed and ScienceDirect) identified 228 articles; a further 3 articles were included from additional sources provided by co-authors. According to the eligibility criteria, 14 articles were selected and considered for qualitative analysis and meta-analysis.
Results: The articles showed the effect of lead interchange on ECG morphology and, as a consequence, on patient diagnosis. Statistical analysis of the included articles found that machine learning performance is high in detecting electrode misplacement/interchange, with the exception of left arm/left leg interchange.
Conclusion: This review emphasises the importance of detecting electrode misplacement in ECG diagnosis and its effect on decision making. Machine learning shows promise in detecting lead misplacement/interchange, highlighting an opportunity to develop and operationalise deep learning algorithms such as convolutional neural networks (CNNs) to detect electrode misplacement/interchange.
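A convolutional neural network of the kind the review points to could, for instance, take raw multi-lead ECG segments as input and output a misplacement/interchange probability. The Keras architecture below is a hypothetical minimal sketch, not a model from any of the reviewed studies; the input size (10 s of 12-lead ECG at 500 Hz) and all layer sizes are assumptions:

```python
import tensorflow as tf

# Hypothetical input: 12-lead ECG segments of 5000 samples (10 s at 500 Hz).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(5000, 12)),
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(misplacement/interchange)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()
```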