Linear ECG-lead transformations (LELTs) are used to estimate unrecorded target leads by applying a number of recorded basis leads to a LELT matrix. Such LELT matrices are commonly developed using training datasets composed of ECGs that belong to different diagnostic classes (DCs). The aim of our research was to assess the influence of the training set composition on the estimation performance of LELTs that estimate target leads V1, V3, V4 and V6 from basis leads I, II, V2 and V5 of the 12-lead ECG. Our assessment was performed using ECGs from three DCs: left ventricular hypertrophy, right bundle branch block and normal (ECGs without abnormalities). Training sets with different DC compositions were used for the development of LELT matrices. These matrices were used to estimate the target leads of different test sets. The estimation performance of the developed matrices was quantified using root mean square error values calculated between derived and recorded target leads. Our findings indicate that unbalanced training sets can lead to LELTs that show large estimation performance variability across different DCs. Balanced training sets were found to produce LELTs that performed well across multiple DCs. We recommend balanced training sets for the development of LELTs.
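As an illustration of the development and evaluation pipeline described above, the following Python sketch derives a LELT matrix by ordinary least squares and scores it with RMSE between derived and recorded target leads. The data are synthetic stand-ins; the 4-by-4 lead configuration simply mirrors the basis and target leads named in the abstract and is not the authors' actual data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are pooled time samples, columns are leads.
# Basis leads: I, II, V2, V5; target leads: V1, V3, V4, V6 (four columns each).
true_matrix = rng.standard_normal((4, 4))
basis_train = rng.standard_normal((5000, 4))
target_train = basis_train @ true_matrix + 0.05 * rng.standard_normal((5000, 4))
basis_test = rng.standard_normal((1000, 4))
target_test = basis_test @ true_matrix + 0.05 * rng.standard_normal((1000, 4))

# Fit the 4x4 LELT matrix by ordinary least squares (one regression per target lead).
lelt_matrix, *_ = np.linalg.lstsq(basis_train, target_train, rcond=None)

# Derive the test-set target leads and score each against the recorded leads with RMSE.
derived = basis_test @ lelt_matrix
rmse = np.sqrt(np.mean((derived - target_test) ** 2, axis=0))
print("RMSE per target lead:", rmse)
```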
Background: Electrocardiogram (ECG) signals are often contaminated by noise. Manual review of large ECG databases to identify noisy signals is time-consuming. Traditional signal quality assessment algorithms often do not generalize well or are computationally expensive. This study developed a Temporal Convolutional Neural Network (TCNN) to estimate the signal-to-noise ratio (SNR) of ECG signals. Method: We trained a TCNN on a proprietary database of 134,019 12-lead ECGs without any machine or human-added noise labels. Assuming that this data had high SNR, we randomly selected a single lead from each ECG and added random Gaussian noise. We then scaled the signals and added noise to give a negatively skewed normal distribution of true SNR values. We trained a TCNN to regress low- and high-frequency pseudo-SNR values from the raw noisy input signals. Results: On the testing dataset, the TCNN achieved a mean error of 0.31±1.80 dB and a Pearson correlation coefficient of 0.96 for low-frequency pseudo-SNR. Similarly, for high-frequency pseudo-SNR, the mean error was 0.29±1.63 dB and the Pearson correlation coefficient was 0.97. Conclusion: A Temporal Convolutional Neural Network can accurately estimate the SNR of unseen ECGs.
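The noise-injection step used to create the pseudo-SNR targets can be illustrated as follows. This is a minimal sketch that assumes the clean recording is treated as noise-free; the sinusoidal "lead" and the chosen SNR value are purely illustrative.

```python
import numpy as np

def add_noise_at_snr(clean: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add white Gaussian noise so that the clean-signal-to-noise power ratio equals snr_db."""
    if rng is None:
        rng = np.random.default_rng()
    signal_power = np.mean(clean ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=clean.shape)
    return clean + noise

# Example: a 10 s, 500 Hz synthetic "lead" (a crude ECG stand-in) corrupted at 6 dB SNR.
t = np.arange(0, 10, 1 / 500)
clean_lead = np.sin(2 * np.pi * 1.2 * t)
noisy_lead = add_noise_at_snr(clean_lead, snr_db=6.0)
```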
This study assessed the performance of a deep neural network (PulseAI, Belfast, United Kingdom) used in conjunction with a dry-electrode ECG sensor device (RhythmPad, D&FT, United Kingdom) to detect AF automatically. Simultaneous pairs of 12-lead ECGs and single-lead dry-electrode ECGs were collected from 622 patients. The 12-lead ECGs were manually overread and used as reference diagnoses. Twenty-two patients were confirmed with AF and had an interpretable 12-lead and single-lead dry-electrode ECG recording. The deep neural network analysed the dry-electrode ECGs, and performance was compared to the 12-lead interpretation. Overall, the deep neural network algorithm yielded a sensitivity of 96% (95% CI, 87%-100%), specificity of 99% (95% CI, 98%-100%) and positive predictive value of 81% (95% CI, 66%-96%) for detection of AF episodes. When coupled with dry-electrode ECG sensors, the PulseAI neural network allows for large-scale and low-cost screening for AF. Widespread implementation of this technology may allow for earlier detection, treatment, and management of patients with AF.
Linear ECG-lead transformations (LELTs) are used to estimate unrecorded leads by applying a number of recorded leads to a LELT matrix. Such LELT matrices are commonly developed using a training dataset and linear regression analysis. An important performance metric of LELTs is the subject-to-subject variability (SSV) of their estimation performance. In this research, we assess the relationship between an increasing training set size (from n=10 to n=370 subjects) and the SSV of LELTs. A total of 200 LELT matrices were developed for each training set size. The developed LELT matrices and the 12-lead ECG data of a testing dataset (n=123 subjects) were used for the estimation of Frank VCGs. Root-mean-squared-error (RMSE) values between recorded and estimated Frank VCG leads were used for the quantification of the estimation performance. The SSV associated with each LELT matrix was quantified as the standard deviation of the corresponding RMSE values. This was followed by an analysis of the relationship between the training set size and the associated SSV values. Increasing the training set size from 10 to 180, 160 and 200 subjects for Frank VCG leads X, Y and Z, respectively, was associated with a reduction of the observed SSV. Further increases in training set size were found to have only a marginal effect on the observed SSV.
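A minimal sketch of how the SSV can be quantified, assuming it is computed per LELT matrix as the standard deviation of per-subject RMSE values; the array shapes are hypothetical.

```python
import numpy as np

def subject_to_subject_variability(recorded: np.ndarray, estimated: np.ndarray):
    """recorded/estimated: arrays of shape (subjects, samples) for one Frank lead.
    Returns the per-subject RMSE values and the SSV (their standard deviation)."""
    rmse_per_subject = np.sqrt(np.mean((estimated - recorded) ** 2, axis=1))
    return rmse_per_subject, np.std(rmse_per_subject)
```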
Background
The application of artificial intelligence to interpret the electrocardiogram (ECG) has predominantly involved knowledge-engineered rule-based algorithms, which are now widely used in clinical practice. However, over recent decades, there has been a steady increase in the number of research studies that use machine learning (ML) to read or interrogate ECG data.
Objective
The aim of this study is to review the use of ML with ECG data using a time series approach.
Methods
Papers that address the subject of ML and the ECG were identified by systematically searching databases that archive papers from January 1995 to October 2019. Time series analysis was used to study the changing popularity of the different types of ML algorithms that have been used with ECG data over the past two decades. Finally, a meta-analysis of how various ML techniques performed for various diagnostic classifications was also undertaken.
Results
A total of 757 papers were identified. Based on the results, the use of ML with ECG data started to increase sharply (p < 0.001) from 2012. Healthcare applications, especially heart abnormality classification, were the most common application of ML when using ECG data (p < 0.001). However, many new applications are emerging, including the use of ML and the ECG for biometrics and driver drowsiness detection. The support vector machine was the technique of choice for a decade. However, since 2018, deep learning has been trending upwards and is likely to be the leading technique in the coming few years. Despite the accuracy paradox, accuracy was the most frequently used metric in the studies reviewed, followed by sensitivity, specificity, F1 score and then AUC.
Conclusion
Applying ML using ECG data has shown promise. Data scientists and physicians should collaborate to ensure that clinical knowledge is being applied appropriately and is informing the design of ML algorithms. Data scientists also need to consider knowledge-guided feature engineering and the explicability of the ML algorithm, as well as being transparent about the algorithm's performance, to appropriately calibrate human-AI trust. Future work is required to enhance ML performance in ECG classification.
Deep Convolutional Neural Networks (DCNNs) have been shown to provide improved performance over traditional heuristic algorithms for the detection of arrhythmias from ambulatory ECG recordings. However, these DCNNs have primarily been trained and tested on device-specific databases with standardized electrode positions and uniform sampling frequencies. This work explores the possibility of training a DCNN for Atrial Fibrillation (AF) detection on a database of single-lead ECG rhythm strips extracted from resting 12-lead ECGs. We then test the performance of the DCNN on recordings from ambulatory ECG devices with different recording leads and sampling frequencies.
We developed an extensive proprietary resting 12-lead ECG dataset of 549,211 patients. This dataset was randomly split into a training set of 494,289 patients and a testing set of the remaining 54,922 patients. We trained a 34-layer DCNN to detect AF and other arrhythmias on this dataset. The DCNN was then validated on two PhysioNet databases commonly used to benchmark automated ECG algorithms: (1) the MIT-BIH Arrhythmia Database and (2) the MIT-BIH Atrial Fibrillation Database (AFDB). Validation was performed following the EC57 guidelines, with performance assessed by gross episode and duration sensitivity and positive predictive value (PPV). Finally, validation was also performed on a selection of rhythm strips from an ambulatory ECG patch that were annotated by a committee of board-certified cardiologists.
On the MIT-BIH Arrhythmia Database, the DCNN achieved a sensitivity of 100% and a PPV of 84% in detecting episodes of AF, and 100% sensitivity and 94% PPV in quantifying AF episode duration. On the AFDB, the DCNN achieved a sensitivity of 94% and a PPV of 98% in detecting episodes of AF, and 98% sensitivity and 100% PPV in quantifying AF episode duration. On the patch database, the DCNN demonstrated performance closely comparable to that of a cardiologist.
The results indicate that DCNN models can learn features that generalize between resting 12-lead and ambulatory ECG recordings, allowing DCNNs to be device agnostic for detecting arrhythmias from single-lead ECG recordings and enabling a range of clinical applications.
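One practical prerequisite for applying a fixed-sampling-rate DCNN across devices is to resample each strip to the rate used during training. The sketch below shows this step with SciPy's polyphase resampler; it is an assumption made for illustration, not a step quoted from the abstract, and the 500 Hz target rate is hypothetical.

```python
from fractions import Fraction

import numpy as np
from scipy.signal import resample_poly

def resample_strip(strip: np.ndarray, fs_in: int, fs_out: int = 500) -> np.ndarray:
    """Resample a single-lead rhythm strip from fs_in to fs_out using polyphase filtering."""
    ratio = Fraction(fs_out, fs_in)
    return resample_poly(strip, ratio.numerator, ratio.denominator)

# Example: a 30 s ambulatory strip recorded at 128 Hz, resampled to 500 Hz before inference.
strip_128hz = np.random.randn(30 * 128)
strip_500hz = resample_strip(strip_128hz, fs_in=128, fs_out=500)
```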
Linear electrocardiographic lead transformations (LELTs) are used to estimate unrecorded ECG leads by applying a number of recorded leads to a LELT matrix. Such matrices are commonly developed using a training dataset. The size of the training dataset has an influence on the estimation performance of a LELT matrix. However, an estimate of the minimal size required for the development of LELTs has previously not been reported.
The aim of this research was to determine such an estimate. We generated LELT matrices from differently sized (from n = 10 to n = 540 subjects in steps of 10 subjects) training datasets. The LELT matrices and the 12-lead ECG data of a testing dataset (n = 186 subjects) were used for the estimation of Frank VCGs. Root-mean-squared-error values between recorded and estimated Frank leads of the testing dataset were used for the quantification of the estimation performance associated with a given size of the training dataset.
The performance of the LELTs was, after an initial phase of improvement, found to only marginally improve with additional increases in the size of the training dataset. Our findings suggest that the training dataset should have a minimal size of 170 subjects when developing LELTs that utilise the 12-lead ECG for the estimation of unrecorded ECG leads.
Introduction: Interpretation of the 12-lead electrocardiogram (ECG) is normally assisted by an automated diagnosis (AD), which can facilitate an 'automation bias' whereby interpreters can be anchored. In this paper, we studied 1) the effect of an incorrect AD on interpretation accuracy and interpreter confidence (a proxy for uncertainty), and 2) whether confidence and other interpreter features can predict interpretation accuracy using machine learning. Methods: This study analysed 9000 ECG interpretations from cardiology and non-cardiology fellows (CFs and non-CFs). One third of the ECGs involved no ADs, one third had ADs (half of them incorrect) and one third had multiple ADs. Interpretations were scored and interpreter confidence was recorded for each interpretation and subsequently standardised using sigma scaling. Spearman coefficients were used for correlation analysis and C5.0 decision trees were used for predicting interpretation accuracy using basic interpreter features such as confidence, age, experience and designation. Results: Interpretation accuracies achieved by CFs and non-CFs dropped by 43.20% and 58.95% respectively when an incorrect AD was presented (p < 0.001). Overall correlation between scaled confidence and interpretation accuracy was higher amongst CFs. However, correlation between confidence and interpretation accuracy decreased for both groups when an incorrect AD was presented. We found that an incorrect AD disturbs the reliability of interpreter confidence in predicting accuracy. An incorrect AD has a greater effect on the confidence of non-CFs (although this is not statistically significant, it is close to the threshold, p = 0.065). The best C5.0 decision tree achieved an accuracy of 64.67% (p < 0.001); however, this is only 6.56% greater than the no-information rate. Conclusion: Incorrect ADs reduce the interpreter's diagnostic accuracy, indicating an automation bias. Non-CFs tend to agree more with the ADs in comparison to CFs; hence, less expert physicians are more affected by automation bias. Incorrect ADs reduce the interpreter's confidence and also reduce the predictive power of confidence for predicting accuracy (even more so for non-CFs). Whilst a statistically significant model was developed, it is difficult to predict interpretation accuracy using machine learning on basic features such as interpreter confidence, age, reader experience and designation.
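A sketch of the confidence standardisation and correlation step, assuming "sigma scaling" denotes z-score standardisation of each interpreter's confidence ratings; the ratings and accuracy scores below are random stand-ins, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

def sigma_scale(confidence: np.ndarray) -> np.ndarray:
    """Standardise an interpreter's confidence ratings to zero mean and unit SD."""
    return (confidence - confidence.mean()) / confidence.std()

rng = np.random.default_rng(4)
confidence = rng.integers(1, 11, size=30).astype(float)  # hypothetical 1-10 ratings
accuracy = rng.random(30)                                 # stand-in interpretation scores

# Spearman correlation between scaled confidence and interpretation accuracy.
rho, p_value = spearmanr(sigma_scale(confidence), accuracy)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```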
CPR Guideline Chest Compression Depths May Exceed Requirements for Optimal Physiological Response (2018)
A twelve-animal porcine study dataset was retrospectively analyzed to assess associations between chest compression (CC) depth, systolic blood pressure (SBP) and end-tidal carbon dioxide (EtCO2). Manual CCs were applied for 7 two-minute episodes, at CC depths between 10 mm and 55 mm. A rolling 15 s analysis window was applied to the continuous signals. Mean peak values were calculated for each window. Correlation analysis was applied to assess strength of association. Optimal CC depth to achieve physiological targets was determined via cut-off analysis. A total of 672 observations for each variable were available for analysis. Pearson correlations (95% confidence interval; p-value) between CC depth and both SBP and EtCO2 were 0.84 (0.82, 0.86; p < 0.001) and 0.75 (0.71, 0.78; p < 0.001) respectively. The optimal CC depth cut-off (sensitivity, specificity) to achieve SBP ≥ 100 mmHg and EtCO2 ≥ 10 mmHg was 33 mm (98.29%, 88.94%) and 20 mm (95.08%, 78.30%) respectively. A reasonable relationship between CC depth and physiological response was observed. Optimal SBP and EtCO2 cut-offs were achieved significantly below guideline depths. Furthermore, the cut-off analysis suggests a disparity between CC depth and physiological targets.
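The correlation and cut-off analysis can be sketched as below. The data are simulated, and the use of Youden's J statistic to pick the optimal depth cut-off is an assumption for illustration, not necessarily the criterion used in the study.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_curve

# Hypothetical 15 s window summaries: mean CC depth (mm) and mean peak SBP (mmHg).
rng = np.random.default_rng(1)
depth_mm = rng.uniform(10, 55, size=672)
sbp = 40 + 2.2 * depth_mm + rng.normal(0, 12, size=672)

r, p = pearsonr(depth_mm, sbp)  # strength of association between depth and SBP

# Cut-off analysis: depth that best discriminates windows reaching SBP >= 100 mmHg.
reached_target = (sbp >= 100).astype(int)
fpr, tpr, thresholds = roc_curve(reached_target, depth_mm)
best = np.argmax(tpr - fpr)  # Youden's J (assumption)
print(f"r = {r:.2f}, optimal depth cut-off ~ {thresholds[best]:.0f} mm "
      f"(sens {tpr[best]:.1%}, spec {1 - fpr[best]:.1%})")
```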
Body surface potential maps (BSPMs) are typically recorded from a large number of ECG leads that cover the entire thorax. This improves diagnostic accuracy and is required in electrocardiographic imaging (ECGi). BSPMs recorded in the clinical setting may have some leads that are noisy due to poor skin-electrode contact. We analyzed 117-lead BSPMs recorded from 360 subjects. We successively simulated the removal of ECG leads at various locations and tested the ability of our algorithm to accurately reconstruct the missing information. When seven electrodes were removed, the algorithm could reconstruct BSPM patterns from QRS segments with median RMSE of 6.24 µV and 12.15 µV and CC of 0.999 and 0.997 when the Laplacian-based and PCA-based methods were used, respectively. This work shows that noisy BSPM leads, which often occur in the clinical setting, can be more accurately reconstructed using our Laplacian-based interpolation algorithm, provided that only a small number of electrodes are missing in regions where the electrodes are arranged in a well-distributed, tight mesh.
Automated interpretation of the 12-lead ECG has remained an underpinning interest in decades of research that has seen a diversity of computing applications in cardiology.
The application of computers in cardiology began in the 1960s with early research focusing on the conversion of analogue ECG signals (voltages) to digital samples. Alongside this, software techniques that automated the extraction of wave measurements and provided basic diagnostic statements, began to emerge. In the years since then there have been many significant milestones which include the widespread commercialisation of 12-lead ECG interpretation software, associated clinical utility and the development of the related regulatory frameworks to promote standardised development.
In the past few years, the research community has seen a significant rejuvenation in the development of ECG interpretation programs. This is evident in the research literature, where a large number of studies have emerged tackling a variety of automated ECG interpretation problems. This is largely due to two factors: the technical advances, in both software and hardware, that have facilitated the broad adoption of modern artificial intelligence (AI) techniques, and the increasing availability of large datasets that support modern AI approaches.
In this article we provide a very high-level overview of the operation of and approach to the development of early 12-lead ECG interpretation programs and we contrast this to the approaches that are now seen in emerging AI approaches. Our overview is mainly focused on highlighting differences in how input data are handled prior to generation of the diagnostic statement.
This is an exploratory paper that discusses the use of artificial intelligence (AI) in ECG interpretation and opportunities for improving the explainability of the AI (XAI) when reading 12-lead ECGs. To develop AI systems, many principles (human rights, well-being, data agency, effectiveness, transparency, accountability, awareness of misuse and competence) must be considered to ensure that the AI is trustworthy and applicable. Current computerised ECG interpretation algorithms can detect different types of heart disease. However, there are some challenges and shortcomings that need to be addressed, such as the explainability issue and the interaction between the human and the AI in clinical decision making. These challenges create opportunities to develop a trustworthy XAI for automated ECG interpretation with a high performance and a high confidence level. This study reports a proposed XAI interface design for automatic ECG interpretation based on suggestions from previous studies and on standard guidelines developed by the human-computer interaction (HCI) community. New XAI interfaces should be developed in the future that facilitate more transparency of the decision logic of the algorithm, which may allow users to calibrate their trust and use of the AI system.
Background: The 12-lead ECG is spatially limited in diagnosing cardiac abnormalities. Additional leads (right-sided and posterior leads) are inconvenient to record in a clinical setting; however, they can be derived. In this paper we report on the development of coefficients that allow derivation of right-sided and posterior leads.
Method: Analysis was performed using body surface potential maps (BSPM) recorded from 910 patients in two centres. The recordings comprised healthy controls (n=314), recordings at peak balloon inflation during elective percutaneous coronary angioplasty (n=88), myocardial infarction (n=271) and left ventricular hypertrophy (n=237). All recordings were expanded to the 352-node Dalhousie torso. Coefficients to allow derivation of right-sided and posterior leads were generated by linear regression. Further coefficients from a previously reported study were used for performance comparisons.
Results: Correlation coefficients between recorded and derived leads were significantly improved using the new coefficients (p<0.05) in leads V7-V12.
Conclusion: We have developed coefficients that allow the derivation of 10 additional leads from the 12-lead ECG.
Linear ECG-lead transformations estimate or derive unrecorded target leads by applying a number of recorded basis leads to a so-called linear ECG-lead transformation matrix. The inverse transform of such a linear ECG-lead transformation performs a transformation in the opposite direction (from the target leads to the basis leads). The pseudo-inverse of a given transformation matrix can be used to perform such an inverse transformation. Linear regression based inverse transformation matrices are, provided that sufficient training data for their development are available, an alternative to pseudo-inverse matrices. The aim of this research was to compare the estimation performance of pseudo-inverse and linear regression based inverse transformations. This comparison was performed for two example inverse transformations. The performance of the different transformations was assessed using root-mean-squared-error (RMSE) values between the QRS-T complexes of recorded and derived leads. Typical mean RMSE values associated with the regression based approach were found to be approximately two thirds to half of the mean RMSE values achieved by the approach based upon the pseudo-inverse. Provided that sufficient data are available, linear regression should be used for the development of inverse ECG-lead transformation matrices.
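A toy comparison of the two inverse-transformation approaches on synthetic data; the forward matrix, lead counts and noise level are arbitrary assumptions made only to illustrate the idea.

```python
import numpy as np

rng = np.random.default_rng(2)

# Forward transformation: basis leads (4) -> target leads (4), plus residual noise.
forward = rng.standard_normal((4, 4))
basis = rng.standard_normal((20000, 4))
target = basis @ forward + 0.1 * rng.standard_normal((20000, 4))

# Inverse option 1: pseudo-inverse of the forward matrix (needs no extra training data).
inv_pinv = np.linalg.pinv(forward)

# Inverse option 2: fit the inverse direction directly by linear regression on training data.
inv_regress, *_ = np.linalg.lstsq(target[:10000], basis[:10000], rcond=None)

# Compare on held-out data using RMSE between recorded and derived basis leads.
test_target, test_basis = target[10000:], basis[10000:]
for name, matrix in [("pseudo-inverse", inv_pinv), ("regression", inv_regress)]:
    rmse = np.sqrt(np.mean((test_target @ matrix - test_basis) ** 2))
    print(f"{name}: RMSE = {rmse:.3f}")
```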
Background:
A 12-lead electrocardiogram (ECG) is the most commonly used method to diagnose patients with cardiovascular diseases. However, there are a number of possible misinterpretations of the ECG that can be caused by several different factors, such as the misplacement of chest electrodes.
Objective:
The aim of this study is to build advanced algorithms to detect precordial (chest) electrode misplacement.
Methods:
In this study, we used traditional machine learning (ML) and deep learning (DL) to automatically detect the misplacement of electrodes V1 and V2 using features from the resultant ECG. The algorithms were trained using data extracted from high-resolution body surface potential maps of patients diagnosed with myocardial infarction, patients diagnosed with left ventricular hypertrophy, and patients with a normal ECG.
Results:
DL achieved the highest accuracy in this study for detecting V1 and V2 electrode misplacement, with an accuracy of 93.0% (95% CI 91.46-94.53) for misplacement in the second intercostal space. The performance of DL in the second intercostal space was benchmarked against physicians (n=11; mean age 47.3 years, SD 15.5) who were experienced in reading ECGs (mean number of ECGs read in the past year: 436.54, SD 397.9). Physicians were poor at recognizing chest electrode misplacement on the ECG and achieved a mean accuracy of 60% (95% CI 56.09-63.90), which was significantly poorer than that of DL (P<.001).
Conclusions:
DL provides the best performance for detecting chest electrode misplacement when compared with the ability of experienced physicians. DL and ML could be used to help flag ECGs that have been incorrectly recorded, indicating that the data may be flawed, which could reduce the number of erroneous diagnoses.
Background: Body surface potential mapping (BSPM) provides additional electrophysiological information that can be useful for the detection of cardiac diseases. Moreover, BSPMs are currently utilized in electrocardiographic imaging (ECGI) systems within clinical practice. Missing information due to noisy recordings or poor electrode contact is inevitable. In this study, we present an interpolation method that combines Laplacian minimization and principal component analysis (PCA) techniques for interpolating this missing information. Method: The dataset consisted of 117-lead BSPMs recorded from 744 subjects (a training set of 384 subjects and a test set of 360). This dataset is a mixture of normal, old myocardial infarction, and left ventricular hypertrophy subjects. The missing data were simulated by ignoring data recorded from 7 regions: the first region represents three rows of five electrodes on the anterior torso surface (a high potential-gradient region), and the other six regions were realistic patterns drawn from clinical data that represent the most likely regions of broken electrodes. Three interpolation methods, PCA-based interpolation, Laplacian interpolation, and hybrid Laplacian-PCA interpolation, were used to interpolate the missing data from the remaining electrodes. In the simulated regions of missing data, the potentials calculated by each interpolation method were compared with the measured potentials using relative error (RE) and correlation coefficient (CC) over time. In the hybrid Laplacian-PCA interpolation method, the missing data are first interpolated using Laplacian interpolation; the resulting BSPM of 117 potentials is then multiplied by the (117 × 117) coefficient matrix calculated from the training set to obtain the principal components. Out of 117 principal components (PCs), the first 15 PCs were utilized for the second stage of interpolation; these were chosen because they gave the best interpolation performance. Results: The differences in the median relative error (RE) between the Laplacian and Hybrid methods ranged from 0.01 to 0.35 (p < 0.001), while the differences in the median correlation between them ranged from 0.0006 to 0.034 (p < 0.001). The PCA-based interpolation method performed badly, especially in scenarios where the number of missing electrodes was 12 or higher, causing a large region of missing data. The median RE for the PCA method was between 0.05 and 0.6 lower than that for the Hybrid method (p < 0.001). However, the median correlation was between 0.0002 and 0.26 lower than that for the Hybrid method (p < 0.001). Conclusion: Comparison between the three interpolation methods (Laplacian, PCA, Hybrid) in reconstructing missing data in BSPM showed that the Hybrid method was always better than the other methods in all scenarios, whether the number of missed electrodes was high or low, and irrespective of the location of these missed electrodes.
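A simplified sketch of the hybrid idea follows. A crude neighbour average stands in for the Laplacian-minimisation stage, and scikit-learn's PCA stands in for the (117 × 117) training-set coefficient matrix; the inputs (train_maps, neighbours) are hypothetical and the code is not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

# train_maps: complete training BSPMs, shape (n_frames, 117); one row per time instant.
# test_map:   a single 117-lead map with NaNs at broken-electrode positions.
# neighbours: dict mapping each lead index to the indices of adjacent electrodes
#             (taken from the electrode layout; hypothetical here).

def hybrid_interpolate(test_map, train_maps, neighbours, n_pcs=15):
    filled = test_map.copy()
    missing = np.flatnonzero(np.isnan(filled))
    # Stage 1: crude stand-in for Laplacian minimisation -- average the available neighbours.
    for lead in missing:
        good = [n for n in neighbours[lead] if not np.isnan(test_map[n])]
        filled[lead] = np.mean(test_map[good]) if good else np.nanmean(test_map)
    # Stage 2: project the filled map onto the first n_pcs training-set principal
    # components and reconstruct, which regularises the stage-1 estimate.
    pca = PCA(n_components=n_pcs).fit(train_maps)
    reconstructed = pca.inverse_transform(pca.transform(filled[None, :]))[0]
    # Keep the measured potentials; replace only the missing leads.
    result = test_map.copy()
    result[missing] = reconstructed[missing]
    return result
```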
Electrode misplacement during 12-lead electrocardiogram (ECG) acquisition can cause false ECG diagnosis and subsequent incorrect clinical treatment. A common misplacement error is the superior placement of the V1 and V2 electrodes. The aim of the current research was to detect lead V1 and V2 misplacement using machine learning in order to enhance ECG data quality and improve clinical decision making. In this study, we reasonably assume that V1 and V2 are superiorly misplaced together. ECGs for 450 patients were extracted from body surface potential maps. Sixteen features were extracted, including morphological, statistical and time-frequency features. Two feature selection approaches (a filter method and a wrapper method) were applied to find an optimal set of features that provide a high accuracy. Six classifiers were applied and compared: fine tree, coarse tree, bagged tree, linear support vector machine (LSVM), quadratic support vector machine (QSVM) and logistic regression. The accuracy of V1 and V2 misplacement detection was 94.3% in the first intercostal space (ICS), 92.7% in the second ICS and 70% in the third ICS. The bagged tree was the best classifier for detecting V1 and V2 misplacement in the first, second and third ICS.
This article investigates the selection of optimal ECG leads for the detection of ST changes, focusing on the closely spaced leads likely to appear in patch systems. Method: We analysed body surface potential maps (BSPMs) from 44 subjects undergoing PTCA. BSPMs were recorded at 120 sites and expanded to the 352 nodes of the Dalhousie torso using Laplacian interpolation. A total of 88 BSPMs were investigated: the 44 subjects at baseline and the 44 subjects at peak balloon inflation (PBI). At PBI the subjects had various coronary arteries occluded (14 LAD, 15 LCX, 15 RCA). All possible bipolar leads were calculated for each subject. Leads were ranked based on the maximum ST-segment change between baseline and PBI for each subject. Leads with electrode spacing of more than 100 mm were excluded. The highest-ranked lead was chosen as the short spaced lead (SSL) on the anterior torso. Results: The median ST-segment change for the chosen SSL for each vessel was LAD = 134 µV, LCX = 65 µV, RCA = 166 µV. The maximum ST-segment change observed for the same lead was LAD = 277 µV, LCX = 166 µV, RCA = 257 µV. For comparison, the highest median observed on the 12-lead ECG for each vessel was LAD = 137 µV (V3), LCX = 130 µV (III), RCA = 196 µV (III).
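The lead-ranking step can be sketched as follows; the function and its inputs (node coordinates and per-node ST amplitudes) are hypothetical stand-ins for the BSPM data described above.

```python
from itertools import combinations

import numpy as np
from scipy.spatial.distance import euclidean

# node_xyz: (352, 3) electrode coordinates on the torso model (hypothetical values).
# st_baseline, st_pbi: (352,) ST-segment amplitudes per node at baseline and at PBI.

def rank_bipolar_leads(node_xyz, st_baseline, st_pbi, max_spacing_mm=100.0):
    """Return bipolar leads (node pairs) ranked by absolute ST-segment change between
    baseline and PBI, keeping only pairs spaced no more than max_spacing_mm apart."""
    ranked = []
    for a, b in combinations(range(len(node_xyz)), 2):
        if euclidean(node_xyz[a], node_xyz[b]) > max_spacing_mm:
            continue  # exclude leads with electrode spacing above the limit
        st_change = abs((st_pbi[a] - st_pbi[b]) - (st_baseline[a] - st_baseline[b]))
        ranked.append((st_change, a, b))
    return sorted(ranked, reverse=True)
```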
Background: Electrocardiogram (ECG) lead misplacement can adversely affect ECG diagnosis and subsequent clinical decisions. V1 and V2 are commonly placed superior to their correct position. The aim of the current study was to use machine learning approaches to detect V1 and V2 lead misplacement in order to enhance ECG data quality. Method: ECGs for 453 patients (normal n = 151, left ventricular hypertrophy (LVH) n = 151, myocardial infarction n = 151) were extracted from body surface potential maps. These were used to extract both correctly and incorrectly placed V1 and V2 leads, with a prevalence of 50% each. Sixteen features were extracted in three different domains: time-based, statistical and time-frequency features using a wavelet transform. A hybrid feature selection approach was applied to select an optimal set of features. To ensure optimal model selection, five classifiers were used and compared. The aforementioned feature selection approach and classifiers were applied for V1 and V2 misplacement in three different positions: the first, second and third intercostal spaces (ICS). Results: The accuracy for V1 misplacement detection was 93.9%, 89.3% and 72.8% in the first, second and third ICS respectively. For V2, the accuracy was 93.6%, 86.6% and 68.1% in the first, second and third ICS respectively. There is a noticeable decline in accuracy when detecting misplacement in the third ICS, which is expected.
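As an illustration of the kind of feature extraction described above, the sketch below computes a handful of statistical and wavelet-energy features for one lead; this is an illustrative feature set, not the sixteen features used in the study.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def lead_features(lead: np.ndarray) -> np.ndarray:
    """Statistical and wavelet-energy features for one precordial lead (illustrative set)."""
    stats = [lead.mean(), lead.std(), skew(lead), kurtosis(lead),
             lead.max(), lead.min(), np.ptp(lead)]
    coeffs = pywt.wavedec(lead, "db4", level=4)   # discrete wavelet decomposition
    energies = [np.sum(c ** 2) for c in coeffs]   # energy per wavelet sub-band
    return np.array(stats + energies)
```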
The magnitude of the spatial ventricular gradient (MSVG) is an attractive parameter in electrocardiogram (ECG) monitoring applications. The MSVG is most commonly obtained from 150 Hz low-pass filtered resting ECGs. However, monitoring applications typically utilize 40 Hz low-pass filtered ECG data. The extent to which the value of the MSVG is affected by the utilization of 40 Hz low-pass monitoring ECG filters over the commonly used 150 Hz low-pass resting ECG filters has not previously been reported. The aim of this research was to quantify the differences between MSVG values computed using 40 Hz low-pass filtered ECG data (MSVG40) and 150 Hz low-pass filtered ECG data (MSVG150). The differences between MSVG40 and MSVG150 were quantified as systematic error (mean difference) and random error (span of the Bland-Altman 95% limits of agreement) using a study population of 726 subjects. The systematic error was found to be 0.013 mV ms [95% confidence interval: 0.008 mV ms to 0.018 mV ms]. The random error was quantified as 0.282 mV ms [95% confidence interval: 0.266 mV ms to 0.298 mV ms]. Our findings suggest that it is possible to record accurate MSVG values using 40 Hz low-pass filtered ECG data.
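The systematic and random errors can be computed as in the following sketch, where the two input arrays are hypothetical per-subject MSVG values rather than study data.

```python
import numpy as np

def bland_altman_errors(msvg_40hz: np.ndarray, msvg_150hz: np.ndarray):
    """Systematic error (mean difference) and random error (span of the Bland-Altman
    95% limits of agreement, i.e. 2 * 1.96 * SD of the differences)."""
    diff = msvg_40hz - msvg_150hz
    systematic = diff.mean()
    random_error = 2 * 1.96 * diff.std(ddof=1)
    return systematic, random_error
```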
Introduction: Electrode misplacement and interchange errors are known problems when recording the 12-lead electrocardiogram (ECG). Automatic detection of these errors could play an important role in improving clinical decision making and outcomes in cardiac care. The objectives of this systematic review and meta-analysis are to 1) study the impact of electrode misplacement on ECG signals and ECG interpretation, 2) determine the most challenging electrode misplacements to detect using machine learning (ML), 3) analyse the performance of ML algorithms that detect electrode misplacement or interchange in terms of sensitivity and specificity, and 4) identify the most commonly used ML technique for detecting electrode misplacement/interchange. This review analysed the current literature regarding electrode misplacement/interchange recognition accuracy using machine learning techniques.
Method: A search of three online databases (IEEE, PubMed and ScienceDirect) identified 228 articles, while 3 further articles were included from additional sources provided by co-authors. According to the eligibility criteria, 14 articles were selected. The selected articles were considered for qualitative analysis and meta-analysis.
Results: The articles showed the effect of lead interchange on ECG morphology and, as a consequence, on patient diagnosis. Statistical analysis of the included articles found that machine learning performance is high in detecting electrode misplacement/interchange, except for left arm/left leg interchange.
Conclusion: This review emphasises the importance of detecting electrode misplacement in ECG diagnosis and its effect on decision making. Machine learning shows promise in detecting lead misplacement/interchange and highlights an opportunity for developing and operationalising deep learning algorithms, such as convolutional neural networks (CNNs), to detect electrode misplacement/interchange.
Background: We have previously reported on the potential of patch-based ECG leads to observe changes typical during ischaemia. In this study we aim to assess the utility of patch-based leads in the detection of these changes.
Method: Body surface potential maps (BSPM) from subjects (n=45) undergoing elective percutaneous coronary angioplasty (PTCA) were used. The short spaced lead (SSL) previously identified as having the greatest ST-segment change between baseline and peak balloon inflation (PBI) was selected as the basis for a patch-based lead system. A feature set of J-point amplitudes for all bipolar leads available within the same 100 mm region was included (n=6). Current 12-lead ECG criteria were applied to 12-lead ECGs for the same subjects to benchmark performance.
Results: The previously identified single SSL achieved sensitivity and specificity of 87% and 71% respectively using a Naive Bayes classifier. Adding other combinations of leads to this did not improve performance significantly. The 12-lead ECG performance was 62/93% (sensitivity/specificity).
Conclusion: This study suggests that short spaced leads can be sensitive to ischaemic ECG changes. However, due to the short distance between leads, they lack the specificity of the 12-lead ECG.
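As a rough illustration of the classification step described in the results above, the sketch below trains a Naive Bayes classifier on stand-in J-point amplitude features and reports cross-validated sensitivity and specificity; the data, feature values and validation split are hypothetical and not those of the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB

# X: (n_subjects, 6) J-point amplitudes of the short-spaced patch leads (stand-in data, uV).
# y: 1 = peak balloon inflation (ischaemia), 0 = baseline.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 40, (45, 6)), rng.normal(90, 60, (45, 6))])
y = np.r_[np.zeros(45), np.ones(45)].astype(int)

# Cross-validated predictions from a Naive Bayes classifier, scored by sens/spec.
pred = cross_val_predict(GaussianNB(), X, y, cv=5)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.0%}, specificity = {tn / (tn + fp):.0%}")
```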