We develop a data-driven method to learn chemical reaction networks from trajectory data. Modeling the reaction system as a continuous-time Markov chain and assuming the system is fully observed, our method learns the propensity functions of the system with predetermined basis functions by maximizing the likelihood function of the trajectory data under ℓ1 sparse regularization. We demonstrate our method with numerical examples using synthetic data and carry out an asymptotic analysis of the proposed learning procedure in the infinite-data limit.
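For concreteness, the sketch below shows a minimal version of this estimation step for a single reaction channel: the propensity is expanded in predetermined basis functions and its coefficients are fitted by maximizing the continuous-time Markov chain likelihood of a fully observed toy trajectory under an ℓ1 penalty. The basis functions, the toy trajectory, and the penalty weight are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: l1-penalized maximum likelihood for one propensity function,
# assuming a fully observed trajectory (states visited and holding times).
import numpy as np
from scipy.optimize import minimize

# Toy fully observed trajectory of a counting species X.
states = np.array([5, 6, 7, 8, 9, 10])                  # copy number in each interval
hold_times = np.array([0.8, 0.5, 0.4, 0.3, 0.25, 0.2])  # time spent in each state
fired = np.arange(len(states) - 1)                       # a reaction fired at the end of every interval but the last

# Predetermined basis functions phi_k(x); the true propensity is assumed to lie
# sparsely in their span (e.g. a(x) = theta_1 * x for a first-order reaction).
def basis(x):
    return np.array([1.0, x, x * (x - 1) / 2.0])         # constant, first- and second-order mass action

Phi = np.array([basis(x) for x in states])                # shape (n_intervals, n_basis)

def objective(theta, lam):
    a = Phi @ theta                                       # propensity value in each interval
    if np.any(a[fired] <= 0):
        return np.inf
    # CTMC log-likelihood: log-propensity at each firing minus the integrated propensity,
    # plus an l1 penalty (theta >= 0, so the penalty is just the sum of coefficients).
    loglik = np.sum(np.log(a[fired])) - np.sum(a * hold_times)
    return -loglik + lam * np.sum(theta)

res = minimize(objective, x0=np.ones(3), args=(0.1,),
               bounds=[(0.0, None)] * 3, method="L-BFGS-B")
print("estimated coefficients:", res.x)                   # ideally sparse: only the linear term survives
```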
Remote monitoring devices, which can be worn or implanted, have enabled more effective healthcare for patients with periodic heart arrhythmia due to their ability to constantly monitor heart activity. However, these devices record considerable amounts of electrocardiogram (ECG) data that need to be interpreted by physicians. Therefore, there is a growing need for reliable methods of automatic ECG interpretation to assist physicians. Here, we use deep convolutional neural networks (CNNs) to classify raw ECG recordings. However, training CNNs for ECG classification often requires a large number of annotated samples, which are expensive to acquire. In this work, we tackle this problem with transfer learning. First, we pretrain CNNs on the largest public data set of continuous raw ECG signals. Next, we finetune the networks on a small data set for classification of atrial fibrillation, the most common heart arrhythmia. We show that pretraining improves the performance of CNNs on the target task by up to 6.57%, effectively reducing the number of annotations required to achieve the same performance as CNNs that are not pretrained. We investigate both supervised and unsupervised pretraining approaches, which we expect to grow in relevance because they do not rely on expensive ECG annotations. The code is available on GitHub at https://github.com/kweimann/ecg-transfer-learning.
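The sketch below illustrates the pretrain-then-finetune recipe described above with a tiny 1D CNN. The architecture, tensor shapes, and random stand-in data are assumptions for illustration only; they are not the authors' model (see the linked GitHub repository for that).

```python
# Minimal sketch of supervised pretraining followed by fine-tuning on a small target set.
import torch
import torch.nn as nn

class ECGConvNet(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(                        # shared convolutional trunk
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)                  # task-specific classifier

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

def train(model, x, y, epochs=3, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) "Pretraining" on a random stand-in for a large source ECG data set.
source_x, source_y = torch.randn(64, 1, 3000), torch.randint(0, 5, (64,))
model = ECGConvNet(n_classes=5)
train(model, source_x, source_y)

# 2) Fine-tuning on a small target set (binary: atrial fibrillation vs. normal rhythm):
#    keep the pretrained trunk, swap in a fresh head for the new label space.
target_x, target_y = torch.randn(16, 1, 3000), torch.randint(0, 2, (16,))
model.head = nn.Linear(32, 2)
train(model, target_x, target_y, lr=1e-4)                     # smaller learning rate for fine-tuning
```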
In the framework of time series analysis with recurrence networks, we introduce a self-adaptive method that determines the elusive recurrence threshold and identifies metastable states in complex real-world time series. As an initial step, we introduce a way to set the embedding parameters used to reconstruct the state space from the time series: we choose the values that maximize the Shannon entropy of the diagonal line length distribution at the first simultaneous minima of recurrence rate and Shannon entropy. To identify metastable states, as well as the transitions between them, we use a soft partitioning algorithm for module finding that is specifically developed for systems exhibiting metastability. We illustrate our method with a complex time series example and show that it remains robust for identifying metastable states even when considerable levels of noise and missing data points are introduced.
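The two recurrence quantities the selection step relies on can be computed as in the sketch below: for a delay embedding of a scalar series, build the recurrence matrix at a given threshold, then obtain its recurrence rate and the Shannon entropy of the diagonal line length distribution. The signal, embedding parameters, and threshold values are illustrative assumptions.

```python
# Minimal sketch of recurrence rate and diagonal-line Shannon entropy over thresholds.
import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def recurrence_matrix(emb, eps):
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def diagonal_entropy(R, l_min=2):
    n = R.shape[0]
    lengths = []
    for k in range(-(n - 1), n):                  # walk every diagonal of the recurrence plot
        if k == 0:
            continue                              # skip the line of identity
        run = 0
        for v in np.diagonal(R, offset=k):
            if v:
                run += 1
            else:
                if run >= l_min:
                    lengths.append(run)
                run = 0
        if run >= l_min:
            lengths.append(run)
    if not lengths:
        return 0.0
    _, counts = np.unique(lengths, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))                 # Shannon entropy of the line length distribution

# Scan a few thresholds for fixed (illustrative) embedding parameters.
x = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.1 * np.random.randn(500)
emb = delay_embed(x, dim=3, tau=5)
for eps in (0.1, 0.2, 0.4):
    R = recurrence_matrix(emb, eps)
    print(eps, R.mean(), diagonal_entropy(R))     # recurrence rate and diagonal-line entropy
```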
SAIMeR: Self-adapted method for the identification of metastable states in real-world time series
(2014)
In the framework of time series analysis with recurrence networks, we introduce SAIMeR, a heuristic self-adapted method that determines the elusive recurrence threshold and identifies metastable states in complex time series. To identify metastable states as well as the transitions between them, we use graph theory concepts and a fuzzy partitioning clustering algorithm. We illustrate SAIMeR by applying it to three real-world time series and show that it is able to identify metastable states in real-world data with noise and missing data points. Finally, we suggest a way to choose the embedding parameters used to construct the state space in which this method operates, based on an analysis of how these parameters affect two quantitative recurrence measures: recurrence rate and entropy.
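The partitioning step can be pictured as follows: the recurrence matrix is interpreted as the adjacency matrix of a recurrence network, whose modules correspond to candidate metastable states. In the sketch below, networkx's greedy modularity routine is used only as a simple crisp stand-in for the fuzzy partitioning algorithm used by SAIMeR, and the two-regime toy series is an illustrative assumption.

```python
# Minimal sketch: recurrence network + module detection as candidate metastable states.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
# Two noisy regimes in a scalar series, so two metastable states are expected.
x = np.concatenate([rng.normal(0.0, 0.1, 150), rng.normal(1.0, 0.1, 150)])

emb = np.column_stack([x[:-2], x[1:-1], x[2:]])               # simple delay embedding (dim=3, tau=1)
d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
A = (d <= 0.2).astype(int)                                     # recurrence matrix at threshold eps = 0.2
np.fill_diagonal(A, 0)                                         # drop self-loops (line of identity)

G = nx.from_numpy_array(A)                                     # recurrence network: one node per state vector
communities = nx.algorithms.community.greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(f"module {i}: {len(c)} time points")                 # each module ~ one candidate metastable state
```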
The aim of network alignment in protein-protein interaction networks is to discover functionally similar regions between the compared organisms. One major compromise in solving a network alignment problem is the trade-off among multiple similarity objectives while applying an alignment strategy. An alignment may lose its biological relevance when it favors certain objectives over others, because the unfavored objectives may in fact carry the relevant signal. One possible way to address this issue is to blend the stronger aspects of various alignment strategies until mature solutions are achieved. This study proposes a parallel approach called PERSONA that allows aligners to share their partial solutions continuously as they progress. All these aligners pursue their particular heuristics as part of a particle swarm that searches for multi-objective solutions of the same alignment problem in a reactive actor environment.
Each actor receives the stronger portion of a solution, as a subgraph, from the leading or other actors, and sends its own stronger subgraphs back after evaluating those partial solutions. Moreover, the individual heuristic of each actor takes randomized parameter values at each cycle of parallel execution so that the problem search space can be explored thoroughly. The results achieved with PERSONA are remarkably optimized and balanced for both topological and node similarity objectives.
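A drastically simplified, sequential sketch of this swarm-of-aligners idea is given below: several "actors" each hold a candidate one-to-one node mapping between two small networks, perturb it with randomized per-cycle objective weights, score it on a topological and a node-similarity objective, and occasionally adopt one of the current leader's edge-conserving pairs. The toy graphs, similarity values, weights, and sharing rule are assumptions for illustration, not PERSONA's actual actor protocol.

```python
# Minimal sketch of multi-objective alignment with partial-solution sharing.
import random
import networkx as nx

G1 = nx.cycle_graph(8)                                        # toy "organism 1" network
G2 = nx.cycle_graph(8)                                        # toy "organism 2" network
# assumed node similarity: identically labelled nodes are highly similar, others weakly
node_sim = [[1.0 if u == v else 0.2 for v in G2] for u in G1]

def score(perm, w_topo):
    # topological objective: fraction of G1 edges conserved under the mapping
    conserved = sum(1 for u, v in G1.edges if G2.has_edge(perm[u], perm[v]))
    topo = conserved / G1.number_of_edges()
    # node-similarity objective: mean similarity of mapped pairs
    sim = sum(node_sim[u][perm[u]] for u in G1) / G1.number_of_nodes()
    return w_topo * topo + (1 - w_topo) * sim

def adopt_pair(perm, u, v):
    # force perm to map u -> v with a swap, so it stays a one-to-one alignment
    j = perm.index(v)
    perm[u], perm[j] = perm[j], perm[u]

particles = [random.sample(range(8), 8) for _ in range(6)]    # candidate alignments ("actors")
for cycle in range(100):
    leader = max(particles, key=lambda p: score(p, 0.5))
    # the leader's "stronger portion": mapped pairs that conserve at least one edge
    strong = [(u, leader[u]) for u in G1
              if any(G2.has_edge(leader[u], leader[n]) for n in G1[u])]
    for p in particles:
        w = random.uniform(0.2, 0.8)                          # randomized per-cycle objective weight
        before = score(p, w)
        if strong and random.random() < 0.5:
            u, v = random.choice(strong)                      # adopt one of the leader's strong pairs
        else:
            u, v = random.randrange(8), random.randrange(8)   # random exploratory move
        trial = list(p)
        adopt_pair(trial, u, v)
        if score(trial, w) >= before:                         # keep improving moves only
            p[:] = trial

best = max(particles, key=lambda p: score(p, 0.5))
print("best alignment:", dict(enumerate(best)), "score:", round(score(best, 0.5), 3))
```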
BACKGROUND:
Influenza-like illness (ILI) is a common reason for paediatric consultations. Viral causes predominate, but antibiotics are used frequently. With regard to influenza, pneumococcal coinfections are considered major contributors to morbidity/mortality.
METHODS:
In the context of a perennial quality management (QM) programme at the Charité Departments of Paediatrics and Microbiology in collaboration with the Robert Koch Institute, children aged 0-18 years presenting with signs and symptoms of ILI were followed from the time of initial presentation until hospital discharge (Charité Influenza-Like Disease = ChILD Cohort). An independent QM team performed highly standardized clinical assessments using a disease severity score based on World Health Organization criteria for uncomplicated and complicated/progressive disease. Nasopharyngeal and pharyngeal samples were collected for viral reverse transcription polymerase chain reaction and for bacterial culture/sensitivity and MALDI-TOF analyses. The term 'detection' was used to denote any evidence of viral or bacterial pathogens in the (naso)pharyngeal cavity. With the ChILD Cohort data collected, a standard operating procedure (SOP) was created as a model system to reduce the inappropriate use of antibiotics in children with ILI. Monte Carlo simulations were performed to assess cost-effectiveness.
RESULTS:
Among 2,569 ChILD Cohort patients enrolled from 12/2010 to 04/2013 (55% male, mean age 3.2 years, range 0-18, 19% >5 years), 411 patients showed laboratory-confirmed influenza, with bacterial co-detection in 35%. Influenza and pneumococcus were detected simultaneously in 12/2,569 patients, with disease severity clearly below average. Pneumococcal vaccination rates were close to 90%. Nonetheless, every fifth patient was already on antibiotics upon presentation; new antibiotic prescriptions were issued in an additional 20%. Simulation of the model SOP on the same dataset revealed that the proposed decision model could have reduced the inappropriate use of antibiotics significantly (P<0.01), with an incremental cost-effectiveness ratio of -99.55 €.
CONCLUSIONS:
Physicians should be made aware that in times of pneumococcal vaccination the prevalence and severity of influenza infections complicated by pneumococci may decline. Microbiological testing in combination with standardized disease severity assessments and review of vaccination records could be cost-effective while promoting stringent use of antibiotics and a personalized approach to the management of children with ILI.
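As a rough illustration of the Monte Carlo cost-effectiveness step mentioned in the METHODS above, the sketch below samples per-patient antibiotic prescriptions and costs under usual care and under a decision-model SOP, then computes the incremental cost-effectiveness ratio (ICER). All distributions, prices, and prescription rates are made-up assumptions, not the ChILD Cohort estimates.

```python
# Minimal sketch of a Monte Carlo ICER comparison (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(42)
n_sim, n_patients = 10_000, 2_569

def simulate(p_antibiotic, test_cost, antibiotic_cost):
    # number of antibiotic prescriptions in the cohort, drawn per simulation run
    prescribed = rng.binomial(n_patients, p_antibiotic, size=n_sim)
    cost = prescribed * antibiotic_cost + n_patients * test_cost
    return cost, prescribed

cost_usual, rx_usual = simulate(p_antibiotic=0.40, test_cost=0.0, antibiotic_cost=60.0)
cost_sop, rx_sop = simulate(p_antibiotic=0.25, test_cost=5.0, antibiotic_cost=60.0)

delta_cost = cost_sop - cost_usual
delta_effect = rx_usual - rx_sop                   # inappropriate prescriptions avoided
icer = delta_cost.mean() / delta_effect.mean()     # negative values indicate net savings
print(f"ICER: {icer:.2f} per inappropriate prescription avoided")
```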
Collaborative comparisons and combinations of epidemic models are used as policy-relevant evidence during epidemic outbreaks. In the process of collecting multiple model projections, such collaborations may gain or lose relevant information. Typically, modellers contribute a probabilistic summary at each time-step; we compared this to directly collecting simulated trajectories. We aimed to explore information on key epidemic quantities, ensemble uncertainty, and performance against data, investigating the potential to continuously gain information from a single cross-sectional collection of model results.
METHODS:
We compared July 2022 projections from the European COVID-19 Scenario Modelling Hub. Five modelling teams projected incidence in Belgium, the Netherlands, and Spain. We compared projections by incidence, peaks, and cumulative totals. We created a probabilistic ensemble drawn from all trajectories and compared it to ensembles built from a median across each model's quantiles or from a linear opinion pool. We measured the predictive accuracy of individual trajectories against observations and used this in a weighted ensemble. We repeated this sequentially against increasing weeks of observed data and evaluated the resulting ensembles to reflect performance with varying amounts of observed data.
RESULTS:
By collecting modelled trajectories, we showed policy-relevant epidemic characteristics. Trajectories contained a right-skewed distribution that was well represented by an ensemble of trajectories or a linear opinion pool, but not by the models' quantile intervals. Ensembles weighted by performance typically retained the range of plausible incidence over time, and in some cases narrowed it by excluding some epidemic shapes.
CONCLUSIONS:
We observed several information gains from collecting modelled trajectories rather than quantile distributions, including the potential for continuously updated information from a single model collection. The value of information gains and losses may vary with each collaborative effort's aims, depending on the needs of projection users. Understanding the differing information potential of methods to collect model projections can support the accuracy, sustainability, and communication of collaborative infectious disease modelling efforts.
DATA AVAILABILITY:
All code and data are available on GitHub: https://github.com/covid19-forecast-hub-europe/aggregation-info-loss
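The sketch below contrasts the collection formats discussed above: from a set of simulated trajectories one can form a trajectory ensemble (pool all trajectories, then take quantiles), a quantile-median ensemble (summarise each model first, then take the median across models), or a linear opinion pool (a weighted mixture of the models' distributions). The synthetic "models" and all parameter values are illustrative assumptions, not the Scenario Hub submissions.

```python
# Minimal sketch: trajectory ensemble vs. quantile-median ensemble vs. linear opinion pool.
import numpy as np

rng = np.random.default_rng(1)
n_weeks, n_traj = 12, 100
# three synthetic models, each contributing 100 weekly incidence trajectories
models = [rng.lognormal(mean=m, sigma=0.4, size=(n_traj, n_weeks)) for m in (6.0, 6.5, 7.0)]
probs = [0.05, 0.5, 0.95]

# (a) trajectory ensemble: pool every trajectory, then take quantiles per week
pooled = np.concatenate(models, axis=0)
traj_ensemble = np.quantile(pooled, probs, axis=0)

# (b) quantile-median ensemble: summarise each model first, then take the median across models
per_model_q = np.stack([np.quantile(m, probs, axis=0) for m in models])
median_ensemble = np.median(per_model_q, axis=0)

# (c) linear opinion pool: weighted mixture of the models' samples (here via row repetition;
#     with equal weights this coincides with pooling, with unequal weights it does not)
weights = np.array([0.5, 0.3, 0.2])
lop_samples = np.concatenate([np.repeat(m, int(w * 100), axis=0) for m, w in zip(models, weights)])
lop_ensemble = np.quantile(lop_samples, probs, axis=0)

print("week-1 90% interval, trajectory ensemble:", traj_ensemble[[0, 2], 0])
print("week-1 90% interval, quantile-median    :", median_ensemble[[0, 2], 0])
print("week-1 90% interval, linear opinion pool:", lop_ensemble[[0, 2], 0])
```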
Feature selection techniques are often applied to identify cancer prognosis biomarkers. However, many feature selection methods are prone to over-fitting or poor biological interpretability when applied to high-dimensional biological data. Network-based feature selection and data integration approaches have been proposed to identify more robust biomarkers. We conducted experiments to investigate the advantages of the two approaches using the epithelial-mesenchymal transition regulatory network, which has been shown to be highly relevant to cancer prognosis. We obtained data from The Cancer Genome Atlas. Prognosis prediction was made using a Support Vector Machine. Under our experimental settings, the results showed that network-based features gave significantly more accurate predictions than individual molecular features, and that features selected from integrated data (RNA-Seq and microRNA data) gave significantly more accurate predictions than features selected from single-source data (RNA-Seq data). Our study indicates that biological network-based feature transformation and data integration are two useful approaches for identifying robust cancer biomarkers.
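The two ideas compared above can be sketched as follows: gene-level expression is transformed into network-based features by aggregating the member genes of each subnetwork, and two data sources are integrated by concatenating their feature blocks before fitting an SVM. The random data, the gene-to-subnetwork map, and all parameter choices below are illustrative assumptions, not the TCGA/EMT-network setup of the study.

```python
# Minimal sketch: network-based feature transformation and data integration with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_genes, n_mirna = 120, 200, 50
rna = rng.normal(size=(n_samples, n_genes))           # stand-in for RNA-Seq expression
mirna = rng.normal(size=(n_samples, n_mirna))         # stand-in for microRNA expression
y = rng.integers(0, 2, size=n_samples)                # stand-in prognosis labels (good/poor)

# assumed regulatory-network modules: each subnetwork is a set of gene indices
subnetworks = [rng.choice(n_genes, size=10, replace=False) for _ in range(20)]

def network_features(X, modules):
    # one feature per subnetwork: mean expression of its member genes
    return np.column_stack([X[:, idx].mean(axis=1) for idx in modules])

gene_level = rna                                       # individual molecular features
net_level = network_features(rna, subnetworks)         # network-based features
integrated = np.hstack([net_level, mirna])             # data integration: RNA-Seq + miRNA

for name, X in [("gene-level", gene_level), ("network-based", net_level),
                ("network + miRNA", integrated)]:
    acc = cross_val_score(SVC(kernel="linear", C=1.0), X, y, cv=5).mean()
    print(f"{name:18s} 5-fold accuracy: {acc:.3f}")
```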