Background: Despite recent advances in cellular cryo-electron tomography (CET), developing automated tools for macromolecule identification at submolecular resolution remains challenging due to the lack of annotated data and high structural complexity. Identifying macromolecules of different types and sizes is a tedious and time-consuming task, and to date the deep learning methods developed for this problem have been limited to conventional Convolutional Neural Networks (CNNs). In this paper, we employ a capsule-based architecture to automate the task of macromolecule identification, which we refer to as 3D-UCaps. The architecture is composed of three components: a feature extractor, a capsule encoder, and a CNN decoder. The feature extractor converts the voxel intensities of input sub-tomograms into activities of local features. The encoder is a 3D Capsule Network (CapsNet) that takes these local features and generates a low-dimensional representation of the input. A 3D CNN decoder then reconstructs the sub-tomograms from this representation by upsampling.
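The three-component layout described above can be sketched in a few dozen lines. The following is a minimal, illustrative PyTorch skeleton, not the authors' published configuration: the layer sizes, the single downsampling step, and the simplified primary-capsule layer (squash without dynamic routing) are assumptions made for demonstration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrimaryCaps3D(nn.Module):
    """Groups conv features into capsule vectors and applies the squash nonlinearity."""
    def __init__(self, in_ch, n_caps=8, caps_dim=16):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, n_caps * caps_dim, kernel_size=3, padding=1)
        self.n_caps, self.caps_dim = n_caps, caps_dim

    def forward(self, x):
        u = self.conv(x)                                   # (B, n_caps*caps_dim, D, H, W)
        b, _, d, h, w = u.shape
        u = u.view(b, self.n_caps, self.caps_dim, d, h, w)
        norm2 = (u ** 2).sum(dim=2, keepdim=True)
        # squash: keep the vector's orientation, bound its length to [0, 1)
        return u * norm2 / ((1 + norm2) * norm2.sqrt().clamp(min=1e-8))

class UCaps3DSketch(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # 1) feature extractor: voxel intensities -> activities of local features
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # 2) capsule encoder: local features -> low-dimensional capsule representation
        self.caps = PrimaryCaps3D(32, n_caps=8, caps_dim=16)
        # 3) CNN decoder: reconstruct the sub-tomogram resolution by upsampling
        self.decoder = nn.Sequential(
            nn.Conv3d(8 * 16, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv3d(32, n_classes, 1),
        )

    def forward(self, x):                   # x: (B, 1, D, H, W) sub-tomogram
        f = F.max_pool3d(self.features(x), 2)
        c = self.caps(f)                    # (B, n_caps, caps_dim, d, h, w)
        b, n, k, d, h, w = c.shape
        return self.decoder(c.view(b, n * k, d, h, w))  # per-voxel class scores
```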
Results: We performed binary and multi-class localization and identification tasks on synthetic and experimental data. We observed that the 3D-UNet and the 3D-UCaps achieved F1-scores mostly above 60% and 70%, respectively, on the test data. For both network architectures, the F1-score degraded by at least 40% when identifying a very small particle (PDB entry 3GL1) compared to a large particle (PDB entry 4D8Q). In the multi-class identification task on experimental data, 3D-UCaps achieved an F1-score of 91% on the test data, in contrast to 64% for the 3D-UNet. The better F1-score of 3D-UCaps compared to 3D-UNet stems from a higher precision score, which we attribute to the capsule network employed in the encoder. To study the effect of the CapsNet-based encoder architecture further, we performed an ablation study and observed that the F1-score improves as network depth increases, in contrast to previously reported results for the 3D-UNet. To ensure reproducibility, the source code, trained models, data, and visualization results are made publicly available.
Conclusion: Quantitative and qualitative results show that 3D-UCaps successfully performs various downstream tasks, including the identification and localization of macromolecules, and can at least compete with CNN architectures on these tasks. Given that the capsule layers extract both the existence probability and the orientation of the molecules, this architecture has the potential to lead to representations of the data that are more interpretable than those of the 3D-UNet.
Understanding the Romanization Spreading on Historical Interregional Networks in Northern Tunisia
(2022)
Spreading processes are important drivers of change in social systems. To understand the mechanisms of spreading, it is fundamental to have information about the underlying contact network and the dynamical parameters of the process. However, in many real-world examples, this information is not known and needs to be inferred from data. State-of-the-art spreading inference methods have mostly been applied to modern social systems, as they rely on the availability of very detailed data. In this paper, we study the inference challenges for historical spreading processes, for which only very fragmented information is available. To cope with this problem, we extend existing network models by formulating a model on a mesoscale with a temporal spreading rate. Furthermore, we formulate the respective parameter inference problem for the extended model. We apply our approach to the romanization process of Northern Tunisia, a scarce dataset, and study the properties of the inferred time-evolving interregional networks. As a result, we show that (1) optimal solutions consist of very different network structures and spreading rate functions, and that (2) these diverse solutions produce very similar spreading patterns. Finally, we discuss how the inferred dominant interregional connections relate to available archaeological traces. Historical networks resulting from our approach can help in understanding complex processes of cultural change in ancient times.
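To make the forward model concrete, here is a small illustrative sketch of an SI-type spreading process on an interregional network with a time-varying spreading rate. It is not the paper's implementation: the four-region network, the exponentially decaying rate function, and the Euler time stepping are assumptions chosen for demonstration.

```python
import numpy as np

def spreading_rate(t, base=0.4, decay=0.01):
    """Hypothetical temporal spreading rate: influence fades over time."""
    return base * np.exp(-decay * t)

def simulate(adjacency, x0, t_max=200.0, dt=0.5):
    """Mean-field SI dynamics: dx_i/dt = rate(t) * (1 - x_i) * sum_j A_ij x_j."""
    x, history = x0.copy(), [x0.copy()]
    for step in range(int(t_max / dt)):
        t = step * dt
        x = x + dt * spreading_rate(t) * (1.0 - x) * (adjacency @ x)
        x = np.clip(x, 0.0, 1.0)
        history.append(x.copy())
    return np.array(history)

# Four regions; region 0 is the initial source of the spreading process.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
trajectory = simulate(A, x0=np.array([1.0, 0.0, 0.0, 0.0]))
print(trajectory[-1])  # adoption level per region at t_max
```

The inference problem described in the abstract runs in the opposite direction: given fragmented observations of such trajectories, recover the adjacency structure and the rate function.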
Muscle fibre cross-sectional area (CSA) is an important biomedical measure used to determine the structural composition of skeletal muscle, and it is relevant for tackling research questions in many different fields. To date, the CSA is often determined by time-consuming and tedious manual delineation of muscle fibres. Few methods are able to automatically detect muscle fibres in muscle fibre cross sections to quantify CSA, due to the challenges posed by variations in brightness and noise in the staining images. In this paper, we introduce SLCV, a robust semi-automatic pipeline for muscle fibre detection, which combines supervised learning (SL) with computer vision (CV). SLCV is adaptable to different staining methods and is quickly and intuitively tunable by the user. We are the first to perform an error analysis with respect to cell count and area, based on which we compare SLCV to the best purely CV-based pipeline in order to identify the contribution of the SL and CV steps to muscle fibre detection. Our results, obtained on 27 fluorescence-stained cross-sectional images of varying staining quality, suggest that combining SL and CV performs significantly better than both purely SL-based and purely CV-based methods with regard to both the cell separation error and the area reconstruction error. Furthermore, applying SLCV to our test set images yielded fibre detection results of very high quality, with average sensitivity values of 0.93 or higher on different cluster sizes and an average Dice Similarity Coefficient (DSC) of 0.9778.
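For readers unfamiliar with the CV side of such a pipeline, the following is a minimal sketch of one typical step: separating touching fibre candidates in a binary mask with a distance-transform watershed and measuring per-fibre CSA. It is illustrative only; SLCV's actual stages are described in the paper, and the function name and parameters here are hypothetical.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_and_measure(mask, pixel_area_um2=1.0, min_distance=10):
    """Split touching fibres in a boolean mask and return their areas (CSA)."""
    distance = ndi.distance_transform_edt(mask)
    # one marker per local maximum of the distance map, i.e. per fibre centre
    peaks = peak_local_max(distance, min_distance=min_distance, labels=mask)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=mask)
    areas = np.bincount(labels.ravel())[1:]       # pixel count per fibre label
    return labels, areas * pixel_area_um2         # CSA in square micrometres
```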
Deep convolutional neural networks (DCNNs) are routinely used for image segmentation of biomedical data sets to obtain quantitative measurements of cellular structures like tissues. These cellular structures often contain gaps in their boundaries, leading to poor segmentation performance when using DCNNs like the U-Net. The gaps can usually be corrected by post-hoc computer vision (CV) steps, which are specific to the data set and require a disproportionate amount of work. As DCNNs are universal function approximators, it is conceivable that such corrections could be made obsolete by selecting an appropriate architecture for the DCNN. In this article, we present a novel theoretical framework for the gap-filling problem in DCNNs that allows the selection of an architecture to circumvent the CV steps. Combining information-theoretic measures of the data set with a fundamental property of DCNNs, the size of their receptive field, allows us to formulate statements about the solvability of the gap-filling problem independent of the specifics of model training. In particular, we provide a mathematical proof showing that the maximum proficiency of filling a gap by a DCNN is achieved if its receptive field is larger than the gap length. We then demonstrate the consequences of this result using numerical experiments on a synthetic and a real data set, and compare the gap-filling ability of the ubiquitous U-Net architecture at variable depths. Our code is available at https://github.com/ai-biology/dcnn-gap-filling.
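The receptive field the result hinges on can be computed in closed form with the standard recurrence r_out = r_in + (kernel - 1) * jump, jump_out = jump_in * stride. The short sketch below applies it to a hypothetical U-Net-style encoder (the layer list is an assumption, not the architecture from the article), giving the maximum gap length that could in principle be closed.

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, applied in order."""
    r, jump = 1, 1
    for kernel, stride in layers:
        r += (kernel - 1) * jump   # each layer widens the field by (k-1) strides
        jump *= stride             # striding dilates all subsequent contributions
    return r

# Two encoder blocks of (conv3x3, conv3x3, maxpool2), then two more convs.
encoder = [(3, 1), (3, 1), (2, 2), (3, 1), (3, 1), (2, 2), (3, 1), (3, 1)]
print(receptive_field(encoder))  # 32 -> gaps shorter than 32 pixels are fillable
```

Adding depth (more blocks) grows the receptive field geometrically through the stride factor, which is why deeper variants can close longer boundary gaps.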
The reaction counts chemical master equation (CME) is a high-dimensional variant of the classical population counts CME. In the reaction counts setting, we count the reactions that have fired over time rather than monitoring the population state over time. Since a reaction either fires or does not, the transitions of the reaction counts CME are only forward stepping. Typically, there are more reactions in a system than species, so the reaction counts CME is higher in dimension but simpler in dynamics. In this work, we revisit the reaction counts CME framework and its key theoretical results. We then extend the theory by exploiting the forward-stepping feature of reaction counts, decomposing the state space into independent continuous-time Markov chains (CTMCs). We extend the reaction counts CME theory to derive analytical forms and estimates for the CTMC decomposition of the CME. This new theory gives insights into solving hitting-time, rare-event, and a priori domain construction problems.
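The change of viewpoint is easiest to see in a stochastic simulation. The toy sketch below (an illustrative assumption, not the paper's code) simulates a birth-death model (0 -> S at rate k1, S -> 0 at rate k2 * #S) while tracking the cumulative reaction-count vector (n1, n2), which only ever steps forward; the population is recovered as x = x0 + n1 - n2.

```python
import random

def ssa_reaction_counts(x0=0, k1=1.0, k2=0.1, t_max=50.0, seed=0):
    rng = random.Random(seed)
    t, counts = 0.0, [0, 0]             # cumulative firings of each reaction
    while True:
        x = x0 + counts[0] - counts[1]  # population state from reaction counts
        props = [k1, k2 * x]            # propensities of the two reactions
        total = sum(props)
        if total == 0.0:
            break
        t += rng.expovariate(total)     # waiting time to the next firing
        if t > t_max:
            break
        # pick a reaction proportionally to its propensity and count the firing
        r = rng.random() * total
        counts[0 if r < props[0] else 1] += 1
    return counts

n1, n2 = ssa_reaction_counts()
print(n1, n2, "-> final population:", 0 + n1 - n2)
```

Because each coordinate of the count vector is non-decreasing, questions such as "when does reaction 1 fire for the k-th time" become hitting times of a monotone chain, which is the structure the CTMC decomposition exploits.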