TY - JOUR
A1 - Raharinirina, Alexia N.
A1 - Peppert, Felix
A1 - von Kleist, Max
A1 - Schütte, Christof
A1 - Sunkara, Vikram
T1 - Inferring gene regulatory networks from single-cell RNA-seq temporal snapshot data requires higher-order moments
JF - Patterns
N2 - Single-cell RNA sequencing (scRNA-seq) has become ubiquitous in biology. Recently, there has been a push for using scRNA-seq snapshot data to infer the underlying gene regulatory networks (GRNs) steering cellular function. To date, this aspiration remains unrealized due to technical and computational challenges. In this work, we focus on the latter, which is under-represented in the literature. We took a systematic approach by subdividing GRN inference into three fundamental components: data pre-processing, feature extraction, and inference. We observed that the regulatory signature is captured in the statistical moments of scRNA-seq data and that computationally intensive minimization solvers are required to extract it. Furthermore, current data pre-processing might not conserve these statistical moments. Although our moment-based approach is a didactic tool for understanding the different compartments of GRN inference, this line of thinking, namely finding computationally feasible multi-dimensional statistics of data, is imperative for designing GRN inference methods.
Y1 - 2021
U6 - https://doi.org/10.1016/j.patter.2021.100332
VL - 2
IS - 9
ER -

TY - JOUR
A1 - Peppert, Felix
A1 - von Kleist, Max
A1 - Schütte, Christof
A1 - Sunkara, Vikram
T1 - On the Sufficient Condition for Solving the Gap-Filling Problem Using Deep Convolutional Neural Networks
JF - IEEE Transactions on Neural Networks and Learning Systems
N2 - Deep convolutional neural networks (DCNNs) are routinely used for image segmentation of biomedical data sets to obtain quantitative measurements of cellular structures like tissues. These cellular structures often contain gaps in their boundaries, leading to poor segmentation performance when using DCNNs like the U-Net. The gaps can usually be corrected by post-hoc computer vision (CV) steps, which are specific to the data set and require a disproportionate amount of work. As DCNNs are universal function approximators, it is conceivable that these corrections could be made obsolete by selecting an appropriate architecture for the DCNN. In this article, we present a novel theoretical framework for the gap-filling problem in DCNNs that allows selecting an architecture that circumvents the CV steps. Combining information-theoretic measures of the data set with a fundamental property of DCNNs, the size of their receptive field, allows us to formulate statements about the solvability of the gap-filling problem independently of the specifics of model training. In particular, we provide a mathematical proof showing that the maximum proficiency of filling a gap with a DCNN is achieved if its receptive field is larger than the gap length. We then demonstrate the consequences of this result using numerical experiments on a synthetic and a real data set and compare the gap-filling ability of the ubiquitous U-Net architecture at variable depths. Our code is available at https://github.com/ai-biology/dcnn-gap-filling.
Y1 - 2022
U6 - https://doi.org/10.1109/TNNLS.2021.3072746
VL - 33
IS - 11
SP - 6194
EP - 6205
ER -