Osteoarthritis (OA) is the most common cause of disability in ageing societies, with no effective therapies available to date. Two preclinical models, MCL-MM and DMM, are widely used to validate novel OA interventions. Our aim is to discern the disease dynamics in these models and provide a clear timeline along which the various pathological changes occur. OA was surgically induced in mice by destabilisation of the medial meniscus. Analysis of OA progression revealed that the intensity and duration of chondrocyte loss and cartilage lesion formation differed significantly between MCL-MM and DMM. Firstly, apoptosis was seen prior to week two and was narrowly restricted to the weight-bearing area. Four weeks post injury, the magnitude of apoptosis led to a 40–60% reduction of chondrocytes in the non-calcified zone. Secondly, the progression of cell loss preceded the structural changes of the cartilage spatio-temporally. Lastly, while proteoglycan loss was similar in both models, collagen type II degradation occurred more prominently in MCL-MM. The dynamics of chondrocyte loss and lesion formation in preclinical models have important implications for validating new therapeutic strategies. Our work could be helpful in assessing the feasibility and expected response of the DMM and MCL-MM models to chondrocyte-mediated therapies.
Muscle fibre cross-sectional area (CSA) is an important biomedical measure used to determine the structural composition of skeletal muscle, and it is relevant to research questions in many different fields. To date, the CSA is often determined by time-consuming and tedious manual delineation of muscle fibres. Few methods are able to automatically detect muscle fibres in cross sections to quantify the CSA, owing to the challenges posed by variations in brightness and noise in the staining images. In this paper, we introduce SLCV, a robust semi-automatic pipeline for muscle fibre detection that combines supervised learning (SL) with computer vision (CV). SLCV is adaptable to different staining methods and is quickly and intuitively tunable by the user. We are the first to perform an error analysis with respect to cell count and area, based on which we compare SLCV to the best purely CV-based pipeline in order to identify the contribution of the SL and CV steps to muscle fibre detection. Our results, obtained on 27 fluorescence-stained cross-sectional images of varying staining quality, suggest that combining SL and CV performs significantly better than both purely SL-based and purely CV-based methods with regard to both the cell separation error and the area reconstruction error. Furthermore, applying SLCV to our test set images yielded fibre detection results of very high quality, with average sensitivity values of 0.93 or higher across different cluster sizes and an average Dice Similarity Coefficient (DSC) of 0.9778.
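The sensitivity and Dice Similarity Coefficient (DSC) reported above are standard overlap metrics for evaluating segmentations. As a point of reference only, a minimal sketch of both metrics on binary masks (the function names are ours, not part of SLCV):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * (pred & truth).sum() / denom

def sensitivity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sensitivity (recall) = TP / (TP + FN): the fraction of true
    fibre pixels recovered by the detection."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return (pred & truth).sum() / truth.sum()
```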
Due to the increase in accessibility and robustness of sequencing technology, single-cell RNA-seq (scRNA-seq) data has become abundant. The technology has made significant contributions to discovering novel phenotypes and heterogeneities of cells. Recently, there has been a push for using single or multiple scRNA-seq snapshots to infer the underlying gene regulatory networks (GRNs) steering the cells' biological functions. To date, this aspiration remains unrealised.
In this paper, we took a bottom-up approach and curated a stochastic two-gene interaction model capturing the dynamics of a complete system of genes, mRNAs, and proteins. In the model, the regulation was placed upstream of the mRNA, at the gene level. We then inferred the underlying regulatory interactions from only the observation of the mRNA population through time.
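The abstract does not reproduce the model's reaction scheme here. Purely as an illustration of the kind of system described (gene states regulated upstream of the mRNA, plus mRNAs and proteins), a Gillespie-style simulation of one hypothetical repressive interaction might look as follows; the reaction set and all rate constants are our own assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(t_end=100.0):
    """Minimal two-gene stochastic sketch: protein A binds gene B's
    promoter and switches transcription of B off (regulation at the
    gene level). Only the mRNA counts are recorded as 'observed'."""
    gene_b_on, m_a, p_a, m_b, p_b = 1, 0, 0, 0, 0
    t, times, m_counts = 0.0, [0.0], [(0, 0)]
    # Assumed rate constants: transcription, translation, decays, binding.
    k_tx, k_tl, d_m, d_p, k_on, k_off = 2.0, 1.0, 0.2, 0.1, 0.05, 0.5
    while t < t_end:
        rates = np.array([
            k_tx,                     # transcription of A (constitutive)
            k_tl * m_a,               # translation of A
            d_m * m_a,                # mRNA A decay
            d_p * p_a,                # protein A decay
            k_tx * gene_b_on,         # transcription of B (only if gene on)
            k_tl * m_b,               # translation of B
            d_m * m_b,                # mRNA B decay
            d_p * p_b,                # protein B decay
            k_on * p_a * gene_b_on,   # protein A represses gene B
            k_off * (1 - gene_b_on),  # repressor unbinds
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(len(rates), p=rates / total)
        if   r == 0: m_a += 1
        elif r == 1: p_a += 1
        elif r == 2: m_a -= 1
        elif r == 3: p_a -= 1
        elif r == 4: m_b += 1
        elif r == 5: p_b += 1
        elif r == 6: m_b -= 1
        elif r == 7: p_b -= 1
        elif r == 8: gene_b_on = 0
        else:        gene_b_on = 1
        times.append(t)
        m_counts.append((m_a, m_b))
    return np.array(times), np.array(m_counts)
```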
We could detect signatures of the regulation by combining information from the mean, covariance, and skewness of the mRNA counts through time. We also saw that reordering the observations using pseudo-time did not conserve the covariance and skewness of the true time course. The underlying GRN could be captured consistently when we fitted the moments up to degree three; however, this required a computationally expensive non-linear least squares minimisation solver.
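As a sketch of the summary statistics involved (our own illustration, not the authors' code), the per-snapshot moments up to degree three of a two-gene mRNA count matrix can be computed as:

```python
import numpy as np
from scipy.stats import skew

def snapshot_moments(counts: np.ndarray) -> dict:
    """counts: (n_cells, 2) mRNA counts for genes A and B at one
    time point. Returns the moments up to degree three used above
    as regulatory signatures."""
    return {
        "mean": counts.mean(axis=0),           # first moment per gene
        "cov":  np.cov(counts, rowvar=False),  # 2x2 covariance matrix
        "skew": skew(counts, axis=0),          # third standardised moment
    }
```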
There are still major numerical challenges to overcome for the inference of GRNs from scRNA-seq data. These challenges entail finding informative summary statistics of the data that capture the critical regulatory information. Furthermore, the statistics have to evolve linearly or piecewise linearly through time to achieve computational feasibility and scalability.
Single-cell RNA sequencing (scRNA-seq) has become ubiquitous in biology. Recently, there has been a push for using scRNA-seq snapshot data to infer the underlying gene regulatory networks (GRNs) steering cellular function. To date, this aspiration remains unrealized due to technical and computational challenges. In this work we focus on the latter, which is under-represented in the literature. We took a systematic approach by subdividing GRN inference into three fundamental components: data pre-processing, feature extraction, and inference. We observed that the regulatory signature is captured in the statistical moments of scRNA-seq data and requires computationally intensive minimization solvers to extract. Furthermore, current data pre-processing might not conserve these statistical moments. Although our moment-based approach is a didactic tool for understanding the different compartments of GRN inference, this line of thinking, namely finding computationally feasible multi-dimensional statistics of the data, is imperative for designing GRN inference methods.
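Both this abstract and the previous one refer to extracting the regulatory signature with computationally intensive minimization solvers. A minimal sketch of such a moment-fitting loop, in which model_moments and its two-parameter form are placeholders of our own invention rather than the authors' method, could use scipy:

```python
import numpy as np
from scipy.optimize import least_squares

def model_moments(params, t_points):
    """Placeholder: map GRN parameters to predicted moments at each
    time point. A real implementation would solve the model's moment
    equations; the closed forms below are illustrative only."""
    k_tx, k_rep = params
    mean = k_tx / (1.0 + k_rep * t_points)  # assumed decaying mean
    var = mean                              # Poisson-like assumption
    return np.concatenate([mean, var])

def residuals(params, t_points, observed):
    # Mismatch between model-predicted and empirical moments.
    return model_moments(params, t_points) - observed

t_points = np.linspace(0.0, 10.0, 5)
observed = np.concatenate([np.array([5.0, 4.0, 3.5, 3.0, 2.8]),   # means
                           np.array([5.0, 4.2, 3.4, 3.1, 2.9])])  # variances
fit = least_squares(residuals, x0=[5.0, 0.1], args=(t_points, observed))
print(fit.x)  # inferred (k_tx, k_rep)
```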
Deep convolutional neural networks (DCNNs) are routinely used for image segmentation of biomedical data sets to obtain quantitative measurements of cellular structures such as tissues. These cellular structures often contain gaps in their boundaries, leading to poor segmentation performance when using DCNNs like the U-Net. The gaps can usually be corrected by post-hoc computer vision (CV) steps, which are specific to the data set and require a disproportionate amount of work. As DCNNs are universal function approximators, it is conceivable that such corrections could be made obsolete by selecting an appropriate DCNN architecture. In this article, we present a novel theoretical framework for the gap-filling problem in DCNNs that allows the architecture to be selected so as to circumvent the CV steps. Combining information-theoretic measures of the data set with a fundamental property of DCNNs, the size of their receptive field, allows us to formulate statements about the solvability of the gap-filling problem independent of the specifics of model training. In particular, we obtain a mathematical proof that the maximum proficiency of filling a gap with a DCNN is achieved if its receptive field is larger than the gap length. We then demonstrate the consequences of this result in numerical experiments on synthetic and real data sets, comparing the gap-filling ability of U-Net architectures of varying depth. Our code is available at https://github.com/ai-biology/dcnn-gap-filling.
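The receptive field referred to above grows with network depth according to a standard recurrence, r_l = r_{l-1} + (k_l - 1) * j_{l-1} with jump j_l = j_{l-1} * s_l for kernel size k_l and stride s_l. A small sketch of that computation; the layer list is a made-up encoder, not the paper's U-Net configuration:

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) tuples, input to output.
    Returns the receptive field of the final layer via the recurrence
    r_l = r_{l-1} + (k_l - 1) * jump_{l-1}, jump_l = jump_{l-1} * s_l."""
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# Illustrative encoder: two 3x3 convs then a 2x2 max-pool, twice over.
encoder = [(3, 1), (3, 1), (2, 2), (3, 1), (3, 1), (2, 2)]
print(receptive_field(encoder))  # 16: the gap length such a net could bridge
```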