Analytical Chemistry
Stating uncertainties for the certified values of reference materials is of crucial importance. Correctly incorporating these uncertainties into the calculation of procedural measurement uncertainties is essential for ensuring the accuracy and reliability of measurements. This talk presents the various factors that influence the uncertainty of certified values according to ISO Guide 35. In particular, characterization, homogeneity, and stability are considered as the decisive factors in determining the uncertainty of a reference material. Finally, the concept is illustrated with a concrete example to demonstrate its practical application and its impact on the calculation of procedural measurement uncertainties.
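The combination of the contributions named above follows the standard ISO Guide 35 model, in which characterization, homogeneity, and stability uncertainties add in quadrature. A minimal sketch with illustrative (hypothetical) values:

```python
import math

# Illustrative standard uncertainty contributions (hypothetical values),
# following the common ISO Guide 35 model for a certified value:
#   u_CRM^2 = u_char^2 + u_bb^2 + u_lts^2
u_char = 0.012  # characterization (e.g., from the interlaboratory study)
u_bb   = 0.005  # between-bottle homogeneity
u_lts  = 0.008  # long-term stability

u_crm = math.sqrt(u_char**2 + u_bb**2 + u_lts**2)
U_crm = 2 * u_crm  # expanded uncertainty with coverage factor k = 2

print(f"u_CRM = {u_crm:.4f}, U (k=2) = {U_crm:.4f}")
```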
The BAM Data Store
(2023)
As a partner in several NFDI consortia, the Bundesanstalt für Materialforschung und -prüfung (BAM, German federal institute for materials science and testing) contributes to research data standardization efforts in various domains of materials science and engineering (MSE). To implement a central research data management (RDM) infrastructure that meets the requirements of MSE groups at BAM, we initiated the Data Store pilot project in 2021. The resulting infrastructure should enable researchers to digitally document research processes and store related data in a standardized and interoperable manner. As a software solution, we chose openBIS, an open-source framework that is increasingly being used for RDM in MSE communities.
The pilot project was conducted for one year with five research groups across different organizational units and MSE disciplines. The main results are presented for the use case “nanoPlattform”. The group registered experimental steps and linked associated instruments and chemicals in the Data Store to ensure full traceability of data related to the synthesis of ~400 nanomaterials. The system also supported researchers in implementing RDM practices in their workflows, e.g., by automating data import and documentation and by integrating infrastructure for data analysis.
Based on the promising results of the pilot phase, we will roll out the Data Store as the central RDM infrastructure of BAM starting in 2023. We further aim to develop openBIS plugins, metadata standards, and RDM workflows to contribute to the openBIS community and to foster RDM in MSE.
In view of the increasing digitization of science and the use of data-intensive methodologies, researchers face the challenge of documenting constantly growing volumes of data in a traceable manner, storing them for the long term, and making them reusable for third parties. To meet these requirements, one promising option is to use software solutions that combine research data management with the digital documentation of laboratory inventory and experiments in electronic lab notebooks (ELNs).
This document constitutes the Strategic Research Agenda (SRA) for the European Metrology Network for Mathematics and Statistics in Metrology (EMN Mathmet). The EMN Mathmet is an alliance of European National Metrology Institutes (NMIs), Designated Institutes (DIs) and an EMN Partner that aims to strengthen research and cooperation in the field. The SRA has been developed within a European project (EMPIR 18NET05 MATHMET) to promote and support the network. The SRA was developed based on a consultation process with stakeholders and the strategies of individual NMIs and DIs, and in alignment with the EURAMET 2030 strategy.
As a key result, the SRA defines a long-term research goal: the EMN Mathmet will coordinate research to strengthen the trust in algorithms, software tools and data to underpin digital transformation. For this purpose, new emerging research topics where algorithms, software tools and data play a significant role were identified: (i) Artificial Intelligence and Machine Learning, and (ii) Computational Modelling and Virtual Metrology. The foundation for the development of these new topics is given by the traditional focus on (iii) Data Analysis and Uncertainty Evaluation. The SRA characterises the future needs and challenges in the field of mathematics and statistics in metrology and provides an outline of how the EMN Mathmet can meet these new emerging requirements.
The deployment of machine learning (ML) and deep learning (DL) in structural health monitoring (SHM) faces multiple challenges. Foremost among these is the insufficient availability of extensive high-quality data sets essential for robust training. Within SHM, high-quality data is defined by its accuracy, relevance, and fidelity in representing real-world structural scenarios (pristine as well as damaged). Although methods like data augmentation and creating synthetic data can add to datasets, they frequently sacrifice the authenticity and true representation of the data. Sharing real-world data encapsulating true structural and anomalous scenarios offers promise. However, entities are often reluctant to share raw data, given the potential extraction of sensitive information, leading to trust issues among collaborating entities.
Our study introduces a novel methodology leveraging Federated Learning (FL) to navigate these challenges. Within the FL framework, models are trained in a decentralized manner across different entities, preserving data privacy. In our research, we simulated several scenarios and compared them to traditional local training methods. Employing guided wave (GW) datasets, we distributed the data among different parties (clients) using IID (independent and identically distributed, i.e., statistically identical) mini-batches of the dataset, as well as non-IID configurations. This approach mirrors real-world data distribution among varied entities, such as hydrogen refueling stations.
In our methodology, the initial round involves individualized training for each client using their unique datasets. Subsequently, the model parameters are sent to the FL server, where they are averaged to construct a global model. In the second round, this global model is disseminated back to the clients to aid in predictive tasks. This iterative process continues for several rounds to reach convergence.
Our findings distinctly highlight the advantages of FL over localized training, evidenced by a marked improvement in prediction accuracy. This research underscores the potential of FL in GW-based SHM, offering a remedy to similar challenges tied to data scarcity in other SHM approaches and paving the way for a new era of collaborative, data-centric monitoring systems.
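The training loop described above corresponds to the well-known federated averaging (FedAvg) scheme. A minimal, self-contained sketch of that scheme on a toy linear-regression task (all data and parameters here are synthetic placeholders, not the GW datasets from the study):

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One client's local training round: simple linear model, MSE loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(clients, w_global, rounds=10):
    """FedAvg: broadcast global weights, train locally, average parameters."""
    for _ in range(rounds):
        local_models = [local_update(w_global, X, y) for X, y in clients]
        # Unweighted average; production FedAvg weights by client sample count.
        w_global = np.mean(local_models, axis=0)
    return w_global

# Hypothetical IID split: three "clients" hold disjoint parts of one dataset.
rng = np.random.default_rng(0)
X, true_w = rng.normal(size=(300, 4)), np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=300)
clients = [(X[i::3], y[i::3]) for i in range(3)]
w = federated_averaging(clients, np.zeros(4))
print(np.round(w, 2))  # should approach true_w
```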
Harmonized and interoperable national Quality Infrastructure (QI) systems are essential for fostering cooperation, promoting mutual trust, and facilitating trade. The true potential of the QI is realized when its elements and actors are seamlessly integrated into a cohesive digital QI ecosystem. Recent developments towards industrial international data spaces enable such an ecosystem but require the integration of QI principles. Recognizing the lack of such a platform, Quality-X aims at setting the stage for the implementation of a QI ecosystem in international data spaces (IDS), GAIA-X and related German and European projects dedicated to secure data sharing. Quality-X is not about the construction of a platform; it is about the creation of an inclusive QI ecosystem with harmonized interfaces. Instead of imposing rigid data structures, it prioritizes interoperability. Through the utilization of Decentralized Identifiers (DIDs), Verifiable Credentials, and Identity Hubs, Quality-X seeks seamless interactions across diverse service provider systems.
This white paper introduces the concept and vision of Quality-X and discusses the general prerequisites for integrating QI processes within data spaces. Furthermore, we introduce existing testbeds, which will serve as an experimental proving ground for exploring various use cases related to the implementation of the vision of a QI-Digital.
The statistical tool eCerto was developed for the evaluation of measurement data to assign property values and associated uncertainties of reference materials. The analysis is based on collaborative studies of expert laboratories and was implemented using the R software environment. Emphasis was put on comparability of eCerto with SoftCRM, a statistical tool based on the certification strategy of the former Community Bureau of Reference. Additionally, special attention was directed towards easy usability from data collection through processing, archiving, and reporting. While the effects of outlier removal can be flexibly explored, eCerto always retains the original data set and any manipulation such as outlier removal is (graphically and tabularly) documented adequately in the report. As a major reference materials producer, the Bundesanstalt für Materialforschung und -prüfung (BAM) developed and will maintain a tool to meet the needs of modern data processing, documentation requirements, and emerging fields of RM activity. The main features of eCerto are discussed using previously certified reference materials.
Metaproteomics, the study of the collective proteome within a microbial ecosystem, has substantially grown over the past few years. This growth comes from the increased awareness that it can powerfully supplement metagenomics and metatranscriptomics analyses. Although metaproteomics is more challenging than single-species proteomics, its added value has already been demonstrated in various biosystems, such as gut microbiomes or biogas plants. Because of the many challenges, a variety of metaproteomics workflows have been developed, yet it remains unclear what the impact of the choice of workflow is on the obtained results. Therefore, we set out to compare several well-established workflows in the first community-driven, multi-lab comparison in metaproteomics: the critical assessment of metaproteome investigation (CAMPI) study. In this benchmarking study, we evaluated the influence of different workflows on sample preparation, mass spectrometry acquisition, and bioinformatic analysis on two samples: a simplified, lab-assembled human intestinal sample and a complex human fecal sample. We find that the same overall biological meaning can be inferred from the metaproteome data, regardless of the chosen workflow. Indeed, taxonomic and functional annotations were very similar across all sample-specific data sets. Moreover, this outcome was consistent regardless of whether protein groups or peptides, or differences at the spectrum or peptide level were used to infer these annotations. Where differences were observed, those originated primarily from different wet-lab methods rather than from different bioinformatic pipelines. The CAMPI study thus provides a solid foundation for benchmarking metaproteomics workflows, and will therefore be a key reference for future method improvement. [doi:10.25345/C5SX64D9M] [dataset license: CC0 1.0 Universal (CC0 1.0)]
The increasing amount and complexity of clinical data require an appropriate way of storing and analyzing those data. Traditional approaches use a tabular structure (relational databases) for storing data and thereby complicate storing and retrieving interlinked data from the clinical domain. Graph databases provide a great solution for this by storing data in a graph as nodes (vertices) that are connected by edges (links). The underlying graph structure can be used for the subsequent data analysis (graph learning). Graph learning consists of two parts: graph representation learning and graph analytics. Graph representation learning aims to reduce high-dimensional input graphs to low-dimensional representations. Then, graph analytics uses the obtained representations for analytical tasks like visualization, classification, link prediction and clustering which can be used to solve domain-specific problems. In this survey, we review current state-of-the-art graph database management systems, graph learning algorithms and a variety of graph applications in the clinical domain. Furthermore, we provide a comprehensive use case for a clearer understanding of complex graph learning algorithms.
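As an illustration of the two-step pipeline described in the survey, the following hedged sketch uses a spectral embedding as a simple stand-in for graph representation learning, with clustering as the downstream analytics task; the karate-club graph is a toy placeholder for a clinical graph:

```python
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

# Step 1: graph representation learning (here: a plain spectral embedding
# as a stand-in for learned node embeddings such as node2vec or GNN outputs).
G = nx.karate_club_graph()          # toy stand-in for a clinical graph
A = nx.to_numpy_array(G)
D = np.diag(A.sum(axis=1))
L = D - A                           # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]         # 2-D embedding from the low eigenvectors

# Step 2: graph analytics on the low-dimensional representation,
# e.g., clustering nodes into communities.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(labels)
```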
Integrated multi-omics analyses of microbiomes have become increasingly common in recent years as the emerging omics technologies provide an unprecedented opportunity to better understand the structural and functional properties of microbial communities. Consequently, there is a growing need for and interest in the concepts, approaches, considerations, and available tools for investigating diverse environmental and host-associated microbial communities in an integrative manner. In this review, we first provide a general overview of each omics analysis type, including a brief history, typical workflow, primary applications, strengths, and limitations. Then, we inform on both experimental design and bioinformatics analysis considerations in integrated multi-omics analyses, elaborate on the current approaches and commonly used tools, and highlight the current challenges. Finally, we discuss the expected key advances, emerging trends, potential implications on various fields from human health to biotechnology, and future directions.
Mistle: bringing spectral library predictions to metaproteomics with an efficient search index
(2023)
Motivation: Deep learning has moved to the forefront of tandem mass spectrometry-driven proteomics and authentic prediction for peptide fragmentation is more feasible than ever. Still, at this point spectral prediction is mainly used to validate database search results or for confined search spaces. Fully predicted spectral libraries have not yet been efficiently adapted to large search space problems that often occur in metaproteomics or proteogenomics.
Results: In this study, we showcase a workflow that uses Prosit for spectral library predictions on two common metaproteomes and implement an indexing and search algorithm, Mistle, to efficiently identify experimental mass spectra within the library. Hence, the workflow emulates a classic protein sequence database search with protein digestion but builds a searchable index from spectral predictions as an in-between step.
We compare Mistle to popular search engines, both on a spectral and database search level, and provide evidence that this approach is more accurate than a database search using MSFragger. Mistle outperforms other spectral library search engines in terms of run time and proves to be extremely memory efficient with a 4- to 22-fold decrease in RAM usage. This makes Mistle universally applicable to large search spaces, e.g. covering comprehensive sequence databases of diverse microbiomes.
Availability and implementation: Mistle is freely available on GitHub at https://github.com/BAMeScience/Mistle.
Motivation: Inferring taxonomy in mass spectrometry-based shotgun proteomics is a complex task. In multi-species or viral samples of unknown taxonomic origin, the presence of proteins and corresponding taxa must be inferred from a list of identified peptides, which is often complicated by protein homology: many proteins do not only share peptides within a taxon but also between taxa. However, the correct taxonomic inference is crucial when identifying different viral strains with high-sequence homology—considering, e.g., the different epidemiological characteristics of the various strains of severe acute respiratory syndrome-related coronavirus-2. Additionally, many viruses mutate frequently, further complicating the correct identification of viral proteomic samples.
Results: We present PepGM, a probabilistic graphical model for the taxonomic assignment of virus proteomic samples with strain-level resolution and associated confidence scores. PepGM combines the results of a standard proteomic database search algorithm with belief propagation to calculate the marginal distributions, and thus confidence scores, for potential taxonomic assignments. We demonstrate the performance of PepGM using several publicly available virus proteomic datasets, showing its strain-level resolution performance. In two out of eight cases, the taxonomic assignments were only correct on the species level, which PepGM clearly indicates by lower confidence scores.
Availability and implementation: PepGM is written in Python and embedded into a Snakemake workflow. It is available at https://github.com/BAMeScience/PepGM.
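PepGM itself runs belief propagation on the peptide-taxon graph; as an illustration of what the resulting marginals mean, the following toy model computes exact taxon posteriors on a tiny bipartite graph by brute-force enumeration. All taxa, peptides, scores, and probabilities below are made up, and at realistic scale belief propagation approximates these marginals tractably:

```python
from itertools import product
import numpy as np

# Tiny bipartite model: which taxa generate which peptides (shared peptides
# model homology), plus per-peptide detection scores from a database search.
taxa = ["strain_A", "strain_B"]
peptide_parents = {"pep1": [0], "pep2": [0, 1], "pep3": [1]}  # taxon indices
peptide_score = {"pep1": 0.9, "pep2": 0.8, "pep3": 0.1}       # P(detected)

prior = 0.5      # prior probability that a taxon is present (assumed)
p_detect = 0.95  # P(peptide observed | a parent taxon present), assumed
p_noise = 0.05   # P(peptide observed | no parent present), assumed

def likelihood(state):
    """Noisy-OR likelihood of the peptide scores given a taxon-presence state."""
    lik = 1.0
    for pep, parents in peptide_parents.items():
        p_obs = p_detect if any(state[t] for t in parents) else p_noise
        s = peptide_score[pep]
        lik *= s * p_obs + (1 - s) * (1 - p_obs)
    return lik

# Exact marginals by enumerating all presence states; belief propagation
# computes (approximations of) the same marginals at scale.
posteriors, Z = np.zeros(len(taxa)), 0.0
for state in product([0, 1], repeat=len(taxa)):
    w = likelihood(state)
    for s in state:
        w *= prior if s else (1 - prior)
    Z += w
    posteriors += w * np.array(state)

print(dict(zip(taxa, np.round(posteriors / Z, 3))))  # taxon confidence scores
```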
The application and benefits of Semantic Web Technologies (SWT) for managing, sharing, and (re-)using of research data are demonstrated in implementations in the field of Materials Science and Engineering (MSE). However, a compilation and classification are needed to fully recognize the scattered published works with its unique added values. Here, the primary use of SWT at the interface with MSE is identified using specifically created categories. This overview highlights promising opportunities for the application of SWT to MSE, such as enhancing the quality of experimental processes, enriching data with contextual information in knowledge graphs, or using ontologies to perform specific queries on semantically structured data. While interdisciplinary work between the two fields is still in its early stages, a great need is identified to facilitate access for nonexperts and develop and provide user-friendly tools and workflows. The full potential of SWT can best be achieved in the long term by the broad acceptance and active participation of the MSE community. In perspective, these technological solutions will advance the field of MSE by making data FAIR. Data-driven approaches will benefit from these data structures and their connections to catalyze knowledge generation in MSE.
This talk highlights a proof-of-concept that demonstrates the ability to calculate high-resolution Fourier transforms. These can be combined with multi-scale modeling to simulate scattering over a wide range, from small-angle scattering to XRD and PDF.
The preprint documenting this is available on the ArXiv here:
https://doi.org/10.48550/arXiv.2303.13435
The Jupyter notebook, VASP calculation details and MOUSE measured scattering patterns are available from this Zenodo repository: https://dx.doi.org/10.5281/zenodo.7764045
## Summary:
This notebook and associated datasets (including VASP details) accompany a manuscript available on the ArXiv (https://doi.org/10.48550/arXiv.2303.13435) and hopefully soon in a journal as a short communication as well. Most of the details needed to understand this notebook are explained in that paper with the same title as above. For convenience, the abstract is repeated here:
## Paper abstract:
We demonstrate a strategy for simulating wide-range X-ray scattering patterns, which spans the small- and wide scattering angles as well as the scattering angles typically used for Pair Distribution Function (PDF) analysis. Such simulated patterns can be used to test holistic analysis models, and, since the diffraction intensity is presented coupled to the scattering intensity, may offer a novel pathway for determining the degree of crystallinity.
The "Ultima Ratio" strategy is demonstrated on a 64-nm Metal Organic Framework (MOF) particle, calculated from $Q < 0.01\,\mathrm{nm}^{-1}$ up to $Q \approx 150\,\mathrm{nm}^{-1}$, with a resolution of 0.16 Å. The computations exploit a modified 3D Fast Fourier Transform (3D-FFT), whose modifications enable the transformations of matrices at least up to $8000^3$ voxels in size. Multiple of these modified 3D-FFTs are combined to improve the low-$Q$ behaviour.
The resulting curve is compared to a wide-range scattering pattern measured on a polydisperse MOF powder.
While computationally intensive, the approach is expected to be useful for simulating scattering from a wide range of realistic, complex structures, from (poly-)crystalline particles to hierarchical, multicomponent structures such as viruses and catalysts.
We demonstrate a strategy for simulating wide-range X-ray scattering patterns, which spans the small- and wide scattering angles as well as the scattering angles typically used for Pair Distribution Function (PDF) analysis. Such simulated patterns can be used to test holistic analysis models, and, since the diffraction intensity is on the same scale as the scattering intensity, may offer a novel pathway for determining the degree of crystallinity.
We demonstrate a strategy for simulating wide-range X-ray scattering patterns, which spans the small- and wide scattering angles as well as the scattering angles typically used for Pair Distribution Function (PDF) analysis. Such simulated patterns can be used to test holistic analysis models, and, since the diffraction intensity is on the same scale as the scattering intensity, may offer a novel pathway for determining the degree of crystallinity. The "Ultima Ratio" strategy is demonstrated on a 64-nm Metal Organic Framework (MOF) particle, calculated from Q < 0.01 1/nm up to Q ≈ 150 1/nm, with a resolution of 0.16 Å. The computations exploit a modified 3D Fast Fourier Transform (3D-FFT), whose modifications enable the transformations of matrices at least up to 8000^3 voxels in size. Multiple of these modified 3D-FFTs are combined to improve the low-Q behaviour. The resulting curve is compared to a wide-range scattering pattern measured on a polydisperse MOF powder. While computationally intensive, the approach is expected to be useful for simulating scattering from a wide range of realistic, complex structures, from (poly-)crystalline particles to hierarchical, multicomponent structures such as viruses and catalysts.
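The core numerical step of such simulations, computing I(q) as the squared magnitude of the Fourier-transformed density and binning it isotropically, can be sketched as follows. Grid size, voxel spacing, and the spherical test particle are illustrative choices, not the setup used in the paper:

```python
import numpy as np

# Sketch: isotropic scattering intensity I(q) from a voxelized density via a
# 3D FFT, the core operation behind the "Ultima Ratio" strategy.
n, voxel = 128, 0.5                      # voxels per edge, edge length in nm
ax = (np.arange(n) - n / 2) * voxel
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
density = (np.sqrt(X**2 + Y**2 + Z**2) < 10.0).astype(float)  # 10 nm sphere

amplitude = np.fft.fftshift(np.fft.fftn(density))
intensity = np.abs(amplitude) ** 2       # I(q) = |F{rho}|^2

# Radial (isotropic) binning in reciprocal space: q = 2*pi*f
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=voxel))
QX, QY, QZ = np.meshgrid(*(2 * np.pi * freqs,) * 3, indexing="ij")
q = np.sqrt(QX**2 + QY**2 + QZ**2)
bins = np.linspace(0, q.max(), 100)
idx = np.digitize(q.ravel(), bins)
sums = np.bincount(idx, weights=intensity.ravel(), minlength=bins.size + 1)
counts = np.bincount(idx, minlength=bins.size + 1)
I_q = sums[1:bins.size] / np.maximum(counts[1:bins.size], 1)
print(I_q[:5])  # low-q limit scales with the particle volume squared
```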
Episodic failures of ice-dammed lakes have produced some of the largest floods in history, with disastrous consequences for communities in high mountains. Yet, estimating changes in the activity of ice-dam failures through time remains controversial because of inconsistent regional flood databases. Here, by collating 1,569 ice-dam failures in six major mountain regions, we systematically assess trends in peak discharge, volume, annual timing and source elevation between 1900 and 2021. We show that extreme peak flows and volumes (10 per cent highest) have declined by about an order of magnitude over this period in five of the six regions, whereas median flood discharges have fallen less or have remained unchanged.
Ice-dam floods worldwide today originate at higher elevations and happen about six weeks earlier in the year than in 1900. Individual ice-dammed lakes with repeated outbursts show similar negative trends in magnitude and earlier occurrence, although with only moderate correlation to glacier thinning. We anticipate that ice dams will continue to fail in the near future, even as glaciers thin and recede. Yet widespread deglaciation, projected for nearly all regions by the end of the twenty-first century, may bring most outburst activity to a halt.
Comprehensive evaluation of peptide de novo sequencing tools for monoclonal antibody assembly
(2023)
Monoclonal antibodies are biotechnologically produced proteins with various applications in research, therapeutics and diagnostics. Their ability to recognize and bind to specific molecule structures makes them essential research tools and therapeutic agents. Sequence information of antibodies is helpful for understanding antibody–antigen interactions and ensuring their affinity and specificity. De novo protein sequencing based on mass spectrometry is a valuable method to obtain the amino acid sequence of peptides and proteins without a priori knowledge. In this study, we evaluated six recently developed de novo peptide sequencing algorithms (Novor, pNovo 3, DeepNovo, SMSNet, PointNovo and Casanovo), which were not specifically designed for antibody data. We validated their ability to identify and assemble antibody sequences on three multi-enzymatic data sets. The deep learning-based tools Casanovo and PointNovo showed an increased peptide recall across different enzymes and data sets compared with spectrum-graph-based approaches. We evaluated different error types of de novo peptide sequencing tools and their performance for different numbers of missing cleavage sites, noisy spectra and peptides of various lengths. We achieved a sequence coverage of 97.69–99.53% on the light chains of three different antibody data sets using the de Bruijn assembler ALPS and the predictions from Casanovo. However, low sequence coverage and accuracy on the heavy chains demonstrate that complete de novo protein sequencing remains a challenging issue in proteomics that requires improved de novo error correction, alternative digestion strategies and hybrid approaches such as homology search to achieve high accuracy on long protein sequences.
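Sequence coverage, one of the metrics reported above, can be computed straightforwardly from a reference chain and a set of identified or assembled peptides. A minimal sketch with hypothetical sequences:

```python
def sequence_coverage(reference: str, peptides: list[str]) -> float:
    """Fraction of reference residues covered by at least one peptide match."""
    covered = [False] * len(reference)
    for pep in peptides:
        start = reference.find(pep)
        while start != -1:
            for i in range(start, start + len(pep)):
                covered[i] = True
            start = reference.find(pep, start + 1)
    return sum(covered) / len(reference)

# Hypothetical light-chain fragment and de novo peptide predictions:
ref = "DIQMTQSPSSLSASVGDRVTITC"
peps = ["DIQMTQ", "SPSSLSASV", "VTITC"]
print(f"coverage = {sequence_coverage(ref, peps):.1%}")
```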
In mass spectrometry-based proteomics, protein homology leads to many shared peptides within and between species. This complicates taxonomic inference. We introduce PepGM, a graphical model for taxonomic profiling of viral proteomes and metaproteomic datasets. Using the graphical model, our approach computes statistically sound scores for taxa based on peptide scores from a previous database search, eliminating the need for commonly used heuristics.
Probability based taxonomic profiling of viral and microbiome samples using PepGM and Unipept
(2022)
In mass spectrometry-based proteomics, protein homology leads to many shared peptides within and between species. This complicates taxonomic inference in samples of unknown taxonomic origin. PepGM uses a graphical model for taxonomic profiling of viral proteomes and metaproteomic datasets, providing taxonomic confidence scores. To build the graphical model, a list of potentially present taxa needs to be inferred. To this end, we integrate Unipept, which enables the fast querying of potentially present taxa. Together, they allow for taxonomic inference with statistically sound confidence scores.
We interpret solving the multi-vehicle routing problem as a team Markov game with partially observable costs. For a given set of customers to serve, the playing agents (vehicles) have the common goal to determine the team-optimal agent routes with minimal total cost. Each agent thereby observes only its own cost. Our multi-agent reinforcement learning approach, the so-called multi-agent Neural Rewriter, builds on the single-agent Neural Rewriter to solve the problem by iteratively rewriting solutions. Parallel agent action execution and partial observability require new rewriting rules for the game. We propose the introduction of a so-called pool in the system, which serves as a collection point for unvisited nodes. It enables agents to act simultaneously and exchange nodes in a conflict-free manner. We realize limited disclosure of agent-specific costs by only sharing them during learning. During inference, each agent acts decentrally, solely based on its own cost. First empirical results on small problem sizes demonstrate that we reach a performance close to the employed OR-Tools benchmark, which operates in the perfect cost information setting.
Introduction:
With the introduction of accurate deep learning predictors, spectral matching applications might experience a renaissance in tandem mass spectrometry (MS/MS) driven proteomics. Deep learning models, e.g., Prosit, predict complete MS/MS spectra from peptide sequences and give the unprecedented ability to accurately predict mass spectra that may arise from any given proteome. However, the amount of spectral data is enormous when querying large search spaces, e.g., metaproteomes composed of many different species.
Current spectral library search software, such as SpectraST, is not equipped to meet run time and memory constraints imposed by such large MS/MS databases, covering several millions of peptide spectrum predictions.
Methods:
Inspired by the fragment index data structure that had been introduced with MSFragger, we implement an efficient peak matching algorithm for computing spectral similarity between query and library spectra. Mistle (Metaproteomic index and spectral library search engine) uses index partitioning and SIMD (Single instruction, multiple data) intrinsics, which greatly improves speed and memory efficiency for searching large spectral libraries. Mistle is written in C++20 and highly parallelized.
Results:
We demonstrate the efficiency of Mistle on two predicted spectral libraries for the lab-assembled microbial communities 9MM and SIHUMIx. Compared to the spectral library search engine SpectraST, Mistle shows a >10-fold runtime improvement and is also faster than msSLASH, which uses locality-sensitive hashing. Although Mistle is slower than MSFragger, Mistle's memory footprint is an order of magnitude smaller. Furthermore, we find evidence that the spectral matching approach to predicted libraries identifies peptides with higher precision. Mistle detects peptides not found by database search via MSFragger and in turn uncovers unnoticed false discoveries among their matches.
Conclusion:
In this study, we show that predicted spectral libraries can enhance peptide identification for metaproteomics. Mistle provides the means to efficiently search large-scale spectral libraries, highlighted for the microbiota 9MM and SIHUMIx.
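The fragment-index idea described in the Methods section can be illustrated in a few lines: library fragment peaks are bucketed by m/z so that each query peak only touches neighboring buckets. This is a deliberate simplification and not Mistle's actual C++ implementation (no index partitioning or SIMD; all spectra below are hypothetical):

```python
from collections import defaultdict

BIN_WIDTH = 0.02  # m/z bucket width; tolerance-sized bins (illustrative)

def build_fragment_index(library):
    """Map m/z bins -> (spectrum id, peak m/z) for all library fragment peaks."""
    index = defaultdict(list)
    for spec_id, peaks in library.items():
        for mz in peaks:
            index[int(mz / BIN_WIDTH)].append((spec_id, mz))
    return index

def search(query_peaks, index, tol=0.02):
    """Count matching peaks per library spectrum; only nearby bins are touched."""
    scores = defaultdict(int)
    for mz in query_peaks:
        center = int(mz / BIN_WIDTH)
        for b in (center - 1, center, center + 1):
            for spec_id, lib_mz in index.get(b, ()):
                if abs(lib_mz - mz) <= tol:
                    scores[spec_id] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical predicted library and an experimental query spectrum:
library = {"PEPTIDEA": [175.12, 262.15, 375.23], "PEPTIDEB": [147.11, 262.16]}
print(search([175.12, 262.15, 500.0], build_fragment_index(library)))
```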
Applying data-driven AI systems makes it possible to extract patterns from given data, generate predictions, and support decision-making. Materials research and testing holds a plethora of AI-based applications, for example, for the automated search for and synthesis of new materials, the detection of material defects, or the prediction of process and material parameters (inverse problems). However, AI algorithms can often only be as good as the training data from which the corresponding models are learned. Therefore, it is also indispensable to develop measures for the standardization and quality assurance of such data.
For this purpose, we develop and implement methods for transferring data from various sources into a homogeneous data repository with uniform data descriptions. Through this standardization and corresponding machine-readable interfaces, research data can be made usable and reusable for further data analyses. In addition to the technical implementation of integrative platforms, it is crucial that quality-assured research data management is recognized and implemented as an integral part of daily scientific work. Finally, we provide a vision of how the Federal Institute for Materials Research and Testing can benefit from data-driven AI systems. We discuss early applications and take a peek at future research.
This talk presents the computer science perspective on a digital quality infrastructure (QI). A QI-Cloud, which is yet to be developed, forms the basis of a distributed IT platform through which digitized QI processes can be handled, data can be securely stored and exchanged, and digital certificates can be issued.
To this end, methods such as distributed ledger technology and smart standards are described, which have the potential to become essential technological building blocks of a digitally transformed QI.
Metaproteomics has substantially grown over the past years and supplements other omics approaches by bringing valuable functional information, enabling genotype-phenotype linkages and connections to metabolic outputs. Currently, a wide variety of metaproteomic workflows is available, yet their impact on the results remains to be thoroughly assessed.
Here, we carried out the first community-driven, multi-lab comparison in metaproteomics: the critical assessment of metaproteome investigation (CAMPI) study. Based on well-established workflows, we evaluated the influence of sample preparation, mass spectrometry acquisition, and bioinformatic analysis using two samples: a simplified, lab-assembled human intestinal model and a human fecal sample.
Although bioinformatic pipelines contributed to variability in peptide identification, wet-lab workflows were the most important source of differences between analyses. Overall, these peptide-level differences largely disappeared at the protein group level. Differences were observed between peptide- and protein-centric approaches for the predicted community composition but similar functional profiles were found across workflows.
The CAMPI findings demonstrate the robustness of current metaproteomics research and provide a perspective for future benchmarking studies.
Driven by recent technological advances and the need for improved viral diagnostic applications, mass spectrometry-based proteomics comes into play for detecting viral pathogens accurately and efficiently. However, the lack of specific algorithms and software tools presents a major bottleneck for analyzing data from host-virus samples. For example, accurate species- and strain-level classification of a priori unidentified organisms remains a very challenging task in the setting of large search databases. Another prominent issue is that many existing solutions suffer from the protein inference issue, aggravated because many homologous proteins are present across multiple species. One of the contributing factors is that existing bioinformatic algorithms have been developed mainly for single-species proteomics applications for model organisms or human samples. In addition, a statistically sound framework was lacking to accurately assign peptide identifications to viral taxa. In this presentation, an overview is given of current bioinformatics developments that aim to overcome the above-mentioned issues using algorithmic and statistical methods. The presented methods and software tools aim to provide tailored solutions for both discovery-driven and targeted proteomics for viral diagnostics and taxonomic sample profiling. Furthermore, an outlook is provided on how the bioinformatic developments might serve as a generic toolbox, which can be transferred to other research questions, such as metaproteomics for profiling microbiomes and identifying bacterial pathogens.
On behalf of the Federal Ministry for Economic Affairs and Climate Action, DIN and DKE started work on the second edition of the German Standardization Roadmap on Artificial Intelligence in January 2022. In a broad participatory process, with the involvement of more than 570 experts from industry, academia, the public sector, and civil society, the strategic roadmap for AI standardization was further developed. This work was coordinated and supported by a high-level coordination group for AI standardization and conformity.
The standardization roadmap implements a measure of the German government's AI strategy and thus makes a substantial contribution to "AI – Made in Germany".
Standardization is part of the AI strategy and a strategic instrument for strengthening the innovativeness and competitiveness of the German and European economy. Not least for this reason, it plays a special role in the planned European legal framework for AI, the Artificial Intelligence Act.
Glow discharge optical emission spectroscopy (GD-OES) is a technique for the analysis of solids such as metals, semiconductors, and ceramics. A low-pressure glow discharge plasma is applied in this system, which 'sputters' the sample atoms and promotes them to a higher energy state. When the atoms return to their ground state, they emit light with characteristic wavelengths, which a spectrometer can detect. Thus, GD-OES combines the advantages of ICP-OES with solid sampling techniques, which enables it to determine the bulk elemental composition and depth profiles. However, direct solid sampling methods such as glow-discharge spectroscopy require reference materials for calibration due to the strong matrix effect.
Reference materials are essential when the accuracy and reliability of measurement results need to be guaranteed to generate confidence in the analysis. These materials are frequently used for determining measurement uncertainty, validating methods, suitability testing, and quality assurance. In addition, they guarantee that measurement results can be compared to recognized reference values. Unfortunately, the availability of certified reference materials suited to calibrate all elements in different matrix materials is limited. Therefore, various calibration strategies and the preparation of traceable matrix-matched calibration standards will be discussed.
Machine learning is an essential component of the growing field of data science. Through statistical methods, algorithms are trained to make classifications or predictions, uncovering key insights within data mining projects. In our work, we therefore combined GD-OES with machine learning strategies to establish a new and robust calibration model, which can be used to identify the elemental composition and concentration of metals from a single spectrum. For this purpose, copper reference materials from different manufacturers, which contain various impurity elements, were investigated using GD-OES. The obtained spectral information is evaluated with different algorithms (e.g., gradient boosting and artificial neural networks), and the results are compared and discussed in detail.
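A hedged sketch of such a calibration model, using scikit-learn's gradient boosting on synthetic stand-in spectra (the channel indices and concentrations below are fabricated for illustration; real GD-OES spectra and certified concentrations would replace them):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: rows are GD-OES spectra of copper reference
# materials (intensities at n_channels wavelength channels); the target is
# an impurity concentration.
rng = np.random.default_rng(1)
n_samples, n_channels = 60, 500
spectra = rng.normal(size=(n_samples, n_channels))
# Synthetic ground truth: concentration driven by two "emission lines".
concentration = (2.0 * spectra[:, 42] + 0.5 * spectra[:, 310]
                 + 0.05 * rng.normal(size=n_samples))

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
scores = cross_val_score(model, spectra, concentration, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```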
In view of the increasing digitization of research and the use of data-intensive measurement and analysis methods, research institutions and their staff are faced with the challenge of documenting a constantly growing volume of data in a comprehensible manner, archiving them for the long term, and making them available for discovery and re-use by others in accordance with the FAIR principles. At BAM, we aim to facilitate the integration of research data management (RDM) strategies during the whole research cycle, from the creation and standardized description of materials datasets to their publication in open repositories. To this end, we present the BAM Data Store, a central system for internal RDM that fulfills the heterogeneous demands of materials science and engineering labs. The BAM Data Store is based on openBIS, an open-source software developed by ETH Zurich that was originally created for life science laboratories but has since been deployed in a variety of research domains. The software offers a browser-based user interface for the digital representation of lab inventory entities (e.g., samples, chemicals, instruments, and protocols) and an electronic lab notebook for the standardized documentation of experiments and analyses.
To investigate whether openBIS is a suitable framework for the BAM Data Store, we carried out a pilot phase during which five research groups with employees from 16 different BAM divisions were introduced to the software. The pilot groups were chosen to represent a diverse array of domain use cases and RDM requirements (e.g., small vs. big data volumes, heterogeneous vs. structured data types) as well as varying levels of prior IT knowledge on the users' side.
Overall, the results of the pilot phase are promising: While the creation of custom data structures and metadata schemas can be time-intensive and requires the involvement of domain experts, the system offers specific benefits in the form of simplified documentation and automation of research processes, and constitutes a basis for data-driven analysis. In this way, heterogeneous research workflows in various materials science research domains could be implemented, from the synthesis and characterization of nanomaterials to the monitoring of engineering structures. In addition to the technical deployment and the development of domain-specific metadata standards, the pilot phase also highlighted the need for suitable institutional infrastructures, processes, and role models. An institute-wide rollout of the BAM Data Store is currently being planned.
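openBIS instances such as the Data Store can be scripted via pybis, the openBIS Python client; a minimal sketch of registering an inventory object is shown below. The server URL, credentials, space and type codes, and property names are all placeholders, and the exact object model depends on the instance configuration:

```python
from pybis import Openbis  # openBIS Python client

# Placeholder URL and credentials; space/type codes depend on the instance.
o = Openbis("https://datastore.example.org")
o.login("username", "password", save_token=True)

# Register a hypothetical nanomaterial synthesis sample with metadata,
# linking it into the inventory kept in the same instance.
sample = o.new_sample(
    type="NANOMATERIAL",          # hypothetical sample type code
    space="NANOPLATTFORM",        # hypothetical space for the use case
    props={"name": "MOF-batch-042", "synthesis_date": "2023-01-15"},
)
sample.save()

o.logout()
```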
Mass spectrometry-based proteomics provides a holistic snapshot of the entire protein set of living cells on a molecular level. Currently, only a few deep learning approaches exist that involve peptide fragmentation spectra, which represent partial sequence information of proteins. Commonly, these approaches lack the ability to characterize less studied or even unknown patterns in spectra because of their use of explicit domain knowledge. Here, to elevate unrestricted learning from spectra, we introduce ‘ad hoc learning of fragmentation’ (AHLF), a deep learning model that is end-to-end trained on 19.2 million spectra from several phosphoproteomic datasets. AHLF is interpretable, and we show that peak-level feature importance values and pairwise interactions between peaks are in line with corresponding peptide fragments. We demonstrate our approach by detecting post-translational modifications, specifically protein phosphorylation based on only the fragmentation spectrum without a database search. AHLF increases the area under the receiver operating characteristic curve (AUC) by an average of 9.4% on recent phosphoproteomic data compared with the current state of the art on this task. Furthermore, use of AHLF in rescoring search results increases the number of phosphopeptide identifications by a margin of up to 15.1% at a constant false discovery rate. To show the broad applicability of AHLF, we use transfer learning to also detect cross-linked peptides, as used in protein structure analysis, with an AUC of up to 94%.
MALDI-TOF-MS-Based Identification of Monoclonal Murine Anti-SARS-CoV-2 Antibodies within One Hour
(2022)
During the SARS-CoV-2 pandemic, many virus-binding monoclonal antibodies have been developed for clinical and diagnostic purposes. This underlines the importance of antibodies as universal bioanalytical reagents. However, little attention is given to the reproducibility crisis that scientific studies are still facing to date. In a recent study, not even half of all research antibodies mentioned in publications could be identified at all. This should spark more efforts in the search for practical solutions for the traceability of antibodies. For this purpose, we used 35 monoclonal antibodies against SARS-CoV-2 to demonstrate how sequence-independent antibody identification can be achieved by simple means applied to the protein. First, we examined the intact and light chain masses of the antibodies relative to the reference material NIST-mAb 8671. Already half of the antibodies could be identified based solely on these two parameters. In addition, we developed two complementary peptide mass fingerprinting methods with MALDI-TOF-MS that can be performed in 60 min and had a combined sequence coverage of over 80%. One method is based on the partial acidic hydrolysis of the protein by 5 mM of sulfuric acid at 99 °C. Furthermore, we established a fast way for a tryptic digest without an alkylation step. We were able to show that the distinction of clones is possible simply by a brief visual comparison of the mass spectra. In this work, two clones originating from the same immunization gave the same fingerprints. Later, a hybridoma sequencing confirmed the sequence identity of these sister clones. In order to automate the spectral comparison for larger libraries of antibodies, we developed the online software ABID 2.0. This open-source software determines the number of matching peptides in the fingerprint spectra. We propose that publications and other documents critically relying on monoclonal antibodies with unknown amino acid sequences should include at least one antibody fingerprint. By fingerprinting an antibody in question, its identity can be confirmed by comparison with a library spectrum at any time and in any context.
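The spectral comparison that ABID 2.0 automates boils down to counting peptide masses shared between two fingerprints within a mass tolerance. A simplified sketch of that comparison (the mass lists are hypothetical, and this is not the ABID source code):

```python
import numpy as np

def matching_peptides(spec_a, spec_b, tol=0.2):
    """Count peptide masses shared by two fingerprint spectra within tol (Da),
    the comparison principle behind spectral matching tools such as ABID."""
    a, b = np.sort(spec_a), np.sort(spec_b)
    matches, j = 0, 0
    for mass in a:
        while j < len(b) and b[j] < mass - tol:
            j += 1
        if j < len(b) and abs(b[j] - mass) <= tol:
            matches += 1
            j += 1
    return matches

# Hypothetical peptide mass lists (m/z) from two MALDI-TOF fingerprints:
clone_x = [842.5, 1045.6, 1479.8, 2211.1]
clone_y = [842.6, 1045.5, 1570.7, 2211.0]
print(matching_peptides(clone_x, clone_y))  # -> 3 shared masses
```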
The amount of data generated worldwide is constantly increasing. These data come from a wide variety of sources and systems, are processed differently, have a multitude of formats, and are stored in an untraceable and unstructured manner, predominantly in natural language in data silos. This problem applies equally to the heterogeneous research data from materials science and engineering. In this domain, ways and solutions are increasingly being generated to smartly link material data together with their contextual information in a uniform and well-structured manner on platforms, thus making them discoverable, retrievable, and reusable for research and industry. Ontologies play a key role in this context. They enable the sustainable representation of expert knowledge and the semantically structured filling of databases with computer-processable data triples.
In this perspective article, we present the project initiative Materials-open-Laboratory (Mat-o-Lab) that aims to provide a collaborative environment for domain experts to digitize their research results and processes and make them fit for data-driven materials research and development. The overarching challenge is to generate connection points to further link data from other domains to harness the promised potential of big materials data and harvest new knowledge.
MALDI-TOF-MS-based identification of monoclonal murine anti-SARS-CoV-2 antibodies within one hour
(2022)
During the SARS-CoV-2 pandemic, many virus-binding monoclonal antibodies have been developed for clinical and diagnostic purposes. This underlines the importance of antibodies as universal bioanalytical reagents. However, little attention is given to the reproducibility crisis that scientific studies are still facing to date. In a recent study, not even half of all research antibodies mentioned in publications could be identified at all. This should spark more efforts in the search for practical solutions for the traceability of antibodies. For this purpose, we used thirty-five monoclonal antibodies against SARS-CoV-2 to demonstrate how sequence-independent antibody identification can be achieved by simple means applied to the protein. First, we examined the intact and light chain masses of the antibodies relative to the reference material NIST-mAb 8671. Already half of the antibodies could be identified based solely on these two parameters. In addition, we developed two complementary peptide mass fingerprinting methods with MALDI-TOF-MS that can be performed in 45 minutes and had a combined sequence coverage of over 80%. One method is based on the partial acidic hydrolysis of the protein by 5 mM of sulfuric acid at 99 °C. Furthermore, we established a fast way for a tryptic digest without an alkylation step. We were able to show that the distinction of clones is possible simply by a brief visual comparison of the mass spectra. In this work, two clones originating from the same immunization gave the same fingerprints. Later, a hybridoma sequencing confirmed the sequence identity of these sister clones. In order to automate the spectral comparison for larger libraries of antibodies, we developed the online software ABID 2.0 (https://gets.shinyapps.io/ABID/). This open-source software determines the number of matching peptides in the fingerprint spectra. We propose that publications and other documents critically relying on monoclonal antibodies with unknown amino acid sequences should include at least one antibody fingerprint. By fingerprinting an antibody in question, its identity can be confirmed by comparison with a library spectrum at any time and in any context.
Through connecting genomic and metabolic information, metaproteomics is an essential approach for understanding how microbiomes function in space and time. The international metaproteomics community is delighted to announce the launch of the Metaproteomics Initiative (www.metaproteomics.org), the goal of which is to promote dissemination of metaproteomics fundamentals, advancements, and applications through collaborative networking in microbiome research. The Initiative aims to be the central information hub and open meeting place where newcomers and experts interact to communicate, standardize, and accelerate experimental and bioinformatic methodologies in this field. We invite the entire microbiome community to join and discuss potential synergies at the interfaces with other disciplines, and to collectively promote innovative approaches to gain deeper insights into microbiome functions and dynamics.
Metaproteomics has matured into a powerful tool to assess functional interactions in microbial communities. While many metaproteomic workflows are available, the impact of method choice on results remains unclear. Here, we carry out a community-driven, multi-laboratory comparison in metaproteomics: the critical assessment of metaproteome investigation study (CAMPI). Based on well-established workflows, we evaluate the effect of sample preparation, mass spectrometry, and bioinformatic analysis using two samples: a simplified, laboratory-assembled human intestinal model and a human fecal sample. We observe that variability at the peptide level is predominantly due to sample processing workflows, with a smaller contribution of bioinformatic pipelines. These peptide-level differences largely disappear at the protein group level. While differences are observed for predicted community composition, similar functional profiles are obtained across workflows. CAMPI demonstrates the robustness of present-day metaproteomics research, serves as a template for multi-laboratory studies in metaproteomics, and provides publicly available data sets for benchmarking future developments.
3D printing enables better control over the microstructure of bone-restoring constructs, addresses the challenges seen in the preparation of patient-specific bone scaffolds, and overcomes the bottlenecks that can appear in delivering drugs/growth factors promoting bone regeneration. Here, 3D printing is employed for the fabrication of an osteogenic construct made of hydrogel nanocomposites. Alginate dialdehyde-gelatin (ADA-GEL) hydrogel is reinforced by the incorporation of bioactive glass nanoparticles, i.e., mesoporous silica-calcia nanoparticles (MSNs), with two modes of drug (icariin) loading. The composite hydrogel is printed as superhydrated constructs in a grid structure. The MSNs not only improve the mechanical stiffness of the constructs but also induce the formation of an apatite layer when the construct is immersed in simulated body fluid (SBF), thereby promoting cell adhesion and proliferation. The nanocomposite constructs can hold and deliver icariin efficiently, regardless of its incorporation mode, either loaded into the MSNs or freely distributed within the hydrogel. Biocompatibility tests showed that the hydrogel nanocomposites ensure enhanced osteoblast proliferation, adhesion, and differentiation. These favorable biological properties stem from the superior biocompatibility of ADA-GEL, the bioactivity of the MSNs, and the supportive effect of icariin on cell proliferation and differentiation. Taken together, given the achieved structural and biological properties and the effective drug delivery capability, the hydrogel nanocomposites show promising potential for bone tissue engineering.
Single- and multi-layer complex networks have proven to be a powerful tool for studying the dynamics within social, technological, or natural systems. A common goal is to optimize these systems for specific purposes by minimizing certain costs while maximizing a desired output. Acknowledging that real-world systems, especially those from the coupled socio-ecological realm, are highly intertwined, this work demonstrates that optimizing a certain subsystem, e.g., to increase the resilience of an ecological network against external pressure, may unexpectedly diminish the stability of the whole coupled system. For this purpose, we utilize an adaptation of a previously proposed conceptual bi-layer network model composed of an ecological network of diffusively coupled resources co-evolving with a social network of interacting agents that harvest these resources and learn each other's strategies depending on individual success. We derive an optimal coupling strength that prevents collapse in as many resources as possible, assuming that the agents' strategies remain constant over time.
We then show that if agents socially learn and adapt strategies according to their neighbors' success, this optimal coupling strength is revealed to be a critical parameter above which the probability of a global collapse in terms of irreversibly depleted resources is high, an effect that we denote the tragedy of the optimizer. We thus find that measures which stabilize the dynamics within a certain part of a larger co-evolutionary system may unexpectedly cause the emergence of novel undesired globally stable states. Our results therefore underline the importance of holistic approaches to managing socio-ecological systems, because stabilizing effects that focus on single subsystems may be counterproductive for the system as a whole.
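The resource layer of such a model can be sketched as a set of logistically regrowing stocks that exchange resource diffusively along network links and are depleted by a constant harvesting effort. The following Python sketch uses assumed equations, parameter values, and variable names for illustration; it is not the exact model of the study.

import numpy as np

# Illustrative resource layer: ds_i/dt = a*s_i*(1 - s_i/cap) - d*(L s)_i - e_i*s_i,
# where L is the graph Laplacian. This is an assumed form, not the study's model.
def step(s, adj, effort, a=1.0, cap=1.0, d=0.05, dt=0.01):
    laplacian_term = adj.sum(axis=1) * s - adj @ s   # diffusive exchange
    growth = a * s * (1.0 - s / cap)                 # logistic regrowth
    return s + dt * (growth - d * laplacian_term - effort * s)

rng = np.random.default_rng(0)
n = 20
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                                    # symmetric, no self-loops
s = np.full(n, 0.5)
effort = np.full(n, 0.4)                             # below a -> sustainable
effort[:5] = 1.2                                     # over-harvested stocks

for _ in range(20000):
    s = step(s, adj, effort)
print(np.round(s, 2))  # over-harvested stocks collapse toward zero, and diffusion
                       # can drag coupled neighbors down with them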
The adaptation of animal cells to growth in suspension culture is of particular relevance for viral vaccine production, where very specific aspects of virus-host cell interaction need to be taken into account to achieve high cell-specific yields and overall process productivity. So far, the complexity of the alterations at the metabolic, enzymatic, and proteome levels required for adaptation is only poorly understood. In this study, for the first time, we combined several complex analytical approaches with the aim of tracking cellular changes on different levels and unraveling interconnections and correlations. To this end, a Madin-Darby canine kidney (MDCK) cell line, adapted earlier to growth in suspension, was cultivated in a 1-L bioreactor. Cell concentrations and cell volumes, extracellular metabolite concentrations, and intracellular enzyme activities were determined. The experimental data set was used as input for a segregated growth model that had previously been applied to describe the growth dynamics of the parental adherent cell line. In addition, the cellular proteome was analyzed by liquid chromatography coupled to tandem mass spectrometry using a label-free protein quantification method to unravel altered cellular processes for the suspension and the adherent cell line. Four regulatory mechanisms were identified as a response of the adaptation of adherent MDCK cells to growth in suspension. These regulatory mechanisms were linked to the proteins caveolin, cadherin-1, and pirin. Combining cell, metabolite, enzyme, and protein measurements with mathematical modeling generated a more holistic view of the cellular processes involved in the adaptation of an adherent cell line to suspension growth.
Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable so that existing research can be understood, replicated, reproduced, and built upon, and new research can be conducted effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports its sustainability.
Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal of ensuring that software-driven research is valid, reproducible, and sustainable, and that software is recognized as a first-class citizen in research. This paper is the outcome of two workshops run in Germany in 2019: deRSE19, the first International Conference of Research Software Engineers in Germany, and a dedicated DFG-supported follow-up workshop in Berlin.
One of the most widely used methods to detect an acute viral infection in clinical specimens is diagnostic real-time polymerase chain reaction. However, because of the COVID-19 pandemic, mass-spectrometry-based proteomics is currently being discussed as a potential diagnostic method for viral infections. Because proteomics is not yet applied in routine virus diagnostics, here we discuss its potential to detect viral infections. Apart from theoretical considerations, the current status and technical limitations are assessed. Finally, the challenges that have to be overcome to establish proteomics in routine virus diagnostics are highlighted.
To gain a thorough appreciation of microbiome dynamics, researchers characterize the functional relevance of expressed microbial genes or proteins. This can be accomplished through metaproteomics, which characterizes the protein expression of microbiomes. Several software tools exist for analyzing microbiomes at the functional level by measuring their combined proteome-level response to environmental perturbations. In this survey, we explore the performance of six available tools to enable researchers to make informed decisions regarding software choice based on their research goals. Tandem mass spectrometry-based proteomic data obtained from dental caries plaque samples grown with and without sucrose in paired biofilm reactors were used as representative data for this evaluation. Microbial peptides from one sample pair were identified with the X! Tandem search algorithm via SearchGUI and subjected to functional analysis using the software tools eggNOG-mapper, MEGAN5, MetaGOmics, MetaProteomeAnalyzer (MPA), ProPHAnE, and Unipept to generate functional annotation through Gene Ontology (GO) terms. Among these software tools, notable differences in functional annotation were detected when comparing differentially expressed protein functional groups. Based on the GO terms generated by these tools, we performed a peptide-level comparison to evaluate the quality of their functional annotations. A BLAST analysis against the NCBI non-redundant database revealed that the sensitivity and specificity of functional annotation varied between tools. For example, eggNOG-mapper mapped the largest number of GO terms, while Unipept generated more accurate GO terms. Based on our evaluation, metaproteomics researchers can choose the software according to their analytical needs, and developers can use the resulting feedback to further optimize their algorithms. To make more of these tools accessible via scalable metaproteomics workflows, eggNOG-mapper and Unipept 4.0 were incorporated into the Galaxy platform.
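A peptide-level comparison of two tools' GO annotations can be expressed, for example, as the mean Jaccard similarity of the GO term sets assigned to each shared peptide. The data structures and example annotations below are hypothetical; the study's actual comparison pipeline is not reproduced here.

# Illustrative per-peptide comparison of GO annotations from two tools
# (hypothetical data; not the study's actual pipeline).
def go_jaccard(annot_a, annot_b):
    # Mean Jaccard similarity of GO term sets over peptides annotated by both tools
    shared = annot_a.keys() & annot_b.keys()
    if not shared:
        return 0.0
    scores = []
    for pep in shared:
        a, b = annot_a[pep], annot_b[pep]
        scores.append(len(a & b) / len(a | b) if a | b else 1.0)
    return sum(scores) / len(scores)

# Hypothetical peptide -> GO term assignments from two annotation tools
tool_a = {"LSSPATLNSR": {"GO:0008152", "GO:0016740"}, "VATVSLPR": {"GO:0005975"}}
tool_b = {"LSSPATLNSR": {"GO:0008152"}, "VATVSLPR": {"GO:0005975", "GO:0016052"}}
print(round(go_jaccard(tool_a, tool_b), 3))  # -> 0.5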
Metaproteomics, the study of the collective protein composition of multi-organism systems, provides deep insights into the biodiversity of microbial communities and the complex functional interplay between microbes and their hosts or environment. Thus, metaproteomics has become an indispensable tool in various fields such as microbiology and related medical applications. The computational challenges in the analysis of corresponding datasets differ from those of pure-culture proteomics, e.g., due to the higher complexity of the samples and the larger reference databases, which demand specific computing pipelines. Corresponding data analyses usually consist of numerous manual steps that must be closely synchronized. With MetaProteomeAnalyzer and Prophane, we have established two open-source software solutions specifically developed and optimized for metaproteomics. Among other features, peptide-spectrum matching is improved by combining different search engines and, compared to similar tools, metaproteome annotation benefits from the most comprehensive set of available databases (such as NCBI, UniProt, EggNOG, Pfam, and CAZy). The workflow described in this protocol combines both tools and leads the user through the entire data analysis process, including protein database creation, database search, protein grouping and annotation, and results visualization. To the best of our knowledge, this protocol presents the most comprehensive, detailed, and flexible guide to metaproteomics data analysis to date. While beginners are provided with robust, easy-to-use, state-of-the-art data analysis in a reasonable time (a few hours, depending on, among other factors, the protein database size and the number of identified peptides and inferred proteins), advanced users benefit from the flexibility and adaptability of the workflow.
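One step of such a workflow, protein grouping, typically collapses proteins that cannot be distinguished by their identified peptides. The simple grouping rule sketched below, collapsing proteins with identical peptide evidence, is an illustrative assumption and not the actual MetaProteomeAnalyzer algorithm.

from collections import defaultdict

# Illustrative protein grouping: proteins supported by identical sets of
# identified peptides are collapsed into one protein group (a simplified
# rule, not the actual MetaProteomeAnalyzer implementation).
def group_proteins(protein_to_peptides):
    groups = defaultdict(list)
    for protein, peptides in protein_to_peptides.items():
        groups[frozenset(peptides)].append(protein)  # same evidence -> same group
    return list(groups.values())

# Hypothetical identifications: two homologs share identical peptide evidence
ids = {
    "protein_A1": {"LSSPATLNSR", "VATVSLPR"},
    "protein_A2": {"LSSPATLNSR", "VATVSLPR"},   # indistinguishable homolog
    "protein_B":  {"GYSFTTTAER"},
}
print(group_proteins(ids))  # -> [['protein_A1', 'protein_A2'], ['protein_B']]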
Although metaproteomics, the study of the collective proteome of microbial communities, has become increasingly powerful and popular over the past few years, the field has lagged behind in the availability of user-friendly, end-to-end pipelines for data analysis. We therefore describe the connection of two commonly used metaproteomics data processing tools, MetaProteomeAnalyzer and PeptideShaker, to Unipept for downstream analysis. Through these connections, direct end-to-end pipelines are built from database searching to taxonomic and functional annotation.
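Downstream taxonomic analysis in Unipept can also be driven programmatically; the sketch below queries the public Unipept API's pept2lca endpoint, which returns the lowest common ancestor for each tryptic peptide. The peptides are arbitrary examples, and error handling is kept minimal.

import requests

# Query the public Unipept pept2lca endpoint for the lowest common ancestor
# of each peptide (see the Unipept API documentation); peptides are examples.
def pept2lca(peptides, equate_il=True):
    response = requests.post(
        "https://api.unipept.ugent.be/api/v1/pept2lca.json",
        data={"input[]": peptides, "equate_il": str(equate_il).lower()},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # list of {"peptide", "taxon_id", "taxon_name", ...}

for hit in pept2lca(["AALTER", "MDGTEYIIVK"]):
    print(hit["peptide"], "->", hit.get("taxon_name"))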
Untargeted, accurate strain-level classification of a priori unidentified organisms using tandem mass spectrometry is a challenging task. Reference databases often lack taxonomic depth, limiting peptide assignments to the species level, while extending them with detailed strain information increases runtime and decreases statistical power. In addition, larger databases contain a higher number of similar proteomes. We present TaxIt, an iterative workflow to address the increasing search space required for MS/MS-based strain-level classification of samples with unknown taxonomic origin. TaxIt first applies reference sequence data for an initial identification of species candidates, followed by automated acquisition of relevant strain sequences for lower-level classification. Furthermore, proteome similarities resulting in ambiguous taxonomic assignments are addressed with an abundance weighting strategy to increase the confidence in candidate taxa. To benchmark the performance of our method, we apply our iterative workflow to several samples of bacterial and viral origin. In comparison to non-iterative approaches using unique peptides or advanced abundance correction, TaxIt identifies microbial strains correctly in all examples presented (with one tie), thereby demonstrating the potential for untargeted and deeper taxonomic classification. TaxIt makes extensive use of public, unrestricted, and continuously growing sequence resources such as the NCBI databases and is available under the open-source BSD license at https://gitlab.com/rki_bioinformatics/TaxIt.
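The abundance weighting idea can be sketched as follows: spectral counts of peptides shared between candidate strains are split in proportion to each strain's unique-peptide evidence. The rule and names below are simplified, illustrative assumptions and do not reproduce TaxIt's exact implementation.

# Illustrative abundance weighting for ambiguous strain assignments (a
# simplified, assumed rule; not TaxIt's exact implementation). Shared
# peptide-spectrum matches (PSMs) are distributed among candidate strains
# in proportion to each strain's unique-peptide evidence.
def weighted_counts(peptide_to_strains, psm_counts):
    unique = {s: 0 for strains in peptide_to_strains.values() for s in strains}
    for pep, strains in peptide_to_strains.items():
        if len(strains) == 1:                      # unique evidence
            unique[strains[0]] += psm_counts[pep]
    totals = dict(unique)
    for pep, strains in peptide_to_strains.items():
        if len(strains) > 1:                       # shared evidence
            base = sum(unique[s] for s in strains)
            for s in strains:
                share = unique[s] / base if base else 1 / len(strains)
                totals[s] += psm_counts[pep] * share
    return totals

# Hypothetical example: two candidate strains share one peptide
pep2str = {"pepA": ["strain1"], "pepB": ["strain2"], "pepC": ["strain1", "strain2"]}
psms = {"pepA": 6, "pepB": 2, "pepC": 4}
print(weighted_counts(pep2str, psms))  # -> {'strain1': 9.0, 'strain2': 3.0}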