Interfacing artificial devices with the human brain is the central goal of neurotechnology. Yet our imaginations are often limited by currently available paradigms and technologies. Suggestions for brain-machine interfaces have changed over time, along with the available technology. Mechanical levers and cable winches were used to move parts of the brain during the mechanical age. Sophisticated electronic wiring and remote control arose during the electronic age, ultimately leading to plug-and-play computer interfaces. Nonetheless, our brains are so complex that these visions, until recently, largely remained unreachable dreams. The general problem, thus far, is that most of our technology is mechanically and/or electrically engineered, whereas the brain is a living, dynamic entity. As a result, these worlds are difficult to interface with one another. Nanotechnology, which encompasses engineered solid-state objects and integrated circuits, excels at small length scales of single to a few hundred nanometers and thus matches the sizes of biomolecules, biomolecular assemblies, and parts of cells. Consequently, we envision nanomaterials and nanotools as opportunities to interface with the brain in alternative ways. Here, we review the existing literature on the use of nanotechnology in brain-machine interfaces and look ahead, discussing perspectives and limitations based on the authors' expertise across a range of complementary disciplines, from neuroscience, engineering, physics, and chemistry to biology and medicine, computer science and mathematics, and social science and jurisprudence. We focus on nanotechnology but also include information from related fields when useful and complementary.
The ability of industrial X-ray computed tomography (CT) to scan an object with several internal and external features at once is driving its increasing adoption in dimensional metrology. In order to evaluate the quality of a measurement value, the task-specific measurement uncertainty has to be determined. Currently, VDI/VDE 2630 Part 2.1 provides a guideline for determining the uncertainty of CT measurements experimentally through repeated measurements. This is costly and time-consuming. The aim is therefore to determine the task-specific measurement uncertainty numerically by simulation (e.g., according to the Guide to the Expression of Uncertainty in Measurement (GUM), Supplement 1). To achieve that, a digital twin is necessary. This contribution presents a simple first approach to building such a digital twin. In order to evaluate this approach, a study was carried out comparing the measurement results of the digital twin with those of several real-world CT systems. The results showed a moderate agreement between real and simulated data. To improve on this, a standardized method for characterizing CT systems, together with methods for implementing CT parameters in the simulation with sufficient accuracy, will be developed.
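To illustrate the numerical route via GUM Supplement 1, the following minimal Python sketch propagates distributions of influence quantities through a measurement model by Monte Carlo simulation. All quantities, distributions, and numerical values are invented placeholders for illustration, not data or the model from the study.

```python
# Minimal sketch of a GUM Supplement 1 style Monte Carlo uncertainty
# evaluation. All input quantities, distributions, and the measurement
# model are hypothetical placeholders, not taken from the study.
import numpy as np

rng = np.random.default_rng(seed=1)
N = 100_000  # number of Monte Carlo trials

# Hypothetical influence quantities of a CT length measurement:
voxel_size = rng.normal(50.0, 0.05, N)      # µm, calibrated voxel size
scale_err = rng.uniform(-0.001, 0.001, N)   # relative geometric scale error
noise = rng.normal(0.0, 0.3, N)             # µm, surface-determination noise

# Hypothetical measurement model: indicated length of a 200-voxel feature.
length = 200.0 * voxel_size * (1.0 + scale_err) + noise  # µm

# Per GUM Supplement 1, the estimate, standard uncertainty, and a 95 %
# coverage interval are read off the empirical output distribution.
y = length.mean()
u = length.std(ddof=1)
lo, hi = np.percentile(length, [2.5, 97.5])
print(f"y = {y:.2f} µm, u(y) = {u:.2f} µm, 95 % interval [{lo:.2f}, {hi:.2f}] µm")
```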
Against the backdrop of the sustainability transition of economies worldwide, decarbonizing road traffic is high on the agenda. This has focused the interest of policymakers and automobile manufacturers on sustainable, zero-emission powertrain technologies. Among these technologies, hydrogen fuel cell (FC) vehicles have a positive climate impact, provided that their hydrogen is produced from renewable energy. However, FC vehicles have not yet gained significant market shares. Therefore, based on the technological innovation systems (TIS) approach, this study analyzes how FC vehicles are influenced by electric vehicles (EVs) and internal combustion engine (ICE) vehicles as their context structures. To operationalize the technology relations between our focal FC-TIS and its context structures, we use the sum of international publications, patents filed at the European Patent Office, and international ISO and IEC standards as indicators for each technology. Our results show that the FC-TIS is dominated by its context structures, especially regarding commercially relevant patents and international standards. We therefore conclude that the FC-TIS is still in its formative life-cycle phase and identify a need for intensified patenting and standardization relative to the competing EVs and ICE vehicles.
Recent DNA-based studies have shown that the built environment is surprisingly rich in fungi. These indoor fungi – whether transient visitors or more persistent residents – may hold clues to the rising levels of human allergies and other medical and building-related health problems observed globally. The taxonomic identity of these fungi is crucial in such pursuits. Molecular identification of the built mycobiome is no trivial undertaking, however, given the large number of unidentified, misidentified, and technically compromised fungal sequences in public sequence databases. In addition, the sequence metadata required to make informed taxonomic decisions – such as country and host/substrate of collection – are often lacking even from reference and ex-type sequences. Here we report on a taxonomic annotation workshop (April 10–11, 2017) organized at the James Hutton Institute/University of Aberdeen (UK) to facilitate reproducible studies of the built mycobiome. The 32 participants went through public fungal ITS barcode sequences related to the built mycobiome for taxonomic and nomenclatural correctness, technical quality, and metadata availability. A total of 19,508 changes – including 4,783 name changes, 14,121 metadata annotations, and the removal of 99 technically compromised sequences – were implemented in the UNITE database for molecular identification of fungi (https://unite.ut.ee/) and shared with a range of other databases and downstream resources. Among the genera that saw the largest number of changes were Penicillium, Talaromyces, Cladosporium, Acremonium, and Alternaria, all of them of significant importance in both culture-based and culture-independent surveys of the built environment.
A European round robin test according to ISO 5725-2 was conceptually prepared, realised, and evaluated. The aim was to determine the inter-laboratory variability of the overall process for the ecotoxicological characterization of construction products in eluates and bioassays. To this end, two construction products, BAM-G1 (granulate) and HSR-2 (roof sealing sheet), both made of EPDM polymers (rubber), were selected. The granular construction product was eluted in a one-stage batch test, the planar product in the Dynamic Surface Leaching Test (DSLT). A total of 17 laboratories from 5 countries participated in the round robin test: Germany (12), Austria (2), Belgium (1), Czech Republic (1) and France (1). A test battery of four standardised ecotoxicity tests with algae, daphnia, luminescent bacteria and zebrafish eggs was used. As toxicity measures, EC50 and LID values were calculated. All tests except the fish egg test were able to demonstrate toxic effects and the level of toxicity. The reproducibility of test results depended on the test specimens and the test organisms. Generally, the variability of the EC50 or LID values increased with the overall level of toxicity. For the very toxic BAM-G1 eluate, a relatively high variability of CV = 73%–110% was observed for EC50 in all biotests, while for the less toxic HSR-2 eluate the reproducibility of EC50 varied with sensitivity: it was very good (CV = 9.3%) for the daphnia test with the lowest sensitivity, followed by the algae test (CV = 36.4%). The luminescent bacteria test, being the most sensitive bioassay for the HSR-2 eluate, showed the highest variability (CV = 74.8%). Considering the complex overall process, the reproducibility of bioassays with eluates from construction products was acceptable.
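For reference, the reproducibility statistic quoted above, the coefficient of variation (CV) of EC50 values across laboratories, can be computed as in the following sketch; the EC50 values are invented placeholders, not results from the round robin test.

```python
# Sketch of the reproducibility statistic used above: the coefficient of
# variation (CV) of EC50 values across laboratories. The EC50 values here
# are invented placeholders, not results from the round robin test.
import numpy as np

ec50_per_lab = np.array([12.1, 9.8, 14.3, 11.0, 10.5])  # e.g. % eluate

cv = ec50_per_lab.std(ddof=1) / ec50_per_lab.mean() * 100.0
print(f"CV = {cv:.1f} %")
```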
Fast temperature field generation for welding simulation and reduction of experimental effort
(2009)
The quality of welding processes is governed by the induced distortions, which increase production costs due to the necessary reworking. Especially for more complex specimens, it is difficult to determine the optimal configuration of welding sequences that minimises the distortion. Even experienced welding operators can solve this task only by trial and error, which is time-consuming and costly.

In modern engineering, welding simulation is already known to be capable of analysing the heat effects of welding virtually. However, the welding process is governed by complex physical interactions, so current weld thermal models rest on many simplifications. The state of the art is to apply numerical methods to solve the transient heat conduction equation. It is therefore not possible to use the real process parameters as input for the mathematical model. The model parameters that yield a temperature field in best agreement with the experiments cannot be defined directly, but only inversely through multiple simulation runs. For numerical simulation software based on finite discretisation schemes, this approach is very time-consuming and requires expert users. The weld thermal model contains an initial weakness that has to be compensated by finding an optimal set of model parameters. This calibration is often performed against only a few experiments, so the range of model validity is limited. An extension can be obtained by performing a calibration against multiple experiments.

The focus of this paper is a combined modelling technique that provides an efficient solution of the inverse heat conduction problem mentioned above. On the one hand, the inverse problem is solved by applying fast weld thermal models, which are closed-form solutions of the heat conduction equation. In addition, a global optimisation algorithm enables an automated calibration of the weld thermal model. This technique can automatically provide a temperature field that fits the experimental one with high accuracy within minutes on ordinary office computers. This fast paradigm supports the application of welding simulation in industrial environments such as the automotive industry.

On the other hand, the initial model weakness is compensated by calibrating the model against multiple experiments. The unknown relationship between model and process parameters is approximated by a neural network. The validity of the model is thereby increased successively, which reduces the experimental effort. For a test case, it is shown that this approach yields accurate temperature fields within a very short amount of time for unknown process parameters as input data to the model, contributing to the requirement of constructing a substitute system of the real welding process.
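The first half of this approach can be illustrated with a minimal, hypothetical sketch: a closed-form weld thermal model is calibrated against measured temperatures by a global optimiser. Here Rosenthal's moving point-source solution stands in for the fast weld thermal models, and SciPy's differential evolution stands in for the global optimisation algorithm; neither is necessarily what the authors used, and the material data and thermocouple readings below are invented placeholders.

```python
# Minimal sketch of automated calibration of a closed-form weld thermal
# model against measured temperatures with a global optimiser. Rosenthal's
# moving point-source solution stands in for the "fast weld thermal model";
# material data and measurements below are hypothetical placeholders.
import numpy as np
from scipy.optimize import differential_evolution

k, a, T0 = 45.0, 1.2e-5, 20.0  # W/(m K), m^2/s, °C (assumed steel-like data)

def rosenthal(q, v, xi, y, z):
    """Quasi-stationary temperature of a moving point source (thick plate)."""
    R = np.sqrt(xi**2 + y**2 + z**2)
    return T0 + q / (2.0 * np.pi * k * R) * np.exp(-v * (R + xi) / (2.0 * a))

# Hypothetical thermocouple positions (moving coordinates, m) and readings.
pts = np.array([[-0.01, 0.004, 0.0], [-0.02, 0.004, 0.0], [-0.03, 0.006, 0.0]])
T_meas = np.array([412.0, 305.0, 173.0])  # °C, placeholder "experiment"

def misfit(p):
    q, v = p  # effective heat input (W) and travel speed (m/s) to identify
    T_sim = rosenthal(q, v, pts[:, 0], pts[:, 1], pts[:, 2])
    return np.sum((T_sim - T_meas) ** 2)

# Global optimisation replaces manual trial-and-error calibration.
res = differential_evolution(misfit, bounds=[(500.0, 5000.0), (0.001, 0.05)],
                             seed=3, tol=1e-10)
q_opt, v_opt = res.x
print(f"calibrated q = {q_opt:.0f} W, v = {v_opt * 1000:.1f} mm/s")
```

Because the forward model is a closed-form expression rather than a finite-element run, each misfit evaluation is essentially free, which is what makes calibration in minutes on ordinary office computers plausible.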
The objective of this paper is to demonstrate a new simulation technique that allows the fast and automatic generation of temperature fields as input for a subsequent thermomechanical welding simulation. The basic idea is to decompose the process model into an empirical part based on neural networks and a phenomenological part that describes the physical phenomena. The strength of this composite modelling approach is the automatic calibration of mathematical models against experimental data without the need for manual intervention by an experienced user. As an example of typical applications in laser beam and GMA-laser hybrid welding, it is shown that even 3D heat conduction models of low complexity can approximate measured temperature fields with sufficient accuracy. In general, any derivation of model fitting parameters from the real process adds uncertainties to the simulation, independent of the complexity of the underlying phenomenological model. The modelling technique presented here hybridises empirical and phenomenological models. It reduces the model uncertainties by exploiting additional information that normally remains hidden in the measured data when the model calibration is performed against only a few experimental data sets. In contrast, here the optimal model parameter set corresponding to a given process parameter set is computed by means of an empirical submodel based on a relatively large set of experimental data. The approach thereby contributes to an efficient compensation of modelling inaccuracies and of the lack of knowledge about thermophysical material properties or boundary conditions. Two illustrative examples are provided.
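A minimal sketch of the empirical submodel idea follows: a small neural network is trained on a database of previous calibrations to map process parameters to model parameters, so that model parameters for an unseen process parameter set can be predicted without new inverse simulation runs. The training data, network size, and parameter names below are assumptions for illustration, not the authors' setup.

```python
# Sketch of the empirical submodel: a small neural network that maps
# process parameters (e.g. laser power, travel speed) to the calibrated
# parameters of the phenomenological heat-conduction model. The training
# data below are invented placeholders standing in for the results of
# multiple model calibrations against experiments.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical calibration database: process parameters -> model parameters.
X = rng.uniform([2000.0, 0.01], [5000.0, 0.05], size=(40, 2))  # power (W), speed (m/s)
Y = np.column_stack([0.8 * X[:, 0],                  # effective heat input (W)
                     2e-3 + 0.02 * X[:, 1]])         # heat-source radius (m)
Y += rng.normal(0.0, Y.std(axis=0) * 0.02, Y.shape)  # calibration scatter

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(scaler.transform(X), Y)

# For an unseen process parameter set, the net predicts the model
# parameters directly, so no new inverse simulation runs are needed.
x_new = np.array([[3500.0, 0.03]])
print(net.predict(scaler.transform(x_new)))
```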