The present study investigates the interface between carbon steel and titanium samples annealed at different temperatures (ϑ1 = 800 °C and ϑ2 = 1050 °C). In both cases, an observable layer forms at the interface, with its thickness increasing from t(ϑ1) = 2.75 ± 0.15 μm at 800 °C to t(ϑ2) = 8.86 ± 0.29 μm at 1050 °C. The layer's composition and thickness evolve with temperature. Analysis reveals a carbon concentration of approximately 40 at.% in the exterior region, indicating likely titanium carbide formation. X-ray diffraction identifies titanium carbide peaks, while microscopy and elemental mapping confirm compositional gradients at the interface.
Electron backscatter diffraction (EBSD) shows a gradient in grain size near the TiC surface, reflecting TiC nucleation rates. XRD data detect both titanium carbide and titanium phases, with TiC becoming more prominent at 1050 °C. Rietveld analysis further confirms TiC formation. Notably, distinct diffraction patterns on the contact and rear sides suggest the presence of Ti(C, O, N). Depth profiles exhibit varying carbon concentrations at the surface and in depth, attributed to temperature effects. The study successfully demonstrates TiC coating fabrication through hot pressing, wherein Ti(C, O, N) coatings arise from titanium's affinity for reacting with oxygen and nitrogen. This research contributes to the understanding of phase transformations and interfacial properties in titanium-carbon steel systems.
This article describes a contactless fiber-optic position sensor. It comprises a polymer optical fiber that is ground to form a D-shaped cross-section with an exposed fiber core. The sensor has a photodiode at each fiber end to measure the light intensity emerging there. Light from a red LED is coupled through the side face into the exposed core of the fiber at a defined position. The position of the LED along the length of the fiber is determined by calculating the quotient of the optical powers measured by the two photodiodes. To test this sensor, polymer optical fibers with different side-surface roughness were produced and qualified. Measurements show that the optical power quotient is reproducible and nearly linear over the length of the fiber. The fiber attenuation also increases when the fiber side face is ground with rougher sandpaper. Position measurements show an absolute position error in the range of a few millimeters. Microscope images reveal surface defects along the polished side face of the fiber that are expected to cause nonuniform attenuation along the fiber and thus the position errors. Overall, the results demonstrate that this sensor principle works as a contactless low-cost position sensor for short distances with an absolute position standard deviation below 1 mm.
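To illustrate the quotient principle described above: if light coupled in at position x propagates to both fiber ends with a uniform attenuation coefficient, the log of the power ratio is linear in x. A minimal sketch under that idealized assumption (names such as alpha and fiber_length are illustrative, not from the article):

```python
import numpy as np

def position_from_powers(p1, p2, alpha, fiber_length):
    """Estimate the coupling position x from the two photodiode powers.

    Assumes P1 ~ exp(-alpha * x) and P2 ~ exp(-alpha * (L - x)),
    so ln(P1 / P2) = alpha * (L - 2x).
    """
    q = np.log(p1 / p2)
    return (fiber_length - q / alpha) / 2.0

# Example: LED at x = 0.3 m on a 1 m fiber with alpha = 2.0 1/m
alpha, L, x_true = 2.0, 1.0, 0.3
p1 = np.exp(-alpha * x_true)
p2 = np.exp(-alpha * (L - x_true))
print(position_from_powers(p1, p2, alpha, L))  # -> 0.3
```

The reported deviation from linearity would then correspond to a position-dependent alpha caused by the surface defects seen in the microscope images.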
In this work, high-current field emission electron source chips were fabricated using laser micromachining and MEMS technology. The resulting chips were combined with commercially available printed circuit boards (PCBs) to obtain a multichip electron source. By controlling the separate electron sources with an external current control circuit, we were able to divide the desired total current evenly across the individual chips deployed in the PCB carrier. As a consequence, we were able to show decreased degradation due to the reduced current load per chip. First, a single electron source chip was measured without current regulation. A steady-state emission current of 1 mA with a high stability of ±1.3% at an extraction voltage of 250 V was observed. At this current level, a mean degradation slope of −0.7 μA/min with a nearly perfect transmission ratio of 99% ± 0.4% was determined. The measurements of a fully assembled multichip PCB-carrier electron source, using a current control circuit for regulation, showed that an even distribution of the desired total current led to decreased degradation. This was determined from the increase in the required extraction voltage over time. For this purpose, two current levels were applied to the electron source chips of the PCB carrier using an external current control circuit: first, a total current of 300 μA evenly distributed among the individual electron source chips, followed by the emission of 300 μA per electron source chip. This allowed us to observe the influence of a distributed versus a nondistributed total current carried by the electron source chips. We thereby obtained an increase in the mean degradation slope from +0.011 V/min (300 μA distributed) to +0.239 V/min (300 μA per chip), which is approximately 21 times higher. Moreover, our current control circuit improved the current stability to under 0.1% for both current levels, 300 μA distributed and 300 μA per chip.
BACKGROUND
Differentiation of high-flow from low-flow vascular malformations (VMs) is crucial for therapeutic management of this orphan disease.
OBJECTIVE
A convolutional neural network (CNN) was evaluated for differentiation of peripheral vascular malformations (VMs) on T2-weighted short tau inversion recovery (STIR) MRI.
METHODS
527 MRIs (386 low-flow and 141 high-flow VMs) were randomly divided into training, validation, and test sets for this single-center study. 1) The CNN's diagnostic performance was compared with that of two expert and four junior radiologists. 2) The influence of the CNN's prediction on the radiologists' performance and diagnostic certainty was evaluated. 3) The junior radiologists' performance after self-training was compared with that of the CNN.
RESULTS
Compared with the expert radiologists the CNN achieved similar accuracy (92% vs. 97%, p = 0.11), sensitivity (80% vs. 93%, p = 0.16) and specificity (97% vs. 100%, p = 0.50). In comparison to the junior radiologists, the CNN had a higher specificity and accuracy (97% vs. 80%, p < 0.001; 92% vs. 77%, p < 0.001). CNN assistance had no significant influence on their diagnostic performance and certainty. After self-training, the junior radiologists' specificity and accuracy improved and were comparable to that of the CNN.
CONCLUSIONS
Diagnostic performance of the CNN for differentiating high-flow from low-flow VMs was comparable to that of expert radiologists. The CNN did not significantly improve the simulated daily practice of junior radiologists; self-training was more effective.
Analysis and Improvement of Engineering Exams Toward Competence Orientation by Using an AI Chatbot
(2024)
ChatGPT is currently one of the most advanced general chatbots. This development leads to diverse challenges in higher education, such as new forms of teaching and learning, additional exam methods, new possibilities for plagiarism, and many more topics. On the other hand, with the development of advanced AI tools, pure knowledge will become less and less important, and demands from industry will shift toward graduates with higher competencies. Education therefore has to be changed from knowledge-centered toward competence-centered. The goal of this article is to use ChatGPT for analyzing and improving the competence orientation of exams in engineering education. The authors use ChatGPT to analyze exams from different engineering subjects to evaluate the performance of this chatbot and draw conclusions about the competence orientation of the tested exams. The obtained information is used to develop ideas for increasing the competence orientation of exams. The analysis shows that ChatGPT performs well mainly where knowledge is tested. It has, however, far greater problems with transfer questions or tasks where students need creativity or complex insights to find new solutions. Based on this result, exams and also lectures can be optimized toward competence orientation.
The article presents the process of developing a silicon electron source designed for high-vacuum microelectromechanical system (HV MEMS) devices, i.e., MEMS electron microscope and MEMS x-ray source. Technological constraints and issues of such an electron source are explained. The transition from emitters made of carbon nanotubes to emitters made of pure silicon is described. Overall, the final electron source consists of a silicon tip emitter and a silicon gate electrode integrated on the same glass substrate. The source generates an electron beam without any carbon nanotube coverage. It generates a high and stable electron current and works after the final bonding process of an HV MEMS device.
Microchips are used intensively in almost all modern electronic devices. With the continuous advancement of technology, they are smaller than ever before. They generate high-intensity heat loads that must be transported effectively for the chips to function properly. Heat pipes have proven very effective in transporting relatively large heat loads from miniature components. They are seamless structures containing a working fluid capable of evaporation and condensation at the working temperature of the electronic chips. The working fluid is driven from the condenser to the evaporator through multiple microgrooves by capillary forces. It is important that the condensate reaches the evaporator at a proper rate so that neither dry-out nor flooding occurs. In this work, we are particularly interested in capillary-driven flows in rectangular microchannels. A generalized model is developed for axisymmetric rectangular channels of arbitrary, moderately varying width profiles. It also accounts for any viscosity contrast between the liquid and the vapor under isothermal conditions. The model reduces to the special case of imbibition in straight, uniform microchannels, for which comparisons with experimental and modeling works show an excellent match. Cases representing linearly and quadratically varying converging/diverging width profiles have been explored. The viscosity ratio is found to have a significant influence on the rate at which the meniscus advances. The model also refutes the common practice in the literature of applying the formula developed for imbibition rates in capillary tubes to rectangular microchannels by replacing the tube diameter with the hydraulic diameter. The channel profile likewise has an influential effect on the imbibition rates. For tapered microchannels, the capillary force increases along the channel length, while it decreases for diverging ones. Interestingly, for quadratically tapered microchannels, the speed of the meniscus increases toward the end of the microchannel compared with linearly varying microchannels. For diverging microchannels, on the other hand, the meniscus slows due to the increase in cross-sectional area. Computational fluid dynamics (CFD) analysis has been conducted as a framework for confirmation and verification, and the very good match that was established builds confidence in the modeling approach.
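For reference, the classical Lucas-Washburn law for a capillary tube, i.e., the formula whose naive transfer to rectangular channels (via the hydraulic diameter) the abstract argues against, reads in its standard textbook form (not the paper's generalized model):

```latex
\ell(t) \;=\; \sqrt{\frac{\gamma\, r\, \cos\theta}{2\,\mu}\; t}
```

where ℓ(t) is the imbibition length, γ the surface tension, θ the contact angle, r the tube radius, and μ the liquid viscosity. The generalized model described above departs from this by allowing varying width profiles and a finite liquid-vapor viscosity ratio.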
Preliminary considerations on the form-finding of a tensegrity joint to be used in dynamic orthoses
(2024)
Small and medium-sized enterprises (SMEs) increasingly need to manage information technology (IT) effectively in order to remain competitive. However, compared to larger organizations, SMEs often face challenges in terms of resources and employer attractiveness, and regularly do not have the need to employ a Chief Information Officer (CIO) on a full-time basis. To address this issue, a growing number of global experts have begun to provide CIO services on a part-time basis for multiple clients simultaneously. This approach allows SMEs to tap into the expertise of experienced IT leaders at a fraction of the cost and without committing to long-term arrangements. While these professionals, known as “Fractional CIOs”, have proven their value in the field, there has been a lack of academic research on this emerging trend. Therefore, we carried out a comprehensive research project between 2020 and 2023, involving 62 Fractional CIOs from 10 countries. The research produced a definition, different types of engagements, and success factors for Fractional CIOs and their engagements. This paper summarizes these findings for a wider audience of academics and practitioners.
We investigate the influence of geometry and doping level on the performance of n-type silicon nanowire field emitters on silicon pillar structures. To this end, multiple cathodes with 50 × 50 pillar arrays (diameter: 5 μm, height: 30 μm, spacing: 50 μm) were fabricated and measured in diode configuration. In the first experiment, we compared two geometry types using the same material. Geometry 1 is black silicon, a highly dense, surface-covering forest of tightly spaced silicon needles resulting from self-masking during a plasma etching process of single-crystal silicon. Geometry 2 is silicon nanowires, i.e., individual spaced-out nanowires in a crownlike shape resulting from a plasma etching process of single-crystal silicon. In the second experiment, we compared two different silicon doping levels [n-type (P), 1–10 and <0.005 Ω cm] for the same geometry. The best performance was achieved with the lower-doped silicon nanowire samples, emitting 2 mA at an extraction voltage of 1 kV. The geometry/material combination with the best performance was used to assemble an integrated electron source. These electron sources were measured in a triode configuration and reached onset voltages of about 125 V and emission currents of 2.5 mA at extraction voltages of 400 V, while achieving electron transmission rates as high as 85.0%.
An Inexpensive UV-LED Photoacoustic-Based Real-Time Sensor System Detecting Exhaled Trace Acetone
(2024)
In this research, we present a low-cost system for breath acetone analysis based on UV-LED photoacoustic spectroscopy. We considered the end-tidal phase of exhalation, which represents the systemic concentrations of volatile organic compounds (VOCs) and thus provides clinically relevant information about human health. This is achieved via the development of a CO2-triggered breath sampling system, which collected alveolar breath over several minutes in sterile and inert containers. A real-time mass spectrometer is coupled to serve as a reference device for calibration measurements and subsequent breath analysis. The new sensor system provided a 3σ detection limit of 6.4 ppbV and a normalized noise equivalent absorption (NNEA) of 1.1×10⁻⁹ W cm⁻¹ Hz⁻¹/². Of the performed breath analysis measurements, 12 out of 13 fell within the error margin of the photoacoustic measurement system, demonstrating the reliability of the measurements in the field.
In the early-stage development of sheet metal parts, key design properties of new structures must be specified. As these decisions are made under significant uncertainty regarding drawing configuration changes, they sometimes result in the development of new parts that, at a later design stage, will not be drawable. As a result, there is a need to increase the certainty of experience-driven drawing configuration decisions.
Complementing this process with a global sensitivity analysis can provide insight into the impact of various changes in drawing configurations on drawability, unveiling cost-effective strategies to ensure the drawability of new parts. However, when quantitative global sensitivity approaches, such as Sobol's method, are utilized, the computational requirements for obtaining Sobol indices can become prohibitive even for small application problems. To circumvent computational limitations, we evaluate the applicability of different surrogate models engaged in computing global design variable sensitivities for the drawability assessment of a deep-drawn component.
Here, we show in an exemplary application problem that both a standard kriging model and an ensemble model can provide commendable results at a fraction of the computational cost. Moreover, we compare our surrogate models to existing approaches in the field. Furthermore, we show that the error introduced by the surrogate models is of the same order of magnitude as that arising from the choice of drawability measure. In consequence, our surrogate models can improve the cost-effective development of a component in the early design phase.
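A minimal sketch of the surrogate-based workflow the abstract describes: Sobol indices are computed on a cheap kriging surrogate instead of the expensive forming simulation. The quadratic test function and all parameter names are placeholders, not the paper's drawability model; library calls follow the commonly used SALib and scikit-learn APIs:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical design variables standing in for drawing-configuration parameters.
problem = {
    "num_vars": 3,
    "names": ["blank_thickness", "blankholder_force", "friction"],
    "bounds": [[0.8, 1.2], [50.0, 150.0], [0.05, 0.15]],
}

def expensive_simulation(x):
    # Placeholder for the deep-drawing simulation / drawability measure.
    return x[0] ** 2 + 0.5 * x[1] / 150.0 + np.sin(10.0 * x[2])

# Step 1: fit a kriging (Gaussian process) surrogate on a small design of experiments.
rng = np.random.default_rng(0)
X_train = rng.uniform([b[0] for b in problem["bounds"]],
                      [b[1] for b in problem["bounds"]], size=(50, 3))
y_train = np.array([expensive_simulation(x) for x in X_train])
surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# Step 2: evaluate the large Saltelli sample on the cheap surrogate only.
X_sobol = saltelli.sample(problem, 1024)
Y_sobol = surrogate.predict(X_sobol)

# Step 3: first-order and total Sobol indices.
Si = sobol.analyze(problem, Y_sobol)
print(Si["S1"], Si["ST"])
```

The point of the construction is that the Saltelli sample, whose size grows quickly with the number of design variables, never touches the expensive simulation.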
Control and automation of the urban infrastructure services offered to citizens and tourists are elementary parts of a smart city. Both, however, rely on a stable supply of data from sensors spread across the whole city, e.g., the fill-level sensors of waste bins needed for a waste management tool which we developed in collaboration with the Regensburg city council for the on-demand collection of waste bins. Europe has many historic cities like Regensburg with narrow streets and massive building walls, some made from granite and fieldstones, which often represent an insurmountable obstacle to wireless data transmission. The reduction of road traffic volume poses an additional challenge for city planners. By means of networked planning and simulation software, the situation, state, and efficiency of citywide logistic services can be monitored and optimized. In the course of such optimizations, we propose combining digital and logistic services. As an example, we show that monitoring state information, such as waste bin fill levels, can be accomplished using the same vehicles and the same planning software that are used for luggage transportation. Moreover, we describe how we adapted a solver for a variant of the TSP, namely the prize-collecting traveling salesman problem, to optimize route planning dynamically.
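Schematically, the penalty form of the prize-collecting TSP mentioned above can be stated as follows (a textbook-style formulation, not the authors' exact model):

```latex
\min_{S \subseteq V \setminus \{0\},\;\; T \text{ a tour through } S \cup \{0\}}
\quad \sum_{(i,j) \in T} c_{ij} \;+\; \sum_{i \notin S} \pi_i
```

Here c_ij are travel costs, π_i is the penalty for skipping node i (e.g., a waste bin not yet full), and 0 is the depot. A common variant additionally requires the collected prizes to meet a quota. The trade-off between detour cost and skip penalty is what lets the solver drop low-priority bins from a day's route.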
Aims
Human-computer interactions (HCI) may have a relevant impact on the performance of Artificial Intelligence (AI). Studies show that although endoscopists assessing Barrett’s esophagus (BE) with AI improve their performance significantly, they do not achieve the level of the stand-alone performance of AI. One aspect of HCI is the impact of AI on the degree of certainty and confidence displayed by the endoscopist. Indirectly, diagnostic confidence when using AI may be linked to trust and acceptance of AI. In a BE video study, we aimed to understand the impact of AI on the diagnostic confidence of endoscopists and the possible correlation with diagnostic performance.
Methods
22 endoscopists from 12 centers with varying levels of BE experience reviewed ninety-six standardized endoscopy videos. Endoscopists were categorized into experts and non-experts and randomly assigned to assess the videos with and without AI. Participants were randomized in two arms: Arm A assessed videos first without AI and then with AI, while Arm B assessed videos in the opposite order. Evaluators were tasked with identifying BE-related neoplasia and rating their confidence with and without AI on a scale from 0 to 9.
Results
The utilization of AI in Arm A (without AI first, with AI second) significantly elevated confidence levels for experts and non-experts (7.1 to 8.0 and 6.1 to 6.6, respectively). Only non-experts benefitted from AI with a significant increase in accuracy (68.6% to 75.5%). Interestingly, while the confidence levels of experts without AI were higher than those of non-experts with AI, there was no significant difference in accuracy between these two groups (71.3% vs. 75.5%). In Arm B (with AI first, without AI second), experts and non-experts experienced a significant reduction in confidence (7.6 to 7.1 and 6.4 to 6.2, respectively), while maintaining consistent accuracy levels (71.8% to 71.8% and 67.5% to 67.1%, respectively).
Conclusions
AI significantly enhanced confidence levels for both expert and non-expert endoscopists. Endoscopists felt significantly more uncertain in their assessments without AI. Furthermore, experts with or without AI consistently displayed higher confidence levels than non-experts with AI, irrespective of comparable outcomes. These findings underscore the possible role of AI in improving diagnostic confidence during endoscopic assessment.
Heavy smoke development represents an important challenge for operating physicians during laparoscopic procedures and can potentially affect the success of an intervention due to reduced visibility and orientation. Reliable and accurate recognition of smoke is therefore a prerequisite for the use of downstream systems such as automated smoke evacuation systems. Current approaches distinguish between non-smoked and smoked frames but often ignore the temporal context inherent in endoscopic video data. In this work, we therefore present a method that utilizes the pixel-wise displacement between randomly sampled images and their preceding frames, determined with an optical flow algorithm, by providing the transformed magnitude of the displacement as an additional input to the network. Further, we incorporate the temporal context at evaluation time by applying an exponential moving average to the estimated class probabilities of the model output to obtain more stable and robust results over time. We evaluate our method on two convolution-based and one state-of-the-art transformer architecture and show improvements in the classification results over a baseline approach, regardless of the network used.
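The temporal smoothing at evaluation time reduces, in essence, to an exponential moving average over the per-frame class probabilities; a minimal sketch (the decay value is illustrative, not taken from the paper):

```python
import numpy as np

def ema_smooth(prob_sequence, decay=0.9):
    """Exponential moving average over per-frame class probabilities.

    prob_sequence: array of shape (num_frames, num_classes) from the classifier.
    Returns smoothed probabilities of the same shape.
    """
    smoothed = np.empty_like(prob_sequence)
    smoothed[0] = prob_sequence[0]
    for t in range(1, len(prob_sequence)):
        smoothed[t] = decay * smoothed[t - 1] + (1.0 - decay) * prob_sequence[t]
    return smoothed

# Example: a single noisy outlier frame no longer flips the smoke decision.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.85, 0.15], [0.8, 0.2]])
print(ema_smooth(probs))
```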
Real-time computational speed and a high degree of precision are requirements for computer-assisted interventions. Applying a segmentation network to a medical video processing task can introduce significant inter-frame prediction noise. Existing approaches can reduce inconsistencies by including temporal information but often impose requirements on the architecture or dataset. This paper proposes a method to include temporal information in any segmentation model and, thus, a technique to improve video segmentation performance without alterations during training or additional labeling. With Motion-Corrected Moving Average, we refine the exponential moving average between the current and previous predictions. Using optical flow to estimate the movement between consecutive frames, we can shift the prior term in the moving-average calculation to align with the geometry of the current frame. The optical flow calculation does not require the output of the model and can therefore be performed in parallel, leading to no significant runtime penalty for our approach. We evaluate our approach on two publicly available segmentation datasets and two proprietary endoscopic datasets and show improvements over a baseline approach.
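A minimal sketch of the Motion-Corrected Moving Average idea using OpenCV's Farnebäck optical flow: warp the previous averaged prediction into the geometry of the current frame, then blend. The OpenCV function names are real; the decay value and array shapes are illustrative assumptions:

```python
import cv2
import numpy as np

def mcma_update(prev_avg, current_pred, prev_gray, curr_gray, decay=0.8):
    """Motion-corrected moving average over per-pixel segmentation scores.

    prev_avg, current_pred: float32 maps of shape (H, W).
    prev_gray, curr_gray: uint8 grayscale frames used only for optical flow.
    """
    h, w = curr_gray.shape
    # Flow from the current frame to the previous one, so that sampling the
    # previous average at (x + flow) aligns it with the current geometry.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    aligned_prev = cv2.remap(prev_avg, map_x, map_y, cv2.INTER_LINEAR)
    return decay * aligned_prev + (1.0 - decay) * current_pred
```

Because the flow is computed from the raw frames alone, this step can indeed run in parallel with the segmentation model, which is what keeps the runtime penalty negligible.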
Training data for neural networks is often scarce in the medical domain, which often results in models that struggle to generalize and consequently show poor performance on unseen datasets. Generally, adding augmentation methods to the training pipeline considerably enhances a model's performance. Using the dataset of the Foot Ulcer Segmentation Challenge, we analyze two additional augmentation methods in the domain of chronic foot wounds: local warping of wound edges, and projection and blurring of shapes inside wounds. Our experiments show that improvements in the Dice similarity coefficient and Normalized Surface Distance metrics depend on a sensible selection of those augmentation methods.
Case study research is one of the most widely used research methods in Information Systems (IS). In recent years, an increasing number of publications have used case studies with few sources of evidence, such as single interviews per case. While there is much methodological guidance on rigorously conducting multiple case studies, it remains unclear how researchers can achieve an acceptable level of rigour for this emerging type of multiple case study with few sources of evidence, i.e., multiple mini case studies. In this context, we synthesise methodological guidance for multiple case study research from a cross-disciplinary perspective to develop an analytical framework. Furthermore, we calibrate this analytical framework to multiple mini case studies by reviewing previous IS publications that use multiple mini case studies to provide guidelines to conduct multiple mini case studies rigorously. We also offer a conceptual definition of multiple mini case studies, distinguish them from other research approaches, and position multiple mini case studies as a pragmatic and rigorous approach to research emerging and innovative phenomena in IS.
Generative deep learning approaches for the design of dental restorations: A narrative review
(2024)
Objectives:
This study aims to explore and discuss recent advancements in tooth reconstruction utilizing deep learning (DL) techniques. A review on new DL methodologies in partial and full tooth reconstruction is conducted.
Data/Sources:
PubMed, Google Scholar, and IEEE Xplore databases were searched for articles from 2003 to 2023.
Study selection:
The review includes 9 articles published from 2018 to 2023. The selected articles showcase novel DL approaches for tooth reconstruction, while those concentrating solely on the application or review of DL methods are excluded. The review shows that data is acquired via intraoral scans or laboratory scans of dental plaster models. Common data representations are depth maps, point clouds, and voxelized point clouds. Reconstructions focus on single teeth, using data from adjacent teeth or the entire jaw. Some articles include antagonist teeth data and features like occlusal grooves and gap distance. Primary network architectures include Generative Adversarial Networks (GANs) and Transformers. Compared to conventional digital methods, DL-based tooth reconstruction reports error rates approximately two times lower.
Conclusions:
Generative DL models analyze dental datasets to reconstruct missing teeth by extracting insights into patterns and structures. Through specialized application, these models reconstruct morphologically and functionally sound dental structures, leveraging information from the existing teeth. The reported advancements facilitate the feasibility of DL-based dental crown reconstruction. Beyond GANs and Transformers with point clouds or voxels, recent studies indicate promising outcomes with diffusion-based architectures and innovative data representations like wavelets for 3D shape completion and inference problems.
Clinical significance:
Generative network architectures employed in the analysis and reconstruction of dental structures demonstrate notable proficiency. The enhanced accuracy and efficiency of DL-based frameworks hold the potential to enhance clinical outcomes and increase patient satisfaction. The reduced reconstruction times and diminished requirement for manual intervention may lead to cost savings and improved accessibility of dental services.
Background:
With the prevalence of burnout among surgeons posing a significant threat to healthcare outcomes, the mental toughness of medical professionals has come to the fore. Mental toughness is pivotal for surgical performance and patient safety, yet research into its dynamics within a global and multi-specialty context remains scarce. This study aims to elucidate the factors contributing to mental toughness among surgeons and to understand how it correlates with surgical outcomes and personal well-being.
Methods:
Utilizing a cross-sectional design, this study surveyed 104 surgeons from English- and German-speaking countries using the Mental Toughness Questionnaire (MTQ-18) along with additional queries about their surgical practice and general life satisfaction. Descriptive and inferential statistical analyses were applied to investigate the variations in mental toughness across different surgical domains and its correlation with professional and personal factors.
Results:
The study found a statistically significant higher level of mental toughness in micro-surgeons compared to macro-surgeons and a positive correlation between mental toughness and surgeons' intent to continue their careers. A strong association was also observed between general life satisfaction and mental toughness. No significant correlations were found between the application of psychological skills and mental toughness.
Conclusion:
Mental toughness varies significantly among surgeons from different specialties and is influenced by professional dedication and personal life satisfaction. These findings suggest the need for targeted interventions to foster mental toughness in the surgical community, potentially enhancing surgical performance and reducing burnout. Future research should continue to explore these correlations, with an emphasis on longitudinal data and the development of resilience-building programs.
Digital Twin (DT) implementation in the Built Environment (BE) industry is still in its early stages. Aiming to increase knowledge about DT, this study analyzes how DT can be understood in the BE sector and investigates its different potential benefits and expected challenges. To do so, the Systematic Literature Review (SLR) approach was employed. Using 228 publications, the current study presents a proposed definition and structure for DT systems. The proposed structure is based on four main layers: physical, digital, application, and user layers. The study also classified the applications of DT into six groups: sustainability and environmental; facility management; safety, health, and risk management; structural performance; construction management; and architectural and urban-related applications. The challenges of DT implementation were likewise grouped into industry-related, social and organizational, economic, technological, and political and legal challenges. Based on the results, future research directions and practical recommendations are presented to support the successful deployment of the technology.
The present paper takes a novel approach to the production of fibre-reinforced thermoplastic tubes. The method begins with the raw materials, reinforcing fibre and thermoplastic granulate, which are processed into tapes through a newly developed direct impregnation process. This is followed by consolidation of the fibre-reinforced thermoplastic tubes using infrared (IR) emitters in the filament winding process. This process employs various angles and utilizes a rotatable consolidation axis. The winding process operates at a constant speed, addressing the challenge of bending the fibre-reinforced tapes in the angle-reversal areas near the tube ends. Experiments have confirmed that the process can run at speeds reaching approximately 470 mm/min. The design of the impregnation line takes into account the properties of the thermoplastic and the roving, allowing for a speed of up to 1 m/s.
A novel method for controlling the rebound behavior of small Al2O3 balls with a radius of 2.381 mm is presented. It uses different types of micro-structured surfaces of soft magnetoactive elastomers. These surfaces were fabricated via laser micromachining and include fully ablated surfaces as well as micrometer-sized lamellas with a fixed width of 90 µm, a height of 250 µm, and three different gap sizes (15, 60 and 105 µm). The lamellas can change their orientation from an edge-on to a face-on configuration according to the direction of the external magnetic field from a permanent magnet. The orientation of the external magnetic field significantly influences the rebound behavior of the balls, down to a coefficient of restitution e < 0.1. The highest relative change in the coefficient of restitution between the zero-field and face-on configurations is observed for lamellas with a gap of 60 µm. Other characteristics of the ball rebound, such as the penetration depth into a magnetoactive elastomer and the maximum deceleration, are investigated as well. The proposed method does not require a constant power supply due to the use of permanent magnets. It may find novel applications in the field of impact engineering.
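For context, the coefficient of restitution used above is the standard impact ratio (textbook definition, not specific to this paper):

```latex
e \;=\; \frac{v_{\text{rebound}}}{v_{\text{impact}}}
  \;=\; \sqrt{\frac{h_{\text{rebound}}}{h_{\text{drop}}}}
```

for a ball dropped from height h_drop that rebounds to height h_rebound; e = 1 corresponds to a perfectly elastic impact, e = 0 to a fully dissipative one.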
With the ongoing miniaturization of wireless devices, the importance of wearable textiles in the antenna segment has increased significantly in recent years. Due to the widespread utilization of wireless body sensor networks for healthcare and ubiquitous applications, the design of wearable antennas offers the possibility of comprehensive monitoring, communication, and energy harvesting and storage. This article reviews a number of properties and benefits, providing comprehensive background information and application ideas for the development of lightweight, compact, and low-cost wearable patch antennas. Furthermore, problems and challenges that arise are addressed. Since both electromagnetic and mechanical specifications must be fulfilled, textile and flexible antennas require an appropriate trade-off between materials, antenna topologies, and fabrication methods, depending on the intended application and environmental factors. This overview covers each of the above issues, highlighting research to date while correlating antenna topology, feeding techniques, textile materials, and contacting options for the defined application of wearable planar patch antennas.
Intervention with motivational emails can have a positive effect on course retention in e-learning. It is, however, not yet clear whether different forms of emails affect course retention and how students make progress while the emails are being sent. We therefore used a voluntary asynchronous online course with 206 students. Students were randomly divided into four groups: text–picture personalised email vs. text personalised email vs. generalised email vs. no email. Emails were sent weekly for 3 months. The results show that more students made progress in the text–picture personalised email group than in the control group. Students in all email groups progressed by more units than students in the control group. Only students in email groups completed the course, and only students in personalised email groups reacted to the emails. Emails were accepted by most students enrolled. The findings suggest that cost-effective and easily implemented emails can encourage students to progress from unit to unit.
As global demand for green hydrogen rises, potential hydrogen exporters move into the spotlight. However, the large-scale installation of on-grid hydrogen electrolysis for export can have profound impacts on domestic energy prices and energy-related emissions. Our investigation explores the interplay of hydrogen exports, the domestic energy transition, and temporal hydrogen regulation, employing a sector-coupled energy model of Morocco. We find substantial co-benefits of domestic climate change mitigation and hydrogen exports, whereby exports can reduce domestic electricity prices while mitigation reduces hydrogen export prices. However, increasing hydrogen exports quickly in a system that is still dominated by fossil fuels can substantially raise domestic electricity prices if green hydrogen production is not regulated. Surprisingly, temporal matching of hydrogen production lowers domestic electricity cost by up to 31% while the effect on exporters is minimal. This policy instrument can steer the welfare (re-)distribution between hydrogen-exporting firms, hydrogen importers, and domestic electricity consumers and thereby increases acceptance among actors.
Aims
Recent evidence suggests the possibility of intraprocedural phase recognition in surgical operations as well as endoscopic interventions such as peroral endoscopic myotomy and endoscopic submucosal dissection (ESD) by AI-algorithms. The intricate measurement of intraprocedural phase distribution may deepen the understanding of the procedure. Furthermore, real-time quality assessment as well as automation of reporting may become possible. Therefore, we aimed to develop an AI-algorithm for intraprocedural phase recognition during ESD.
Methods
A training dataset of 364,385 single images from 9 full-length ESD videos was compiled. Each frame was classified into one procedural phase. Phases included scope manipulation, marking, injection, application of electrical current, and bleeding. Each frame could be allocated to only one category. This training dataset was used to train a Video Swin transformer to recognize the phases. Temporal information was included via logarithmic frame sampling. Validation was performed using two separate ESD videos with 29,801 single frames.
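The logarithmic frame sampling mentioned above can be realized, for example, by looking back at exponentially growing temporal offsets. The abstract does not spell out the exact scheme, so the following is only a plausible sketch:

```python
import numpy as np

def log_sample_indices(current_idx, num_frames=8, base=2):
    """Indices of past frames at exponentially growing offsets.

    Yields a short, dense recent history plus a sparse long-range
    context for the video transformer.
    """
    offsets = base ** np.arange(num_frames)          # 1, 2, 4, 8, ...
    return np.clip(current_idx - offsets, 0, None)[::-1]

print(log_sample_indices(100))  # [ 0 36 68 84 92 96 98 99]
```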
Results
The validation yielded sensitivities of 97.81%, 97.83%, 95.53%, 85.01% and 87.55% for scope manipulation, marking, injection, electric application and bleeding, respectively. Specificities of 77.78%, 90.91%, 95.91%, 93.65% and 84.76% were measured for the same parameters.
Conclusions
The developed algorithm was able to classify full-length ESD videos on a frame-by-frame basis into the predefined classes with high sensitivities and specificities. Future research will aim at the development of quality metrics based on single-operator phase distribution.
Limitations in computer-assisted diagnosis include the lack of labeled data and the inability to model the relation between what experts see and what computers learn. Even though artificial intelligence and machine learning have demonstrated remarkable performances in medical image computing, their accountability and transparency level must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially for supporting the medical diagnosis. While deep learning techniques are broad, so that unseen information might help learn patterns of interest, human insights to describe objects of interest help in decision-making. This paper proposes a novel approach, DeepCraftFuse, to address the challenge of combining information provided by deep networks with visual-based features to significantly enhance the correct identification of cancerous tissues in patients affected by Barrett's esophagus (BE). We demonstrate that DeepCraftFuse outperforms state-of-the-art techniques on private and public datasets, reaching results of around 95% when distinguishing BE patients who are positive or negative for esophageal cancer.
Even though artificial intelligence and machine learning have demonstrated remarkable performances in medical image computing, their accountability and transparency level must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially for supporting the medical diagnosis. For this task, the black-box nature of deep learning techniques must somehow be lightened up to clarify their promising results. Hence, we aim to investigate the impact of the ResNet-50 deep convolutional design for Barrett's esophagus and adenocarcinoma classification. To this end, and aiming to propose a two-step learning technique, the output of each convolutional layer that composes the ResNet-50 architecture was trained and classified for further definition of the layers that provide the most impact in the architecture. We show that local information and high-dimensional features are essential to improve the classification for our task. Besides, we observed a significant improvement when the most discriminative layers expressed more impact in the training and classification of ResNet-50 for Barrett's esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem.
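A minimal sketch of the per-layer probing idea described above, using forward hooks on a torchvision ResNet-50 to collect intermediate feature maps that could each be fed to a separate classifier. The stage choice and global-average pooling are illustrative assumptions, not the authors' exact setup:

```python
import torch
import torchvision.models as models

resnet = models.resnet50()
resnet.eval()

features = {}

def save_output(name):
    def hook(module, inputs, output):
        # Global-average-pool each feature map into a flat descriptor.
        features[name] = output.mean(dim=(2, 3)).detach()
    return hook

# Probe the output of each residual stage.
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(resnet, name).register_forward_hook(save_output(name))

with torch.no_grad():
    resnet(torch.randn(1, 3, 224, 224))

for name, feat in features.items():
    print(name, feat.shape)  # layer1: 256-d ... layer4: 2048-d
```

Training a simple classifier on each such descriptor is one way to rank the layers by discriminative impact, which is the spirit of the two-step technique.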
Determinants of household electricity consumption measured by smart meters, identified by the authors in a scoping review, were analyzed for the example of Germany utilizing the 2018 Survey of Income and Expenditure. All variables identified in the scoping review were covered in the survey (number and type of appliances, sociodemographic, and dwelling-related aspects). One can therefore use this large representative data set to test these relationships for German households. Expenditure on electricity is considered an indicator of household electricity consumption. The determinants show weak to moderate correlations with energy expenditure in bivariate analyses. The multivariate analysis shows effects of household-specific, dwelling-related, and appliance-specific factors. Models considering only one aspect overestimate its effect. Thus, all three aspects should be considered simultaneously when explaining residential electricity consumption. The largest effects are found for electricity as the main energy source for heating, the number of household members, and their presence at home. While household structure plays an important part in explaining residential energy consumption, dwelling- and appliance-related aspects influence it as well. The latter aspects may be influenced by appropriate policy measures.
Aims
Artificial Intelligence (AI) systems in gastrointestinal endoscopy are narrow because they are trained to solve only one specific task. Unlike Narrow-AI, general AI systems may be able to solve multiple and unrelated tasks. We aimed to understand whether an AI system trained to detect, characterize, and segment early Barrett’s neoplasia (Barrett’s AI) is only capable of detecting this pathology or can also detect and segment other diseases like early squamous cell cancer (SCC).
Methods
120 white light (WL) and narrow-band imaging (NBI) endoscopic images from 60 patients (1 WL and 1 NBI image per patient) were extracted from the endoscopic database of the University Hospital Augsburg. Images were annotated by three expert endoscopists with extensive experience in the diagnosis and endoscopic resection of early esophageal neoplasias. An AI system based on the DeepLabV3+ architecture dedicated to early Barrett's neoplasia was tested on these images. The AI system was neither trained with SCC images nor had it seen the test images prior to evaluation. The overlap between the three expert annotations ("expert-agreement") was the ground truth for evaluating AI performance.
Results
Barrett's AI detected early SCC with a mean intersection over reference (IoR) of 92% when at least 1 pixel of the AI prediction overlapped with the expert-agreement. When the threshold was increased to 5%, 10%, and 20% overlap with the expert-agreement, the IoR was 88%, 85%, and 82%, respectively. The mean intersection over union (IoU), a metric quantifying the segmentation agreement between the AI prediction and the expert-agreement, was 0.45. The mean expert IoU as a measure of agreement between the three experts was 0.60.
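In symbols, with P the AI's predicted region and G the expert-agreement ground truth, the two metrics used above are (standard definitions implied by their names):

```latex
\mathrm{IoR} \;=\; \frac{|P \cap G|}{|G|}, \qquad
\mathrm{IoU} \;=\; \frac{|P \cap G|}{|P \cup G|}
```

IoR rewards covering the reference region regardless of over-segmentation, whereas IoU additionally penalizes prediction area outside the reference, which explains why the IoU values are markedly lower than the IoR values.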
Conclusions
In the context of this pilot study, the predictions of SCC by a Barrett's-dedicated AI showed some overlap with the expert-agreement. Therefore, features learned from Barrett's cancer-related training might also be helpful for SCC prediction. Our results allow different possible explanations. On the one hand, some Barrett's cancer features may generalize toward the related task of assessing early SCC. On the other hand, the Barrett's AI may be less specific to Barrett's cancer than a general predictor of pathological tissue. However, we expect to enhance the detection quality significantly by extending the training to SCC-specific data. The insights of this study open the way towards a transfer learning approach for more efficient training of AI to solve tasks in other domains.
Aims
While AI has been successfully implemented in detecting and characterizing colonic polyps, its role in therapeutic endoscopy remains to be elucidated. Especially third space endoscopy procedures like ESD and peroral endoscopic myotomy (POEM) pose a technical challenge and the risk of operator-dependent complications like intraprocedural bleeding and perforation. Therefore, we aimed at developing an AI-algorithm for intraprocedural real time vessel detection during ESD and POEM.
Methods
A training dataset consisting of 5470 annotated still images from 59 full-length videos (47 ESD, 12 POEM) and 179,681 unlabeled images was used to train a DeepLabV3+ neural network with the ECMT semi-supervised learning method. Evaluation of the vessel detection rate (VDR) and vessel detection time (VDT) of 19 endoscopists with and without AI support was performed using a testing dataset of 101 standardized video clips with 200 predefined blood vessels. Endoscopists were stratified into trainees and experts in third space endoscopy.
Results
The AI algorithm had a mean VDR of 93.5% and a median VDT of 0.32 seconds. AI support was associated with a statistically significant increase in VDR from 54.9% to 73.0% for trainees and from 59.0% to 74.1% for experts. VDT significantly decreased from 7.21 sec to 5.09 sec for trainees and from 6.10 sec to 5.38 sec for experts with AI support. False positive (FP) readings occurred in 4.5% of frames. FP structures were detected for a significantly shorter time than true positives (0.71 sec vs. 5.99 sec).
Conclusions
AI improved VDR and VDT of trainees and experts in third space endoscopy and may reduce performance variability during training. Further research is needed to evaluate the clinical impact of this new technology.
Background and objective
Due to the high prevalence of dental caries, fixed dental restorations are regularly required to restore compromised teeth or replace missing teeth while retaining function and aesthetic appearance. The fabrication of dental restorations, however, remains challenging due to the complexity of the human masticatory system as well as the unique morphology of each individual dentition. Adaptation and reworking are frequently required during the insertion of fixed dental prostheses (FDPs), which increases cost and treatment time. This article proposes a data-driven approach for the partial reconstruction of occlusal surfaces based on a data set that comprises 92 3D mesh files of full dental crown restorations.
Methods
A Generative Adversarial Network (GAN) is considered for the given task in view of its ability to represent extensive data sets in an unsupervised manner with a wide variety of applications. Having demonstrated good capabilities in terms of image quality and training stability, StyleGAN-2 has been chosen as the main network for generating the occlusal surfaces. A 2D projection method is proposed in order to generate 2D representations of the provided 3D tooth data set for integration with the StyleGAN architecture. The reconstruction capabilities of the trained network are demonstrated by means of 4 common inlay types using a Bayesian image reconstruction method. This involves pre-processing the data in order to extract the necessary information of the tooth preparations required for the used method, as well as modifying the initial reconstruction loss.
Results
The reconstruction process yields satisfactory visual and quantitative results for all preparations, with a root mean square error (RMSE) ranging from 0.02 mm to 0.18 mm. When compared against a clinical procedure for CAD inlay fabrication, the group of dentists preferred the GAN-based restorations for 3 of the total 4 inlay geometries.
Conclusions
This article shows the effectiveness of the StyleGAN architecture with a downstream optimization process for the reconstruction of 4 different inlay geometries. The independence of the reconstruction process from the initial training of the GAN enables the application of the method to arbitrary inlay geometries without time-consuming retraining of the GAN.
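One plausible way to realize the 2D projection step described above is a simple z-buffer: rasterize the mesh vertices onto a pixel grid and keep the maximum height per cell, yielding a depth-map image that a 2D GAN such as StyleGAN-2 can consume. A sketch under that assumption (the grid size and the random point cloud are placeholders, not the paper's method):

```python
import numpy as np

def mesh_to_depth_map(vertices, resolution=256):
    """Project 3D vertices (N, 3) onto a 2D height map via a max z-buffer.

    Assumes the occlusal surface is viewed along the z-axis.
    """
    xy = vertices[:, :2]
    z = vertices[:, 2]
    # Normalize x/y into pixel coordinates.
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    pix = ((xy - mins) / (maxs - mins) * (resolution - 1)).astype(int)
    depth = np.full((resolution, resolution), np.nan)
    for (col, row), height in zip(pix, z):
        if np.isnan(depth[row, col]) or height > depth[row, col]:
            depth[row, col] = height  # keep the highest point per pixel
    return depth

# Example with a random point cloud standing in for a crown mesh.
verts = np.random.rand(5000, 3)
print(mesh_to_depth_map(verts).shape)  # (256, 256)
```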
Aims
Endoscopic retrograde cholangiopancreaticography (ERCP) is the gold standard in the diagnosis and treatment of diseases of the pancreatobiliary tract. However, it is technically complex and has a relatively high complication rate. In particular, cannulation of the papillary ostium remains challenging. The aim of this study is to examine whether a deep-learning algorithm can reliably detect the major duodenal papilla and, in particular, the papillary ostium, and could therefore be a valuable tool for inexperienced endoscopists, particularly in training situations.
Methods
We analyzed a total of 654 retrospectively collected images of 85 patients. Both the major duodenal papilla and the ostium were then segmented. Afterwards, a neural network was trained using a deep-learning algorithm. A 5-fold cross-validation was performed. Subsequently, we ran the algorithm on 5 prospectively collected videos of ERCPs.
Results
5-fold cross-validation on the 654 labeled images resulted in an F1 value of 0.8007, a sensitivity of 0.8409, and a specificity of 0.9757 for the class papilla, and an F1 value of 0.5724, a sensitivity of 0.5456, and a specificity of 0.9966 for the class ostium. Averaged over both classes (papilla and ostium), the F1 value was 0.6866, the sensitivity 0.6933, and the specificity 0.9861. In 100% of cases, the AI-detected localization of the papillary ostium in the prospectively collected videos corresponded to the localization of the cannulation performed by the endoscopist.
Conclusions
In the present study, the neural network was able to identify the major duodenal papilla with a high sensitivity and high specificity. In detecting the papillary ostium, the sensitivity was notably lower. However, when used on videos, the AI was able to identify the location of the subsequent cannulation with 100% accuracy. In the future, the neural network will be trained with more data. Thus, a suitable tool for ERCP could be established, especially in the training situation.
Background:
Stroke, as a cause of disability in adulthood, creates an increasing demand for therapy and care services, including telecare and teletherapy.
Objectives:
The aim of the study is to analyse the acceptance of telepresence robotics and digital therapy applications.
Methods:
Longitudinal study with a before-and-after survey of patients, relatives, and care and therapy staff.
Results:
Acceptance of the analysed technology is high in all three groups. Although acceptance among patients declined in some cases in the second survey, after they had used telerobotics, approval ratings remained high overall. For patients, no significant correlation was found between general technology acceptance and acceptance of the use of telerobotics.
Conclusion:
Accepted new telecare and teletherapies can be offered with the help of telepresence robotics. This requires knowledge of and experience with the technology.
In a variety of tomographic applications, data cannot be fully acquired, leading to severely underdetermined image reconstruction problems. Conventional methods then produce reconstructions with significant artifacts. In order to remove these artifacts, regularization methods have to be applied that incorporate additional information. An important example is TV reconstruction, which is well known to efficiently compensate for missing data and to reduce reconstruction artifacts. At the same time, however, tomographic data is contaminated by noise, which poses an additional challenge. A single regularizer within a variational regularization framework must therefore account for both the missing data and the noise, and may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction over different scales, where ℓ1 curvelet regularization methods work well. To address this issue, in this paper we introduce a novel variational regularization framework that combines the advantages of two different regularizers. The basic idea of our framework is to perform the reconstruction in two stages, where the first stage mainly aims at accurate reconstruction in the presence of noise, and the second stage aims at artifact reduction. Both reconstruction stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet-TV approach. We define and implement a curvelet transform adapted to the limited-view problem and demonstrate the advantages of our approach in a series of numerical experiments in this context.
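Schematically, the two-stage scheme with its data proximity condition can be written as follows, with A the limited-view forward operator, y the noisy data, and Ψ the curvelet transform (a simplified sketch of the framework, not the paper's exact functional):

```latex
% Stage 1: noise-robust reconstruction with curvelet sparsity
x_1 \in \arg\min_{x} \; \tfrac{1}{2}\,\|Ax - y\|_2^2 \;+\; \alpha\,\|\Psi x\|_1

% Stage 2: artifact reduction, tied to stage 1 by a data proximity condition
x_2 \in \arg\min_{x} \; \mathrm{TV}(x)
\quad \text{s.t.} \quad \|Ax - A x_1\|_2 \le \delta
```

The proximity tolerance δ controls how far the TV-regularized second stage may drift from the data-consistent first-stage reconstruction.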
Most common model-based reconstruction schemes in magnetic particle imaging (MPI) use idealized assumptions, e.g., an ideal field-free-line (FFL) topology. However, the magnetic fields generated in real MPI scanners have distortions; model-based approaches therefore often lead to inaccurate reconstructions that may contain artifacts. In order to improve the reconstruction quality in MPI, more realistic MPI models need to be derived. In the present work, we address this problem and present a hybrid model for MPI that allows us to incorporate real measurements of the applied magnetic fields. We will explain that the measurements needed to set up a model for the magnetic fields can be obtained in a novel calibration procedure that is independent of the resolution and much less time-consuming than the one employed in measurement-based MPI reconstructions. We will also present a discretization strategy for this model that can be used in the context of algebraic reconstructions. The presented approach was validated on simulated data in [1]; its evaluation on real data is a topic for future research.
In various fields of application, inverse problems are characterized by their sensitivity to data perturbations, which can cause severe reconstruction errors. Hence, regularization procedures are employed in order to ensure stability and reconstruction quality. To overcome limitations of classical approaches such as the filtered singular value decomposition (SVD), frame-based diagonalization methods have been studied in recent years, e.g., the wavelet-vaguelette decomposition (WVD). While these methods can be well adapted to the problem at hand, it is well known that the lack of translation invariance in multiscale systems can cause specific artifacts in the recovered object. To overcome these drawbacks, we use the translation-invariant diagonal frame decomposition (TI-DFD) of linear operators. For illustration, we construct a TI-WVD for the one-dimensional integration operator and confirm our theoretical findings by numerical simulations.
Due to the complexity and the number of factors involved in factory layout planning, computers were identified as an efficient tool to support the process. However, so far no method for computer-aided layout planning has gained wide acceptance in practical application. One reason for this is that in present approaches either the user or the computer designs the layout, neglecting either the qualitative or the quantitative goals. To bridge this gap, this article introduces a concept for human-computer integration based on evaluative feedback and inverse reinforcement learning. A key element of the concept is the interactive planning process in which user and computer alternately design and improve the layout until a satisfactory layout is found. The user evaluates the layouts according to qualitative criteria, adjusts them intuitively, and specifies objectives and restrictions in an explorative way. The computer, in the form of a reinforcement learning algorithm, generates possible layouts and incorporates the user's feedback into its policy. This synergy is expected to generate better results than an expert or an algorithm alone could. Furthermore, in the context of learning factories, it encourages critical thinking and allows students to develop a deeper understanding of the factors that contribute to efficient manufacturing processes. An architecture for the implementation is proposed, and the requirements for the user interface are specified.
In this paper, we present a new approach to determine the estimated time of arrival (ETA) for bus routes using (Deep) Graph Convolutional Networks (DGCNs). In addition, we use the same DGCN to detect detours within a route. In our application, a classification of routes and their underlying graph structure is performed using Graph Learning. Our model leads to a fast prediction and avoids solving the vehicle routing problem (VRP) through expensive computations. Moreover, we describe how to predict travel time for all routes using the same DGCN model. This makes it possible to avoid a more computationally intensive approximation algorithm when determining long travel times with many intermediate stops and instead use our network for an early estimate of the quality of a route. Long travel times in our case result from the use of a call-bus system, which must distribute many passengers among several vehicles and can take them to places without a regular stop. For a case study, the rural town of Roding in Bavaria is used. Our training data for this area results from an approximation algorithm that we implemented to optimize routes and, simultaneously, to generate an archive of routes of varying quality.
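As background, a single graph-convolution layer of the kind such networks build on follows the standard propagation rule H' = σ(Â H W); the following numpy sketch is a generic single-layer illustration, not the DGCN architecture from the paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One generic graph-convolution layer: H' = relu(A_hat @ H @ W),
    where A_hat is the symmetrically normalized adjacency with self-loops."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)       # ReLU activation

# Toy route graph with 4 stops and 3 input features per node.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = np.random.default_rng(0).normal(size=(4, 3))
W = np.random.default_rng(1).normal(size=(3, 2))
H_next = gcn_layer(A, H, W)
```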
The third-order nonlinear susceptibility of silica glass is measured via self-phase modulation in standard single-mode fibers at a wavelength of 1550 nm. To minimize the influence of polarization state changes along the propagation, only meter-long fibers were investigated. With pulse durations of picoseconds, a quasi-instantaneous nonlinearity with ultrafast electronic and fast nuclear-vibration contributions produces, under conditions of negligible dispersion, a classic and clean nonlinear phase shift following exactly the shape of the pulse power. The complex pulse envelope was retrieved from frequency-resolved optical gating spectrograms. The nonlinear fiber parameter γ could be determined with an accuracy of 3.7 percent. Considering the mode field structure and the doping influence, the nonlinear refractive index of silica glass as the fiber base material was found to be n₂ = 2.22·10⁻¹⁶ cm²/W ± 6.0% for picosecond-long pulses. Comparing nonlinear phase shifts from linearly and circularly polarized light, a nuclear-vibration contribution to the cubic fiber nonlinearity of 25 percent was estimated.
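For reference, the textbook self-phase modulation relations underlying such a measurement are, under negligible dispersion:

\[
\phi_{\mathrm{NL}}(t) = \gamma\, P(t)\, L, \qquad \gamma = \frac{2\pi n_2}{\lambda\, A_{\mathrm{eff}}},
\]

with pulse power P(t), fiber length L, wavelength λ and effective mode area A_eff.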
In this paper, we present a new Hybrid Genetic Search (HGS) algorithm for solving the Capacitated Vehicle Routing Problem for Pickup and Delivery (CVRPPD) as it is required for public transport in rural areas. One of the biggest peculiarities here is that a large area has to be covered with as few vehicles as possible. The basic idea of this algorithm is based on a more general version of HGS, which we adapted to solve the CVRPPD in rural areas. It also implements improvements that accelerate the algorithm and thereby lead to faster generation of the fastest route. We tested the algorithm on real road data from Roding, a rural district in Bavaria, Germany. Moreover, we designed an API for converting data from the Openrouteservice, so that our algorithm can be applied to real-world examples as well.
Quantum Machine Learning: Foundation, New Techniques, and Opportunities for Database Research
(2023)
In the last few years, the field of quantum computing has experienced remarkable progress. Prototypes of quantum computers already exist and have been made available to users through cloud services (e.g., IBM Q experience, Google quantum AI, or Xanadu quantum cloud). While fault-tolerant and large-scale quantum computers are not available yet (and may not be for a long time, if ever), the potential of this new technology is undeniable. Quantum algorithms have the proven ability to either outperform classical approaches for several tasks, or cannot be efficiently simulated by classical means under reasonable complexity-theoretic assumptions. Even imperfect current-day technology is speculated to exhibit computational advantages over classical systems. Recent research is using quantum computers to solve machine learning tasks. Meanwhile, the database community has already successfully applied various machine learning algorithms for data management tasks, so combining the fields seems to be a promising endeavour. However, quantum machine learning is a new research field for most database researchers. In this tutorial, we provide a fundamental introduction to quantum computing and quantum machine learning and show the potential benefits and applications for database research. In addition, we demonstrate how to apply quantum machine learning to the join order optimization problem for databases.
Internal transport systems are an essential part of intralogistics in production and distribution facilities. These are characterized by a variety of technologies as well as a multitude of interactions with other processes, such as warehouse, picking, and production processes. Therefore, resource planning and control of these systems is complex, especially for discontinuous conveyors. In this task, users can be supported in decision-making by Digital Twins, as these are suitable for investigating both future system states and possible actions. However, relevant use cases that are generally applicable across sectors, as well as a generic system architecture for Digital Twins for resource planning and process control of in-plant transport systems, have not yet been sufficiently investigated. In this paper, use cases are presented, relevant functions are defined and, finally, a generic functional and a logical reference architecture are described. This is conducted with the design science in information systems research method together with a Systems Engineering approach. The use cases were determined with industrial partners of the research project TwInTraSys, which explores Digital Twins for the planning and control of internal transport systems. They are generalized and thus also applicable to other production and distribution facilities in different sectors. Furthermore, the reference architecture can provide a basis for the successful implementation of the Digital Twin.
Influence of carbon content on the formation of TiC at diffusion bonded titanium-steel interface
(2023)
Hot pressing of pure Ti and various carbon steels in a temperature range of 950–1050 °C creates an up to 9 μm thick compound layer of TiC at the Ti/steel interface. The calculated activation energy for layer formation is 126.5–136.7 kJ/mol, independent of the steel's carbon content. As the carbon content of the steel increases, the layer thickness also increases, which provides enormous potential for the surface modification of Ti/Ti-alloys.
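Schematically, diffusion-controlled layer growth with such an activation energy follows a parabolic rate law with Arrhenius temperature dependence (a generic relation for orientation, not a fit reported in this work):

\[
d^2 = k\,t, \qquad k = k_0 \exp\!\left(-\frac{E_a}{R T}\right),
\]

with layer thickness d, annealing time t, pre-exponential factor k_0 and E_a ≈ 126.5–136.7 kJ/mol.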
The construction industry, supported by the materials industry, is a major user of natural resources. Automation and robotics have the potential to play a key role in the development of circular construction by increasing productivity, reducing waste, increasing safety, and mitigating labor shortages. Starting with a brief synopsis of the history of construction robotics and the concept of robot-oriented design, this article presents exemplary case studies of research projects and entrepreneurial activities in which the authors have participated that have contributed to the advancement of concrete construction. The activities of the authors have systematically led to spin-offs and start-ups, especially in recent years (e.g., CREDO Robotics GmbH, ARE23 GmbH, KEWAZO GmbH, ExlenTec Robotics GmbH, etc.), which shows that the use of construction robots is becoming an important part of the construction industry. With the use of automation and robotics in the built environment, especially for concrete construction, current challenges such as the housing shortage can be addressed using the leading machinery and robot technology in Germany and other parts of the world. The knowledge and know-how gained in these endeavors will lay the groundwork for the next frontier of construction robotics beyond the construction sites.
An effective method for on-demand control over the impact dynamics of droplets on a magnetoresponsive surface is reported. The surface comprises micrometer-sized lamellae of a magnetoactive elastomer on a copper substrate and is fabricated using laser micromachining. The orientation of the lamellae is switched from edge-on (orthogonal to the surface) to face-on (parallel to the surface) by changing the direction of a moderate (<250 mT) magnetic field. This simple actuation technique can significantly change the critical velocities of droplet rebound, deposition, and splashing. Rebound and deposition regimes can be switched up to Weber numbers We < 13 ± 3, while deposition and splashing can be switched in the range of 32 < We < 52. Because a permanent magnet is used, no permanent power supply is required for maintaining a particular regime of droplet impact. The presented technology is highly flexible and enables selective fabrication and actuation of microstructures on complex devices. It has great potential for applications in soft robotics, microfluidics, and advanced thermal management.
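For orientation, the Weber number quoted above compares the inertial and capillary forces of the impacting droplet:

\[
\mathrm{We} = \frac{\rho v^2 D}{\sigma},
\]

with liquid density ρ, impact velocity v, droplet diameter D and surface tension σ.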
It is shown that the advancing (ACA) and receding (RCA) contact angles of water on extremely soft (shear modulus of the order of 10 kPa) magnetoactive elastomer (MAE) films significantly depend on the applied magnetic field. The difference between these angles, known as the contact angle hysteresis, is examined. The roles of the filler concentration and material softness are elaborated. The highest change in the contact angle hysteresis (CAH) from 34° in the absence of magnetic field to 76° in a magnetic field of 0.4 T is achieved for the softest sample with the lowest mass fraction of iron particles (70 wt%). The dependence of the CAH on magnetization history (“magnetic hysteresis”) is observed. This magnetic hysteresis is clearly pronounced for the ACA and has little effect on the RCA. Magnetic field-induced changes of the surface roughness exhibit qualitatively the same hysteresis behavior with regard to the external magnetic field as the ACA. The results are promising for the development of smart surfaces for applications where the dynamic wetting has to be controlled.
The movement of a meniscus inside a capillary tube has been extensively studied in the context of displacing one fluid with another immiscible one. This phenomenon exists in many applications, including pharmaceutical, oil production, filtration and separation processes, and others. When one of the phases is entrapped inside a capillary tube, it forms what is called a ganglion, with two menisci between the two fluids. In a straight uniform capillary tube, a stagnant entrapped ganglion is symmetric. The situation is different if the capillary tube is tapered, in which case the two menisci assume different curvatures. Such inhomogeneity of the capillary pressure self-propels the ganglion to move. The fate of the ganglion inside the tapered tube depends on whether it is wetting or nonwetting to the tube wall. That is, after the initial movement, a wetting ganglion accelerates towards the tapered end of the tube while a nonwetting one decelerates towards the wider end before reaching a terminal configuration. These fates are linked to the variations of the capillary pressure, which continuously increases for a wetting ganglion and decreases for a nonwetting one. In this work, a generalized model is developed that not only describes capillary-driven dynamics over a wide range of viscosity and density contrasts but also pressure-driven scenarios with/without gravity. The model, however, neglects the inertial effect of the two fluids, since inertia is confined to the very early stage of the movement process. A first-order nonlinear ordinary differential equation is developed that describes the dynamic behavior of both wetting and nonwetting ganglia. A fourth-order Runge-Kutta algorithm is developed to solve the model equations. Furthermore, a computational fluid dynamics (CFD) analysis was used to provide a comparison and verification framework.
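Since the governing equation is a first-order nonlinear ODE, a generic fourth-order Runge-Kutta integrator of the kind used in the paper can be sketched as follows; the right-hand side f below is a toy placeholder, not the paper's ganglion model.

```python
import numpy as np

def rk4(f, x0, t):
    """Classic fourth-order Runge-Kutta for dx/dt = f(t, x)."""
    x = np.empty(len(t))
    x[0] = x0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(t[i], x[i])
        k2 = f(t[i] + h / 2, x[i] + h / 2 * k1)
        k3 = f(t[i] + h / 2, x[i] + h / 2 * k2)
        k4 = f(t[i] + h, x[i] + h * k3)
        x[i + 1] = x[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# Toy right-hand side standing in for the ganglion dynamics.
sol = rk4(lambda t, x: -0.5 * x, x0=1.0, t=np.linspace(0, 10, 101))
```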
Background:
Pediatricians are important sources of information for parents regarding their children's health. During the COVID-19 pandemic, pediatricians faced a variety of challenges regarding information uptake and transfer to patients, practice organization and consultations for families. This qualitative study aimed at shedding light on German pediatricians’ experiences of providing outpatient care during the first year of the pandemic.
Methods:
We conducted 19 semi-structured, in-depth interviews with pediatricians in Germany from July 2020 to February 2021. All interviews were audio recorded, transcribed, pseudonymized, coded, and subjected to content analysis.
Results:
Pediatricians felt able to keep up to date regarding COVID-19 regulations. However, staying informed was time consuming and onerous. Informing the patients was perceived as strenuous, especially when political decisions had not been officially communicated to pediatricians or if the recommendations were not supported by the professional judgment of the interviewees. Some felt that they were not taken seriously or adequately involved in political decisions. Parents were reported to consider pediatric practices as sources of information also for non-medical inquiries. Answering these questions was time consuming for the practice personnel and involved non-billable hours. Practices had to adapt their set-up and organization immediately to the new circumstances of the pandemic, which proved costly and laborious as well. Some changes in the organization of routine care, such as the separation of appointments for patients with acute infection from preventive appointments, were perceived as positive and effective by some study participants. Telephone and online consultations were established at the beginning of the pandemic and considered helpful for some situations, whereas for others these methods were deemed insufficient (e.g. for examinations of sick children). All pediatricians reported reduced utilization mainly due to a decline in acute infections. However, preventive medical check-ups and immunization appointments were reported to be mostly attended.
Conclusion:
Positive experiences of reorganizing pediatric practice should be disseminated as “best practices” in order to improve future pediatric health services. Further research could show how some of these positive experiences in reorganizing care during the pandemic can be maintained by pediatricians in the future.
Additive Manufacturing (AM) is a future-oriented manufacturing technology that is experiencing an enormous boom in the times of Industry 4.0. As a result, various AM technologies and printer models from different manufacturers are entering the market over a short time span. With the advancing establishment of this manufacturing technology for series applications, the expectations and requirements of the fabricated components are also increasing. However, a major challenge is the application-specific selection of the most suitable AM process due to a lack of comparable data. Furthermore, know-how regarding the geometrical and mechanical characteristics of AM parts is still lacking. This paper addresses this problem by comparing the three most common plastic-based AM processes in the areas of surface quality, dimensional accuracy, and mechanical properties. Roughness measurements, evaluation of a benchmark artifact, tensile tests, and load increase tests are carried out. Based on the results, the individual possibilities and limitations of the compared AM processes can be identified.
The determination of durability-relevant material resistances of concrete is of great importance. They serve as input to engineering models to predict the durability of structures under real environmental conditions. The natural resistances have to be determined in time-consuming experiments, since the processes in nature are very slow. This is particularly important for new materials where long-term experience is not yet available. Thus, accelerated testing is required. Only in this way can new materials be evaluated regarding their durability and subsequently be used in practical applications. For carbonation, a new R3 accelerated test method is presented in this contribution. An automated carbonation pressure chamber was developed. It consists of a pressure vessel, automated in such a way that it can apply gas overpressure of various intensities up to 8 bar to mortar and concrete samples. Simultaneously, it can control and regulate the ambient CO2 concentration from 0 to 99.5% in a fully automated and continuously variable procedure. Experiments were carried out with varying combinations of gas overpressure at a CO2 concentration of 3 vol.-% to achieve the most time-efficient carbonation of mortars and concretes. Mortars with different material compositions were used to evaluate the general suitability of the test procedure with the developed equipment. The automated carbonation pressure chamber enables reliable carbonation testing with a total duration under accelerated conditions of only 7 days.
Inverse problems are at the heart of many practical problems such as image reconstruction or nondestructive testing. A characteristic feature is their instability with respect to data perturbations. To stabilize the inversion process, regularization methods must be developed and applied. In this paper, we introduce the concept of filtered diagonal frame decomposition, which extends the classical filtered SVD to the case of frames. The use of frames as generalized singular systems allows a better match to a given class of potential solutions and is also beneficial for problems where the SVD is not analytically available. We show that filtered diagonal frame decompositions yield convergent regularization methods, derive convergence rates under source conditions and prove order optimality. Our analysis applies to bounded and unbounded forward operators. As a practical application of our tools, we study filtered diagonal frame decompositions for inverting the Radon transform as an unbounded operator on L²(ℝ²).
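Schematically, given a diagonal frame decomposition (u_λ, v_λ, κ_λ) of the forward operator and a regularizing filter φ_α, the filtered reconstruction takes the form (generic notation, sketched here by analogy with the filtered SVD, not quoted from the paper):

\[
x_\alpha = \sum_{\lambda \in \Lambda} \frac{\varphi_\alpha(\kappa_\lambda)}{\kappa_\lambda}\, \langle y, v_\lambda \rangle\, \bar{u}_\lambda,
\]

where (ū_λ) denotes a dual frame; when the frames reduce to singular bases, this recovers the classical filtered SVD.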
Older adults in long-term care homes are at high risk of experiencing
reduced quality of life (QoL) and depression. Technology-assisted biography work can have a positive impact on QoL and mood, but there is little research on its use with this target group. The purpose of this paper is to examine the effect of tablet-based biography work conducted by volunteers on the QoL of residents and volunteers. A pretest-posttest control group design with an intervention period of 3 months and a 3-month follow-up was used. Results show a significant increase in participation for volunteers and residents after
the intervention, which is stable for residents until follow-up. Volunteers also show significant improvement in mental QoL immediately after the intervention. There were no significant effects for life satisfaction, self-esteem, or depression. No significant changes were found for the control group. Digitally conducted tablet-based biography work appears to have effects on QoL-associated outcomes.
The glass industry is facing increased challenges regarding climate protection targets and rising energy costs. The integration of renewable energy, including conversion and storage, is a key to both challenges in this energy-intensive industrial sector, which has so far mainly relied on fossil gas. The options considered to this point for reducing CO2 emissions and switching to a renewable energy supply involve far-reaching changes to the established melting processes. This entails significant risks in terms of influences on glass quality and stable production volumes. The presented approach for the integration of a Power-to-Methane (PtM) system into the glass industry is a completely new concept and has not been considered in detail before. It allows the use of established oxyfuel melting processes, the integration of fluctuating renewable energy sources and a simultaneous reduction of CO2 emissions by more than 78%. At the same time, natural gas purchases become obsolete. A techno-economic evaluation of the complete PtM process shows that synthetic natural gas costs of 1.76 €/m³ (1.26 €/kg) are achievable with a renewable energy supply. Using electricity from the grid would require electricity prices below 0.126 €/kWh for the PtM process to be cost-competitive in the glass industry. Such electricity prices could be achieved by electricity market-based optimization and operation of the PtM system. This operation strategy would require AI-based algorithms predicting availabilities and prices on futures markets.
The expected lifespan of cement-based materials, particularly concrete, is at least 50 years. Changes in the pore structure of the material need to be considered due to external influences and associated transport processes. The expansion behaviour of concrete and mortar during freeze–thaw attacks, combined with de-icing salt agents, is crucial for both internal and external damage. It is essential to determine and simulate the expansion behaviour of these materials in the laboratory, as well as detect the slow, long-term expansion in real structures. This study measures the expansion of mortar samples during freeze–thaw loading using a high-resolution hand-held 3D laser scanner. The specimens are prepared with fully or partially saturated pore structures through water storage or drying. During freeze–thaw experiments, the specimens are exposed to pure water or a 3% sodium chloride solution (NaCl). Results show contraction during freezing and subsequent expansion during thawing. Both test solutions exhibit similar expansion behaviour, with differences primarily due to saturation levels. Further investigations are required to explore the changing expansion behaviour caused by increasing microcracking resulting from continuous freeze–thaw cycles. A numerical analysis using a 3D coupled hygro-thermo-mechanical (HTM) model is conducted to examine the freeze–thaw behaviour of the mortar. The model accurately represents the freezing deformation during the freeze–thaw cycle.
Increasing the lifetime of titanium implants through a diffusion-controlled surface treatment
(2023)
Performance test methods intend to provide a fast, accurate and precise determination of a particular building material property and thus determine the associated material performance. In concrete, various performance tests are used to classify existing materials or to approve new ones, to compare concrete compositions or to determine causes of damage in existing structures. The challenge of such test methods is to accelerate natural (very slow) mechanisms in order to determine the material performance precisely within a short time. However, the attack on the material must not be unrealistically intensive; it must represent reality, just in fast motion. The performance tests used to demonstrate the freeze-thaw resistance of concrete employ a 3% NaCl solution, although literature data ranging from 1% to 10% show that low concentrations can result in higher surface scaling. In this paper, mortar and concrete specimens are tested with 0, 1, 3, 6, and 9% NaCl solutions following the CDF procedure (DIN CEN/TS 12390-9:2017-05). The results are discussed against the background of the existing literature and show that the damage depends critically on the pore system and thus also on the effect of the micro-ice-lens pump. With increasing freeze-thaw exposure, the pessimum in the external damage shifts towards a de-icing salt concentration of 6%. Furthermore, a novel test methodology based on 3D laser scanning is presented to determine scaling accurately by eliminating side effects that are typically present in current standards.
Magnetoactive elastomers (MAEs) are promising materials for the realization of magnetic field-controlled soft actuators. Herein, a systematic investigation of magnetic field-induced macroscopic deformations of soft MAE cylinders with a diameter of 15 mm in uniform quasi-static magnetic fields directed parallel to the cylinder's axis is reported. The measurements were based on image processing. Thirty-six MAE samples differing in the weight fraction of the iron filler (70 wt%, 75 wt% and 80 wt%), the alignment of the filler particles, and the aspect ratio (0.2, 0.4, 0.6, 0.8, 1.0 and 1.2) were fabricated. MAE cylinders exhibited a high relative change in height (up to 35% in a field of 485 kA/m) and lateral contraction. The dependence of the maximum extensional strain on the aspect ratio was obtained and compared with theoretical considerations. A concave dent was formed on the free circular base in magnetic fields. This concavity was characterized experimentally. A significant volumetric strain of the order of magnitude of 10% was calculated in MAEs for the first time. In consecutively repeated magnetization cycles, the remanent extensional strain increased significantly after each cycle. The results are qualitatively discussed in the framework of modern views on the magnetically induced macroscopic deformations of MAEs. Directions of further research are outlined.
This contribution addresses the tensions surrounding political participation in the context of migration, against the background of the participation of underrepresented groups. It empirically examines the political participation of refugees in Bavaria as well as their attitudes towards democracy. The results are discussed in light of the current state of research, and recommendations are derived.
This work aims first to develop a dynamic lumped model for the isothermal reactions of hydrogen/steam with a single iron oxide/iron pellet inside a tubular reactor and to validate the model results against experimental reaction kinetic data obtained with our STA device. To describe the temporal change in mass and, consequently, the temporal heat of reaction, the shrinking core model, based on the geometrical contracting sphere, is applied. It turned out that the simulation model can reproduce the experimental, temporal concentration- and temperature-dependent conversion rates with a maximum deviation of 4.6% during the oxidation reactions and 3.1% during the reduction reactions. In addition, a measured isothermal storage process comprising one reduction and one oxidation phase with a holding phase in between on a single reacting pellet could be reproduced with a maximum absolute deviation in the conversion rate of 1.5%. Moreover, a lumped, non-isothermal simulation model for a pelletized tubular redox reactor including 2 kg of iron oxide pellets has been established, in which the heat of reaction, heat transfer to the ambient and heat transfer between the solid and gas phases are considered. The temporal courses of the outlet gas concentration as well as the temperatures of the gas stream and the solid material at a constant input gas flow rate and a constant reacting gas inlet concentration but different input gas temperatures are estimated. Because of the endothermic nature of the reduction reaction, the inlet reacting gas temperature must be kept high to prevent a severe temperature drop in the solid phase and, consequently, a significant reduction of the reaction rate. Contrary to that, the oxidation process requires lower input gas temperatures to avoid excessive overheating of the reaction mass and, consequently, sintering of the reacting pellets. Finally, five of the previous reactors have been connected in series to explore the influence of changing inlet gas temperatures and concentrations on the dynamic performance of each storage mass.
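For context, the contracting-sphere form of the shrinking core model relates the conversion X to time t through the standard rate expression (generic form; the paper's lumped model additionally includes concentration and temperature dependence):

\[
1 - (1 - X)^{1/3} = k\,t,
\]

where k is the effective, temperature-dependent rate constant.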
Absorber-free laser transmission welding enables clean and precise joining of plastics without additives or adhesives. It is therefore well suited to produce optical and medical devices, which place high demands on cleanliness and accuracy.
However, the weld usually has an undesirably large vertical expansion, causing bulges and distortion. To improve this, the intensity distribution of the laser beam as well as the processing strategy must be adapted. Due to the complexity involved, this is aided by process simulation. However, simulation parameters are usually calibrated and verified considering only the seam width and height, which is of limited significance. To overcome this, we propose a new method for image processing of microtome sections that determines the spatially resolved geometry of the weld. Thus, the deviation between experiment and simulation can be calculated pixel by pixel. This spatially resolved value is well suited for the calibration of the simulation parameters: for a parameter field with 18 different settings, the total deviation between experiment and simulation is less than 11 % after calibration.
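The pixel-by-pixel comparison described above can be illustrated with a minimal numpy sketch; the two binary masks are hypothetical stand-ins for the segmented experimental and simulated weld cross-sections, not the paper's image-processing pipeline.

```python
import numpy as np

def pixelwise_deviation(mask_exp, mask_sim):
    """Fraction of pixels where experiment and simulation disagree,
    evaluated on binary weld-seam masks of equal size."""
    assert mask_exp.shape == mask_sim.shape
    return np.mean(mask_exp != mask_sim)

# Hypothetical 200 x 300 pixel masks (True = pixel belongs to the weld).
rng = np.random.default_rng(0)
mask_exp = rng.random((200, 300)) < 0.3
mask_sim = rng.random((200, 300)) < 0.3
dev = pixelwise_deviation(mask_exp, mask_sim)  # ~0.42 for independent random masks
```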
Preface QDSM
(2023)
The first international workshop on Quantum Data Science and Management (QDSM), co-located with VLDB 2023, is centered around addressing the possibilities of quantum computing for data science and data management. Quantum computing is a relatively new and emerging field that is believed to have huge computational potential in the future. In the QDSM workshop, we want to provide a venue for discussing and publishing novel results of applying quantum computing to hard data science and data management problems. These problems include join order optimization, designing efficient quantum feature maps, studying possibilities of solving linear programs with quantum algorithms, and divergent index tuning with quantum machine learning. In addition, we include a short and visionary survey on quantum computing for databases. The workshop provides a platform for active discussion on these and related topics.
Inverse problems are inherently ill-posed and therefore require regularization techniques to achieve a stable solution. While traditional variational methods have well-established theoretical foundations, recent advances in machine-learning-based approaches have shown remarkable practical performance. However, the theoretical foundations of learning-based methods in the context of regularization are still underexplored. In this paper, we propose a general framework that addresses the current gap between learning-based methods and regularization strategies. In particular, our approach emphasizes the crucial role of data consistency in the solution of inverse problems and introduces the concept of data-proximal null-space networks as a key component for their solution. We provide a complete convergence analysis by extending the concept of regularizing null-space networks with data proximity in the visible part. We present numerical results for limited-view computed tomography to illustrate the validity of our framework.
Product sounds with clearly audible tonal components are often perceived as unpleasant or annoying. If different simultaneously operating aggregates are present in a system, for example vehicle engines and gearboxes, the interaction of tonal components, similar to music, can evoke additional sensations in human auditory perception. In addition to a pronounced tonality, such sounds can also yield distinct degrees of consonance or dissonance between tones. Previous studies showed that the perceived dissonance had a high impact on preference judgements for sounds with similar tonality. In the experiments of the present study, sounds that differed in tonality were rated with respect to the auditory sensations sharpness, tonality and dissonance by one group of participants, while another group only carried out a preference task. From these ratings, a model for predicting perceived preference is derived from the subjective judgements of auditory sensations. The performance of the preference predictions based on subjective judgements is compared against purely model-based predictions using different algorithms for acoustic attributes.
Owing to increasingly stringent emission limits, particulate filters have become mandatory for gasoline-engine vehicles. Monitoring their soot loading is necessary for error-free operation. The state-of-the-art differential pressure sensors suffer from inaccuracies due to small amounts of stored soot combined with exhaust gas conditions that lead to partial regeneration. As an alternative approach, radio-frequency-based (RF) sensors can accurately measure the soot loading, even under these conditions, by detecting soot through its dielectric properties. However, they face a different challenge, as their sensitivity may depend on the engine operating conditions during soot formation. In this article, this influence is evaluated in more detail. Various soot samples were generated on an engine test bench. Their dielectric properties were measured using the microwave cavity perturbation (MCP) method and compared with the corresponding sensitivity of the RF sensor determined on a lab test bench. Both showed similar behavior. The values for the soot samples themselves, however, differed significantly from each other. A way to correct for this cross-sensitivity was found in the influence of exhaust gas humidity on the RF sensor, which can be correlated with the engine load. By evaluating this influence during significant humidity changes, such as fuel cuts, it could be used to correct the influence of the engine on the RF sensor.
In process analytics or environmental monitoring, the real-time recording of the composition of complex samples over a long period of time presents a great challenge. Promising solutions are label-free techniques such as surface plasmon resonance (SPR) spectroscopy. They are, however, often limited due to poor reversibility of analyte binding. In this work, we introduce how SPR imaging in combination with a semi-selective functional surface and smart data analysis can identify small and chemically similar molecules. Our sensor uses individual functional spots made from different ratios of graphene oxide and reduced graphene oxide, which generate a unique signal pattern depending on the analyte due to different binding affinities. These patterns allow four purine bases to be distinguished after classification using a convolutional neural network (CNN) at concentrations as low as 50 μM. The validation and test set classification accuracies were constant across multiple measurements on multiple sensors using a standard CNN, which promises to serve as a future method for developing online sensors in complex mixtures.
Artificial neural networks (ANNs) are used in quantitative infrared gas spectroscopy to predict concentrations from multi-component absorption spectra. Training ANNs requires vast amounts of labelled training data, which may be elaborate and time-consuming to obtain. Additional data can be gained by the utilization of synthetically generated spectra, but at the cost of systematic deviations from measured data. Here, we present two approaches to train ANNs with a combination of comparatively small measured data sets and synthetically generated data. For the first approach, a neural network is trained hybridly with synthetically generated infrared absorption spectra of mixtures of N2O and CO and measured zero-gas spectra, taken with a mid-infrared dual comb spectrometer. This improves the mean absolute error (MAE) of the network predictions for zero-gas measurements from 0.46 to 0.01 ppmV for N2O and from 0.24 to 0.01 ppmV for CO, compared with the values previously observed for training with purely synthetic data. At the same time, a similar performance on spectra from gas mixtures of 0–100 ppmV N2O and 0–60 ppmV CO was achieved. For the second approach, an ANN pre-trained on synthetic infrared spectra of mixtures of acetone and ethanol is retrained on a small dataset consisting of 26 spectra taken with a mid-infrared photoacoustic spectrometer. In this case, the MAEs for the concentration predictions of ethanol and acetone are improved by 45 % and 20 %, respectively, in comparison to purely synthetic training. This shows the capability of using synthetically generated data in combination with small amounts of measured data to further improve neural networks for gas sensing, and the transferability between different sensing approaches.
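The hybrid training idea (mixing a large synthetic set with a small measured set) can be sketched generically as follows; the regressor, array shapes and the oversampling factor are placeholders chosen for illustration, not the networks or datasets of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: many synthetic spectra, few measured ones.
# Each row is a spectrum (128 channels), each target a pair of concentrations.
rng = np.random.default_rng(0)
X_syn, y_syn = rng.normal(size=(5000, 128)), rng.random((5000, 2))
X_meas, y_meas = rng.normal(size=(26, 128)), rng.random((26, 2))

# Hybrid training set: concatenate synthetic and measured samples,
# oversampling the measured data to increase its weight (factor is arbitrary).
X = np.vstack([X_syn, np.repeat(X_meas, 10, axis=0)])
y = np.vstack([y_syn, np.repeat(y_meas, 10, axis=0)])

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=200)
model.fit(X, y)
```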
The elite sport movement for athletes with hearing impairments, namely the Deaflympics, differs from the Paralympic and Olympic sport movements because it exhibits a variety of distinct sociocultural and organisational characteristics. Yet, mental training with Deaflympic athletes receives little to no attention from the scientific community. Little is known about sport psychology consultants' (SPCs) work with Deaflympic athletes. In this study, we explored SPCs' exposure to so-called Deaf sport as well as their experiences, attitudes, and assumptions regarding the utility of psychological skills training (PST) with Deaflympic athletes. A self-constructed questionnaire with closed and semi-open questions was completed by 93 (58.8% female) SPCs in European German-speaking countries. Analyses revealed SPCs had limited exposure to Deaflympic sport but indicated readiness to work with Deaflympic athletes. SPCs shared no reasons as to why PST skills and techniques would not be effective with Deaflympic athletes. However, SPCs regarded communication challenges as a major obstacle. We conclude that the integration of elite Deaf sport in SPCs' training programmes is vital, considering SPCs' lack of exposure to and experience with Deaflympic athletes as well as their communication insecurities. In addition, further empirical research on PST effectiveness with Deaflympic athletes should provide the foundation for the evidence-based utility of applied sport psychology in Deaflympic sport.
Despite the relevance and maturity of the Chief Information Officer (CIO) research field, no studies exist that exhaustively summarize the current body of knowledge, focusing on the development of the field over its entire timespan. The paper at hand addresses this research gap and presents an exhaustive literature review on the CIO research field using main path analysis. We identify the central papers in CIO research and eight main research streams by quantitatively and qualitatively analyzing 466 papers. We find that established research streams, e.g., 'Evolving role of the CIO' and 'CIO hierarchical position and relationships' as well as recently emerging research streams, e.g., 'CIO as business enabler' and 'CIOs and IT security,' draw growing attention. Based on our findings, we develop promising further avenues for research in the CIO field.
The average tenure of Chief Information Officers (CIOs) has increased over the past few years. Nevertheless, the average tenure of CIOs is shorter than that of Chief Executive Officers (CEOs). While most studies on tenure and background are based on data from US IT executives, studies on German CIOs are missing. This study analyzes the tenure of German CIOs as a proxy for management effectiveness and how certain factors influence it. An original and unique dataset of 384 IT executives from German companies is examined. The data include the size and industry sector of the companies, the educational and professional backgrounds of the CIOs, and the CIOs' reporting lines. Data were analyzed using the chi-square test and Fisher's exact test. The German CIOs had a median tenure of 4.0 years. However, if we examine executives who are currently in office and executives with a completed term of office separately, the median tenure differs. The results also show that German CIOs do not have shorter tenures than German CEOs. When compared with US CIOs, the results depend on the values selected for comparison. In addition, the analysis shows that neither the size and industry sector of the companies nor the educational and professional backgrounds of the CIOs and of the managers they report to have a statistically significant influence on the tenure of IT executives. The factors examined in this study can be considered as preconditions for the CIO position. In the future, factors that play a role during tenure should be examined.
In our experiments we grew electron emitting carbon nanostructures on tungsten tips. Subsequently, we transferred the growth process to pre-structured phosphorus-doped n-type silicon and obtained emitting carbon nanostructures directly grown on silicon. After growth of the nanostructures, the silicon field emitters showed increased emission currents of 76 nA at 1.1 kV (compared to 6 nA under the same conditions before growth).
Friction has long been an important issue in multibody dynamics. Static friction models apply appropriate regularization techniques to convert the stick inequality and the non-smooth stick–slip transition of Coulomb’s approach into a continuous and smooth function of the sliding velocity. However, a regularized friction force is not able to maintain long-term stick. That is why dynamic friction models were developed in recent decades. The friction force depends herein not only on the sliding velocity but also on internal states. Probably the best-known representative, the LuGre friction model, is based on a fictitious bristle but realizes an overly simple approximation. The recently published second-order dynamic friction model describes the dynamics of a fictitious bristle more accurately. It is based on a regularized friction force characteristic, which is continuous and smooth but can maintain long-term stick due to an appropriate shift in the regularization. Its performance is compared here to stick–slip friction models, developed and launched not long ago by commercial multibody software packages. The results obtained by a virtual friction test-bench and by a more practical festoon cable system are very promising. Thus, the second-order dynamic friction model may serve not only as an alternative to the LuGre model but also to commercial stick–slip models.
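For reference, one common form of the LuGre model mentioned above reads (textbook formulation with bristle state z, not the second-order model of the paper):

\[
\dot{z} = v - \frac{|v|}{g(v)}\, z, \qquad F = \sigma_0 z + \sigma_1 \dot{z} + \sigma_2 v, \qquad g(v) = \frac{1}{\sigma_0}\Big(F_C + (F_S - F_C)\, e^{-(v/v_s)^2}\Big),
\]

with Coulomb force F_C, stiction force F_S, Stribeck velocity v_s, bristle stiffness σ_0, bristle damping σ_1 and viscous coefficient σ_2.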
The Internet of Things (IoT) is an emerging computing paradigm providing new approaches to collect and analyze environmental data. However, as specific challenges arose, the paradigm of Edge Computing with its potential solution capabilities came into play. The combination of both paradigms is currently highly discussed in industry and research. This paper aims to contribute to this field by conducting a systematic literature review to examine the differences and the relation between IoT and Edge Computing on a meta-level. It first investigates conceptual backgrounds, use cases, and implementation types. After that, the differences between the paradigms are highlighted. It becomes clear that the significant distinction is in the architectural composition. However, the scientific consensus reveals that both paradigms have a common historical background, and Edge Computing is perceived as the next step in the evolution of IoT. Furthermore, Edge Computing-based systems can address common IoT challenges identified in the two paradigms’ problem-solution space. Ultimately, there is a need for further research in security, edge intelligence, and standardization, with Edge Computing frameworks able to address these in practice.
Lean IT
(2023)
Companies have applied Lean Management and its methods in their production functions for several decades. They also increasingly use Lean Management to improve service delivery, for example in their IT organizations, which is referred to as “Lean IT”. Lean IT finds widespread recognition in business practice, but corresponding academic research is still scarce. The paper at hand intends to shed light on the current perspectives on Lean IT from an academic and a practitioner point of view. The paper applies an innovative quantitative approach to literature analysis using a semantic entity annotator and a keyword analysis to systematically identify and compare topics academics and practitioners deem relevant in the context of Lean IT. We analyze practitioner media and scholarly articles published from January 2014 to June 2019. The analysis shows that research does not seem to adequately address the topics that are highly relevant for practitioners when it comes to Lean IT; e.g., issues pertinent to Automation, DevOps, the role of the CIO, IT Service Management or Scrum in the context of Lean IT are under-researched. Our analysis further shows that interest in Lean IT as a field is rising in both groups. Our study can help to guide further research activities.
Titanium is used in many areas due to its excellent mechanical, biological and corrosion-resistant properties. Implants often have thin and filigree structures, providing an ideal application for laser fine cutting. In the literature, the main focus is primarily on investigating and optimizing the parameters for titanium sheet thicknesses greater than 1 mm. Hence, in this study, the basic manufacturing parameters of laser power, cutting speed and laser pulsing of a 200 W modulated fibre laser are investigated for 0.15 mm thick titanium grade 2 sheets. A reproducible, continuous cut could be achieved using 90 W laser power and a cutting speed of 2. Pulse pause variations between 85 and 335 μs in 50 μs steps and a fixed pulse duration of 50 μs show that a minimum kerf width of 23.4 μm, as well as a minimum cut edge roughness Rz of 3.59 μm, is achieved at the lowest pulse pause. An increase in roughness towards the laser exit side, independent of the laser pulse pause, was found and discussed. The results provide initial process parameters for cutting thin titanium sheets and thus the basis for further investigations, such as the influence of cutting gas pressure and composition on the cut edge.
Three-dimensional Medical Printing and Associated Legal Issues in Plastic Surgery: A Scoping Review
(2023)
Three-dimensional printing (3DP) represents an emerging field of surgery. 3DP can facilitate the plastic surgeon’s workflow, including preoperative planning, intraoperative assistance, and postoperative follow-up. The broad clinical application spectrum stands in contrast to the paucity of research on the legal framework of 3DP. This imbalance poses a potential risk for medical malpractice lawsuits. To address this knowledge gap, we aimed to summarize the current body of legal literature on medical 3DP in the US legal system. By combining the promising clinical use of 3DP with its current legal regulations, plastic surgeons can enhance patient safety and outcomes.
In a distributed system, functionally equivalent nodes work together to form a system with improved availability, reliability and fault tolerance, the purpose being to achieve a common control objective. As multiple components cooperate to accomplish tasks, coordination between them is required. Electing a node as the temporary leader is one possible way to perform coordination. This work presents a self-stabilizing algorithm for the election of a leader in dynamically reconfigurable bus-topology-based broadcast systems with a message and time complexity of O(1). The election is performed dynamically, i.e., not only when the leader node fails, and is criterion-based. The criterion used is a performance-related value which evaluates the properties of the node regarding its ability to perform the tasks of the leader. The increased demands on the leader are taken into account, and a re-election is started when the criterion value drops below a predefined level. The goal here is to distribute the load more evenly and to reduce the probability of failure due to overload of individual nodes. For improved system availability and reduced fault rates, a management level consisting of leader, assistant and co-assistant is introduced. This reduces the number of required messages and the duration in the case of non-initial elections. For a further reduction of the messages required to uniquely determine a leader, the CAN protocol is exploited. The proposed algorithm selects a node with an improved failure rate and a reduced message and hence time complexity while satisfying the safety and termination constraints. The operation of the algorithm is validated using a hardware test setup.
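To illustrate the criterion-based idea only, the following self-contained sketch elects the node with the highest criterion value over a broadcast medium, with CAN-like deterministic tie-breaking by node id; it mimics the basic principle, not the published self-stabilizing protocol or its management level.

```python
# Minimal sketch: criterion-based leader election on a broadcast bus.
# Each node broadcasts (criterion, node_id); the best pair wins.
# This is a simplified illustration, not the published algorithm.

def elect_leader(nodes, threshold=0.0):
    """nodes: dict node_id -> criterion value (e.g., spare capacity).
    Returns the id of the node with the highest criterion; ties are
    broken deterministically by the lower node id (CAN-like priority)."""
    candidates = {nid: c for nid, c in nodes.items() if c >= threshold}
    if not candidates:
        return None  # would trigger a re-election with a relaxed threshold
    return max(candidates, key=lambda nid: (candidates[nid], -nid))

leader = elect_leader({1: 0.8, 2: 0.95, 3: 0.6})  # -> 2
```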
Dual front steering axles are quite common in multi-axle heavy-duty trucks. In standard layouts of such axle combinations, the steer motions of the wheels depend not only on the rotation of the steering wheel but also on the movements of the axles. As a consequence, the model complexity of the steering system should match the complexity of the suspension model. The development of new technologies like advanced driver assistance systems or autonomous driving can only be accomplished efficiently using extensive simulation methods. Such applications demand computationally efficient vehicle models. This paper presents a steering system model for dual front axles of heavy-duty trucks which supplements the suspension model of the axles. The model takes the torsional compliance of the steering column as well as the stiffness of the tie rods and the coupling rod into account. A quasi-static solution provides a straightforward computation, including the partial derivatives required for an efficient implicit solver. The steering system model matches perfectly with comparatively lean but sufficiently accurate multibody suspension models.
Friction has long been an important issue in multibody dynamics. Static friction models apply appropriate regularization techniques to convert the stick inequality and the non-smooth stick-slip transition of Coulomb’s approach into a continuous and smooth function of the sliding velocity. However, a regularized friction force is not able to maintain long-term stick. That is why dynamic friction models were developed in recent decades. The friction force depends herein not only on the sliding velocity but also on internal states. Probably the best-known representative, the LuGre friction model, is based on a fictitious bristle but realizes an overly simple approximation. The recently published second-order dynamic friction model describes the dynamics of a fictitious bristle more accurately. Its performance is compared here to stick-slip friction models, developed and launched not long ago by commercial multibody software packages.
To decrease the number of kilometers driven during the development of autonomous cars or driving assistance systems, performant simulation tools are necessary. Currently, domain distance effects between simulation and reality limit the successful application of rendering engines in data-driven perception tasks. In order to mitigate those domain distance effects, simulation tools have to be as close to reality as possible for the given task. For optical sensors like cameras, the luminance of the scene is essential. In this paper, we provide a method to measure the luminance of rendered scenes within CARLA, a widely used open-source simulation environment. Thereby, it is possible to validate the environment and weather models by taking real-world measurements with photometric sensors or with the help of open-source weather data, published e.g. by the German federal service for weather data (DWD - "Deutscher Wetterdienst"). Employing our proposed luminance measurement, the domain gap resulting from the simulation can be quantified, which makes it possible to evaluate statements about the safety of the automated driving system determined within the simulation. We show that the ratio between global and diffuse radiation modeled by the default atmosphere models within CARLA is, under limited conditions, similar to real-world measurements taken by the DWD. Nevertheless, we show that the ratio’s temporal variability in real-world situations is not modeled by CARLA.
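The core of such a measurement is converting a rendered RGB frame into (relative) luminance; a minimal numpy sketch, assuming a linear-RGB camera image from the simulator and the Rec. 709 luma weights, could look like this. Absolute photometric calibration (scaling to cd/m²) is a separate step not shown here.

```python
import numpy as np

def relative_luminance(rgb):
    """Relative luminance of a linear-RGB image (H x W x 3, values in [0, 1])
    using Rec. 709 weights; scaling to cd/m^2 requires a calibration factor."""
    weights = np.array([0.2126, 0.7152, 0.0722])
    return rgb @ weights

# Hypothetical rendered frame from the simulation environment.
frame = np.random.default_rng(0).random((600, 800, 3))
Y = relative_luminance(frame)
print(Y.mean())  # scene-averaged relative luminance
```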
This paper examines the conceptualization of sustainability in the context of information and communication technology (ICT) research. Through an inductive text analysis of sixteen literature reviews spanning from 2014 to 2023, key themes and concepts are identified, highlighting the complex relationship between ICT and sustainability. ICT is perceived both as an enabler and a problem for sustainability. Furthermore, the terminology and concept of sustainability in the context of ICT remain unclear. The emergence of digitalization as a novel socio-technical phenomenon poses additional challenges for conceptual alignment. While a holistic view of sustainability in ICT is desired, business and social implications receive less attention. The paper summarizes and discusses the developments in research on this topic over the past decade.
Control Oriented Mathematical Modeling of a Bidirectional DC-DC Converter - Part 1: Buck Mode
(2023)
Parallel connection of different batteries equipped with bidirectional DC-DC converters offers an increase of the total storage capacity, the provision of higher currents and an improvement of reliability and system availability. To share the load current among the DC-DC converters while maintaining the safe operating range of the batteries, appropriate controllers are needed. The basis for the design of these control approaches is knowledge of both the static and dynamic characteristics of the DC-DC converter used. In this paper, the small-signal analysis of a DC-DC converter in buck mode is shown using the circuit averaging technique. The paper gives an overview of all required transfer functions: the control and line-to-output transfer functions for CCM and DCM, relevant for average current mode control as well as for voltage control, are derived and their poles and zeros are determined. This provides the basis for stability considerations, analysis of the overall control structure and controller design.
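As an example of such a small-signal result, the control-to-output transfer function of an ideal buck converter in CCM is the textbook relation (shown here for orientation, not a result specific to this paper):

\[
G_{vd}(s) = \frac{\hat{v}_o(s)}{\hat{d}(s)} = \frac{V_g}{1 + s\,\frac{L}{R} + s^2 L C},
\]

with input voltage V_g, inductance L, capacitance C and load resistance R.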
It is common practice to use maximum FAST time-weighted sound pressure levels to assess transient impact noise, as these levels correlate well with human perception of impact noise. Maximum FAST time-weighted levels are known to be dependent on the reverberation time of the receiving room. In previous studies, an analytical correction term was developed using a Dirac impulse. The correction term is used to calculate the maximum FAST time-weighted levels from peak sound pressure levels. Peak levels are independent of the reverberation time of the room. Applying the correction term makes it possible to compare measurement results from different rooms. The correction term has been validated in several studies for the standard rubber impact ball. In this paper, the influence of the source signal (Dirac impulse) on the correction term is studied. Analytical and numerical models are employed to investigate the consequences of stretching the impulse in time and of changing its shape. The results are compared with empirical solutions developed in other studies.
With an atmospheric concentration of approximately 2000 parts per billion (ppbV, 10⁻⁹), methane (CH4) is the second most abundant greenhouse gas (GHG) in the atmosphere after carbon dioxide (CO2). The task of long-term and spatially resolved GHG monitoring to verify whether climate policy actions are effective is becoming more crucial as climate change progresses. In this paper, we report the CH4 concentration readings of our photoacoustic (PA) sensor over a five-day period at Hohenpeißenberg, Germany. As a reference device, a calibrated cavity ring-down spectrometer Picarro G2301 from the meteorological observatory was employed. Trace gas measurements with photoacoustic instruments promise to provide low detection limits at comparably low cost. However, PA devices are often susceptible to cross-sensitivities related to environmental influences. The obtained results show that relaxation effects due to fluctuating environmental conditions, e.g. ambient humidity, are a non-negligible factor in PA sensor systems. Applying algorithmic compensation techniques, which are capable of calculating the influence of relaxational effects on the photoacoustic signal, increases the accuracy of the photoacoustic sensor significantly. With an average relative deviation of 1.11 % from the G2301, the photoacoustic sensor shows good agreement with the reference instrument.
In this paper, the movement behavior of an amoeboid locomotion system is investigated, and a theoretical proof of the locomotion of the system is provided with the finite element method. It is shown that not only the speed of locomotion but also its direction can be influenced by the drive frequency. Depending on the drive frequency, a movement away from the home position and a subsequent movement in the opposite direction can be achieved. In addition, high speeds of movement can be achieved in a limited frequency range.
We present an industrial end-user perspective on the current state of quantum computing hardware for one specific technological approach, the neutral atom platform. Our aim is to assist developers in understanding how the specific properties of these devices affect the effectiveness of algorithm execution. Drawing on discussions with different vendors and on recent literature, we review the performance data of the neutral atom platform. Specifically, we focus on the physical qubit architecture, which affects state preparation, qubit-to-qubit connectivity, gate fidelities, the native gate instruction set, and individual qubit stability. These factors determine not only the quantum-part execution time and the end-to-end wall clock time relevant for end-users, but also the ability to perform fault-tolerant quantum computation in the future. We close with an overview of which applications have been shown to be well suited to the particular properties of neutral atom-based quantum computers.
The use of quantum processing units (QPUs) promises speed-ups for solving computational problems. Yet, current devices are limited by the number of qubits and suffer from significant imperfections, which prevents achieving quantum advantage. One approach towards practical utility is to apply hardware-software co-design methods. This can involve tailoring problem formulations and algorithms to the quantum execution environment, but also entails the possibility of adapting physical properties of the QPU to specific applications. In this work, we follow the latter path and investigate how the key figures circuit depth and gate count, required to solve four cornerstone NP-complete problems, vary with tailored hardware properties. Our results reveal that achieving near-optimal performance and properties does not necessarily require optimal quantum hardware; much simpler structures that can potentially be realised for many hardware approaches suffice. Using statistical analysis techniques, we additionally identify an underlying general model that applies to all subject problems. This suggests that our results may be universally applicable to other algorithms and problem domains, and that tailored QPUs can find utility outside their initially envisaged problem domains. The substantial possible improvements nonetheless highlight the importance of QPU tailoring for progressing towards practical deployment and scalability of quantum software.
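To hint at what fitting such a general scaling model can involve (the abstract does not specify the model class; the power-law form, the data and all values below are assumptions for illustration only), one could regress observed gate counts against problem size on a log-log scale:

import numpy as np

n = np.array([4, 6, 8, 10, 12, 14])              # assumed problem sizes
gates = np.array([90, 210, 400, 640, 980, 1350]) # synthetic gate counts

# Fit y = a * n^b by linear least squares in log-log space.
b, log_a = np.polyfit(np.log(n), np.log(gates), 1)
print(f"fitted model: gates ≈ {np.exp(log_a):.1f} * n^{b:.2f}")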
Quantum computers promise considerable speedups over classical approaches, which has raised interest from many disciplines. Since all currently available implementations suffer from noise and imperfections, achieving concrete speedups for meaningful problem sizes remains a major challenge, and imperfections and noise may remain present in quantum computing for a long while. Such limitations play no role in classical computing, and software engineers are typically not accustomed to considering them, even though they substantially influence core properties of software and systems. In this paper, we show how to model imperfections with an approach tailored to (quantum) software engineers. Using numerical simulations, we intuitively illustrate how imperfections influence core properties of quantum algorithms on NISQ systems, and show possible options for tailoring future NISQ machines to improve system performance in a co-design approach. Our results are obtained from a software framework that we provide in the form of an easy-to-use reproduction package. It does not require computer scientists to acquire deep physical knowledge of noise, yet provides tangible and intuitively accessible means of interpreting the influence of noise on common software quality and performance indicators.
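A minimal sketch of the kind of noise modelling meant here (not the paper's framework; single qubit, depolarizing noise only, an assumed per-gate error rate, and no quantum SDK) shows how gate imperfections degrade state fidelity with circuit depth:

import numpy as np

p = 0.01                                   # assumed per-gate error probability
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def depolarize(rho, p):
    # Depolarizing channel: mix the state with the maximally mixed state.
    return (1 - p) * rho + p * I2 / 2

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # density matrix of |0>
ideal = rho.copy()
for k in range(1, 51):
    rho = X @ rho @ X.conj().T                # noisy circuit step
    rho = depolarize(rho, p)
    ideal = X @ ideal @ X.conj().T            # noiseless reference
    if k % 10 == 0:
        fid = np.real(np.trace(ideal @ rho))  # fidelity w.r.t. pure reference
        print(f"after {k} gates: fidelity = {fid:.3f}")

Even this toy model reproduces the qualitative behaviour that matters to software engineers: fidelity, and with it the success probability of an algorithm, decays with circuit depth at a rate set by the hardware's error parameters.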
Quantum computing is a relatively new paradigm that has raised considerable interest in physics and computer science in general, but has so far received little attention in software engineering and architecture. Hybrid applications that consist of both quantum and classical components require the development of appropriate quantum software architectures. However, given that quantum software engineering (QSE) in general is a new research area, quantum software architecture, a sub-area of QSE, is also understudied. The goal of this chapter is to provide a list of research challenges and opportunities for such architectures. In addition, to make the content understandable to a broader computer science audience, we provide a brief overview of quantum computing and explain the essential technical foundations.
Recent advances in the manufacture of quantum computers have attracted much attention across a wide range of fields, as early-stage quantum processing units (QPUs) have become accessible. While contemporary quantum machines are very limited in size and capabilities, mature QPUs are speculated to eventually excel at optimisation problems. This makes them an attractive technology for database problems, many of which are based on complex optimisation problems with large solution spaces. Yet, the use of quantum approaches for database problems remains largely unexplored. In this paper, we address the long-standing join ordering problem, one of the most extensively researched database problems. Rather than running arbitrary code, QPUs require specific mathematical problem encodings. An encoding for the join ordering problem was recently proposed, allowing first small-scale queries to be optimised on quantum hardware. However, it is based on a faithful transformation of a mixed integer linear programming (MILP) formulation for join ordering, and inherits all limitations of the MILP method. Most strikingly, the existing encoding only considers a solution space with left-deep join trees, which tend to yield larger costs than general, bushy join trees. We propose a novel QUBO encoding for the join ordering problem. Rather than transforming existing formulations, we construct a native encoding tailored to quantum systems, which allows us to process general bushy join trees. This makes the full potential of QPUs available for solving join order optimisation problems.
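To make the target problem shape concrete (this is a generic toy instance, not the paper's join ordering encoding): a QUBO is given by a matrix Q, and solving it means minimising x^T Q x over binary vectors x. QPUs and annealers sample such minima; here a tiny instance is brute-forced classically.

import itertools
import numpy as np

# Assumed toy QUBO matrix; real join ordering encodings are much larger.
Q = np.array([[-3,  2,  0],
              [ 2, -2,  1],
              [ 0,  1, -1]], dtype=float)

best_x, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=Q.shape[0]):
    x = np.array(bits)
    e = x @ Q @ x          # QUBO energy of this binary assignment
    if e < best_e:
        best_x, best_e = x, e
print(f"optimal assignment {best_x} with energy {best_e}")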
As part of the AUT-1A project, 123 employers were surveyed by questionnaire about their experiences with employing autistic staff. The aim was to identify factors that promote and hinder employment. The study suggests that vocational qualification in vocational training centres (Berufsbildungswerke) has a positive effect on the sustainable employment of people with an autism spectrum diagnosis (ASD), but that support services for companies are not yet sufficient. The study also revealed a lack of awareness regarding autism-friendly workplace design, as well as insufficient education of direct colleagues about the autism diagnosis.
In this article, we address the challenge of solving the ill-posed reconstruction problem in computed tomography using a translation invariant diagonal frame decomposition (TI-DFD). First, we review the concept of a TI-DFD for general linear operators and the corresponding filter-based regularization concept. We then introduce the TI-DFD for the Radon transform on L²(ℝ²) and provide an exemplary construction using the TI wavelet transform. The numerical results presented clearly demonstrate the benefits of our approach over non-translation invariant counterparts.
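For readers unfamiliar with the framework, the generic shape of filter-based regularization with a diagonal frame decomposition is sketched below; the notation follows the general DFD literature and is not taken from this specific article. Given frames \((u_\lambda)\), \((v_\lambda)\) and quasi-singular values \((\kappa_\lambda)\) of a linear operator \(A\) with \(A^{*} v_\lambda = \kappa_\lambda u_\lambda\), a regularized reconstruction from data \(y \approx A x\) takes the form

\[ x_\alpha = \sum_{\lambda \in \Lambda} \frac{\varphi_\alpha(\kappa_\lambda)}{\kappa_\lambda}\, \langle y, v_\lambda \rangle\, \bar{u}_\lambda , \]

where \(\varphi_\alpha\) is a regularizing filter (for example, soft thresholding) and \((\bar{u}_\lambda)\) is a dual frame of \((u_\lambda)\). Translation invariance of the decomposition avoids the shift-dependent artifacts of decimated wavelet constructions.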
In the treatment of hand injuries in the context of orthopedic care, movable wrist hand orthoses are used in numerous cases. Early motion therapy is in most cases advantageous for adequate, rapid and successful long-term healing of the hand. Conventional dynamic wrist hand orthoses can only be used for movement therapy to a limited extent, since they model the wrist as a simple revolute joint and neglect the complexity of the hand's movement possibilities. In this paper, a preliminary concept for dynamic wrist hand orthoses based on prestressed compliant structures is presented. The distinctive feature of this concept is that it enables the multiaxial motion capabilities of the human hand without using conventional joints. In this concept, the wrist region is surrounded by a prestressed compliant structure. Besides the derivation and description of the concept, a first three-dimensional computer-aided design is shown. Additionally, the necessary steps in the development of such a novel dynamic wrist orthosis are discussed.