Institut Digital Engineering (IDEE)
Global climate change is a cognitive challenge for many people and often evokes negative associations due to its complexity and its interactions with politics, social movements and economic developments. Green skills therefore become central to the fight against climate change. The European Council conclusions recognize this urgency and underline the need for a transition to green skills. This recognition also extends to higher education, where institutions have a crucial role to play in tackling the climate crisis. Personal Green Skills in Higher Education (PeGSinHE) is an Erasmus+ KA2 project coordinated by Kauno Kolegija (KK, Lithuania), Tampere University of Applied Sciences (TAMK, Finland), Hochschule für Agrar- und Umweltpädagogik (HAUP, Austria), Universidad de Málaga (UMA, Spain) and Technical University of Applied Sciences Würzburg-Schweinfurt (THWS, Germany). The strategically designed project aims not only to promote green skills among students and encourage personal behavioral change in line with the Sustainable Development Goals, but also to instill a sense of social responsibility in the partner institutions. The focus is on empowering lecturers at the partner universities through innovative teaching and learning methods to effectively impart green skills to students. This report describes the objectives and methodology used to assess environmental and sustainability competencies in the higher education institutions involved in the project. Methodologically, the report uses an assessment template designed to provide a comprehensive overview of best practices and baseline levels of environmental and sustainability competencies. It advocates the involvement of key stakeholders from all five partner higher education institutions to ensure a broad perspective on these practices and competencies within their respective countries and organizations. Different methods and perspectives will be used to collect data to enable a holistic understanding of the topic. Completing the assessment template together serves as a catalyst for joint discussions on the level of environmental and sustainability competencies and for the identification of best practices in each organization. The results show that national implementation strategies are relatively loose, although some competency descriptions set targets for undergraduate degree programs. Challenges faced by higher education staff include resource constraints, particularly a lack of time, the need for a deeper understanding of sustainable development and pedagogical tools, and the need for improved opportunities for collaboration. Given the time and resource constraints of this study, the results must be considered preliminary. Nevertheless, they confirm the findings of previous studies.
High-temperature calibration methods in additive manufacturing involve the use of advanced techniques to accurately measure and control the temperature of the build material during the additive manufacturing process. Infrared cameras, blackbody radiation sources and non-linear optimization algorithms are used to correlate the temperature of the material with its emitted thermal radiation. This is essential for ensuring the quality and repeatability of the final product. This paper presents the calibration procedure of an imaging system for in-situ measurement of absolute temperatures and temperature gradients during laser-based powder bed fusion of metals (PBF-LB/M) in the temperature range of 500 K–1500 K. It describes the design of the optical setup to meet the specific requirements of this application area as well as the procedure for accounting for the various factors influencing the temperature measurement. These include camera-specific effects, such as the varying spectral sensitivities of the individual pixels of the sensor, as well as influences of the exposure time and the exposed sensor area. Furthermore, influences caused by the complex optical path, such as inhomogeneous transmission properties of the galvanometer scanner and angle-dependent transmission properties of the f-theta lens, were considered. A two-step fitting algorithm based on Planck's law of radiation was applied to best represent the correlation. With the presented procedure, the calibrated thermography system can measure absolute temperatures under real process conditions with high accuracy.
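As a hedged illustration of the kind of correlation such a calibration establishes, the following sketch fits a simplified Planck-type signal model to synthetic blackbody reference points with non-linear least squares and inverts it to map camera counts back to temperature. The model form, parameter values, and data are assumptions; the paper's two-step, per-pixel procedure and optical-path corrections are not reproduced here.

```python
"""Minimal sketch of a Planck-law-based calibration curve fit.

Assumptions (not from the paper): synthetic blackbody reference data and a
simplified grey-body signal model S(T) = a / (exp(b / T) - 1) + c."""
import numpy as np
from scipy.optimize import curve_fit

def signal_model(T, a, b, c):
    # Simplified Planck-type response: camera counts as a function of temperature [K].
    return a / (np.exp(b / T) - 1.0) + c

def inverse_model(S, a, b, c):
    # Invert the fitted curve to map measured counts back to temperature [K].
    return b / np.log(a / (S - c) + 1.0)

# Synthetic blackbody calibration points in the 500 K - 1500 K range.
T_ref = np.linspace(500.0, 1500.0, 15)
rng = np.random.default_rng(0)
counts = signal_model(T_ref, 2.0e6, 4000.0, 120.0) + rng.normal(0.0, 5.0, T_ref.size)

# Non-linear least-squares fit of the calibration curve.
popt, _ = curve_fit(signal_model, T_ref, counts, p0=(1.0e6, 3500.0, 100.0))

# Apply the inverted calibration to a new measurement.
measured_counts = 900.0
print("estimated temperature [K]:", inverse_model(measured_counts, *popt))
```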
On the way to climate neutrality, manufacturing companies need to assess the carbon dioxide (CO2) emissions of their products as a basis for emission reduction measures. To evaluate this so-called Product Carbon Footprint (PCF), life cycle analysis is applicable as a comprehensive method, but it involves great effort and requires interdisciplinary knowledge. Nevertheless, assumptions must still be made to assess the entire supply chain. To lower these burdens and provide a digital tool that estimates the PCF with fewer input parameters and less data, we make use of machine learning techniques and develop an editorial framework called MINDFUL. This contribution shows its realization by presenting the software architecture, the underlying CO2 factors, the calculations and the machine learning approach as well as the principles of its user experience. Our tool is validated within an industrial case study.
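The following sketch illustrates, under assumed inputs, how a regression model could estimate a PCF from a reduced set of product parameters. The feature names, emission factors, and training data are invented for demonstration and do not reflect MINDFUL's actual architecture, CO2 factors, or data.

```python
"""Illustrative sketch only: a regression model estimating a Product Carbon
Footprint (PCF) from a reduced set of input parameters. All features, factors,
and data below are hypothetical."""
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(42)
materials = rng.choice(["steel", "aluminium", "polymer"], size=300)
mass_kg = rng.uniform(0.1, 50.0, size=300)
energy_kwh = rng.uniform(1.0, 200.0, size=300)

# Hypothetical emission factors used only to generate synthetic labels.
factor = {"steel": 1.9, "aluminium": 8.5, "polymer": 3.1}
pcf_kg_co2e = (np.array([factor[m] for m in materials]) * mass_kg
               + 0.4 * energy_kwh + rng.normal(0.0, 2.0, size=300))

X = pd.DataFrame({"material": materials, "mass_kg": mass_kg, "energy_kwh": energy_kwh})
model = Pipeline([
    ("encode", ColumnTransformer(
        [("material", OneHotEncoder(), ["material"])], remainder="passthrough")),
    ("regress", RandomForestRegressor(n_estimators=200, random_state=0)),
])
model.fit(X, pcf_kg_co2e)

# Estimate the PCF for a new, hypothetical product configuration.
query = pd.DataFrame({"material": ["aluminium"], "mass_kg": [3.2], "energy_kwh": [25.0]})
print("estimated PCF [kg CO2e]:", model.predict(query)[0])
```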
Computerized Numerical Control (CNC) plays an essential role in highly autonomous manufacturing systems with interlinked process chains of machine tools. NC programs are mostly written in standardized G-code. Evaluating CNC-controlled manufacturing processes before their real application is advantageous in terms of resource efficiency. One dimension is the estimation of the energy demand of a part manufactured by an NC program, e.g. to discover optimization potentials. In this context, this paper presents a Machine Learning (ML) approach to assess G-code for CNC milling processes from the perspective of the energy demand of basic G-commands. We propose Latin Hypercube Sampling as an efficient Design of Experiments method to train the ML model with minimal experimental effort, avoiding costly setup and implementation time for model training and deployment.
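The sketch below shows how Latin Hypercube Sampling could generate such an experimental design; the chosen process parameters and their bounds are illustrative assumptions, not the factors used in the study.

```python
"""Sketch of Latin Hypercube Sampling for a milling Design of Experiments.
The factors and bounds are illustrative assumptions."""
from scipy.stats import qmc

# Hypothetical factors for a basic G-command (e.g., a linear G1 move):
# feed rate [mm/min], spindle speed [1/min], depth of cut [mm].
l_bounds = [100.0, 2000.0, 0.2]
u_bounds = [2000.0, 12000.0, 3.0]

sampler = qmc.LatinHypercube(d=3, seed=1)
unit_samples = sampler.random(n=20)                 # 20 runs in the unit cube
design = qmc.scale(unit_samples, l_bounds, u_bounds)

for run, (feed, speed, depth) in enumerate(design, start=1):
    print(f"run {run:02d}: feed={feed:7.1f} mm/min, "
          f"speed={speed:8.1f} 1/min, depth={depth:4.2f} mm")
```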
Computer Vision in Reusable Container Management: Requirements, Conception, and Data Acquisition
In container management, the reuse of small load carriers is a business alternative to disposable carriers. Reusable container management is also a way to improve the environmental impact of the logistics industry. The sorting and stock management of small load carriers are today primarily manual work and consequently have a low level of automation. In order to increase the automation of returnable container handling, it is crucial to establish a computer vision system that (i) classifies the containers and (ii) detects potential defects or stains. This paper provides an overview and a discussion of the applications that are already in use. Object detection is necessary for many actions in container management business processes, such as inventory and stock management. Detection of defects on the small load carriers is required to decide whether a carrier must be scrapped or whether additional process steps, e.g., cleaning, are needed, and thus to ensure a smooth flow in any business process involving the carrier. The literature review in this paper establishes the demand for computer vision detection and shows the project setup necessary to conduct research in this area. The comparison with other applications of defect and anomaly detection supports the applicability and shows the need for further research in this specific academic field. This leads to a project outline, and the research provides the technical implementation of the detection tasks in container management. Accordingly, the research provides a workflow guide from data acquisition to a high-quality dataset of labeled anomalies of small load carriers.
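As a purely illustrative sketch of the two detection tasks named above, the following model combines a shared CNN backbone with one head for the carrier type and one for a defect decision. The class names and architecture choice are assumptions, since the paper itself focuses on requirements and data acquisition rather than a specific model.

```python
"""Illustrative sketch: shared CNN backbone with two heads, one for the small
load carrier type and one for a defect/no-defect decision. Class names and
the backbone choice are hypothetical."""
import torch
import torch.nn as nn
from torchvision.models import resnet18

CARRIER_CLASSES = ["KLT_3147", "KLT_4147", "KLT_6147"]  # hypothetical carrier types

class CarrierNet(nn.Module):
    def __init__(self, num_carrier_classes: int):
        super().__init__()
        backbone = resnet18(weights=None)          # train from scratch or load weights
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep only the feature extractor
        self.backbone = backbone
        self.carrier_head = nn.Linear(feat_dim, num_carrier_classes)
        self.defect_head = nn.Linear(feat_dim, 2)  # defect vs. no defect

    def forward(self, x):
        features = self.backbone(x)
        return self.carrier_head(features), self.defect_head(features)

model = CarrierNet(len(CARRIER_CLASSES))
dummy_batch = torch.randn(4, 3, 224, 224)          # stand-in for camera images
carrier_logits, defect_logits = model(dummy_batch)
print(carrier_logits.shape, defect_logits.shape)   # (4, 3) and (4, 2)
```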
In the context of environmental protection, the construction industry plays a key role, with significant CO2 emissions from mineral-based construction materials. Recycling these materials is crucial, but the presence of hazardous substances, e.g., in older building materials, complicates this effort. To be able to legally introduce substances into a circular economy, reliable predictions within the shortest possible time are necessary. This work introduces a machine learning approach for detecting trace quantities (≥0.06 wt%) of minerals, exemplified by siderite in calcium carbonate mixtures. The model, trained on 1680 X-ray powder diffraction datasets, provides dependable and fast predictions, eliminating the need for specialized expertise. While limitations exist in the transferability to other mineral traces, the approach offers automation without expert knowledge and potential for real-world applications with minimal prediction time.
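A hedged sketch of the general idea follows: a classifier trained on diffraction patterns, represented as intensity vectors over the diffraction angle, flags whether a trace phase is present. The peak positions and synthetic data are stand-ins and do not correspond to the 1680 measured datasets.

```python
"""Sketch of a classifier that flags a trace mineral phase in an X-ray powder
diffraction pattern. Peak shapes, angles, and data are purely synthetic."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
two_theta = np.linspace(20.0, 60.0, 800)           # diffraction angle grid [deg]

def pattern(has_trace: bool) -> np.ndarray:
    """Synthetic main-phase pattern, optionally with a weak extra reflection."""
    intensity = np.exp(-((two_theta - 29.4) ** 2) / 0.05) * 100.0   # main phase peak
    intensity += np.exp(-((two_theta - 48.5) ** 2) / 0.05) * 20.0
    if has_trace:
        intensity += np.exp(-((two_theta - 32.0) ** 2) / 0.05) * 0.8  # weak trace peak
    return intensity + rng.normal(0.0, 0.3, two_theta.size)          # counting noise

labels = rng.integers(0, 2, size=600)
patterns = np.stack([pattern(bool(y)) for y in labels])

X_train, X_test, y_train, y_test = train_test_split(
    patterns, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```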
Introduction: The negative consequences of climate change are widespread and have a global impact. An industrialized region of Germany must adapt to the effects of climate change and comply with political regulations. Previous studies indicate that economic actors who are not directly affected by climate change approach climate change mitigation and adaptation primarily based on legal requirements and often feel discouraged by the absence of data-based reports. Addressing this challenge, game-based learning emerges as a promising pathway.
Methods: To examine game-based learning’s applicability and potential for climate adaptation, we developed a business simulation game, simultaneously identifying didactically effective elements for managers who would participate in it. Using expert interviews and focus groups, we conducted a qualitative study with three HR developers from larger companies and nine managers and founders of startups to develop a business simulation game on climate adaptation. Based on the Grounded Theory methodology, theoretical coding was used to analyze the qualitative data.
Results: The derived core categories indicate that personnel development in companies is evolving in response to economic changes. Individual resources such as motivation (especially for managers), personnel and time play a crucial role in establishing a business game as an educational offering. The identified game elements can also be used theoretically and practically in the development of other educational games.
Discussion: We discussed common human resource development measures in companies and compared them with more innovative approaches such as a simulation game. The study underscores the importance of innovative approaches, such as game-based learning, in fostering climate adaptation efforts among economic actors. By integrating theoretical insights with practical applications, our findings provide valuable guidance for the development of educational games aimed at addressing complex challenges like climate change. Further research and implementation of such approaches are essential for promoting proactive climate adaptation strategies within industrialized regions and beyond.
Alternating Transfer Functions to Prevent Overfitting in Non-Linear Regression with Neural Networks
(2023)
In nonlinear regression with machine learning methods, neural networks (NNs) are ideally suited due to their universal approximation property, which states that arbitrary nonlinear functions can be approximated arbitrarily well. Unfortunately, this property also poses the problem that data points with measurement errors can be approximated too well and unknown parameter subspaces in the estimation can deviate far from the actual value (so-called overfitting). Various methods have been developed to reduce overfitting through modifications in several areas of the training. In this work, we pursue the question of how an NN behaves during training with respect to overfitting when linear and nonlinear transfer functions (TF) are alternated in different hidden layers (HL). The presented approach is applied to a generated dataset and contrasted with established methods from the literature, both individually and in combination. Comparable results are obtained, whereby the common use of purely nonlinear transfer functions proves not to be generally recommendable.
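The following sketch illustrates the core idea under stated assumptions: an MLP whose hidden layers alternate between a linear (identity) and a nonlinear (tanh) transfer function, compared against the usual purely nonlinear stack on a toy regression problem. Layer widths, the tanh choice, and the data are assumptions, not the paper's experimental setup.

```python
"""Sketch of the alternating-transfer-function idea: hidden layers switch
between a linear (identity) and a nonlinear (tanh) activation."""
import torch
import torch.nn as nn

def make_mlp(hidden_sizes, alternate: bool) -> nn.Sequential:
    layers, in_features = [], 1
    for i, width in enumerate(hidden_sizes):
        layers.append(nn.Linear(in_features, width))
        if alternate and i % 2 == 1:
            layers.append(nn.Identity())   # linear transfer function in every 2nd HL
        else:
            layers.append(nn.Tanh())       # nonlinear transfer function
        in_features = width
    layers.append(nn.Linear(in_features, 1))
    return nn.Sequential(*layers)

# Toy 1-D regression problem with noisy observations.
torch.manual_seed(0)
x = torch.linspace(-3.0, 3.0, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

for alternate in (False, True):
    model = make_mlp([32, 32, 32, 32], alternate=alternate)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(2000):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"alternating={alternate}: final training MSE = {loss.item():.4f}")
```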
Purpose of this paper:
Surveys have shown that most companies still use paper-based lists or RF handhelds to support picker-to-parts order picking. However, more modern approaches such as replacing handhelds with small wearables, Pick-by-Voice, Pick-by-Vision, or even autonomous picking robots are on the rise. This fits into a broader trend commonly referred to as "smart logistics". Yet what "smartness" means in this context remains unclear. This paper aims to contribute to the understanding of smartness in picker-to-parts order picking by applying socio-technical systems theory.
Design/methodology/approach:
The methodological approach uses a socio-technical analysis and a combination of two frameworks involving smart capabilities to identify the characteristics of "smart" picker-to-parts order picking.
Findings:
Typically, smartness is considered a property of the assistive devices used in order picking (such as smart glasses in Pick-by-Vision). Instead, smartness should be judged by the extent to which the picking system is implemented as a socio-technical system with comprehensive, meaningful tasks. Thus, smart picker-to-parts order picking is an alternative concept to the digital Tayloristic approach supported by the prevailing assistive devices. A number of elements are proposed for the design of a smart picking system, including establishing responsible autonomy, reducing cognitive load, improving ergonomics, and enabling human-machine interaction based on contextual adaptation.
Value:
This paper contributes to the still relatively small body of literature on smart picker-to-parts order picking by clarifying the dimensions of smartness in this field. A clearer understanding of smartness helps to avoid getting trapped in digital Tayloristic work patterns, mediated and controlled by the currently available tools. It also supports creating a more favorable work environment for warehouse personnel.
Research limitations/implications:
So far, smartness in picker-to-parts order picking has only been addressed conceptually.
Practical implications:
The evaluation framework can be used to critically assess the technology currently used in warehouses to support picker-to-parts order picking and to guide the development of new systems.
Powder Bed Monitoring Using Semantic Image Segmentation to Detect Failures during 3D Metal Printing
(2023)
Monitoring the metal Additive Manufacturing (AM) process is an important task within the scope of quality assurance. This article presents a method to gain insights into process quality by comparing the actual and target layers. Images of the powder bed were captured and segmented using an Xception-style neural network to predict the powder and part areas. The segmentation result of every layer is compared to the reference layer with regard to the area, centroid, and normalized area difference of each part. To evaluate the method, a print job with three parts was chosen in which one part broke off and another showed thermal deformations. The calculated metrics are useful for detecting whether a part is damaged or for identifying thermal distortions. The method introduced in this work can be used to monitor the metal AM process for quality assurance. Due to limited camera resolution and inconsistent lighting conditions, the approach has some limitations, which are discussed at the end.
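The layer-comparison metrics named above can be illustrated with a small sketch that computes part area, centroid, and normalized area difference from a predicted mask and a reference layer mask; the masks are synthetic and the segmentation network itself is not reproduced.

```python
"""Sketch of the layer comparison metrics: part area, centroid, and normalized
area difference between a predicted segmentation mask and the reference layer.
The masks below are synthetic stand-ins."""
import numpy as np

def part_metrics(mask: np.ndarray) -> tuple[float, tuple[float, float]]:
    """Return the area (pixel count) and centroid (row, col) of a binary mask."""
    area = float(mask.sum())
    rows, cols = np.nonzero(mask)
    centroid = (float(rows.mean()), float(cols.mean())) if area > 0 else (np.nan, np.nan)
    return area, centroid

# Synthetic reference layer and a slightly damaged predicted layer.
reference = np.zeros((200, 200), dtype=np.uint8)
reference[50:150, 60:140] = 1                      # target cross-section of one part
predicted = reference.copy()
predicted[120:150, 60:140] = 0                     # lower region missing (e.g., break-off)

ref_area, ref_centroid = part_metrics(reference)
pred_area, pred_centroid = part_metrics(predicted)
normalized_area_diff = (ref_area - pred_area) / ref_area

print(f"reference area: {ref_area:.0f} px, centroid: {ref_centroid}")
print(f"predicted area: {pred_area:.0f} px, centroid: {pred_centroid}")
print(f"normalized area difference: {normalized_area_diff:.2%}")
```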
C7.4 Application of Laser Line Scanners for Quality Control during Selective Laser Melting (SLM)
(2021)
Forest management relies on the analysis of satellite imagery and time-intensive physical on-site inspections. Both methods are costly and time-consuming. Satellite-based images are often not updated frequently enough to react to infestations or other occurring problems. Forest management benefits greatly from accurate and recent information about the local forest areas. In order to react appropriately and in time to incidents such as areas damaged by storms, areas infested by bark beetles, and declining groundwater levels, this information can be extracted from high-resolution imagery. In this work, we propose UAVs to meet this demand and demonstrate that they are fully capable of gathering this information in a cost-efficient way. Our work focuses on the cartography of trees to optimize forest operations. We apply deep learning for image processing as a method to identify and isolate individual trees for GPS tagging and to add additional information such as height and diameter.
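As a hedged illustration of the GPS-tagging step, the sketch below converts a detected tree's pixel position in a nadir UAV image into latitude and longitude via the ground sampling distance; the camera parameters, flight altitude, and geotag are assumptions, not values from the study.

```python
"""Sketch of GPS tagging for a detected tree: a pixel position in a nadir UAV
image is converted to latitude/longitude via the ground sampling distance.
Camera parameters, flight altitude, and the geotag below are assumptions."""
import math

# Hypothetical camera and flight parameters.
SENSOR_WIDTH_MM = 13.2
FOCAL_LENGTH_MM = 8.8
IMAGE_WIDTH_PX, IMAGE_HEIGHT_PX = 5472, 3648
ALTITUDE_M = 100.0                                  # flight height above ground

# Ground sampling distance [m/px] for a nadir-looking camera (mm units cancel).
GSD = (SENSOR_WIDTH_MM * ALTITUDE_M) / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)

def pixel_to_gps(row, col, geotag_lat, geotag_lon):
    """Map a pixel (row, col) to (lat, lon), assuming the image top edge faces north."""
    north_m = (IMAGE_HEIGHT_PX / 2 - row) * GSD
    east_m = (col - IMAGE_WIDTH_PX / 2) * GSD
    lat = geotag_lat + north_m / 111_320.0
    lon = geotag_lon + east_m / (111_320.0 * math.cos(math.radians(geotag_lat)))
    return lat, lon

# Centre pixel of a detected tree bounding box and the image geotag (hypothetical).
tree_lat, tree_lon = pixel_to_gps(row=1200, col=3000, geotag_lat=49.98, geotag_lon=10.18)
print(f"tree position: {tree_lat:.6f} N, {tree_lon:.6f} E")
```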
Deep learning models are trained to detect humans, cars, and other large objects that are centered in the images. The same models struggle with detecting small and tiny objects because of architecture design decisions that reduce the entropy of small and tiny objects during training. These small and tiny objects are essential for damage identification and maintenance, including the inspection and documentation of aeroplanes, constructions, offshore structures, and forests. Our work defines the terms tiny and small in the context of deep learning models in order to evaluate possible approaches to resolve the issue of low accuracy in detecting these objects. We analyse the commonly applied datasets Common Objects in Context (COCO), ImageNet, and the Tiny Object Detection Challenge dataset. In addition, we compare these datasets and present the differences in terms of object instance size. The COCO, ImageNet, and TinyObjects datasets are analysed regarding size categorization and relative object size. The results show large differences between the size ratios of the three chosen datasets, with ImageNet having by far the largest object instances, COCO being in the middle, and TinyObjects having the smallest objects, as its name indicates. Since the objects themselves are larger in terms of total pixel width and height, they make up a bigger share of the overall image. Looking at the size categories of the COCO dataset and our extension by the tiny and very small categories, the results confirm the size hierarchy of the datasets, with ImageNet having most of its objects in the large category, COCO in the medium category, and TinyObjects in the very small category. By taking these results into account, the reader is able to choose a fitting dataset for their task. We expect our analysis to help and improve future research in the area of small and tiny object detection.
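The size categorization described above can be illustrated with a short sketch that assigns COCO's standard small/medium/large classes, extends them with tiny and very small classes, and computes the relative object size. COCO's 32² and 96² area thresholds are standard; the 8² and 16² cut-offs for the tiny and very small extension are assumptions for illustration.

```python
"""Sketch of instance size categorization: relative object size plus COCO's
small/medium/large areas, extended with assumed 'tiny' and 'very small' classes."""

def relative_size(box_w, box_h, img_w, img_h) -> float:
    """Fraction of the image covered by the bounding box."""
    return (box_w * box_h) / (img_w * img_h)

def size_category(box_w: float, box_h: float) -> str:
    area = box_w * box_h
    if area < 8 ** 2:
        return "tiny"          # assumed threshold
    if area < 16 ** 2:
        return "very small"    # assumed threshold
    if area < 32 ** 2:
        return "small"         # COCO threshold
    if area < 96 ** 2:
        return "medium"        # COCO threshold
    return "large"             # COCO threshold

# Example instances: (box width, box height, image width, image height).
instances = [(6, 7, 1920, 1080), (14, 12, 640, 480), (150, 220, 500, 375)]
for w, h, img_w, img_h in instances:
    print(f"{w:>3}x{h:<3}: {size_category(w, h):>10}, "
          f"relative size = {relative_size(w, h, img_w, img_h):.4%}")
```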
Highly autonomous production cells are a crucial part of manufacturing systems in Industry 4.0 and can contribute to a sustainable value-adding process. To realize a high degree of autonomy in production cells with an industrial robot and a machine tool, an experimental approach was carried out to deal with numerous challenges on various automation levels. One crucial aspect is the scheduling problem of tasks for each resource (machine tool, tools, robot, AGV), depending on various data needed for a job-shop scheduling algorithm. The findings show that the necessary data has to be derived from different automation levels in a company: horizontally from ERP to shop floor, vertically from the order handling department to the maintenance department. Utilizing these data, the contribution provides a cascaded scheduling approach for machine tool jobs as well as CNC and robot tasks for highly autonomous production cells supplied by AGVs.
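As a hedged illustration of the underlying scheduling problem, the sketch below dispatches job operations across a machine tool, robot, and AGV with a simple greedy earliest-start rule; the jobs, durations, and rule are assumptions and do not reproduce the paper's cascaded approach.

```python
"""Minimal job-shop dispatching sketch for a production cell with a machine
tool, robot, and AGV. Jobs, durations, and the greedy rule are illustrative."""

# Each job is an ordered list of (resource, duration in minutes).
jobs = {
    "order_A": [("agv", 5), ("robot", 2), ("machine_tool", 30), ("robot", 2)],
    "order_B": [("agv", 5), ("robot", 2), ("machine_tool", 45), ("robot", 2)],
}

resource_free_at = {"agv": 0, "robot": 0, "machine_tool": 0}
job_ready_at = {job: 0 for job in jobs}
next_op = {job: 0 for job in jobs}
schedule = []

# Greedy list scheduling: repeatedly dispatch the schedulable operation
# with the earliest possible start time.
while any(next_op[j] < len(ops) for j, ops in jobs.items()):
    candidates = []
    for job, ops in jobs.items():
        if next_op[job] < len(ops):
            resource, duration = ops[next_op[job]]
            start = max(job_ready_at[job], resource_free_at[resource])
            candidates.append((start, job, resource, duration))
    start, job, resource, duration = min(candidates)
    end = start + duration
    schedule.append((job, resource, start, end))
    resource_free_at[resource] = end
    job_ready_at[job] = end
    next_op[job] += 1

for job, resource, start, end in schedule:
    print(f"{job}: {resource:<12} {start:3d} -> {end:3d} min")
```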