00 Informatik, Wissen, Systeme
Document Type
- conference proceeding (article) (19)
- Preprint (9)
- Article (6)
- Book (5)
- Report (3)
- Doctoral Thesis (1)
- Other (1)
- Working Paper (1)
Publication reviewed
- peer-reviewed (31)
- not peer-reviewed (14)
Keywords
- EVELIN I (5)
- BayernCloud (4)
- Linked Open Data (4)
- CARE REGIO (3)
- Data infrastructure (3)
- Information management (3)
- Open Data (3)
- Tourism industry (3)
- Digital transformation (2)
- Digitalization (2)
Institute
- Fakultät Informatik (22)
- Fakultät Tourismus-Management (9)
- Fakultät Elektrotechnik (7)
- IFA – Institut für Innovative Fahrzeugantriebe (6)
- Fakultät Maschinenbau (4)
- INIT – Institut für Nachhaltige und Innovative Tourismusentwicklung (2)
- Fakultät Betriebswirtschaft (1)
- IFI - Institut für Internationalisierung (1)
- IFM - Institut für Fahrerassistenz und vernetzte Mobilität (1)
- IPI – Institut für Produktion und Informatik (1)
Whilst Paul de Casteljau is now famous for his fundamental algorithm of curve and surface approximation, little is known about his other findings. This article offers an insight into his results in geometry, algebra and number theory. Related to geometry, his classical algorithm is reviewed as an index reduction of a polar form. This idea is used to show de Casteljau's algebraic way of smoothing, which long went unnoticed. We will also see an analytic polar form and its use in finding the intersection of two curves. The article summarises unpublished material on metric geometry. It includes theoretical advances, e.g., the 14-point strophoid or a way to link Apollonian circles with confocal conics, and also practical applications such as a recurrence for conjugate mirrors in geometric optics. A view on regular polygons leads to an approximation of their diagonals by golden matrices, a generalisation of the golden ratio. Relevant algebraic findings include matrix quaternions (and anti-quaternions) and their link with Lorentz' equations. De Casteljau generalised the Euclidean algorithm and developed an automated method for approximating the roots of a class of polynomial equations. His contributions to number theory not only include aspects on the sum of four squares as in quaternions, but also a view on a particular sum of three cubes. After a review of a complete quadrilateral in a heptagon and its angles, the paper concludes with a summary of de Casteljau's key achievements. The article contains a comprehensive bibliography of de Casteljau's works, including previously unpublished material.
Data mining and knowledge discovery in databases (KDD) are fields of research and application concerned with extracting usable knowledge from data. Among other techniques, they employ methods of machine learning. Inductive logic programming (ILP) is a subfield of machine learning that deals with learning from multi-relational data represented in predicate logic, whereas other learning methods usually require data in the form of a single attribute-value table. The use of ILP methods for KDD and data mining is also referred to as relational data mining. One application area of KDD and data mining is subgroup discovery. Here, a population of cases represented in a database is searched for particularly interesting subgroups by generating candidate subgroups and evaluating them with suitable interestingness measures. The goal of this thesis is to develop and evaluate methods for safely restricting the search space when searching for interesting subgroups in multi-relational data. To this end, a procedure for discovering interesting subgroups in multi-relational databases is developed that integrates several methods for search space restriction. These methods are evaluated in experiments, and the developed procedure is applied to a real data mining problem. In detail, the thesis provides: (1) a formalization of subgroup discovery within the framework of ILP, (2) optimistic estimate functions for selected interestingness measures, (3) an extension of the well-known Apriori search algorithm for market basket analysis that allows the hypothesis language searched by Apriori to be restricted, (4) an ILP language bias for subgroup discovery that allows the subset condition of the Apriori algorithm to be applied for restricting an ILP search space, (5) an SQL language bias for subgroup discovery in multi-relational databases, (6) an approach for integrating taxonomy-based search space restriction into an Apriori-like search algorithm, (7) a method for handling discretized numerical attributes that unifies search space restriction based on generality relations between intervals with taxonomy-based search space restriction, (8) experiments on the effectiveness of the various options for search space restriction, and (9) the application of the developed approaches to a real data mining problem with banking data, together with detailed comparisons with related work. The experiments were carried out with a prototypical implementation of the approaches developed in this thesis. The subset condition and the optimistic estimate functions proved to be effective and reliable methods for restricting the search space, whereas the contribution of taxonomies to search space restriction varied greatly between applications and was only small in some cases. An important result of the experiments is that the subset condition, which so far could only be used to restrict the search space in single-relation databases, can be just as effective for multi-relational databases and ILP languages as it is for single-relation databases.
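To illustrate the core pruning idea, the sketch below shows a level-wise, Apriori-style subgroup search that discards a candidate whenever its optimistic estimate falls below the quality threshold. It is a minimal Python illustration; the boolean features, the Piatetsky-Shapiro quality function and the simple optimistic estimate are assumptions made for this example, not the thesis's exact formalization.

```python
from itertools import combinations

# Dataset: list of dicts with boolean features and a boolean "target" field,
# e.g. {"urban": True, "young": False, "target": True}.

def quality_and_bound(subgroup, data, p0):
    """Piatetsky-Shapiro quality q = n * (p - p0) plus an optimistic estimate."""
    covered = [r for r in data if all(r[f] for f in subgroup)]
    if not covered:
        return 0.0, 0.0
    n = len(covered)
    tp = sum(r["target"] for r in covered)      # covered positives
    q = n * (tp / n - p0)
    # Optimistic estimate: the best quality any refinement could still reach,
    # attained if a refinement kept exactly the covered positives.
    return q, tp * (1 - p0)

def subgroup_discovery(data, features, min_quality, max_depth=3):
    p0 = sum(r["target"] for r in data) / len(data)
    best = []
    frontier = [frozenset([f]) for f in features]
    depth = 1
    while frontier and depth <= max_depth:
        promising = []
        for sg in frontier:
            q, opt = quality_and_bound(sg, data, p0)
            if q >= min_quality:
                best.append((q, sorted(sg)))
            if opt >= min_quality:            # safe pruning: only promising
                promising.append(sg)          # candidates are refined further
        # Apriori-style candidate generation: join promising sets of size k.
        frontier = {a | b for a, b in combinations(promising, 2)
                    if len(a | b) == depth + 1}
        depth += 1
    return sorted(best, reverse=True)
```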
VBA offers the potential to realize effective digitalization solutions with little effort. "VBA für Office-Automatisierung und Digitalisierung" demonstrates, with many code examples, the automation of Excel, Word, Outlook, PowerPoint, SAP ERP and SOLIDWORKS and the interplay of these systems. Web services and REST APIs are also addressed with VBA, opening up interesting possibilities up to and including AI. The book explains important concepts and gives many tips on how to make VBA applications enterprise-ready and administrable with simple means.
One goal of research activities is finding ways to manage the growing complexity of embedded systems using self-configuration methods. While autonomous configuration could potentially be used in safety-critical and real-time systems, the basic requirements are not yet in place. This paper outlines a concept for the truly autonomous configuration of TDMA-based communication processes, which currently does not exist. The paper first addresses the TDMA-specific framework conditions and a potential solution. The issue of the schedule having to be known a priori is resolved using a generic schedule, because a simple method based on "free slot reserved for further nodes" is not feasible. The most difficult part, the startup, was implemented through the generic schedule and an ID-based collision resolution process. To demonstrate the viability of the concept, the configuration method was implemented on a FlexRay communication system. This also satisfied the goal of eliminating the need for additional hardware and preserving the fault-tolerant multimaster structure of the FlexRay system. The functionality of the concept was validated under different scenarios. The configuration times were analyzed, and the results are also detailed here.
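The following toy simulation sketches the flavour of such ID-based collision resolution on top of a generic schedule: every unconfigured node claims a free slot, and when several nodes claim the same slot, the lowest ID wins while the others retry in the next cycle. The slot count, tie-breaking rule and retry policy are illustrative assumptions, not the FlexRay implementation described in the paper.

```python
import random

def assign_slots(node_ids, free_slots, rng=random.Random(0)):
    """Assign each node a unique slot via repeated claim/resolve cycles."""
    assigned = {}                       # slot -> node id
    pending = set(node_ids)
    cycles = 0
    while pending:
        cycles += 1
        claims = {}
        for node in pending:            # every unconfigured node claims a slot
            slot = rng.choice([s for s in free_slots if s not in assigned])
            claims.setdefault(slot, []).append(node)
        for slot, claimants in claims.items():
            winner = min(claimants)     # collision resolution: lowest ID wins
            assigned[slot] = winner
            pending.discard(winner)     # losers retry in the next cycle
    return assigned, cycles

slots, cycles = assign_slots(node_ids=[17, 4, 23, 9], free_slots=range(8))
print(slots, "configured after", cycles, "cycles")
```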
Disentangling Human-AI Hybrids: Conceptualizing the Interworking of Humans and AI-Enabled Systems
(2023)
Artificial intelligence (AI) offers great potential in organizations. The path to achieving this potential will involve human-AI interworking, as has been confirmed by numerous studies. However, it remains to be explored which direction this interworking of human agents and AI-enabled systems ought to take. To date, research still lacks a holistic understanding of the entangled interworking that characterizes human-AI hybrids, so-called because they form when human agents and AI-enabled systems closely collaborate. To enhance such understanding, this paper presents a taxonomy of human-AI hybrids, developed by reviewing the current literature as well as a sample of 101 human-AI hybrids. Leveraging weak sociomateriality as justificatory knowledge, this study provides a deeper understanding of the entanglement between human agents and AI-enabled systems. Furthermore, a cluster analysis is performed to derive archetypes of human-AI hybrids, identifying ideal–typical occurrences of human-AI hybrids in practice. While the taxonomy creates a solid foundation for the understanding and analysis of human-AI hybrids, the archetypes illustrate the range of roles that AI-enabled systems can play in those interworking scenarios.
This paper summarizes the creation and data usage of an autonomous model vehicle that was recreated in CarMaker[1]. The simulation data will be used when students in subsequent semesters of this project develop algorithms and simulations that are impractical to test or train in real life. The paper starts with a summary of the background information on the model vehicle and the CarMaker[1] software, then reproduces the construction process, digs deeper into ROS[6] and finishes with the extraction of the data.
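As a rough illustration of the data extraction step, the snippet below subscribes to a ROS topic and logs each message to a CSV file. The topic name /carmaker/odometry and the Odometry message type are assumptions chosen for this sketch; the actual interfaces used in the project may differ.

```python
#!/usr/bin/env python
import csv
import rospy
from nav_msgs.msg import Odometry

# Minimal sketch: log position and speed samples from a ROS topic to CSV.
def callback(msg, writer):
    writer.writerow([msg.header.stamp.to_sec(),
                     msg.pose.pose.position.x,
                     msg.pose.pose.position.y,
                     msg.twist.twist.linear.x])

if __name__ == "__main__":
    rospy.init_node("data_extractor")
    with open("simulation_log.csv", "w") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "x", "y", "v"])
        rospy.Subscriber("/carmaker/odometry", Odometry, callback,
                         callback_args=writer)
        rospy.spin()   # keep logging until the node is shut down
```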
To address 5G design targets, massive MIMO and mmWave communication are enabling technologies, and in many respects the two technologies integrate symbiotically. Accordingly, a logical step is to combine mmWave communications and massive MIMO into "mmWave-massive MIMO", which substantially increases user throughput, improves spectral and energy efficiency, increases the capacity of mobile networks and achieves high multiplexing gains. This work therefore jointly analyses the concepts and performance of massive MIMO, mmWave communications and mmWave-massive MIMO systems, and compares and discusses them. In addition, the outcomes of extensive research and emerging trends, together with their respective benefits, challenges and proposed solutions, are addressed in a comparative analysis. The performance of a hybrid beamforming architecture is also analyzed against fully digital and analog beamforming techniques. Analytical and simulation results show that low-complexity hybrid analog-digital precoding achieves precoding gains comparable to the alternatives for mmWave-massive MIMO technology.
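As a reference point for such precoding-gain comparisons, the achievable spectral efficiency of a hybrid precoder is often written as the standard expression below, where N_s is the number of data streams, ρ the signal-to-noise ratio, H the channel matrix and F_RF, F_BB the analog and digital precoders. This is a generic textbook formulation (assuming Gaussian signaling and an ideal receiver), not an equation taken from this paper.

```latex
R \;=\; \log_2 \det\!\left( \mathbf{I}_{N_s} \;+\; \frac{\rho}{N_s}\,
  \mathbf{H}\,\mathbf{F}_{\mathrm{RF}}\,\mathbf{F}_{\mathrm{BB}}\,
  \mathbf{F}_{\mathrm{BB}}^{H}\,\mathbf{F}_{\mathrm{RF}}^{H}\,\mathbf{H}^{H} \right),
\qquad
\left\| \mathbf{F}_{\mathrm{RF}}\mathbf{F}_{\mathrm{BB}} \right\|_F^2 = N_s
```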
The fifth-generation (5G) wireless communication system requires massive connectivity with high data rates and low latency. One of the technologies to meet these requirements is mmWave massive MIMO. This work therefore takes an in-depth look at channel estimation and beamforming techniques, together with their respective architectures, for mmWave massive MIMO systems. In particular, sparse, compressed-sensing, machine-learning and array-signal-processing based channel estimation are addressed among the 5G channel estimation techniques. On the beamforming side, techniques such as hybrid beamforming and low-complexity hybrid block diagonalization schemes are included with their mathematical analysis. This work also discusses in detail the challenges, optimization methods and mitigation techniques for pilot contamination, signal detection, channel estimation and hybrid beamforming in mmWave massive MIMO systems. The results assert that partially connected block-diagonal hybrid beamforming with array-signal-processing based channel estimation is more favourable than the alternatives with respect to overall performance, complexity and energy consumption. Finally, open research directions and challenges are pointed out.
In general, current systems for the Digital Factory implement a product-process-resource (PPR) data model in a monolithic rich-client/server architecture with a single database persistence layer. Common data objects are the product bills of material, descriptions of the production processes, or the resource structure, e.g. the bill of equipment. The main drawback of the current monolithic architecture is the slow rate of development, which prevents the software from being adapted quickly to new production planning processes (e.g., those driven by new technologies for the transformation of the automotive industry towards electrification). Furthermore, time-consuming and error-prone export-import operations characterize the collaboration within the engineering supply chain. Mercedes-Benz has created a new IT system architecture for its Digital Factory. The core idea of this architecture is a module-based approach. Each planning step has its own module, e.g. product analysis, layout planning or cost calculation. A single module consists of server-based business logic, a web-based user interface and its own database. Each module is the source of the master data objects that originate from the corresponding planning step and refers to data objects from predecessor planning steps. The individual modules communicate mostly via Kafka. The use of a model-based application engine allows the fast creation of different modules. Best-of-breed third-party systems for specific planning steps can be integrated into the system architecture. Web technologies allow suppliers to access the Mercedes-Benz systems directly for fully integrated supplier collaboration. The roll-out has started and has already led to significant efficiency gains.
The book presents the tourism and travel industry as a global field of applied business informatics. It requires multimedia information and communication systems as well as management, sales and processing systems within IT-based processes. Applied informatics professionals should understand the structures and requirements in order to develop and provide innovative systems. Tourism and travel management professionals should be able to assess innovative information technology developments and decide on IT investments in order to use them successfully and resiliently.
In addition to a comprehensive update, this third edition places an expanded focus on the mobility transition, online retail, networking in social media, big data, artificial intelligence, mixed reality and more. The book supports teaching and research as well as corporate practice.
- A complete overview of IT systems in the tourism and travel industry with decision-relevant depth and a future orientation
- Integration of the service processes from customer communication to back-office processing and data analysis
The aim of the textbook is to provide a comprehensive insight into the entire spectrum of electronic information, communication and reservation systems in tourism. The textbook covers the contents of lectures with exercises at universities of all levels. It is aimed at bachelor's and master's students of tourism as well as at executives and employees of companies in the travel and tourism industry. Subjects of IT-based information management are already taught at universities in the first study phase and deepened in the advanced and master's phases.
The textbook provides a comprehensive insight into the spectrum of electronic information, communication and reservation systems in tourism.
Current trends in e-tourism as well as the essential systems of travel intermediaries (especially global distribution systems) and service providers (flight, hotel, etc.) are covered. A wide-ranging overview of yield, distribution channel and customer relationship management presents the essential processes in detail.
- Current trends in e-tourism such as the customer journey, e-marketing, e-commerce, geo-information and m-tourism
- Case studies of exemplary character
- Suggestions for future implementations as inspiration for the future of e-tourism
The open internet is about creating rules to safeguard the equal and non-discriminatory treatment of traffic in the provision of internet access services and the associated rights of end users. At the centre of the current regulations is the best-effort principle: providers are to treat all data packets equally, regardless of content, application, origin and destination, and to transport them through their infrastructure as quickly as possible. Access providers are permitted reasonable traffic management in order to keep the network secure and its integrity intact, or to prevent impending network congestion. Providers may offer specialized services in addition to the traditional internet in order to guarantee a specific level of quality; sufficient network capacity must be available for this.
Home health applications have evolved over the last few decades. Assistive systems such as a data platform in connection with health devices can allow health-related data to be transmitted automatically to a database. However, there remain significant challenges concerning intermodular communication. Central among them is the challenge of achieving interoperability, the ability of devices to communicate and share data with each other. A major goal of this project was to extend an existing data platform (COMES®) and establish working interoperability by connecting assistive devices with differing approaches. We describe this process for a sleep monitoring and a physical exercise device. Furthermore, we aimed to test this setup and the implementation with a data platform in both a laboratory and an in-home setting with 11 elderly participants. The platform modification was realized, and the relevant changes were made so that the incoming data could be processed by the data platform as well as visually displayed in real time. Data was recorded by the respective devices and transmitted to the data server with minor disruptions. Our observations affirmed that difficulties and data loss are far more likely to occur with increasing technical complexity, in the event of an unstable internet connection, or when the device setup requires (elderly) subjects to take specific steps for proper functioning. We emphasize the importance of testing and evaluating home health technologies under real-life circumstances.
Demographic change is advancing inexorably. The number of people in need of care is growing rapidly, while at the same time there is already a shortage of several thousand caregivers to provide adequate, needs-based nursing care. Intelligent assistance systems can make a contribution to solving this problem. In this article, diverse assistance systems from the areas of nursing care, health care, rehabilitation and training, and mobility are presented as examples. These different components, equipped with the appropriate sensors, data transmission units and interfaces, can all be integrated into a system platform and thus form a comprehensive intelligent, digital assistance system or a mobile diagnosis and therapy platform.
This paper is a summary of the traffic light detection for an autonomous model vehicle, which was created for a student semester project. The traffic light detection is used when the model vehicle takes part in the VDI-Cup or other future cups. The VDI-Cup will be described later on. The paper starts with a summary, followed by a description of the VDI-Cup. After that, we explain how we implemented the traffic light detection and finish with the conclusions from our implementation.
Collecting the training set required for building a robust neural network for autonomous driving requires a large amount of data. It is nearly impossible to collect this data only by recording the driving of real-world vehicles; no organization or company is able to provide the resources needed to tackle this task.[1] Therefore, the approach currently used is to generate the large amount of training data by simulating virtual cars in computer simulations that try to mirror real-world road traffic as closely as possible. Besides the sheer volume, it is also important that the generated data varies considerably; otherwise, the neural network cannot learn to adapt to the many different situations that occur in real-world road traffic every day.[2] It would therefore be desirable to record the virtual car driving on as many different tracks as possible. To address this issue, this paper proposes a fast and simple iterative algorithm that can be used to procedurally generate tracks for the recording and training of autonomous driving cars.
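A minimal version of such an iterative generator is sketched below: it starts from points on a circle, repeatedly perturbs them radially and smooths them with their neighbours. All parameters (point count, roughness, iteration count) are illustrative assumptions rather than the algorithm proposed in the paper.

```python
import math
import random

def generate_track(n_points=16, radius=100.0, roughness=0.35,
                   iterations=4, seed=None):
    """Return a closed polygon of centre-line points for a random track."""
    rng = random.Random(seed)
    pts = [[radius * math.cos(2 * math.pi * i / n_points),
            radius * math.sin(2 * math.pi * i / n_points)]
           for i in range(n_points)]
    for _ in range(iterations):
        # Push every point randomly inwards or outwards along its radius ...
        for p in pts:
            scale = 1.0 + rng.uniform(-roughness, roughness)
            p[0] *= scale
            p[1] *= scale
        # ... then average with the neighbours so the track stays drivable.
        pts = [[(pts[i - 1][k] + 2.0 * pts[i][k]
                 + pts[(i + 1) % n_points][k]) / 4.0 for k in (0, 1)]
               for i in range(n_points)]
    return pts

track = generate_track(seed=42)
print(len(track), "centre-line points, first:", track[0])
```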
Lane detection is an essential capability for an autonomous car. Many modern cars with a lane departure warning system are already equipped with a lane detection system, but the models and methods for predicting lanes are numerous and often follow different approaches to the lane detection problem. This paper gives an overview of some state-of-the-art models and methods for lane detection that have recently achieved good results on public datasets. The paper also looks at training and testing an SCNN [10] using the free Colaboratory service from Google [29].
In the last decade autonomous driving has evolved from a science-fiction dream to an everyday reality. With more and more companies bringing their versions of self-driving cars onto the street, it is just a matter of time before the majority of transportation will be in the hands of computers. But with the deadly car accident involving a self-driving Uber car back in 2018, there is also the question of how reliable autonomous driving really is and how we can validate and test the safety of this new road user [1]. An emerging approach towards creating robust and adaptable neural networks is called domain randomization. This paper explores the possibility of using this method to create training data with driving simulations. It proposes a list of important criteria and factors affecting the selection of a fitting simulation. Furthermore, it presents a track generator which is able to create useful tracks and export them to a format that can be used by several common simulators in the field of autonomous driving research.
This paper is a summary of the time-to-collision (TTC) calculation for an autonomous model vehicle, created for a student semester project. The TTC is used when the model vehicle takes part in the Carolo- and VDI-Cup, which will be described later on. The paper starts with a summary of the background information, then digs deeper into which algorithms already exist and how they work, and finishes with the explanation and conclusion of our implementation.
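For orientation, the simplest constant-velocity formulation of TTC divides the gap to the leading object by the closing speed. The sketch below implements this baseline; it is not necessarily the algorithm chosen for the model vehicle.

```python
# Minimal constant-velocity time-to-collision sketch (a common baseline).

def time_to_collision(distance_m, ego_speed_mps, lead_speed_mps):
    """TTC = gap / closing speed; returns None when the gap is not closing."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0.0:
        return None          # the vehicles are not approaching each other
    return distance_m / closing_speed

# Example: 12 m gap, ego at 3 m/s, obstacle at 1 m/s -> TTC = 6 s
print(time_to_collision(12.0, 3.0, 1.0))
```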
A real-time object detection system is an essential core feature of any autonomous driving car. With the variety of models and datasets available, there is great opportunity to train a model that meets the high requirements of an autonomous vehicle. In this paper we present an object detection model for traffic signs and lights implemented on an NVIDIA Xavier board [7] using the Robot Operating System (ROS) [9]. Furthermore, we explain how to create and evaluate a dataset as well as how to test the accuracy and performance of the implemented detector. For pattern recognition with neural networks, performance and accuracy must be weighed against each other; the goal is to achieve one without sacrificing the other. Considering all these aspects, we decided to train a YOLOv4 standard and a YOLOv4-tiny configuration [2]. The YOLOv4 network exhibited a very high accuracy rate at high and medium resolutions, but its performance is too poor for a real-time object detection system. The YOLOv4-tiny network, on the other hand, reached the targeted performance even at the highest resolution scales tested, but at the cost of accuracy.
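For readers who want to reproduce a comparable setup offline, the hedged sketch below runs a trained YOLOv4(-tiny) Darknet model through OpenCV's DNN module. The file names, input size and thresholds are placeholders, and the project itself deploys its detector as a ROS node rather than via OpenCV.

```python
import cv2

# Load a Darknet config/weights pair and wrap it in OpenCV's detection model.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1.0 / 255.0, swapRB=True)

image = cv2.imread("frame.png")
class_ids, confidences, boxes = model.detect(image, confThreshold=0.4,
                                             nmsThreshold=0.5)
for cid, conf, box in zip(class_ids.flatten(), confidences.flatten(), boxes):
    x, y, w, h = box
    print("class", int(cid), "confidence", round(float(conf), 2),
          "box", (int(x), int(y), int(w), int(h)))
```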
Design Science Research is a research paradigm suitable for application-oriented disciplines that develop (construct) artifacts as solutions to practical problems. Design Science Research is known to be a mainstream research paradigm in engineering and other disciplines. In recent years, Design Science Research (DSR) has become an established research approach in the field of Information Systems (IS). Nevertheless, there is an ongoing debate about the methodology and guidelines for Design Science Research in Information Systems (IS-DSR). This paper proposes to gather and leverage insights from other design disciplines, such as engineering, to provide clarity and inspiration for IS-DSR and to work towards a common understanding of design science research across disciplines. This paper provides results of an initial empirical analysis of research literature from engineering disciplines. The results provide suggestions for validating DSR results and contribute to the understanding of research guidelines for DSR. In addition, a novel, fine-grained, and operational framework for analyzing DSR papers and projects is presented. The third contribution is a proposal to develop a common basic schema for design science research, analogous to the standard IMRaD schema for empirical research. Based on the analysis of samples of papers, this paper proposes IDEaD as the standard scheme for Design Science Research, i.e., Introduction, Description, Evaluation, and Discussion.
The electrification of urban bus fleets is a challenging task, especially for smaller public transport operators. The main challenge lies in the uncertainty about many technical aspects, like range of vehicles under different circumstances or charging times, that are new for the operators. The purpose of this research is to introduce an approach to solve this problem by incorporating all available data from an existing bus fleet and finding an optimal solution with discrete mathematical optimization. Extensive data logging in the project enabled us to leverage tracking data from the whole bus network including trajectories, powertrain data, and operational data. This enabled us to validate assumptions about the energy demand, waiting times, and different traffic situations during the day. To get better insights into the requirements of an urban bus fleet, we simulated the potential electric buses in detail and extracted other necessary data like actual dwell times. Based on the simulation results and processed data, we implemented a linear programming model to search for a cost-optimal configuration of vehicles and charging infrastructure. We tested the framework with a scenario in which we analyzed the solutions with different numbers of diesel buses in the fleet. The application of our algorithm shows that it can produce optimal results in a short amount of time, for a medium-sized city in Germany. We also demonstrate that the flexible and constraint-based formulation of this approach allows it to be incorporated in the planning process of most public transport operators.
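A strongly simplified version of such a fleet/charger optimization can be written as a small mixed-integer program, for example with PuLP as sketched below. All coefficients are invented for illustration; the project's model additionally incorporates measured dwell times, trip energies and further infrastructure constraints.

```python
import pulp

peak_vehicles  = 20        # buses needed in service at the same time
diesel_buses   = 5         # diesel buses kept in the fleet (scenario parameter)
energy_per_bus = 225.0     # kWh one electric bus must recharge per night
charger_energy = 450.0     # kWh one depot charger can deliver per night
cost_ebus, cost_charger = 600_000, 80_000   # investment per unit

m = pulp.LpProblem("fleet_configuration", pulp.LpMinimize)
n_ebus    = pulp.LpVariable("electric_buses", lowBound=0, cat="Integer")
n_charger = pulp.LpVariable("chargers",       lowBound=0, cat="Integer")

m += cost_ebus * n_ebus + cost_charger * n_charger           # total investment
m += n_ebus + diesel_buses >= peak_vehicles                  # cover the timetable
m += charger_energy * n_charger >= energy_per_bus * n_ebus   # recharge overnight

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(n_ebus), pulp.value(n_charger))
```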
In the field of computer science, continuous practices enable companies to frequently and instantly provide new software and products to customers and stakeholders. With growing interest in these practices, some secondary literature has been published in this research area. However, there are still open questions when it comes to teaching such practices to computer science students. With more and more companies demanding these skills from graduates, teaching them the required knowledge and skills is necessary. This systematic literature review follows the methodology of Kitchenham and analyses which of these practices are taught in higher computer science education. Along with the kinds of courses that use them, it reviews how they are taught in higher computer science education and how these approaches differ from each other. The systematic literature review points out that different teaching approaches are currently described in the literature. The review may help educators gain new ideas on how to develop their own course for teaching such practices or how to integrate such content into existing courses.
There is hardly a university that does not offer a course in software engineering for computer scientists. Due to the expanding complexity of software systems and rapidly changing requirements, it has become increasingly difficult to teach students all the content they need for their professional careers in industry or academia. Additionally, teaching modelling with modelling languages like UML is a sophisticated task for educators: student-generated solutions may be visually different from a sample solution and still be correct. In large software engineering courses, individual feedback for students is usually not possible or comes with a time delay, even though it would contribute to their learning success. Therefore, a rising number of software tools can be found to support this area of teaching. In this paper, a systematic literature review is presented. It follows the methodology of Kitchenham and provides an overview of the tools that are used to support the teaching of modelling in higher software engineering education. Alongside the functionalities and their differences, this literature review summarizes the difficulties that motivated educators to develop these tools.
Work-In-Progress: Converting textual software engineering class diagram exercises to UML models
(2022)
Class diagram exercises are an important part of the development of software engineering students in higher computer science education. Generating textual exercises with sample solutions for such courses is time-consuming for educators, especially with multiple courses and different contexts. According to the literature, the automatic generation of diagrams from structured text is possible. However, students often do not receive template-based exercise texts but descriptions in natural language, which is still an open research topic. To address this problem, this paper discusses a model that analyses real exercise texts used in software engineering education, considers the components of each individual sentence and produces a class diagram. Due to the complexity of natural language, the model does not yet deliver perfect results, but it is promising work in progress towards generating sample solutions for given exercise texts.
Ein Blick auf aktuelle Entwicklungen bei Blockchains und deren Auswirkungen auf den Energieverbrauch
(2020)
Bitcoin's enormous electricity consumption has led to often rather undifferentiated discussions in academia and practice about the sustainability of blockchain and distributed ledger technology in general. However, blockchain technology is already anything but homogeneous today, not only with regard to its applications, which now extend far beyond cryptocurrencies into business and the public sector, but also with regard to its technical characteristics and, in particular, its electricity consumption. This article summarizes the status quo of the electricity consumption of various implementations of blockchain technology, paying particular attention to the recent Bitcoin halving and so-called ZK-rollups. We argue that while Bitcoin and other proof-of-work blockchains do indeed consume a great deal of electricity, alternative blockchain solutions with significantly lower electricity consumption are already available today, and further promising concepts are being trialled that could reduce the electricity consumption of large blockchain networks considerably in the near future. From this we conclude that although criticism of Bitcoin's electricity consumption is legitimate, it must not be taken to imply an energy problem of blockchain technology in general. In many cases where processes can be digitalized or improved with the help of more energy-efficient blockchain variants, net energy savings can in fact be expected.
"There is a spirit of optimism, and the impression was confirmed that the relevance of Open Data has arrived in the tourism industry," concluded Prof. Dr. Guido Sommer of Hochschule Kempten, himself one of the initiators of the 3rd Round Table Open Data, which was held in Treuchtlingen on 24 January 2020.
In order to meet guests' rising expectations of up-to-date tourist information, the digital preparation and dissemination of relevant data plays a decisive role. At present, the collection and updating of content is not coordinated and is usually carried out in a decentralized manner. In practice, this often leads to missing data or to a flood of incomplete information caused by duplicate data maintenance of varying quality. While guests often despair when searching for reliable, relevant information, the providers of the information are usually overwhelmed by the need to serve the required level of detail adequately across all channels. For example, destinations have to enter and maintain event announcements in several systems so that the information can be distributed via the various channels. If data is to be exchanged between different channels, proprietary interfaces are required between the systems concerned. With every new provider on the market, the need for additional interfaces to all established systems grows ever faster. Owing to differing data structures, not only the costs increase, but also the likelihood of inconsistencies and data loss through incomplete data exchange. As a consequence, the decentralized organization often leads to data silos and isolated technical solutions within the tourism industry.
With the digital transformation and the increasing technological requirements that the tourism industry has to face in the future, the importance of data quality and digital data streams is growing. As more and more data is collected by a few large platforms and often used as an exclusive basis for the development of new services, both the tourism industry and users are becoming increasingly dependent on the business models and applications of global players. In order to guarantee a digital market for data and applications in the coming years and to find an answer on how to react actively to the outlined developments, it is necessary to consider certain data as infrastructure and as a public good. Following this thought, it must be ensured that the relevant digital information flows in tourism will be transparent and openly available as a crucial basis for innovation and new business models for the general public in the future. The provision of data as Open Data is an approach that has become more and more established worldwide and beyond the field of tourism, and it has been anchored in legislation since the signing of the Open Data Charter by the G8 states in June 2013 (1). In addition to increasing transparency and more data democracy, Open Data is also associated with high economic expectations.
An EU study estimates that, based on the free flow of data, the added value for the European data economy could increase from just under € 300 billion in 2016 to € 739 billion in 2020 (2). For Germany, the authors of a study commissioned by the Konrad-Adenauer-Stiftung estimated between 20.1 and 131.1 billion euros in potential added value through Open Data by 2026, depending on the degree to which Open Data can be established as a key strategic component of social action (3). However, in the field of tourism, the topic of Open Data is still relatively new. For this reason, a Think Tank with international and cross-industry experts set a first goal to discuss and evaluate the significance of Open Data for the tourism industry in September 2017. At the heart of the shared vision on exploiting the emerging opportunities is the development of an Open Digital Data Infrastructure for tourism, where all relevant information in tourism should be provided as Open Data through a coordinated collaboration and cross-industry networking of stakeholders. The joint development of an Open Knowledge Graph for tourism in Europe based on Linked Open Data proposed by Prof. Dr. Fensel of the Semantic Technology Institute in Innsbruck could be an approach to realise the vision.
Since trust is an important basis for creating purposeful networks of people and data, the development of coordinated strategies for the implementation of the vision was primarily considered as a leadership challenge. In addition, the need to prepare the topic of Open Data in tourism for different stakeholders in such a way that the respective challenges and opportunities for tourism become transparent and clearly understandable, was also identified. This white paper summarizes the main findings of the Think Tank and provides an overview and recommendations on current developments in Open Data in tourism. At the same time, it is a call to the industry to actively participate in the strategic discussion and to take on part of the leadership task for the implementation of an Open Digital Data Infrastructure in tourism.
With the digital transformation and the increasing technological requirements that the tourism industry will have to adapt to in the future, the importance of data quality and digital data streams is growing. As more and more data is collected by a few large platforms and frequently used as an exclusive basis for the development of new services, both the tourism industry and users are becoming increasingly dependent on the business models and applications of the global players. In order to be able to guarantee a digital market for data and applications in the coming years and to find an answer as to how to react actively to the development described, it is necessary to regard certain data as infrastructure and as a public good. It must be ensured that the relevant digital information flows in tourism will in future be transparent and openly available to the general public as a crucial basis for innovation and new business models. The provision of data as Open Data is an approach that has become increasingly established worldwide and beyond tourism, at the latest since the signing of the Open Data Charter by the G8 states in June 2013, and has also been anchored in legislation (1). In addition to increasing transparency and more data democracy, Open Data is above all also associated with high economic expectations.
An EU study estimates that, on the basis of free data flows, the added value of the European data economy could grow from just under 300 billion euros in 2016 to 739 billion euros in 2020 (2). For Germany, the authors of a study commissioned by the Konrad-Adenauer-Stiftung expect a potential added value through Open Data of between 12.1 and 131.1 billion euros by 2026, depending on how strongly Open Data can be established as a strategic core component of social action (3). In the field of tourism, however, the topic of Open Data is still relatively new. For this reason, a think tank of international and cross-industry experts set out for the first time in September 2017 to discuss and evaluate the significance of Open Data for the tourism industry. At the core of the joint vision developed for exploiting the emerging opportunities is the establishment of an open digital data infrastructure for tourism, in which all relevant information in tourism should be provided as Open Data through the coordinated collaboration and cross-industry networking of stakeholders. The joint development of an Open Knowledge Graph for tourism in Europe based on Linked Open Data, proposed by Prof. Dr. Fensel of the Semantic Technology Institute in Innsbruck, could be one approach to realizing this vision.
Since trust is a decisive prerequisite for the purposeful networking of people and data, the development of coordinated strategies for implementing the vision was regarded primarily as a leadership challenge. In addition, the need was identified to prepare the topic of Open Data in tourism for different stakeholders in such a way that the respective challenges and opportunities for tourism become transparent and clearly understandable. This white paper summarizes the most important findings of the think tank and provides an overview of and recommendations on current developments in Open Data in tourism. At the same time, it is a call to the industry to participate actively in the strategic discussion and to take on part of the leadership task for the implementation of an open digital data infrastructure in tourism.
The prediction of the future behavior of drivers is a challenging research topic. This paper therefore presents a new approach for occupancy prediction of the surrounding vehicles based on a static over-approximation of the driver behavior in the longitudinal direction and a situation-specific over-approximation of the driver behavior in the lateral direction. In contrast to existing probabilistic motion prediction approaches, no prior knowledge of the situation is necessary; the presented approach is therefore not limited to specific situations and can be used to predict the occupancy in unstructured environments. The evaluation of the approach with real-world data from the CommonRoad benchmark dataset shows a reduction of the occupancy area size of up to 70 % compared to a baseline method, while the prediction remains accurate up to a prediction time of 2 seconds, whereby the safety of the autonomous vehicle is ensured. The presented approach successfully handles the trade-off between occupancy area size and prediction safety while being applicable to all situations.
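The sketch below illustrates the general idea of a longitudinal occupancy over-approximation under bounded acceleration: the occupied interval is enclosed between a full-braking and a full-acceleration trajectory. The bounds and parameters are generic assumptions, not the paper's exact model.

```python
# Longitudinal occupancy over-approximation under bounded acceleration.
# Assumes 0 <= v0 <= v_max; positions in metres, speeds in m/s.

def longitudinal_occupancy(s0, v0, t, a_max=4.0, a_min=-8.0, v_max=15.0):
    """Return (s_lo, s_hi): the interval the vehicle can occupy after t seconds."""
    # Upper bound: accelerate with a_max until v_max is reached, then hold v_max.
    t_acc = max(0.0, min(t, (v_max - v0) / a_max))
    s_hi = s0 + v0 * t_acc + 0.5 * a_max * t_acc ** 2 + v_max * (t - t_acc)
    # Lower bound: brake with a_min until standstill, then stay put.
    t_brk = max(0.0, min(t, -v0 / a_min))
    s_lo = s0 + v0 * t_brk + 0.5 * a_min * t_brk ** 2
    return s_lo, s_hi

# Example: vehicle at s0 = 0 m, v0 = 10 m/s, predicted 2 s ahead
print(longitudinal_occupancy(0.0, 10.0, 2.0))   # roughly (6.25, 26.875)
```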
The rising adoption of cloud computing and increasing interconnections among its actors lead to the emergence of network-like structures and new associated risks. A major obstacle for addressing these risks is the lack of transparency concerning the underlying network structure and the dissemination of risks therein. Existing research does not consider the risk perspective in a cloud network’s context. We address this research gap with the construction of a reference model that can display such networks and therefore supports risk identification. We evaluate the reference model through real-world examples and interviews with industry experts and demonstrate its applicability. The model provides a better understanding of cloud networks and causalities between related risks. These insights can be used to develop appropriate risk management strategies in cloud networks. The reference model sets a basis for future risk quantification approaches as well as for the design of (IT) tools for risk analysis.
In this contribution we present a game-based learning concept which is based on mobile devices. It focuses on a joyful stabilization of knowledge and the engagement of students using the gamification approach and its game mechanics. Previous findings on how to promote students' motivation are adapted to the mobile context and discussed. A pre-evaluation of the prototype is described together with its findings.
Learning-centered teaching is becoming an important factor in a global perspective on learning software engineering. The Just-in-Time Teaching approach is used in a Chinese-German empirical case study. In a one-year project we will analyze the performance of our students in an active learning scenario with Just-in-Time Teaching and Peer Instruction. We will contribute an intercultural comparison of the achieved competencies based on students' self-assessment and the teachers' observations.
The task-based language learning (TBLL) approach is used in the context of foreign language pedagogy. Since a programming language is also a language by definition, this raises the question of whether the approach can also be used for learning programming. The paper presents the fundamentals of the TBLL approach and illustrates how it can be adopted for learning an object-oriented programming language. It gives suggestions for concrete specifications of task-based programming learning (TBPL) and shows their effects on an exemplary programming task.
In this research, we investigate the possibility of applying ranking task activity in teaching and learning software engineering courses. We introduce three types of ranking tasks, conceptual-, contextual- and sequential ranking questions, which cover most core topics such as requirement analysis, architecture design and quality validation in the course. We have also done experiments on a group of students to see if ranking tasks could increase their conceptual knowledge in specific areas. Assessments were given in order to evaluate the effectiveness of this activity, showing an obvious increase in complex conceptual understanding.
In this contribution, a process and a standardized form of documentation are proposed for the creation of gamified and competency-based learning activities. Furthermore, the application of the process and its documentation is described using an exemplary learning activity that was created, implemented and evaluated. The findings indicate that the use of gamification design elements for learning a design pattern can be effective and successful and can lead to an increase in learning motivation.
One of the biggest barriers of today's electric vehicles is their limited range. Although the majority of drivers do not drive more than 40 kilometres per day ([MiD2008]), buying behaviour towards electric vehicles, whose range is much smaller than that of vehicles with combustion engines, is very hesitant. With this work we address the additional information needs of drivers of electric vehicles by developing an application for standard smartphones that both provides a detailed and accurate graphical representation of the electric vehicle's range and implements an advanced routing algorithm that takes the specific energy consumption of the electric vehicle into account.
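To illustrate the routing part, the sketch below runs Dijkstra's algorithm over edge weights expressed as energy consumption instead of distance or time. The tiny example graph and its energy values are invented; the application derives such costs from a vehicle-specific consumption model.

```python
import heapq

def min_energy_route(graph, start, goal):
    """graph: {node: [(neighbour, energy_kwh), ...]} with non-negative energies."""
    queue = [(0.0, start, [start])]
    best = {}
    while queue:
        energy, node, path = heapq.heappop(queue)
        if node == goal:
            return energy, path
        if energy > best.get(node, float("inf")):
            continue                         # stale queue entry, skip it
        for nxt, cost in graph.get(node, []):
            new_energy = energy + cost
            if new_energy < best.get(nxt, float("inf")):
                best[nxt] = new_energy
                heapq.heappush(queue, (new_energy, nxt, path + [nxt]))
    return float("inf"), []

graph = {"A": [("B", 1.2), ("C", 0.8)], "B": [("D", 0.5)],
         "C": [("B", 0.1), ("D", 1.5)], "D": []}
print(min_energy_route(graph, "A", "D"))     # -> (1.4, ['A', 'C', 'B', 'D'])
```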
Occluding Edges Soft Shadows
(2016)
In this paper, a new algorithm to render soft shadows in real-time applications is introduced, namely the Occluding Edges Soft Shadow algorithm (OESS for short). The algorithm approximates the shadow cast by linear lights by finding the outlines of an occluding object (occluding edges) and considering these in a fragment's illumination. The method is based on the shadow mapping technique, whereby its capability of rendering the shadow at an interactive rate does not depend on the complexity of the scene. The paper supplies an overview of several methods for producing shadows and soft shadows in real-time computer graphics, a detailed description of the newly developed algorithm, and a section with results and future possibilities for improvement.
The project IDA - Intelligent Data Acquisition is an interdisciplinary project in the fields of applied informatics and mechanical engineering. Its purpose is to collect process-relevant information in industrial foundry processes such as iron casting with handmade and mechanically made molds. Currently, many data sets are collected by hand; they contain inaccuracies and errors and are not available digitally for further analysis. As a result, it is not possible to evaluate them automatically; in particular, it is not possible to trace a defective cast part back to the full set of its production parameters. We develop several procedures to collect these data sets and prepare them for processing by data analysis algorithms. The acquisition of digitally available data in IDA is done mostly by optical sensors. In this paper we describe our approach, especially regarding the marking and recognition of relevant objects. Furthermore, we show first results in environments close to reality.
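One common way to mark objects so that optical sensors can recognise them is with fiducial markers; the sketch below detects ArUco markers using OpenCV's legacy aruco module (opencv-contrib-python before 4.7). Whether IDA actually relies on ArUco markers is an assumption made purely for illustration.

```python
import cv2
import cv2.aruco as aruco

# Detect ArUco fiducial markers in a camera image (legacy cv2.aruco API).
dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
parameters = aruco.DetectorParameters_create()

image = cv2.imread("foundry_station.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
corners, ids, _rejected = aruco.detectMarkers(gray, dictionary,
                                              parameters=parameters)
if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        print("marker", int(marker_id), "at corners", quad.reshape(4, 2))
```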
Support prospect of success in mathematics for first-year students in engineering by early feedback
(2020)
Providing students with the necessary mathematical skills is a growing challenge in German engineering degree courses. This is due to manifold, mostly pre-university reasons, for example inadequate or heterogeneous secondary education in Germany. Preparatory courses, a common remedy offered by many universities, often turn out to lack the expected effect. However, introducing an early, mandatory test that scans the basic mathematical skills defined as the entrance standard showed positive results, as it detects the individual knowledge gaps of first-year students and motivates them to take action. For this purpose, we coupled the test with detailed, individual feedback and a cluster of affiliated support offers, preparatory to the test as well as accompanying the first year. Being mandatory, the test already pushes more students into our preparatory math courses, while its detailed, individual feedback enables students to choose among the subsequent support offers and to close remaining gaps. The support offers range from transition workshops, co-learning tutorials, an online exercise platform and video tutorials to conventional worksheets and consultation hours. This setup of a mandatory test, early and detailed feedback and versatile support offers tackling the missing skills significantly increased success in the basic mathematics and mathematics-dependent modules of our engineering bachelor courses. In the process, this setup generates several additional statistical outcomes, such as a rating of the different German pre-university school systems, which helps in further developing measures that deal with the decreasing skills of beginners.