004 Datenverarbeitung; Informatik
Document Type
- Article in a Periodical of the TH Wildau (16)
- Conference Proceeding (9)
- Article (7)
- Diploma Thesis (7)
- Report (6)
- Master's Thesis (2)
- Bachelor Thesis (1)
- Preprint (1)
- Study Thesis (1)
Institute
- Fachbereich Betriebswirtschaft / Wirtschaftsinformatik (bis 8/2014) (16)
- Fachbereich Wirtschaft, Informatik, Recht (16)
- Fachbereich Ingenieur- und Naturwissenschaften (8)
- Fachbereich Ingenieurwesen / Wirtschaftsingenieurwesen (bis 8/2014) (7)
- Bibliotheksinformatik (2)
- Fachbereich Wirtschaft, Verwaltung und Recht (bis 8/2014) (1)
Has Fulltext
- yes (50)
Keywords
- Wirtschaftsinformatik (3)
- E-Learning (2)
- Geschäftsprozess (2)
- business computing (2)
- e-learning (2)
- machine learning (2)
- Autismus (1)
- Bibliotheksmanagementsystem (1)
- CRM (1)
- Clusteralgorithmus (1)
To guarantee the safety of medical devices, including embedded systems, it is essential to consider both the electronic components and the natural environment during validation and verification. In contrast to prior research, we present a hardware-in-the-loop environment that connects a real medical system to a biological model in real time for validation; it models the mechanical component of the heart valves in addition to the electrical conduction and electrical stimulation of the heart chambers. Our model accounts for the dynamic adaptation of the temporal processes in the heart chambers to the pacing frequency of the individual chambers as a function of the action potential. This study investigates two additional risk factors affecting the heart under different conditions: pacemaker syndrome and electrical stimulation during the vulnerable phase. Both can be life-threatening to the patient if left untreated.
In implementing our concept on a physical pacemaker connected to our software-based model of the heart, we discovered that the test pacemaker was unable to generate the required heart rate in three of the scenarios we tested. Additionally, our tests revealed occurrences of pacemaker syndrome and stimulation in the vulnerable phase.
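The vulnerable-phase check described above can be illustrated with a simplified timing model. The sketch below is not the authors' heart model; the ChamberCycle class, its refractory and vulnerable window durations, and the check_pacing_events helper are hypothetical placeholders that only show how a pacing pulse could be classified against the last depolarization.

```python
# Minimal sketch (not the authors' model): checking whether a pacing pulse
# falls into a simplified "vulnerable phase" of a heart chamber.
# All timings below are illustrative placeholder values, not clinical data.

from dataclasses import dataclass

@dataclass
class ChamberCycle:
    """One cardiac cycle of a single chamber, in milliseconds after depolarization."""
    absolute_refractory_ms: float = 250.0   # no re-excitation possible (placeholder)
    vulnerable_window_ms: float = 80.0      # window after the refractory period (placeholder)

    def phase_of(self, t_since_depolarization_ms: float) -> str:
        """Classify the instant of a pacing pulse relative to the last depolarization."""
        if t_since_depolarization_ms < self.absolute_refractory_ms:
            return "refractory"
        if t_since_depolarization_ms < self.absolute_refractory_ms + self.vulnerable_window_ms:
            return "vulnerable"          # stimulation here is potentially dangerous
        return "excitable"

def check_pacing_events(pulse_times_ms, depolarization_times_ms, cycle=ChamberCycle()):
    """Flag every pacing pulse that lands in the vulnerable phase."""
    flagged = []
    for pulse in pulse_times_ms:
        # last depolarization that happened before the pulse
        previous = max((d for d in depolarization_times_ms if d <= pulse), default=None)
        if previous is None:
            continue
        if cycle.phase_of(pulse - previous) == "vulnerable":
            flagged.append(pulse)
    return flagged

# Example: a pulse 280 ms after a depolarization lands inside the vulnerable window.
print(check_pacing_events([280.0, 900.0], [0.0, 800.0]))   # -> [280.0]
```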
The article discusses an approach to reducing the time spent preparing medical images for training neural networks by shortening the time needed to create image masks. The task is considered using the example of images of the mucous membrane of the paranasal sinus. The specifics of the task prevented existing software solutions from being used effectively. In the course of the study, a software solution was proposed that radically reduces the time needed to create image masks. The article also analyzes the shortcomings of automated mask creation and possible ways of addressing them. The time lost to adjusting the color palette can be reduced further, to 1-2 minutes, with an average deviation of 7.61%.
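As an illustration of palette-based mask creation, the following sketch thresholds a color range to obtain a binary mask; it is not the software solution from the article. OpenCV and NumPy are assumed dependencies, and the HSV bounds stand in for the adjustable color palette mentioned above.

```python
# Illustrative sketch only: automated mask creation by thresholding a color range.
# The HSV bounds below are hypothetical tuning parameters ("color palette").

import cv2          # OpenCV
import numpy as np

def create_mask(image_bgr: np.ndarray,
                lower_hsv=(0, 30, 60),
                upper_hsv=(25, 255, 255)) -> np.ndarray:
    """Return a binary mask selecting pixels inside the given HSV color range."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Small morphological clean-up to remove isolated noise pixels.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return mask

# Usage: adjust the HSV bounds per image series, then batch-process the dataset.
# image = cv2.imread("sinus_slice.png")
# cv2.imwrite("sinus_slice_mask.png", create_mask(image))
```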
Website blocking in the European Union: Network interference from the perspective of Open Internet
(2024)
By establishing an infrastructure for monitoring and blocking networks in accordance with European Union (EU) law on preventive measures against the spread of information, EU member states have also made it easier to block websites and services and to monitor information. While relevant studies have documented Internet censorship in non-European countries, as well as the use of such infrastructures for political reasons, this study examines network interference practices such as website blocking against the backdrop of an almost complete lack of EU-related research. Specifically, it performs an analysis of all 27 EU countries based on three sources: first, tens of millions of historical network measurements collected in 2020 by Open Observatory of Network Interference volunteers from around the world; second, the publicly available blocking lists used by EU member states; and third, the reports issued by network regulators in each country from May 2020 to April 2021. Our results show that authorities issue multiple types of blocklists and that Internet Service Providers limit access to different types and categories of websites and services. Such resources are sometimes blocked for unknown reasons and are not included in any of the publicly available blocklists. The study concludes by discussing the hurdles involved in network measurements and the lack of transparency from regulators in specifying which website addresses are subject to blocking.
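One way to approach the cross-referencing described above is to compare domains flagged as anomalous in measurement data against a published blocklist. The sketch below is only a minimal illustration; the file names, the anomaly column, and the CSV layout are assumptions, not the study's actual data format.

```python
# A minimal sketch (not the study's pipeline): cross-referencing domains that show
# blocking anomalies in network measurements with a regulator's published blocklist.

import csv

def load_blocklist(path: str) -> set[str]:
    """Read one domain per line from a published blocklist."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def anomalous_domains(measurements_csv: str) -> set[str]:
    """Collect domains whose measurements were flagged as anomalous."""
    domains = set()
    with open(measurements_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("anomaly") == "1":          # assumed flag column
                domains.add(row["domain"].lower())
    return domains

blocked_officially = load_blocklist("eu_member_state_blocklist.txt")   # placeholder path
blocked_observed = anomalous_domains("ooni_measurements.csv")          # placeholder path

# Domains that appear blocked in measurements but are absent from the public list.
unexplained = blocked_observed - blocked_officially
print(f"{len(unexplained)} domains blocked for unknown reasons")
```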
The object of the study is the process of identifying the state of a computer network; the subject of the study is the set of methods for identifying that state. The purpose of the paper is to improve the efficacy of intrusion detection in computer networks by developing a method based on transformer models. Results: the work analyzes traditional machine learning algorithms and deep learning methods and considers the advantages of using transformer models. A method for detecting intrusions in computer networks is proposed. It differs from known approaches by utilizing the Vision Transformer for Small-size Datasets (ViTSD) deep learning algorithm and incorporates procedures to reduce the correlation of the input data and to transform the data into the format required by the model. The developed methods are implemented in Python using the Google Colab cloud service with Jupyter Notebook. Conclusions: experiments confirmed the efficiency of the proposed method. Using the method based on the ViTSD algorithm together with the data preprocessing procedure increases the model's accuracy to 98.7%, which makes it possible to recommend it for practical use to improve the accuracy of identifying the state of a computer system.
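A rough impression of such a pipeline, not the authors' ViTSD implementation: the sketch below scales tabular flow features, reshapes them into a small single-channel image, and classifies it with a tiny transformer encoder over patch embeddings. PyTorch is an assumed dependency, and all layer sizes and hyperparameters are illustrative.

```python
# A minimal sketch: tabular flow features -> small "image" -> transformer classifier.
# This is not the ViTSD architecture from the paper; shapes are illustrative only.

import torch
import torch.nn as nn

def features_to_image(x: torch.Tensor, side: int = 8) -> torch.Tensor:
    """Scale a (batch, n_features) tensor to [0, 1] and reshape it to (batch, 1, side, side)."""
    x = (x - x.min(dim=0, keepdim=True).values) / (
        x.max(dim=0, keepdim=True).values - x.min(dim=0, keepdim=True).values + 1e-8)
    padded = torch.zeros(x.shape[0], side * side)
    n = min(x.shape[1], side * side)
    padded[:, :n] = x[:, :n]
    return padded.view(-1, 1, side, side)

class TinyViT(nn.Module):
    def __init__(self, side=8, patch=4, dim=64, n_classes=2):
        super().__init__()
        n_patches = (side // patch) ** 2
        self.to_patches = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)  # patch embedding
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, images):
        tokens = self.to_patches(images).flatten(2).transpose(1, 2)      # (B, patches, dim)
        cls = self.cls.expand(images.shape[0], -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])                                  # classify the CLS token

# Example: 32 flows with 50 features each -> "intrusion" vs "benign" logits.
flows = torch.rand(32, 50)
logits = TinyViT()(features_to_image(flows))
print(logits.shape)   # torch.Size([32, 2])
```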
Purpose
This study investigates whether the artificial neural network approach, when applied to a large organizational soft HR performance dataset, results in a better model (in terms of R2/RMSE) than linear regression. Predictive modelling offers a more informed basis for managerial decision making within soft HR performance management.
Design/methodology/approach
The study builds on a dataset (n > 43 k) stemming from an annual employee survey of a multinational corporation (MNC). It covers several soft HR performance drivers and outcomes (such as engagement and satisfaction) that either show evidence of a dual-role nature or exhibit non-linear relationships. This study applies the framework for artificial neural network analysis in organization research (Scarborough and Somers, 2006).
Findings
The analysis reveals substantial artificial neural network model performance (R2 > 0.75) with an excellent fit statistic (nRMSE < 0.10), and all drivers have a comparable relative importance (RMI in [0.102; 0.125]). The predictive analysis revealed that the organization has to increase six of the drivers, keep two at the same level and decrease one.
Originality/value
To date, this study uses the largest dataset in soft HR performance management. Additionally, the predictive results reveal that, to achieve optimal performance, specific target values lie below the current levels.
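The model comparison described in this abstract can be sketched with standard tooling. The snippet below is a hypothetical illustration using scikit-learn on synthetic data; it only mirrors the comparison of R2 and RMSE between a feed-forward neural network and linear regression, not the study's actual dataset, drivers, or model configuration.

```python
# A minimal sketch of the R2/RMSE comparison on stand-in synthetic data.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(5000, 9))                 # nine hypothetical soft HR drivers
y = 2 * np.sin(X[:, 0]) + X[:, 1] - 0.3 * X[:, 2] ** 2 + rng.normal(0, 0.2, 5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "neural network": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"{name:18s}  R2={r2_score(y_test, pred):.3f}  RMSE={rmse:.3f}")
```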
Understanding Website Privacy Policies—A Longitudinal Analysis Using Natural Language Processing
(2023)
Privacy policies are the main method for informing Internet users of how their data are collected and shared. This study aims to analyze the deficiencies of privacy policies in terms of readability, vague statements, and the use of pacifying phrases concerning privacy. It takes a step forward in the literature on this topic through a comprehensive analysis encompassing both time and website coverage. It characterizes trends across website categories, top-level domains, and popularity ranks. Furthermore, studying the development in the context of the General Data Protection Regulation (GDPR) offers insights into the impact of regulations on policy comprehensibility. The findings reveal a concerning trend: privacy policies have grown longer and more ambiguous, making it challenging for users to comprehend them. Notably, there is an increased proportion of vague statements, while clear statements have seen a decrease. Despite this, the study highlights a steady rise in the inclusion of reassuring statements aimed at alleviating readers’ privacy concerns.
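Two of the characteristics examined above, readability and vague wording, can be approximated as in the following sketch. It is an illustration only: the vague-term list is a small hypothetical sample, and textstat is an assumed third-party library, not necessarily the tooling used in the study.

```python
# Illustrative only: readability score and vague-term density for a policy excerpt.

import re
import textstat   # assumed dependency: pip install textstat

VAGUE_TERMS = ["may", "might", "from time to time", "periodically",
               "as necessary", "certain", "generally", "typically"]

def analyze_policy(text: str) -> dict:
    lowered = text.lower()
    vague_hits = sum(len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
                     for term in VAGUE_TERMS)
    sentences = max(textstat.sentence_count(text), 1)
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),  # lower = harder to read
        "words": textstat.lexicon_count(text),
        "vague_terms_per_sentence": vague_hits / sentences,
    }

sample = ("We may share certain information with partners from time to time "
          "as necessary to provide and improve our services.")
print(analyze_policy(sample))
```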
Conference contributions on current developments in research and industry.
Empirical insights into highly promising commercial sentiment analysis solutions that go beyond their vendors’ claims are rare. Moreover, due to ongoing advances in the field, earlier studies are far from reflecting the current situation. The present research aims to evaluate and compare current solutions. Based on tweets about airline service quality, we test the solutions of six vendors with different market power, namely Amazon, Google, IBM, Microsoft, Lexalytics, and MeaningCloud, and report their accuracy, precision, recall, (macro) F1, time performance, and service level agreements (SLA). For positive and neutral classifications, none of the solutions achieved a precision of over 70%. For negative classifications, all of them demonstrate high precision of around 90%; however, only IBM Watson NLU and Google Cloud Natural Language achieve a recall of over 70% and can thus be considered for application scenarios where negative text detection is a major concern. Overall, our study shows that an independent, critical experimental analysis of sentiment analysis services can provide insights into their general reliability and classification accuracy beyond marketing claims, allowing solutions to be compared on real-world data and potential weaknesses and margins of error to be analyzed before making an investment.
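The evaluation metrics named above can be computed as in the following sketch, which uses scikit-learn on toy gold and predicted labels; it reproduces only the metric computation, not the vendors' API calls or the airline tweet dataset.

```python
# Computing accuracy, per-class precision/recall/F1, and macro F1 for one vendor's output.

from sklearn.metrics import accuracy_score, precision_recall_fscore_support, f1_score

LABELS = ["negative", "neutral", "positive"]

gold      = ["negative", "negative", "neutral", "positive", "negative", "neutral"]  # toy data
predicted = ["negative", "neutral",  "neutral", "positive", "negative", "positive"]  # toy data

accuracy = accuracy_score(gold, predicted)
precision, recall, f1, _ = precision_recall_fscore_support(
    gold, predicted, labels=LABELS, zero_division=0)
macro_f1 = f1_score(gold, predicted, labels=LABELS, average="macro", zero_division=0)

print(f"accuracy={accuracy:.2f}  macro F1={macro_f1:.2f}")
for label, p, r, f in zip(LABELS, precision, recall, f1):
    print(f"{label:9s} precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```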
European Union (EU) member states consider themselves bulwarks of democracy and freedom of speech. However, there is a lack of empirical studies assessing possible violations of these principles in the EU through Internet censorship. This work begins to address this research gap by investigating Internet censorship in Spain over 2016-2020, including the controversial 2017 Catalan independence referendum. We focus, in particular, on network interference disrupting the regular operation of Internet services or content. We analyzed the data collected by the Open Observatory of Network Interference (OONI) network measurement tool. The measurements targeted websites defending civil rights, secure communication tools, extremist political content, and information portals for the Catalan referendum. Our analysis indicates the existence of advanced network interference techniques that grow in sophistication over time. Internet Service Providers (ISPs) initially introduced information controls for a clearly defined legal scope (i.e., copyright infringement). Our research observed that such information controls had been re-purposed (e.g., to target websites supporting the referendum). We present evidence of network interference involving all the major ISPs in Spain, which serve 91% of mobile and 98% of broadband users, as well as several governmental and law enforcement authorities. In these measurements, we detected 16 unique blockpages, 2 Deep Packet Inspection (DPI) vendors, and 78 blocked websites. We also contribute an enhanced domain testing methodology to detect certain kinds of Transport Layer Security (TLS) blocking that OONI could not initially detect. In light of our experience analyzing this dataset, we also make suggestions for improving the collection of evidence of network interference.
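The kind of TLS blocking test mentioned above can be illustrated with a simple SNI-variation probe. The sketch below is not OONI's methodology or the study's enhanced test; the endpoint and domain names are placeholders, and the handshake outcome labels are a simplification.

```python
# Minimal sketch of an SNI-based TLS blocking probe: attempt a TLS handshake to a fixed
# endpoint while varying the Server Name Indication, and record how the attempt ends.

import socket
import ssl

def tls_handshake(host_ip: str, sni: str, port: int = 443, timeout: float = 5.0) -> str:
    """Return 'ok', 'reset', 'timeout', or another error label for one handshake attempt."""
    context = ssl.create_default_context()
    context.check_hostname = False          # we only care whether the handshake completes
    context.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host_ip, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=sni):
                return "ok"
    except ConnectionResetError:
        return "reset"                      # often a sign of in-path interference
    except socket.timeout:
        return "timeout"
    except (ssl.SSLError, OSError) as exc:
        return f"error: {exc.__class__.__name__}"

# Comparing a control name against a potentially blocked name on the same endpoint
# (placeholder values) hints at SNI-triggered interference if only one of them fails.
# print(tls_handshake("93.184.216.34", "example.com"))
# print(tls_handshake("93.184.216.34", "suspected-blocked.example"))
```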