004 Data Processing; Computer Science
Website blocking in the European Union: Network interference from the perspective of Open Internet
(2024)
By establishing an infrastructure for monitoring and blocking networks in accordance with European Union (EU) law on preventive measures against the spread of information, EU member states have also made it easier to block websites and services and to monitor information. While relevant studies have documented Internet censorship in non-European countries, as well as the use of such infrastructures for political reasons, this study examines network interference practices such as website blocking against the backdrop of an almost complete lack of EU-related research. Specifically, it performs an analysis for all 27 EU countries based on three sources: first, tens of millions of historical network measurements collected in 2020 by Open Observatory of Network Interference volunteers from around the world; second, the publicly available blocking lists used by EU member states; and third, the reports issued by network regulators in each country from May 2020 to April 2021. The results show that authorities issue multiple types of blocklists and that Internet Service Providers limit access to different types and categories of websites and services. Some resources are blocked for unknown reasons and do not appear on any of the publicly available blocklists. The study concludes by discussing the hurdles of network measurement and the lack of transparency from regulators in specifying which website addresses are subject to blocking.
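A minimal sketch of the kind of DNS-consistency test that underlies such measurements, written in Python with the dnspython package: answers from an ISP resolver are compared against a public control resolver. The resolver addresses and the test domain are illustrative assumptions, not values from the study, and OONI's actual methodology is considerably more involved.

    # Compare A records from an ISP resolver and a control resolver.
    # Resolver IPs and the test domain are placeholders, not study data.
    import dns.resolver  # pip install dnspython

    def resolve_a_records(domain: str, nameserver: str) -> set[str]:
        """Return the set of A-record IPs a given nameserver hands out."""
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        try:
            return {rr.address for rr in resolver.resolve(domain, "A")}
        except dns.resolver.NXDOMAIN:
            return set()  # domain "does not exist" per this resolver

    domain = "example.org"                                   # hypothetical test target
    isp_answers = resolve_a_records(domain, "192.0.2.53")    # ISP resolver (placeholder)
    control_answers = resolve_a_records(domain, "9.9.9.9")   # public control resolver

    # Disjoint answer sets are a hint (not proof) of DNS manipulation,
    # since CDNs legitimately return location-dependent addresses.
    if isp_answers and control_answers and isp_answers.isdisjoint(control_answers):
        print(f"Possible DNS interference for {domain}: {isp_answers} vs {control_answers}")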
Purpose
This study investigates whether the artificial neural network approach, when used on a large organizational soft HR performance dataset, yields a better model (in terms of R2/RMSE) than linear regression. Predictive modelling thereby offers a more informed basis for managerial decision making within soft HR performance management.
Design/methodology/approach
The study builds on a dataset (n > 43,000) from an annual employee survey at a multinational corporation (MNC). It covers several soft HR performance drivers and outcomes (such as engagement and satisfaction) for which there is evidence of either a dual-role nature or non-linear relationships. The study applies the framework for artificial neural network analysis in organization research (Scarborough and Somers, 2006); a minimal comparison sketch follows this abstract.
Findings
The analysis reveals substantial artificial neural network model performance (R2 > 0.75) with an excellent fit statistic (nRMSE < 0.10), and all drivers have comparable relative importance (RMI between 0.102 and 0.125). The predictive analysis revealed that the organization has to increase six of the drivers, keep two at the same level, and decrease one.
Originality/value
To date, this study uses the largest dataset in soft HR performance management. Additionally, the predictive results reveal that specific target values lie below the current levels for achieving optimal performance.
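The abstract's comparison can be illustrated with a minimal scikit-learn sketch in Python. The synthetic data below merely stands in for the confidential survey dataset, the nine-driver shape is an assumption, and the original study follows the Scarborough and Somers (2006) framework rather than this particular implementation.

    # Compare linear regression and a neural network on R^2 and nRMSE.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.normal(size=(43_000, 9))  # nine soft HR drivers (assumed shape)
    y = np.tanh(X @ rng.normal(size=9)) + 0.1 * rng.normal(size=43_000)  # non-linear outcome

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    for name, model in [("linear regression", LinearRegression()),
                        ("neural network", MLPRegressor(hidden_layer_sizes=(32, 16),
                                                        max_iter=500, random_state=0))]:
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        rmse = mean_squared_error(y_te, pred) ** 0.5
        nrmse = rmse / (y_te.max() - y_te.min())  # normalized RMSE, as in the abstract
        print(f"{name}: R2={r2_score(y_te, pred):.3f}, nRMSE={nrmse:.3f}")

On data with genuine non-linear driver-outcome relationships, the network's R2 advantage over the linear baseline is exactly what the study's research question probes.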
Understanding Website Privacy Policies—A Longitudinal Analysis Using Natural Language Processing
(2023)
Privacy policies are the main method for informing Internet users of how their data are collected and shared. This study analyzes the deficiencies of privacy policies in terms of readability, vague statements, and the use of pacifying phrases concerning privacy. It takes a step forward in the literature on this topic through a comprehensive analysis encompassing both time and website coverage, characterizing trends across website categories, top-level domains, and popularity ranks. Furthermore, studying this development in the context of the General Data Protection Regulation (GDPR) offers insights into the impact of regulation on policy comprehensibility. The findings reveal a concerning trend: privacy policies have grown longer and more ambiguous, making it challenging for users to comprehend them. Notably, the proportion of vague statements has increased, while clear statements have decreased. Despite this, the study highlights a steady rise in the inclusion of reassuring statements aimed at alleviating readers’ privacy concerns.
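A minimal sketch, assuming the textstat package, of the two ingredients such an analysis combines: a standard readability score and a simple keyword heuristic for vague statements. The example text and term list are illustrative, not the study's corpus or classifier.

    # Score one (toy) policy sentence for readability and vagueness.
    import textstat  # pip install textstat

    policy = (
        "We may share certain information with selected partners where "
        "appropriate and as permitted by applicable law."
    )

    fre = textstat.flesch_reading_ease(policy)     # higher = easier to read
    grade = textstat.flesch_kincaid_grade(policy)  # approximate school grade level

    # Vague quantifiers of the kind the study flags can be counted
    # with a crude keyword heuristic (punctuation is ignored here).
    vague_terms = ("may", "certain", "appropriate", "reasonable", "some")
    vague_count = sum(policy.lower().split().count(t) for t in vague_terms)

    print(f"Flesch Reading Ease: {fre:.1f}, grade level: {grade:.1f}, "
          f"vague terms: {vague_count}")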
Conference contributions on current developments in research and industry.
Empirical insights into highly promising commercial sentiment analysis solutions that go beyond their vendors’ claims are rare, and, given ongoing advances in the field, earlier studies no longer reflect the current situation. The present research aims to evaluate and compare current solutions. Based on tweets about airline service quality, we test the solutions of six vendors with different market power: Amazon, Google, IBM, Microsoft, Lexalytics, and MeaningCloud. We report their measures of accuracy, precision, recall, (macro) F1, time performance, and service level agreements (SLA). For positive and neutral classifications, none of the solutions reached a precision of over 70%. For negative classifications, all of them demonstrate high precision of around 90%; however, only IBM Watson NLU and Google Cloud Natural Language achieve a recall of over 70% and can thus be considered for application scenarios where negative text detection is a major concern. Overall, our study shows that an independent, critical, experimental analysis of sentiment analysis services can provide insights into their general reliability and classification accuracy beyond marketing claims, allowing solutions to be compared on real-world data and potential weaknesses and margins of error to be analyzed before making an investment.
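The reported measures can be computed for any vendor's output with a short scikit-learn sketch in Python; the label arrays below are toy stand-ins for the airline-tweet ground truth and a vendor's predictions.

    # Per-class precision/recall and macro F1 for three sentiment labels.
    from sklearn.metrics import classification_report, f1_score

    y_true = ["negative", "negative", "neutral", "positive", "negative", "neutral"]
    y_pred = ["negative", "negative", "positive", "positive", "negative", "negative"]

    # Per-class precision and recall, as reported separately per sentiment.
    print(classification_report(y_true, y_pred, zero_division=0))

    # Macro F1 weights all three classes equally, regardless of support,
    # so strong negative-class performance cannot mask weak neutral/positive results.
    print("macro F1:", f1_score(y_true, y_pred, average="macro"))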
Research on the robustness of networks, and of the Internet in particular, has gained critical importance in recent decades, as ever more individuals, societies, and firms rely on this global network infrastructure for communication, knowledge transfer, business processes, and e-commerce. In particular, modeling the structure of the Internet has inspired several novel graph metrics for assessing important topological robustness features of large complex networks. This survey provides a comparative overview of these metrics, presents their strengths and limitations for analyzing the robustness of the Internet topology, and outlines a conceptual tool set to facilitate their future adoption in Internet research and practice as well as in other areas of network science.
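A minimal sketch, in Python with networkx, of two robustness probes of the kind such surveys compare: the algebraic connectivity of a topology and the size of the giant component after a targeted attack on hubs. The scale-free toy graph is an assumption standing in for Internet-scale topologies.

    # Two topological robustness probes on a toy scale-free graph.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=200, m=2, seed=0)  # scale-free toy topology

    # Algebraic connectivity (Fiedler value): larger values indicate
    # a graph that is harder to disconnect.
    print("algebraic connectivity:", nx.algebraic_connectivity(G))

    # Size of the largest connected component after removing the ten
    # highest-degree nodes, a common targeted-attack robustness probe.
    hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]
    G.remove_nodes_from(n for n, _ in hubs)
    giant = max(nx.connected_components(G), key=len)
    print("largest component after hub removal:", len(giant), "of 200 nodes")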
Tech Report December 2020: a condensed overview of current and emerging technologies
(2020)
The COVID-19 pandemic has changed almost every aspect of life, from the private sphere, i.e. how people live and work, to the professional sphere, for example how companies interact with their customers or how customers select and ultimately buy products and services. It is precisely these changes that offer opportunities to innovate one's own products or business models through new applications of technologies. For this reason, the latest Tech Report (as of December 2020) of the Kompetenzzentrum IT-Wirtschaft not only presents current and emerging technologies but also gives a brief insight into which concrete innovations have emerged as a result of the COVID-19 pandemic. Perhaps one or another of the applications presented in this article will provide an impulse for readers' own entrepreneurial innovation ideas. The article is based on the interactive Tech Radar of the Kompetenzzentrum IT-Wirtschaft. It therefore not only offers an insight into the structure of the radar but also summarizes its central contents and gives concrete examples of how the technologies can be applied.
The documentation of the foresight process on the use of artificial intelligence in the German textile industry in the year 2030 contains, in addition to a detailed account of the methodological approach, an analysis of the status quo of the textile industry, two possible scenarios for the use of artificial intelligence, and corresponding courses of action. It is intended to sensitize SMEs in the textile industry, IT SMEs, and intermediaries to possible AI applications and the challenges associated with them.
The foresight process, conducted as a scenario analysis, took place between October 2019 and June 2020 in close coordination with the Mittelstand 4.0-Kompetenzzentrum Textil vernetzt.