For the development of secure software systems, UMLsec extends the UML by defining security requirements that must be satisfied. In class diagrams, attributes and methods of classes can carry security properties such as secrecy or integrity, which unauthorized classes should not be able to access. Inheritance between classes introduces additional complexity and raises the question of how the inheritance of security properties should be handled. Besides the option of avoiding inheritance in security-critical cases, the field of object-oriented databases already offers extensive general research on the effects of inheritance on security. The goal of this thesis is to identify similarities and differences between the database side and class diagrams, and to transfer and formalize these solution approaches. An implementation of the model evaluates whether the solutions are applicable in practice.
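As a rough illustration of the kind of question the thesis addresses, the following Python sketch models classes whose members carry UMLsec-style secrecy tags and applies one possible inheritance policy, namely that tags and their authorized accessors are propagated unchanged to subclasses. The class names, the policy, and the access check are invented for this example and are not the formalization developed in the thesis.

```python
# Illustrative sketch only: a toy model of UMLsec-style {secrecy} tags on class
# members and one possible inheritance policy (tags are propagated to subclasses
# unchanged). Class and member names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Member:
    name: str
    tags: set = field(default_factory=set)        # e.g. {"secrecy", "integrity"}
    authorized: set = field(default_factory=set)  # classes allowed to access


@dataclass
class Clazz:
    name: str
    members: list = field(default_factory=list)
    parent: "Clazz | None" = None

    def all_members(self):
        """Own members plus inherited ones, tags and permissions preserved."""
        inherited = self.parent.all_members() if self.parent else []
        return inherited + self.members


def check_access(accessor: str, target: Clazz, member_name: str) -> bool:
    """Access to a tagged member is allowed only for explicitly authorized classes."""
    for m in target.all_members():
        if m.name == member_name:
            return not m.tags or accessor in m.authorized
    raise KeyError(member_name)


account = Clazz("Account", [Member("balance", {"secrecy"}, {"Bank"})])
savings = Clazz("SavingsAccount", [Member("rate")], parent=account)

print(check_access("Bank", savings, "balance"))       # True
print(check_access("Reporting", savings, "balance"))  # False: secrecy is inherited
```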
Data flow models in the literature are often very fine-grained, which carries over to the data flow analyses performed on them and thus makes those analyses harder to understand. Since a data flow model that abstracts from the majority of the implementation details of the modeled program allows for data flow analyses that are potentially easier to understand, this master's thesis deals with the specification and construction of a highly abstracted data flow model and the application of data flow analyses on this model. The model and the analyses performed on it have been developed in a test-driven manner, so that a wide range of possible data flow scenarios could be covered. As a concrete data flow analysis, a static security check in the form of a detection of insufficient user input sanitization has been performed. To date, there is no data flow model on a similarly high level of abstraction. The proposed solution is therefore unique and enables developers without expertise in data flow analysis to perform such analyses.
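As a hedged illustration of such a check, the following Python sketch runs a taint-style reachability analysis over a deliberately coarse data flow graph whose nodes are only classified as sources, sanitizers, sinks, or plain steps; the graph and node names are invented and do not reflect the thesis' actual model.

```python
# Illustrative sketch: a coarse data flow graph in which nodes are only classified
# as source (user input), sanitizer, sink, or plain processing step. The check
# reports every sink reachable from a source on a path without a sanitizer.
# Node names and the graph itself are made up for the example.
from collections import deque

edges = {
    "form_input":  ["build_query", "escape_html"],
    "escape_html": ["render_page"],
    "build_query": ["run_sql"],
}
sources = {"form_input"}
sanitizers = {"escape_html"}
sinks = {"run_sql", "render_page"}


def unsanitized_sinks():
    """BFS over (node, sanitized?) states; collect sinks reached while tainted."""
    findings = set()
    queue = deque((s, False) for s in sources)
    seen = set(queue)
    while queue:
        node, sanitized = queue.popleft()
        if node in sinks and not sanitized:
            findings.add(node)
        for succ in edges.get(node, []):
            state = (succ, sanitized or succ in sanitizers)
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return findings


print(unsanitized_sinks())  # {'run_sql'}: the query is built from raw user input
```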
Over the past few decades, society’s dependence on software systems has grown significantly. These systems are utilized in nearly every area of life today and often handle sensitive, private data. This situation has turned software security analysis into an essential and widely researched topic in the field of computer science. Researchers in this field tend to assume that the quality of a software system's code directly affects the likelihood of security gaps arising in it. Because this assumption is based on properties of the code, proving it true would mean that security assessments can be performed on software even before a certain version of it is released. A study based on this implication has already attempted to mathematically assess the existence of such a correlation, studying it based on quality and security metric calculations. The present study builds upon that work by finding an automatic method for choosing well-fitted software projects as a sample for this correlation analysis and by extending the variety of projects considered for it. This thesis also introduces the automatic generation of graphical representations, both for the correlations between the metrics and for their evolution. With these improvements, this thesis verifies the results of the previous study with a different and broader project input. It also focuses on analyzing the correlations of the quality and security metrics with real-world vulnerability data metrics. The data is extracted and evaluated from dedicated software vulnerability information sources and serves to represent the existence of proven security weaknesses in the studied software. The study discusses some of the difficulties that arise when trying to gather such information and links them to differences in the information contained in the repositories of the studied projects. This thesis confirms the significant influence that quality metrics have on each other. It also shows that it is important to view them together as a whole and to suppose that their correlation could influence the appearance of unwanted vulnerabilities as well. One of the important conclusions I can draw from this thesis is that the visualization of metric evolution graphs helps to understand the values as well as their connection to each other in a more meaningful way; it allows for a better grasp of their mutual influence than studying their correlation values alone. This study confirms that studying metric correlations and evolution trends can help developers improve their projects and prevent them from becoming difficult to extend and maintain, increasing the potential for good quality as well as more secure software code.
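To make the kind of correlation analysis described above concrete, the following Python sketch computes pairwise Spearman rank correlations between per-release metric series; the metric names and values are invented, and the thesis' own metric suite, project sample, and vulnerability data sources differ.

```python
# Illustrative sketch of the kind of analysis described above: pairwise Spearman
# rank correlations between metric series collected per release. Metric names and
# values are invented.
from itertools import combinations
from scipy.stats import spearmanr

metrics = {                      # one value per analysed release
    "loc":            [12_000, 15_500, 19_000, 26_000, 31_000],
    "cyclomatic_avg": [3.1, 3.4, 3.9, 4.6, 5.0],
    "anti_patterns":  [4, 6, 9, 13, 18],
    "reported_cves":  [0, 1, 1, 3, 4],
}

for a, b in combinations(metrics, 2):
    rho, p = spearmanr(metrics[a], metrics[b])
    print(f"{a:>15} ~ {b:<15} rho={rho:+.2f}  p={p:.3f}")
```

The evolution graphs mentioned above could be produced from the same series with a plotting library such as matplotlib.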
Retrospective analysis of the spread and dynamic detection of web tracking through sandboxing
(2018)
Current quantitative analyses of web tracking do not provide a comprehensive overview of its origin, spread, and development. By evaluating archived websites, this thesis enables a retrospective reconstruction of the history of web tracking between the years 2000 and 2015. For this purpose, a suitable tool was designed, implemented, evaluated, and used to analyze 10,000 websites. While an average of 1.17 third-party resources were embedded per website in 2005, this number rose to 6.61 over the following ten years. Network diagrams visualize the trend toward a monopolized network structure in which a single company can already monitor 80% of Internet usage.
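The core counting step behind such a measurement can be illustrated with a short Python sketch that tallies embedded resources whose host differs from the embedding page's host; real archived pages (e.g., from the Wayback Machine) require additional URL rewriting and crawling logic, and the HTML below is invented.

```python
# Simplified illustration of the measurement idea: count embedded resources whose
# host differs from the embedding page's host. This only shows the core counting
# step, not the handling of real archived pages.
from html.parser import HTMLParser
from urllib.parse import urlparse


class ThirdPartyCounter(HTMLParser):
    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.third_party = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http"):
                host = urlparse(value).netloc
                if host and host != self.page_host:
                    self.third_party.append(host)


html = """
<img src="http://ads.example-tracker.com/pixel.gif">
<script src="http://cdn.example-stats.net/count.js"></script>
<a href="http://www.example.org/about">About</a>
"""
counter = ThirdPartyCounter("www.example.org")
counter.feed(html)
print(len(counter.third_party), sorted(set(counter.third_party)))
# 2 ['ads.example-tracker.com', 'cdn.example-stats.net']
```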
Despite numerous attempts to counteract this development through technical measures, only a few self-protection and system-level protection measures prove effective. These often come at the cost of a loss of website functionality or a restriction of the browser's usability. The presented study shows that legal regulations do not provide sufficient protection either. Deficiencies in fulfilling data protection obligations are identified on the websites of educational institutions. These manifest themselves in missing, incorrect, or incomplete privacy policies, whose provision is among the information duties of a service provider.
Considering only classic trackers is not sufficient, as a further study demonstrates. By openly providing functional website components, a tracking company can increase its coverage from 38% to 61%. This situation is documented through measurements of healthcare websites and assessed from both a technical and a legal perspective.
Existing system-level tools for detecting web tracking rely on browser interfaces for their measurements. This thesis presents DisTrack, a framework for web-tracking analysis that follows a sandbox-based measurement methodology. This approach is successfully used in dynamic malware analysis and specializes in detecting side effects on the surrounding system. This behavioral analysis, which operates independently of the browser's interfaces, enables a holistic examination of the browser. In this way, systemic weaknesses in the browser that can be exploited for memory-based web-tracking techniques can be revealed.
Data-minimization and fairness are fundamental data protection requirements to avoid privacy threats and discrimination. Violations of data protection requirements often result from, first, conflicts between security, data-minimization and fairness requirements; second, data protection requirements for the organizational and technical aspects of a system that are currently dealt with separately, giving rise to misconceptions and errors; and third, hidden data correlations that may introduce biases against protected characteristics of individuals, such as ethnicity, in decision-making software. For the effective assurance of data protection needs, it is important to avoid these sources of violations right from the design modeling phase. However, a model-based approach that addresses the issues above is missing.
To handle the issues above, this thesis introduces a model-based methodology called MoPrivFair (Model-based Privacy & Fairness). MoPrivFair comprises three sub-frameworks: First, a framework that extends the SecBPMN2 approach to allow detecting conflicts between security, data-minimization and fairness requirements. Second, a framework for enforcing integrated data-protection management throughout the development process, based on a business process model (i.e., a SecBPMN2 model) and a software architecture model (i.e., a UMLsec model) annotated with data protection requirements, while establishing traceability. Third, the UML extension UMLfair to support individual fairness analysis and the reporting of discriminatory behaviors. Each of the proposed frameworks is supported by automated tooling.
We validated the applicability and usability of our conflict detection technique based on a healthcare management case study and an experimental user study, respectively. Based on an air traffic management case study, we reported on the applicability of our technique for enforcing integrated data-protection management. We validated the applicability of our individual fairness analysis technique using three case studies featuring a school management system, a delivery management system and a loan management system. The results show a promising outlook on the applicability of our proposed frameworks in real-world settings.
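As a rough sketch of the conflict detection idea in the first sub-framework, the following Python example flags activities whose annotations contain pairs that a small rule table marks as conflicting (e.g., anonymity versus non-repudiation); the rule table, the annotations, and the process are invented for illustration and do not use SecBPMN2 syntax.

```python
# Rough sketch of annotation-level conflict detection: activities carry security,
# data-minimization, and fairness annotations, and a small rule table flags pairs
# that are known to conflict. All names are invented for illustration.
CONFLICTS = {
    frozenset({"non-repudiation", "anonymity"}),
    frozenset({"auditability", "undetectability"}),
    frozenset({"accountability", "anonymity"}),
}

process = {
    "submit claim":   {"anonymity", "integrity"},
    "approve claim":  {"non-repudiation", "anonymity"},
    "archive record": {"auditability"},
}


def detect_conflicts(annotated_process):
    findings = []
    for activity, annotations in annotated_process.items():
        for pair in CONFLICTS:
            if pair <= annotations:
                findings.append((activity, tuple(sorted(pair))))
    return findings


for activity, pair in detect_conflicts(process):
    print(f"conflict in '{activity}': {pair[0]} vs. {pair[1]}")
# conflict in 'approve claim': anonymity vs. non-repudiation
```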
Since software influences nearly every aspect of everyday life, the security of software systems is more important than ever before. Evaluating the security of a software system still poses a significant challenge in practice, mostly due to the lack of metrics that can map the security properties of source code onto numeric values. It is a common assumption that the occurrence of security vulnerabilities and the quality of the software design are directly correlated, but there is currently no clear evidence to support this. A proof of an existing correlation could help to optimize the measurement of program security, making it possible to apply quality measurements to evaluate it. For this purpose, this work evaluates fifty open-source Android applications using three security and seven quality metrics. It also considers the correlations between the metrics. The quality metrics range from simple code metrics to high-level metrics such as object-oriented anti-patterns, which together provide a comprehensive picture of the quality. Two visibility metrics, along with a metric that computes the minimal permission request for mobile applications, were selected to capture security. Using the evaluation projects, it was found that there is a clear correlation between most quality metrics. By contrast, no significant correlations were found for the security metrics. This work discusses the correlations and their causes as well as further recommendations based on the findings.
Nowadays, almost any IT system involves personal data processing. In such systems, many privacy risks arise when privacy concerns are not properly addressed from the early phases of the system design. The General Data Protection Regulation (GDPR) prescribes the Privacy by Design (PbD) principle. At its core, PbD obliges protecting personal data from the onset of the system development by effectively integrating appropriate privacy controls into the design. To operationalize the concept of PbD, a set of challenges emerges: First, we need a basis to define privacy concerns. Without such a basis, we are not able to verify whether personal data processing is authorized. Second, we need to identify where precisely in a system the controls have to be applied. This calls for a system analysis concerning privacy concerns. Third, with a view to selecting and integrating appropriate controls based on the results of the system analysis, a mechanism to identify privacy risks is required. Mitigating privacy risks is at the core of the PbD principle. Fourth, choosing and integrating appropriate controls into a system are complex tasks that, besides the risks, have to consider potential interrelations among privacy controls and the costs of the controls.
This thesis introduces a model-based privacy by design methodology to handle the above challenges. Our methodology relies on a precise definition of privacy concerns and comprises three sub-methodologies: model-based privacy analysis, model-based privacy impact assessment and privacy-enhanced system design modeling. First, we introduce a definition of privacy preferences, which provides a basis to specify privacy concerns and to verify whether personal data processing is authorized. Second, we present a model-based methodology to analyze a system model. The results of this analysis denote a set of privacy design violations. Third, taking into account the results of the privacy analysis, we introduce a model-based privacy impact assessment methodology to identify concrete privacy risks in a system model. Fourth, concerning the risks, and taking into account the interrelations and the costs of the controls, we propose a methodology to select appropriate controls and integrate them into a system design. Using various practical case studies, we evaluate our concepts, showing a promising outlook on the applicability of our methodology in real-world settings.
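A minimal sketch of the authorization check underlying such a privacy analysis could look as follows: each processing step names a data category and a purpose, and a preference table lists the purposes the data subject has authorized. All names and the preference format are hypothetical and do not correspond to the thesis' formal notation.

```python
# Minimal sketch: flag processing steps whose purpose is not covered by the
# privacy preferences for the data category involved. All names are hypothetical.
preferences = {                    # data category -> authorized purposes
    "email":    {"account management"},
    "location": {"service delivery"},
    "health":   set(),             # no processing authorized
}

processing_steps = [
    ("send newsletter",    "email",  "marketing"),
    ("verify login",       "email",  "account management"),
    ("compute statistics", "health", "research"),
]


def design_violations(steps, prefs):
    return [
        (step, data, purpose)
        for step, data, purpose in steps
        if purpose not in prefs.get(data, set())
    ]


for step, data, purpose in design_violations(processing_steps, preferences):
    print(f"unauthorized processing in '{step}': {data} used for {purpose}")
```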
Software systems have an increasing impact on our daily lives. Many systems process sensitive data or control critical infrastructure. Providing secure software is therefore indispensable. Such systems are rarely renewed regularly due to the high costs and effort involved. Oftentimes, systems that were planned and implemented to be secure become insecure because their context evolves. These systems are connected to the Internet and therefore constantly subject to new types of attacks. The security requirements of these systems remain unchanged, while, for example, the discovery of a vulnerability in an encryption algorithm previously assumed to be secure requires a change of the system design. Some security requirements cannot be checked by the system’s design but only at run time. Furthermore, the sudden discovery of a security violation requires an immediate reaction to prevent a system shutdown. Knowledge regarding security best practices, attacks, and mitigations is generally available, yet it is rarely an integrated part of software development and rarely covers evolution.
This thesis examines how the security of long-living software systems can be preserved taking into account the influence of context evolutions. The goal of the proposed approach, S²EC²O, is to recover the security of model-based software systems using co-evolution.
An ontology-based knowledge base is introduced that is capable of managing common as well as system-specific knowledge relevant to security. A transformation connects the knowledge base to the UML system model. Context knowledge evolutions are detected using semantic differencing, knowledge inference, and the detection of inconsistencies in the knowledge base.
A catalog of rules for managing and recovering security requirements uses the detected context evolutions to propose potential co-evolutions of the system model that reestablish compliance with the security requirements.
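The rule idea can be illustrated with a small Python sketch, assuming a knowledge base that marks certain algorithms as broken and a replacement table for proposing co-evolutions; the knowledge base content, the model elements, and the replacements are invented, whereas S²EC²O itself works on an ontology and UML models.

```python
# Simplified illustration of the rule idea: when the knowledge base marks an
# algorithm as broken, every model element annotated with that algorithm is
# reported together with a proposed co-evolution. All content is invented.
knowledge_base = {
    "insecure_algorithms": {"DES", "MD5"},
    "replacements": {"DES": "AES-256", "MD5": "SHA-256"},
}

model_elements = [
    ("PatientRecordStore", {"encryption": "DES"}),
    ("SessionToken",       {"hash": "MD5"}),
    ("AuditLog",           {"encryption": "AES-256"}),
]


def propose_coevolutions(elements, kb):
    proposals = []
    for name, annotations in elements:
        for prop, algorithm in annotations.items():
            if algorithm in kb["insecure_algorithms"]:
                proposals.append(
                    (name, prop, algorithm, kb["replacements"][algorithm])
                )
    return proposals


for name, prop, old, new in propose_coevolutions(model_elements, knowledge_base):
    print(f"{name}.{prop}: replace {old} with {new}")
```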
S²EC²O uses security annotations to link models and executable code and provides support for run-time monitoring. The adaptation of running systems is considered, as is round-trip engineering, which integrates insights from run time into the system model.
S²EC²O is complemented by prototypical tool support. This tool is used to show S²EC²O’s applicability in a case study targeting the medical information system iTrust.
The thesis at hand contributes to the development and maintenance of long-living software systems with regard to their security. The proposed approach will aid security experts: it detects security-relevant changes to the system context, determines the impact on the system’s security, and facilitates co-evolutions to recover compliance with the security requirements.
In international business relationships, such as international railway operations, large amounts of data can be exchanged among the parties involved. For the exchange of such data, the parties expect a limited risk of being cheated by another party, e.g., by being provided with fake data, as well as reasonable costs and a foreseeable benefit. As the exchanged data can be used to make critical business decisions, there is a high incentive for one party to manipulate the data in its favor. To prevent this type of manipulation, mechanisms exist to ensure the integrity and authenticity of the data. In combination with a fair exchange protocol, it can be ensured that the integrity and authenticity of this data is maintained even when it is exchanged with another party. At the same time, such a protocol ensures that the exchange of data only takes place in conjunction with the agreed compensation, such as a payment, and that the payment is only made if the integrity and authenticity of the data is ensured as previously agreed. However, in order to guarantee fairness, a fair exchange protocol must involve a trusted third party. To avoid fraud by a single centralized party acting as the trusted third party, current research proposes decentralizing the trusted third party, e.g., by using a distributed ledger-based fair exchange protocol. However, for assessing the fairness of such an exchange, state-of-the-art approaches neglect the costs arising for the parties conducting the fair exchange. This can result in a violation of the outlined expectation of reasonable costs, especially when distributed ledgers are involved, which are typically associated with non-negligible costs. Furthermore, the performance of typical distributed ledger-based fair exchange protocols is limited, posing an obstacle to widespread adoption.
To overcome these challenges, in this thesis we introduce the foundation for a data exchange platform allowing for a fully decentralized fair data exchange with reasonable cost and performance. As a theoretical foundation, we introduce the concept of cost fairness, which considers cost in the fairness assessment by requiring that a party following the fair exchange protocol never suffers any unilateral disadvantages. We prove that cost fairness cannot be achieved using typical public distributed ledgers but requires customized distributed ledger instances, which usually lack complete decentralization. However, we show that the highest unilateral costs are caused by a grieving attack.
To allow fair data exchanges to be conducted with reasonable cost and performance, we introduce FairSCE, a distributed ledger-based fair exchange protocol that uses distributed ledger state channels and incorporates a mechanism to protect against grieving attacks, reducing the possible unilateral costs that have to be covered to a minimum. Based on our evaluation of FairSCE, the worst-case costs for data exchange, even in the presence of malicious parties, are known, which allows an estimate of the possible benefit and, thus, a preliminary estimate of the economic utility. Furthermore, to allow for an unambiguous assessment of the correct data being transferred while still allowing sensitive parts of the data to be masked, we introduce an approach for hashing hierarchically structured data, which can be used to ensure the integrity and authenticity of the data being transferred.
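One common way to realize such a hashing scheme, sketched below under the assumption of a Merkle-style construction, is to hash leaves individually and let inner nodes hash their children's hashes, so that a masked subtree can be replaced by its hash while the root hash remains verifiable; this is a generic illustration, not the specific construction of the thesis.

```python
# Generic Merkle-style sketch of hashing hierarchically structured data so that a
# subtree can be masked (replaced by its hash) while the root hash stays verifiable.
import hashlib
import json


def node_hash(node):
    """Hash a nested dict or leaf; inner nodes hash their children's (key, hash) pairs."""
    if isinstance(node, dict):
        children = [(k, node_hash(v)) for k, v in sorted(node.items())]
        payload = json.dumps(children).encode()
    else:
        payload = json.dumps(node).encode()
    return hashlib.sha256(payload).hexdigest()


def mask(node, path):
    """Replace the subtree at `path` by its hash, keeping the overall root hash."""
    if not path:
        return {"masked": node_hash(node)}
    key, *rest = path
    return {k: (mask(v, rest) if k == key else v) for k, v in node.items()}


def node_hash_masked(node):
    """Like node_hash, but accepts already-masked subtrees."""
    if isinstance(node, dict) and set(node) == {"masked"}:
        return node["masked"]
    if isinstance(node, dict):
        children = [(k, node_hash_masked(v)) for k, v in sorted(node.items())]
        return hashlib.sha256(json.dumps(children).encode()).hexdigest()
    return hashlib.sha256(json.dumps(node).encode()).hexdigest()


record = {"train": {"id": "ICE-123", "delay_min": 7}, "billing": {"amount": 420}}
masked = mask(record, ["billing"])                    # hide the sensitive billing subtree
print(node_hash(record) == node_hash_masked(masked))  # True: integrity still checkable
```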
Integrating the different stakeholder needs and environmental constraints is the key goal of requirements engineering. This demands collaboration between the involved parties to reach “understandability of the system”, which is particularly challenging for collaborations across different organisations. High-quality requirements engineering is the key factor in addressing these challenges. Requirements are input to all development steps and carry the knowledge to be exchanged; requirements engineering spans the overall life cycle and is, in essence, a knowledge management task.
The main goal of the T-Reqs framework presented in this thesis is to enable semantic interoperability and to sustain knowledge by conceptualizing the requirements engineering process applied to European space projects. T-Reqs’ objective is to formally capture the information carried by the requirements in order to provide high-quality inputs for the subsequent system and discipline-specific development tasks, in particular within model-based systems engineering. Emphasis is placed on the nature of the relationships that exist among requirements and requirement documents. The T-Reqs formalism addresses the structuring of requirements as well as their potential reuse, e.g., in product line development or even between different projects. This implies an overall System Requirements Specification that is distributed across many specification documents and involves requirements at different levels of abstraction, from abstract goals to implementation details. This thesis especially focuses on the specification and validation of such requirements documents.
The T-Reqs traceability model provides a means not only to trace individual requirements, but also to consider relations among views such as documents, taking into account the role they play for stakeholders, especially in reuse. It is shown how the formalization of dependencies, such as for the tailoring of standards, enables automated quality checks that facilitate reviews and enhance the completeness and consistency of the overall specification.
Towards the structuring of requirements itself, different syntactic template systems aim to increase the quality of requirement documentation. Within this thesis, a comparative evaluation of these notations is conducted, supporting that claim and differentiating the strengths and weaknesses of the different approaches. Special emphasis is placed not only on documentation quality, but also on the usefulness of these semi-formal notations for integration with model-based development methods. This is achieved through the representation of concepts, which can be managed in special contextualised glossaries.
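As a rough illustration, not of T-Reqs’ actual notation but of the general template idea, the following Python sketch checks requirement sentences against a simple boilerplate pattern and reports which glossary terms they use; the pattern, the requirements, and the glossary are invented.

```python
# Rough illustration only: check requirement sentences against a simple boilerplate
# pattern ("The <system> shall <capability>.") and report used glossary terms,
# loosely mirroring template conformance and contextualised glossaries.
import re

TEMPLATE = re.compile(r"^The (?P<system>[\w\s-]+?) shall (?P<capability>.+)\.$")
GLOSSARY = {"ground segment", "telemetry packet", "operator"}

requirements = [
    "The ground segment shall archive every telemetry packet.",
    "Telemetry should probably be stored somewhere.",
]

for req in requirements:
    match = TEMPLATE.match(req)
    if not match:
        print(f"not template-conformant: {req!r}")
        continue
    terms = sorted(t for t in GLOSSARY if t in req.lower())
    print(f"ok: system={match['system']!r}, glossary terms used: {terms}")
```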
Overall, it can be shown that the conceptualization of requirements engineering knowledge can support requirements engineering in different respects, and that a holistic approach integrating the different tasks lays the foundation for semantic interoperability spanning organizations and life-cycle phases.