Since its introduction, German online banking has been distinguished by the security of two-factor authentication. Although a user ID and password suffice to access online banking, transactions must be confirmed by an additional factor. For this purpose, the bank traditionally requests the entry of a transaction number, which the customer obtains via his or her security procedure. The continued adherence to this separation of transaction initiation and confirmation was accompanied by an ongoing improvement of the security properties of the authorization procedures.
This trend threatens to reverse with the availability of smartphones and tablets, however, and is leading to significant shifts in usage and market share in the banks' retail business. Established and new financial service providers alike are adopting a strategy that aims to map as many processes as possible onto the customer's mobile device in innovative ways. This development, referred to as mobile banking, motivates not only the introduction of banking apps and app-based authorization procedures, but also an entirely new authentication paradigm that, in contrast to classic online banking, makes it possible for the first time to initiate and confirm all banking transactions from one and the same mobile device.
This dissertation addresses the security implications arising from the advent of mobile banking. The thesis first establishes that malware gains more attack opportunities than was previously the case. Two attacks are at the center: First, the software implementation allows an attacker to perform a replication attack in which the app-based authorization procedure is copied in its entirety to an unauthorized device. Second, the missing separation of media between initiation and confirmation enables the real-time manipulation of a user-initiated transaction. Both attacks rest on conceptual deficiencies that stem from the fact that the mobile banking procedures were introduced without adequate hardware capabilities for their protection.
For this reason, the banks attempt to protect their apps with commercial hardening products at the software level. Further investigations, however, reveal clear limits of such solutions and make evident that they, too, cannot compensate for the conceptual deficiencies. While the established banks can at least be credited with knowing the structural security risk and wanting to address it, the problems of new market participants go further. Research conducted as part of this dissertation identified severe security flaws at the currently leading German financial start-up, rooted in an insufficient priority given to IT security.
The identified deficiencies are also relevant with respect to regulatory requirements for the security of mobile financial solutions. With the Payment Services Directive II, the European Union has introduced requirements that take effect on 14 September 2019. In this context, the dissertation examines which requirements should generally be placed on the security of digital transactions and finds them largely compatible with the legal requirements. A further subject of investigation is the directive compliance of common security procedures in online and mobile banking. In addition to the non-conformity of list-based procedures, the thesis also suggests an inadequacy of procedures based on the SMS telecommunication service as well as of app-based methods.
Although the directive raises the security level of banking transactions, the thesis identifies further weaknesses in the transaction process that are not covered by the regulation and that are also rooted in human factors. The thesis therefore examines the practical security of online banking transactions in a user study. The study concludes that participants are often unaware which steps are essential for security, which is why they verify transaction data incorrectly or not at all. The banks are part of the problem, as they sometimes provide customers with misleading information.
Due to their applied nature, the results of this work are relevant not only to research but also to the public and to supervisory authorities. Many contributions of the dissertation were covered by the press, thereby promoting security awareness for banking transactions among the general population. In addition, the Bundesamt für Sicherheit in der Informationstechnik highlighted the relevance of the work by mentioning parts of it in its report on the state of IT security in Germany for the year 2018.
The widespread usage of Information Technology (IT) and the increased connectivity of devices have made systems more attractive to attackers. Users not only have to keep up with current hardware and software trends but also have to be aware of the accompanying IT security threats. Experts therefore conduct “awareness campaigns” to support users facing these threats. In the first part of this thesis, we analyze the experts’ opinions on what these campaigns should achieve. While some argue that knowledge of the threats is enough, others see correct behavior as essential.
We then present a user study on how experts behave in the presence of threats. In particular, we compared the behavior of students from technical faculties when trying to understand clear and obfuscated code. Obfuscation is of importance in this context since attackers often apply such source code transformations to their malware to make it more difficult for security analysts to detect the malicious code. Our study shows that even experienced software analysts have to change their analysis methods to overcome the influence of code obfuscation on the understandability of code. It further shows that there might be differences depending on which type of obfuscation method was applied to the code. This study supports the findings of Ceccato et al. (Empirical Software Engineering 2014) and gives a first empirical indication that the differentiation of resilience and potency of obfuscating transformations proposed by Collberg et al. (Technical Report 148) might be adequate.
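To make the kind of transformation studied here concrete, the following minimal Python sketch (our own illustration, not code from the study) applies two classic obfuscations, identifier renaming and an opaque predicate, to a small function; the behavior is preserved while readability suffers.

```python
def gcd(a: int, b: int) -> int:
    """Clear version: Euclid's algorithm."""
    while b != 0:
        a, b = b, a % b
    return a


def l1(l2: int, l3: int) -> int:
    """Obfuscated variant: identifiers renamed, an opaque predicate added."""
    # Opaque predicate: (x * x) % 4 is never 2 or 3, so the branch below is
    # dead code that only exists to confuse a reader or a simple analyzer.
    l4 = (l2 * l2) % 4
    if l4 == 2 or l4 == 3:
        l2, l3 = l3, l2          # never executed
    while l3 != 0:
        l2, l3 = l3, l2 % l3
    return l2


if __name__ == "__main__":
    assert gcd(48, 36) == l1(48, 36) == 12   # behavior is preserved
```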
Turning to non-expert users, we first investigate buying behavior on eBay. Based on 34 semi-structured interviews, we show which factors of a listing users take into account when buying. Further, we show which parts of the buying process are affected by the different factors. Our results show that although some users try to prevent fraud by only buying products from “trustworthy” sellers, others rely completely on secure buying options, ignoring indicators for the trustworthiness of a seller. This is because some users find it too hard to spot fraud using eBay's current reputation system. We then show how severe this problem is and how the situation might be improved. In a user experiment with 40 UK and 41 German participants using an eBay-like reputation system and an interactive visualization by Sänger and Pernul (ARES 2014), we investigated how well users can spot fraudulent sellers. Our results show that the interactive visualization helps users spot fraud better without significantly decreasing usability.
After analyzing how users evaluate threats based on data about other users, we investigate how users present themselves online. We researched how Online Social Networks (OSNs) are being used. A new, ephemeral way of posting called “Stories” has lately been integrated into several of these systems. Using a survey as well as 22 interviews, we show how users have adopted this feature into their everyday posting behavior. This research uncovers the prerequisites necessary for users to post a Story. Further, we identify four types of Story posts which differ in why and how users post them. Based on users' utility and privacy concerns, we then describe how the past, present, and future of users influence their posting behavior, explaining differences in the popularity of the OSNs for Story posting.
With Industrial Control Systems being increasingly networked, the need for sound forensic capabilities for such systems increases, with reliable log file analysis being a vital part of such investigations. However, manipulating log files is one of the steps a knowledgeable attacker can take to prevent visibility of system events and to hide traces of attacker actions in those systems. Therefore, secure logging is advisable for an effective preparation of digital forensics investigations. In addition, implementing digital forensics readiness in nuclear power plants allows efficient digital forensics investigations and proper gathering of digital evidence while minimizing investigation costs. These capabilities are necessary to adequately prevent and quickly detect any security incident and to perform further digital forensics investigations with complete evidence. If an attacker is able to modify log entries or blocks of log entries, critical digital evidence is lost.
Within this thesis, we first evaluate the presence of digital forensics readiness in critical infrastructures, including nuclear power plants, and briefly discuss existing digital forensics readiness approaches.
As the systems in critical infrastructures, such as those in nuclear power plants, are sophisticated, adequate preparedness is essential in order to be able to respond to cybersecurity incidents when they happen. Due to the importance of safety in these systems, manual approaches are favored over automated techniques. All required tasks and activities as well as the expected results must also be properly documented. Application Security Controls are one approach to properly document forensic controls. However, Application Security Controls must be evaluated further to ensure their forensic applicability, as considerable alternatives also exist. In order to demonstrate the value of such forensic Application Security Controls, we analyze a server system of an Operational Instrumentation & Control system in terms of digital evidence. Based on the analysis results, we derive recommendations to improve the overall digital forensics readiness and the security hardening of Linux server systems in the Operational Instrumentation & Control system.
Then, we introduce our formal system model and the types of attackers that can access and manipulate logs and the logging device. Here, we also give a brief overview of existing secure logging approaches and compare them with each other. The goal is to standardize the requirements of secure logging approaches and to analyze which unified security guarantees are realized by these existing approaches under strong attacker models.
Later, we extend our secure logging model by using a blockchain as the secure logging protocol, apply the new model to industrial settings, and build a simple prototype as a proof of concept. In an evaluation of the new model and the corresponding prototype, we show the potential, but also the challenges, of this approach.
Further, we take a deeper look into existing algorithms for secure logging and integrate them into a single parameterized algorithm. This log authentication and verification algorithm combines their security guarantees, and its parametrization reproduces the set of previous algorithms.
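To make the underlying idea tangible, here is a minimal, self-contained sketch of one building block that many secure logging schemes share, a forward-secure MAC chain; it illustrates the general principle only and is not the parameterized algorithm developed in the thesis.

```python
import hashlib
import hmac

def evolve(key: bytes) -> bytes:
    """One-way key update: old keys cannot be recovered from newer ones."""
    return hashlib.sha256(b"evolve" + key).digest()

def append(log: list, key: bytes, message: bytes) -> bytes:
    """Authenticate an entry with the current key, then evolve the key."""
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    log.append((message, tag))
    return evolve(key)          # caller must overwrite the old key

def verify(log: list, initial_key: bytes) -> bool:
    """An auditor holding the initial key can re-derive every epoch key."""
    key = initial_key
    for message, tag in log:
        expected = hmac.new(key, message, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return False
        key = evolve(key)
    return True

if __name__ == "__main__":
    k0 = b"shared-initial-secret"
    log, key = [], k0
    for entry in (b"boot", b"login admin", b"setpoint changed"):
        key = append(log, key, entry)
    assert verify(log, k0)
    log[1] = (b"login guest", log[1][1])   # tampering with an entry
    assert not verify(log, k0)             # ... is detected by the auditor
```

An attacker who compromises the device after the fact only learns the current, already evolved key and therefore cannot forge tags for earlier entries.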
Even with different file formats and purposes, log files generally have similar structures. To this end, we evaluated three common log file types (syslog, Windows Event Log, and SQLite browser histories). Based on this evaluation, we developed a simple unified representation of log files that allows analyses to be performed independently of their format. As visualization of log files is helpful for finding proper evidence, we have also developed a simple log file visualization tool. This tool helps to identify evidence of system time manipulation.
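The following sketch, based on our own assumptions rather than the thesis' implementation, illustrates what such a unified representation can look like: heterogeneous sources are normalized into one record type so that later analysis and visualization steps are format-agnostic.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import re

@dataclass
class LogRecord:
    timestamp: datetime        # normalized to UTC
    source: str                # e.g. "syslog", "evtx", "browser-history"
    host: str
    event: str                 # free-text message or event identifier
    raw: str                   # original line kept for evidential value

SYSLOG_RE = re.compile(r"^(?P<ts>\w{3}\s+\d+\s[\d:]{8})\s(?P<host>\S+)\s(?P<msg>.*)$")

def parse_syslog_line(line: str, year: int = 2024) -> LogRecord:
    """Map one classic syslog line onto the unified record type."""
    m = SYSLOG_RE.match(line)
    if not m:
        raise ValueError("not a classic syslog line")
    ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
    return LogRecord(ts.replace(tzinfo=timezone.utc), "syslog",
                     m["host"], m["msg"], line)

if __name__ == "__main__":
    rec = parse_syslog_line("Mar  7 11:52:03 plc01 sshd[412]: Accepted password for root")
    print(rec.timestamp.isoformat(), rec.host, rec.event)
```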
Privacy in Smart Grids
(2013)
Electricity grids are evolving into "smart grids". Smart grids employ communication of supply and demand between participants of the grid to achieve better efficiency, availability, resilience, etc. than traditional grids.
Consumer households are connected to the smart grid using smart metering and demand response. Smart metering communicates the household's electricity consumption at high resolution to the smart grid. Demand response enables service providers to affect the household's electricity demand by means of different incentives.
The communication of a household's electricity consumption leaks information about the household and its inhabitants. Thus, it puts the inhabitants' privacy at risk. Existing smart grid deployments address this conflict between utility and privacy unsatisfactorily: they either accept the privacy risk or forfeit utility.
This dissertation provides an alternative solution to mitigate this conflict. The solution retains the utility of the smart grid without compromising consumer privacy. In particular, this dissertation first identifies that the smart grid poses a privacy risk to consumers. This privacy risk originates from the collection and central storage of consumption traces in naive implementations. Once consumption data is centrally stored, there are few techniques one can employ to protect it from various attacks. The only viable, i.e. general and utility-preserving, technique is pseudonymization of stored data in combination with proper access control. However, this dissertation shows experimentally on real smart metering data that pseudonymization of consumption data is not effective. This result drives the main idea of the remainder: consumption data must not be collected, i.e. leave the household, in the first place.
This dissertation provides three privacy-preserving protocols that support essential smart grid computations (aggregation, billing, compliance verification) on consumption data. For each computation, the respective protocol transports only the minimal amount of required information out of the household. Furthermore, to the service provider that interacts with the household, the protocols guarantee computation results that are as trustworthy as those from non-private protocols.
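As an illustration of the general principle behind the aggregation case (a toy model under simplifying assumptions, not the dissertation's actual protocol), the following sketch masks individual readings so that only their sum is recoverable by the operator.

```python
import secrets

MODULUS = 2**32          # readings are encoded as integers mod 2^32

def mask_readings(readings: list[int]) -> list[int]:
    """Return one masked value per household; the masks sum to zero mod MODULUS.
    (In a deployed protocol each household would add its own mask; a single
    function is used here only to keep the sketch short.)"""
    n = len(readings)
    masks = [secrets.randbelow(MODULUS) for _ in range(n - 1)]
    masks.append((-sum(masks)) % MODULUS)          # forces the masks to cancel
    return [(r + m) % MODULUS for r, m in zip(readings, masks)]

def aggregate(masked: list[int]) -> int:
    """The operator only ever sees masked values; their sum is the true total."""
    return sum(masked) % MODULUS

if __name__ == "__main__":
    readings = [312, 87, 455, 129]                 # watt-hours per household
    masked = mask_readings(readings)
    assert aggregate(masked) == sum(readings)
    print("total:", aggregate(masked), "- individual traces remain hidden")
```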
Standard procedures in computer forensics mainly describe the acquisition and analysis of persistent data, e.g., of hard drives or attached devices. However, due to the increasing storage capacity of these media and, correspondingly, significantly larger data volumes, creating forensically sound duplicates and recovering valuable artifacts in time becomes more and more challenging. Moreover, with the wide availability of free and easy-to-use encryption technologies, a growing number of individuals actively try to protect personal information against unauthorized access. If a suspect is unwilling to share the respective decryption key, such measures can quickly thwart an investigation. Last but not least, many sophisticated malicious applications nowadays run entirely in memory and no longer leave any traces on hard disks. Solely focusing on traditional sources can thus lead to an incomplete or inaccurate picture of an incident. In order to cope with these issues, researchers have proposed alternative investigative strategies that extract pieces of evidence from a computer's RAM. For this purpose, a so-called memory snapshot is taken and inspected offline on a trusted workstation. These activities, known as memory forensics, have gained broad attention among practitioners over the last years, primarily because operations are repeatable and may be safely verified by other experts without polluting the system environment as, for instance, in a live response situation.
In this thesis, we give a comprehensive overview of fundamental concepts and approaches for seizing as well as examining volatile information. It consists of two parts: In the first part, we formalize criteria for sound memory imaging and illustrate the characteristics, benefits, and drawbacks of proven acquisition technologies available on the market to date. As we will see, especially for software-based solutions it is difficult to produce reliable memory snapshots, because the system state cannot be effectively frozen during runtime. With the help of an evaluation platform that we have developed in the course of the dissertation period, the performance and quality of software imagers can be thoroughly assessed for the first time.
In the second part of this thesis, we explain how common system compromise and manipulation techniques as they are typically employed by rootkits or other types of intelligent malware can be discovered during memory analysis. We also present rkfinder, a new plug-in for the popular, open source forensic suite DFF that facilitates some of these tasks. Rkfinder implements cross-viewing algorithms for checking the integrity of a machine and detecting possible inconsistencies that indicate the presence of a threat. By automatically highlighting suspicious resources that are likely to have been tampered with, even less experienced investigators are able to identify system areas that require particular attention. Thereby, potential sources of an intrusion can be quickly found and addressed.
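The cross-view idea can be illustrated with a few lines of Python (our own toy example, not rkfinder's code): the same information is collected from two independent vantage points, and any difference between the views is flagged as suspicious.

```python
def cross_view_diff(api_view: dict[int, str], carved_view: dict[int, str]) -> list[str]:
    """Compare a high-level (API) process list with a low-level list carved
    from kernel structures; disagreements hint at hiding or tampering."""
    findings = []
    for pid, name in carved_view.items():
        if pid not in api_view:
            findings.append(f"PID {pid} ({name}) hidden from the API view")
    for pid, name in api_view.items():
        if pid not in carved_view:
            findings.append(f"PID {pid} ({name}) visible to the API but not carved")
    return findings

if __name__ == "__main__":
    api_view    = {1: "init", 412: "sshd", 977: "cron"}
    carved_view = {1: "init", 412: "sshd", 977: "cron", 1337: "rk_payload"}
    for finding in cross_view_diff(api_view, carved_view):
        print(finding)   # -> PID 1337 (rk_payload) hidden from the API view
```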
Memory forensics has become a powerful tool for the detection and analysis of malicious software. It provides investigators with an impartial view of a system, exposing hidden processes, threads, and network connections, by acquiring and analyzing physical memory. Because malicious software must be at least partially resident in memory in order to execute, it cannot remove all its traces from RAM. However, the memory acquisition process is vulnerable to subversion in compromised environments. Malicious software can employ anti-forensic techniques to intercept the acquisition and filter memory contents while they are copied.
In this thesis, we analyze 12 popular memory acquisition tools for Windows, Linux, and Mac OS X, and study their implementations with regard to how they enumerate and map memory. We find that all of the analyzed programs use the operating system to perform these tasks, and further illustrate this by implementing an open source memory acquisition framework for Mac OS X. In a survey of kernel rootkit techniques that prevent or filter physical memory access, we show that all 12 tested programs are vulnerable to anti-forensics because they rely on the operating system for critical functions.
To eliminate this vulnerability, we develop an operating system independent approach that directly utilizes the hardware to enumerate and map memory. By interacting with the PCI controller, we are able to safely avoid memory-mapped device buffers while acquiring the entire physical address space. We program the page tables directly to map memory, forcing the MMU to facilitate arbitrary physical memory access from our driver's data segment. We implement our techniques in the open source memory acquisition frameworks Winpmem, Pmem, and OSXPmem, furthering the capabilities of memory acquisition software on the Windows, Linux, and Mac OS X platforms.
Finally, we apply our novel technique to related problems in memory forensics. Memory acquisition software for Linux can only be run on a system with the exact same kernel version and configuration as the system it was compiled on, due to dependencies on kernel data structures. We are able to create a minimal, kernel-independent version of our module, which we inject into a compatible host module on the target. By hijacking the host's data structures, we are able to load the infected module, redirect control flow, and communicate with it using a character device. A second innovative property of our acquisition approach is that, because we can enumerate the location of memory-mapped device buffers, we are able to safely access memory regions unknown to the operating system. This allows us to acquire malicious firmware during the memory acquisition process. We present a survey of firmware code and data in the physical address space, and show how we can capture the BIOS, PCI option ROMs, and the ACPI tables using our approach. We implement plugins for the open source memory analysis framework Volatility which are able to extract the ACPI tables from memory and analyze them for malicious behavior.
In this thesis, we investigate different possibilities to protect the Android ecosystem better. We focus on protection mechanisms for application developers, and present modern attacks against sandbox-protected applications and the developer’s intellectual property, ultimately providing enhanced approaches for defense against these attacks. Our defensive approaches range from runtime-shielding measures to analysis-impeding obfuscation mechanisms.
First, we take a closer look at the communication possibilities of sandboxed applications on Android, namely the UI layer and Android’s inter-process communication. We introduce attacks on applications working through the actors on Android’s UI, starting with overlay windows, accessibility services, input editors, and screen captures. Android’s inter-process communication is the second attack avenue we pursue. It is the primary means of communication for apps to interact with each other despite being sandboxed by the Android system. We show through assessments of the Google Play Store and third-party app stores that attacks on these mechanisms constitute a blind spot in the attack models currently considered by developers. To provide relief, we introduce new protection mechanisms that developers can implement, and we enhance testing methodologies to consider these attacks in the future.
Second, we direct the reader’s attention towards attacks on the developer’s intellectual property. Due to Android’s open-source nature and openly communicated standards, a trend of repackaging popular applications with malicious enhancements and republishing the malicious app has rooted itself in the malware community. To counteract this development, we present an enhanced centroid-based approach at clone detection and improved analysis-impeding obfuscation mechanisms that build on virtualization-based obfuscation. Our obfuscation approach works on Android’s current runtime environment, as well as the previously employed ‘Dalvik virtual machine’, and can be used to obfuscate critical portions of an application’s functionality against prying eyes. To make valid assumptions about the strength of virtualization-based obfuscation, we conduct a de-obfuscation study on the more mature x86/x64 platform, developing a reverse engineering approach for virtualization-obfuscated binaries.
We analyzed several hundred thousand Android applications during our research with automated approaches, and several thousand apps with manual analysis, always opting for a responsible disclosure process for found vulnerabilities by providing developers with at least three months’ notice before attempting a publication. The tools presented in this thesis are open-sourced under the MIT license to ease their inclusion in development projects and their extension or further development. With the insights gained through the research for this thesis, we hope to provide developers with the tools and testing approaches they need to make the Android ecosystem more secure and safe.
The Internet of Things (IoT) brings comfort into the life of users. It is convenient to control the lights at home with an app without leaving the couch or open the front door with a remote control. This comfort, however, comes with security risks as the wireless communication between components often relies on proprietary protocols. Such protocols are designed under size and energy constraints whereby security is often only a secondary factor. Moreover, even when a default protocol such as IEEE 802.11 WLAN with enabled encryption is used, mobile devices such as smartphones can be located threatening the location privacy of users.
This thesis is divided into two main parts. In the first part, we demonstrate how to passively locate a smartphone indoors using IEEE 802.11 WLAN and contribute a geolocation system with a mean accuracy of 0.58m. Subsequently, we analyze how a company can incentivize users with different levels of privacy-awareness to connect to a provided WLAN and give up their location privacy in exchange for certain benefits such as shopping discounts. We model this situation as a Bayesian Stackelberg game to find the company's best strategy.
In the second part, we showcase the challenges that arise for security researchers when investigating proprietary wireless protocols. Software Defined Radios (SDRs) propose a generic way to analyze such protocols operating on frequencies like 433.92 MHz or 868.3 MHz where no default hardware such as a WLAN stick is available. SDRs, however, deliver raw signals that have to be demodulated and decoded before researchers can reverse-engineer the protocol format.
Our main contribution to this process is an open source software called Universal Radio Hacker (URH), which is, to the best of our knowledge, the first complete suite for wireless protocol investigation with SDRs. URH breaks the protocol investigation process down into the phases Interpretation, Analysis, Generation, and Simulation.
The goal of the Interpretation phase is to identify the transmitted bits and bytes by demodulating the signal. Apart from letting users manually adjust demodulation parameters, we contribute a set of algorithms to automatically find these parameters and integrate them into URH.
In the Analysis phase, the protocol format is reverse-engineered from the demodulated bits. This is a time-consuming manual process that slows down a security analysis. To address this problem, we design and implement a modular system that automatically finds protocol fields such as addresses and checksums. In combination with the automatic modulation parameter detection, this speeds up the security analysis of unknown wireless protocols.
URH enables researchers to perform attacks on stateless and stateful protocols in the Generation and Simulation phases, respectively. In Generation, users can apply fuzzing to arbitrary data ranges, while the Simulation component of URH models protocol state machines and dynamically reacts to incoming messages from investigated devices. In both phases, the software automatically applies modulation and encoding to the bits that should be sent. We demonstrate three attacks on IoT devices that were found and executed with URH. The most complex attack involves opening an AES-protected wireless door lock in real time.
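To illustrate what the Interpretation phase has to accomplish, the following sketch (a synthetic example under simplifying assumptions, not URH code) demodulates an on-off-keyed (OOK/ASK) signal whose parameters are already known; URH's contribution is, among other things, to estimate such parameters automatically.

```python
import numpy as np

def ook_modulate(bits, samples_per_symbol=100, carrier_period=10):
    """Generate a synthetic OOK signal: carrier on for 1-bits, off for 0-bits."""
    t = np.arange(len(bits) * samples_per_symbol)
    carrier = np.sin(2 * np.pi * t / carrier_period)
    envelope = np.repeat(np.array(bits, dtype=float), samples_per_symbol)
    return envelope * carrier

def ook_demodulate(signal, samples_per_symbol=100):
    """Recover bits by averaging the rectified signal per symbol and thresholding."""
    envelope = np.abs(signal)
    symbols = envelope.reshape(-1, samples_per_symbol).mean(axis=1)
    threshold = symbols.max() / 2
    return [int(s > threshold) for s in symbols]

if __name__ == "__main__":
    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    rx = ook_modulate(bits) + np.random.normal(0, 0.05, len(bits) * 100)  # add noise
    assert ook_demodulate(rx) == bits
```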
In this thesis, we tackle anti-forensic and rootkit problems in digital forensics. An anti-forensic technique is any measure that prevents a forensic analysis or reduces its quality.
First, we investigate the anti-forensic threat of hard drive firmware rootkits, which can prevent a forensic analyst from acquiring data from the hard drive, thus jeopardizing the forensic analysis. To this end, we first outline the threat of hard drive firmware rootkits. We then provide a procedure to detect and subvert already published hard disk drive firmware bootkits. We further outline potential avenues to detect hard drive firmware rootkits nested deeper within the hard disk drive’s so-called Service Area, a special storage on the magnetic platter reserved for use by the firmware.
After addressing the acquisition of persistent data storage in the form of hard disk drives, we shift towards the acquisition and later analysis of volatile storage in the form of RAM. To this end, we first evaluate the atomicity and integrity as well as the anti-forensic resistance of different memory acquisition techniques with our novel black-box analysis technique. In this black-box analysis, memory contents are constantly changed by our payload application with a traceable access pattern, which allows us to measure to which extent current memory acquisition methods satisfy atomicity and integrity when dumping the memory of processes. We also discuss their resistance against anti-forensics. As a result, we show that cold boot attacks belong to the most favorable memory acquisition techniques.
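The black-box idea can be illustrated with a simplified toy model (our own sketch, not the thesis' payload): a writer keeps stamping a buffer with a generation counter while a deliberately slow copy is taken, and the spread of counters found in the copy indicates how far the dump deviates from an atomic snapshot.

```python
import threading, time
import numpy as np

PAGES = 1024
buffer = np.zeros(PAGES, dtype=np.int64)   # one counter per simulated page
stop = False

def payload():
    """Traceable access pattern: rewrite the whole buffer with a rising counter."""
    generation = 0
    while not stop:
        generation += 1
        buffer[:] = generation

def take_dump():
    """Deliberately slow, page-wise copy, like a non-atomic software imager."""
    dump = np.empty_like(buffer)
    for i in range(PAGES):
        dump[i] = buffer[i]
        time.sleep(0.0001)
    return dump

if __name__ == "__main__":
    t = threading.Thread(target=payload)
    t.start()
    dump = take_dump()
    stop = True
    t.join()
    spread = int(dump.max() - dump.min())
    print(f"generations spanned by the dump: {spread} (0 would be atomic)")
```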
We then investigate cold boot attacks in more detail. First, we experimentally confirm that cooling the RAM modules prolongs the remanence effect considerably. We then prove, also experimentally, that transplanting RAM modules from one system to another is possible. We further address the issue of memory scrambling in modern DDR3 technology as well as other proposed countermeasures, such as BIOS passwords and temperature detection. We also show that once a system is cold-booted, malicious anti-forensic code running on the system stops executing immediately and can thus no longer interfere with the memory acquisition. We thereby show the practical feasibility of cold boot attacks as an anti-forensic resistant memory acquisition method.
After outlining the anti-forensic resistant acquisition of evidence, we address the analysis. To this end, we first revisit the theory of data analysis, especially the concept of essential data in forensic analysis as coined by Carrier in his seminal work “File System Forensic Analysis”. We extend Carrier’s concept by differentiating levels of essentiality: we introduce the notion of strictly essential data, which refers to data that is always required to be correct and non-manipulated by all systems to provide a specific functionality, and partially essential data, which is only required to be correct and non-manipulated for some systems. We then practically verify both the original theory and our extensions in experiments. Eventually, we argue that essential data can help to build a trust hierarchy of data encountered during forensic analysis, from which we conclude that anti-forensic resistant analysis methods must rely only on what we call strictly essential, i.e., trusted, data; otherwise, the analysis is potentially impaired by anti-forensic measures, because non-essential data can be freely manipulated without impacting the correct working of a system.
Last but not least, we tackle a long unsolved problem in forensic memory analysis: currently, all state-of-the-art digital forensic virtual memory analysis tools ignore unmapped memory pages, i.e., pages swapped out onto persistent storage. This can result in blind spots in which data, and thus potential evidence, is not analyzed. We fix this by analyzing the Windows NT virtual memory paging via a novel gray-box analysis method. To this end, we place traceable data into virtual memory and force it into both the physical RAM as well as the pagefile stored on persistent storage. We are thus able to reverse engineer the complete virtual address mapping, including the non-mapped pagefile. We evaluate our analysis results against real-world data from Windows 7, 8, and 10 systems in both the 32- and 64-bit variants. By shedding light on this last blind spot of virtual memory analysis, we increase its anti-forensic resistance, because we can now for the first time analyze the virtual address space in its entirety.
Obfuscation transforms source code into a semantically identical but syntactically different form in order to obscure critical information while preserving functionality. In this way, software authors can outlast the resources, e.g., computing power, time, tooling, detection algorithms, or experience, that a reverse engineer can afford. Because Android bytecode is considerably easier to decompile, and therefore to reverse engineer, than native machine code, obfuscation is a prominent measure for protecting the copyright of Android software. However, due to the limited computing resources of the mobile platform, different degrees of obfuscation lead to different levels of performance penalty, which might not be tolerable for the end user.
In this thesis, we optimize the Android obfuscation transformation process so that it introduces as much "difficulty" as possible while constraining the performance loss to a tolerable level. We implement software complexity metrics to automatically and quantitatively evaluate the "difficulty" of the obfuscation results. We first investigate the properties of the seven obfuscation methods of the obfuscation engine Pandora. We evaluate their obfuscation effect with nine different software complexity metrics, iteratively applying each obfuscation method multiple times to more than 240 Android applications. The results show that the obfuscation methods exhibit one of two properties: monotonicity or idempotency. For most of the monotonic obfuscation methods, the changes in the complexity values are constant and stable, which forms the foundation for our statistics-based algorithm to optimize the complexity results.
To reach the desired complexity, we then design and implement our obfuscation framework, which selects the optimal obfuscation techniques for the target complexity and applies them to Android applications while measuring the performance cost. The optimization process in our framework is controlled by the Obfuscation Management Layer, which implements a Naïve Bayesian Classifier to select the obfuscation techniques.
Our obfuscation framework can transform the resulting APKs to an arbitrary target complexity. At the same time, the obfuscation causes an unpredictable performance loss. We compare the CPU cycles of original APKs with those of their obfuscated versions through dynamic testing and define the difference as the performance penalty introduced by obfuscation. We evaluate the performance penalty of the different obfuscation methods and find that some of them consume significantly more performance while having minimal impact on the complexity metrics.
Finally, we statistically measure the performance losses of the different methods and incorporate them as an additional metric in the Naïve Bayesian Classifier. We can therefore optimize the performance cost of the obfuscation towards a tolerable level. To maximize code coverage, we additionally develop an automatic testing tool that generates the test cases.
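A rough sketch of the selection step, with invented method names and numbers (the thesis trains on metrics measured with the Pandora engine), might look as follows: previously measured effects of each obfuscation method are used to train a Naive Bayes classifier that proposes the next transformation for a given complexity gap and overhead budget.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# training data: [complexity gain, relative CPU overhead] per applied method
observations = np.array([
    [12.0, 0.02], [11.5, 0.03], [13.1, 0.02],    # "rename identifiers"
    [35.0, 0.20], [33.2, 0.25], [36.8, 0.22],    # "insert opaque predicates"
    [80.5, 0.90], [78.0, 1.10], [82.3, 0.95],    # "flatten control flow"
])
labels = np.array(["rename"] * 3 + ["opaque"] * 3 + ["flatten"] * 3)

model = GaussianNB().fit(observations, labels)

# query: we still need ~34 points of complexity and can afford ~25% overhead
gap = np.array([[34.0, 0.25]])
print("suggested next transformation:", model.predict(gap)[0])   # -> opaque
```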
For the last twenty years, the Internet has been extending from the digital sphere into the physical world through applications such as smart homes, smart cities, and Industry 4.0. Although this technological revolution of the Internet of Things (IoT) brings many benefits to its users, such as increased energy efficiency, optimized and automated processes, and enhanced comfort, it also introduces new security and privacy concerns.
In the first part of this thesis, we examine three novel IoT security and privacy threats from a technical perspective. As the first threat, we investigate privacy risks arising from the collection of room climate measurements in smart heating applications. We assume that an attacker has access to temperature and relative humidity data and trains machine learning classifiers to predict the presence of occupants as well as to discriminate between different types of activities. The results show that the leakage of room climate data has serious privacy implications. As the second threat, we examine how the expansion of wide-area IoT infrastructure facilitates new attack vectors in hardware security. In particular, we explore to which extent malicious product modifications in the supply chain allow attackers to take control over these devices after deployment. To this end, we design and build a malicious IoT implant that can be inserted into arbitrary electronic products. In the evaluation, we leverage these implants for hardware-level attacks on safety- and security-critical products. As the third threat, we analyze the security of ZigBee, a popular network standard for smart homes. We present novel attacks that make direct use of the standard's features, showing that one of its commissioning procedures is insecure by design. In the evaluation of these vulnerabilities, we reveal that attackers are able to eavesdrop on key material as well as take over ZigBee products and networks from a distance of more than 100 meters.
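The first threat can be illustrated with a small, self-contained sketch on synthetic data (the thesis itself uses real room climate recordings and richer feature sets): an attacker holding temperature and humidity traces trains a standard classifier to infer occupancy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# synthetic traces: empty rooms are cooler and drier, occupied rooms warmer and more humid
empty    = np.column_stack([rng.normal(20.5, 0.4, n), rng.normal(38, 2.0, n)])
occupied = np.column_stack([rng.normal(22.0, 0.5, n), rng.normal(44, 2.5, n)])
X = np.vstack([empty, occupied])
y = np.array([0] * n + [1] * n)          # 0 = empty, 1 = occupied

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print(f"occupancy prediction accuracy: {clf.score(X_test, y_test):.2f}")
```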
In the second part of this thesis, we investigate how IoT security can be improved. Based on an analysis of the root causes of ZigBee's security vulnerabilities, we learn that economic considerations influenced the security design of this IoT technology. Consumers are currently not able to reward IoT security measures, as an asymmetric information barrier prevents them from assessing the level of security that is provided by IoT products. As a result, manufacturers are not willing to invest in comprehensive security designs, since consumers cannot distinguish them from insufficient security measures. To tackle the asymmetric information barrier, we propose so-called security update labels. Focusing on the delivery of security updates as an important aspect of enforcing IoT security, these labels transform the asymmetric information about the manufacturers' willingness to provide future security updates into an attribute that can be considered during buying decisions. To assess the influence of security update labels on the consumers' choice, we conducted a user study with more than 1,400 participants. The results reveal that the proposed labels are intuitively understood by consumers, considerably influence their buying decisions, and therefore have the potential to establish incentives for manufacturers to provide sustainable security support.
The field of Incident Response within IT security covers the overall process of handling an incident that occurs within a computer network or system. It involves the detection, analysis, remediation, and containment of an attack. These capabilities are necessary in order to respond adequately to attacks against systems and to limit the associated risk in such a case. In recent years, the number of attacks on the Internet has increased, and more organizations are building up defense capabilities in the form of Computer Emergency Response Teams (CERTs).
However, IT infrastructure is changing rapidly, and security teams are confronted with new challenges. They therefore need to evolve in their maturity, on the one hand organizationally and on the other hand by improving their technical knowledge. Within this thesis, we first give an overview of CERTs, Incident Response, and Digital Forensics, and afterwards describe the current challenges using real-world case studies.
Later, we discuss our contributions in these fields. One was to develop a new description standard with which security teams can provide information about their constituency, their responsibilities, and their contact details. This can be used by tools to automate parts of incident handling, or by humans to find the right contact for a system. Next, we describe a new organizational model for how a CERT may be organized in order to be effective for today's threat landscape.
We further take a deeper look into how cloud environments influence the handling of incidents. More organizations are moving their IT into such environments; however, security teams may lose their ability to detect and respond to attacks. This heavily depends on the deployment model, and we therefore discuss the influence of these models on the different defense capabilities and propose some ideas on how this can be addressed.
Another contribution is in the field of memory forensics, which is an important topic nowadays, with more and more tools being created for it. It is therefore important to have a model with which the different kinds of information contained in memory can be categorized. We created such a model and discuss its usage in real-world scenarios.
Finally, we contribute to the field of malicious software analysis. One urgent problem here is the analysis of malicious office files and the question of which vulnerability is exploited. We created a novel system that analyzes such files and is able to determine the exploited vulnerability using the patches provided by the vendors. This approach decreases the time needed to analyze an office file, so that a security team can respond to attacks faster.
We have implemented and documented Ulix, a Unix-like instructional operating system whose kernel sources consist of 7750 lines of C and Assembler code. The system supports concurrent processes and threads, implements a Round-Robin scheduler, a virtual filesystem with support for hard and floppy disks, the logical Minix filesystem and a /dev filesystem, and it provides mutexes and semaphores. In addition, a user mode library gives access to the system calls via typical Unix functions. Ulix can be executed in the Qemu PC emulator.
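Ulix itself is written in C; purely to illustrate the scheduling policy it implements, the following Python sketch reduces round-robin scheduling to its core: every runnable task receives a fixed time slice and is then re-queued at the back of the ready queue.

```python
from collections import deque

def round_robin(tasks: dict[str, int], time_slice: int = 3):
    """tasks maps a task name to its remaining computation time (in ticks)."""
    ready = deque(tasks.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        run = min(time_slice, remaining)
        schedule.append((name, run))
        if remaining - run > 0:
            ready.append((name, remaining - run))   # re-queue unfinished task
    return schedule

if __name__ == "__main__":
    for name, ticks in round_robin({"idle": 2, "shell": 7, "disk": 5}):
        print(f"run {name} for {ticks} ticks")
```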
While there are several other instructional operating systems with similar features, e.g., Minix, the novelty of our approach lies in using Knuth's Literate Programming technique, which puts the focus on documentation with embedded code rather than code with embedded documentation. The literate program is a book - in this case an introduction to operating system principles which presents the full Ulix source code in a way that follows didactical considerations.
Based on the Ulix source code and the book we developed a "Design and implementation of operating systems" course with a collection of implementation exercises and evaluated it in a course setting.
Part I of the thesis summarizes the research carried out while conceptually designing and implementing Ulix as well as evaluating the Ulix-based course. Part II is the Ulix book.
The Web's functionality has shifted from purely server-side code to rich client-side applications, which allow the user to interact with a site without the need for a full page load. While server-side attacks, such as SQL or command injection, still pose a threat, this change in the Web's model also increases the impact of vulnerabilities aiming at exploiting the client. The most prominent representative of such client-side attacks is Cross-Site Scripting, where the attacker's goal is to inject code of his choosing into the application, which is subsequently executed by the victimized browsers in the context of the vulnerable site.
This thesis provides insights into different aspects of Cross-Site Scripting. First, we show that the concept of password managers, which aim to allow users to choose more secure passwords and, thus, increase the overall security of user accounts, is susceptible to attacks which abuse Cross-Site Scripting flaws. In our analysis, we found that almost all built-in password managers can be leveraged by a Cross-Site Scripting attacker to steal stored credentials. Based on our observations, we present a secure concept for password managers, which does not insert password data into the document such that it is accessible from the injected JavaScript code. We evaluate our approach from a functional and security standpoint and find that our proposal provides additional security benefits while not causing any incompatibilities.
Our work then focuses on a sub-class of Cross-Site Scripting, namely Client-Side Cross-Site Scripting. We design, implement and execute a study into the prevalence of this class of flaws at scale. To do so, we implement a taint-aware browsing engine and an exploit generator capable of precisely producing exploits based on our gathered data on suspicious, tainted flows. Our subsequent study of the Alexa top 5,000 domains shows that almost one out of ten of these domains carry at least one Client-Side Cross-Site Scripting vulnerability.
We follow up on these flaws by analyzing the gathered flow data in depth in search of the root causes of this class of vulnerability. To do so, we first discuss the complexities inherent to JavaScript and define a set of metrics to measure said complexity. We then classify the vulnerable snippets of code we discovered according to these metrics and present the key insights gained from our analysis. In doing so, we find that the reasons for such flaws are manifold, ranging from simple unawareness of developers to incompatibilities between otherwise safe first- and third-party code.
In addition, we investigate the capabilities of the state of the art in client-side Cross-Site Scripting filters, the XSS Auditor, and find that several conceptual issues exist which allow an attacker to subvert its protection capabilities. We show that the Auditor can be bypassed on over 80% of the vulnerable domains in our data set, highlighting that it is ill-equipped to stop Client-Side Cross-Site Scripting. Motivated by our findings, we present a concept for a filter targeting Client-Side Cross-Site Scripting that combines taint tracking in the browser with taint-aware HTML and JavaScript parsers, allowing us to robustly protect users from such attacks.
We usually scrutinize the security of embedded systems under an extraordinarily sophisticated attacker model: the adversary has physical possession of the target and unlimited time to break it. For the defensive side, this forms an exceptionally challenging scenario. This thesis studies the fortification of systems against such adversaries. The principal contributions lie in the field of embedded security, where we explore methods of building secure systems in a resource-efficient manner. This allows our countermeasures to be implemented on resource-constrained microcontrollers. While such software countermeasures have a detrimental effect on runtime performance, the cost of the hardware itself remains unaffected, thereby providing an attractive and inexpensive alternative to hardware countermeasures. Next, we briefly outline our contributions.
Attacks such as Differential Power Analysis (DPA) enable adversaries to exploit even the most minute differences in data dependent energy consumption. To make it more difficult for attackers to gain access to secrets within a chip, effective countermeasures need to be employed. One technique, implemented using only software, is described by us as a first contribution. We use binary recompilation to achieve binary code polymorphism. This causes different characteristic emission patterns for each call of a protected cryptographic primitive. Due to extensive and sophisticated pre-calculations which we perform at compile time, execution is extremely fast during runtime.
Since not only power consumption but also timing differences are something that attackers can exploit with great accuracy, we studied detection of timing leaks. Considering the architecture of today's increasingly complex microcontrollers, manual estimation of runtime has become virtually infeasible. Therefore, as a second contribution, we developed a behavioral Cortex-M core emulator which permits cycle-accurate simulation. We show how to incorporate such an emulator in a semi-automatic vetting process. After compilation, all security-relevant routines within the code are analyzed and checked for timing discrepancies.
The complexity of modern microcontroller units (MCUs) is shown from a different angle when considering attackers who can manipulate firmware. Since the reduction of electromagnetic interference (EMI) is an important goal of system designers, many recent MCUs already include software-tunable EMI countermeasures. In our third contribution, we show how these anti-EMI peripherals can be abused to construct covert channels. Unfortunately for the defensive side, these channels operate in the radio frequency domain and thus could be used for wireless transmission of data --- even when the benign application was never intended to perform such communication. We describe how changes in parasitic electromagnetic emission can be used to encode data and what hardware is necessary to recover this data.
To increase the resistance of embedded systems against physical attacks, it is common to use special semiconductors which employ hardware countermeasures. The downside of such integration is that the specialized device usually dictates the exact cryptographic construction. How such hardware can be used nevertheless to augment general-purpose microcontrollers is something we focus on with our fourth contribution. As a demonstration, we incorporate a hardware security module in the handshake of the transport layer security (TLS) protocol. We do so without the need to create a custom cipher suite and without modifying the TLS handshake itself; instead, we use a generic approach by relying on implementation-specific protocol invariants and therefore get around the limitations which would be imposed by nonstandard protocol modifications.
When processors make use of external peripherals, such as dynamic random access memory (DRAM), another attack vector arises: Due to parasitic effects of the physical construction of modern high-density RAM, it is possible that the hardware cannot guarantee data integrity for all bit patterns. To counteract this, a technique commonly used by memory controllers is the scrambling of data to gain an effectively bias-free bitstream on the RAM chip. With our fifth contribution, we show how one such scrambling scheme by Intel works in-depth and how scrambled memory can be descrambled to reveal the original memory content. In the field of forensics, this is highly relevant: When physical memory acquisition, for example by cold-boot attacks, is used to capture a memory image, descrambling of that image is required before it can be analyzed meaningfully. We furthermore discuss how knowledge about scrambler-internal workings may open up possibilities for an attacker to deliberately cause disturbances in RAM.
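Why descrambling is possible at all can be shown with a deliberately simplified model (our own assumption, far simpler than the real scrambler): the controller XORs each memory word with a keystream derived only from the address and a seed, so applying the same keystream again restores the original content.

```python
import hashlib

def keystream(address: int, seed: bytes, length: int = 8) -> bytes:
    """Pseudo-random keystream derived from the word address and a seed."""
    return hashlib.sha256(seed + address.to_bytes(8, "little")).digest()[:length]

def scramble(address: int, word: bytes, seed: bytes) -> bytes:
    ks = keystream(address, seed, len(word))
    return bytes(a ^ b for a, b in zip(word, ks))

descramble = scramble     # XOR with the same keystream is its own inverse

if __name__ == "__main__":
    seed = b"\x13\x37"
    plaintext = b"AES key "
    in_ram = scramble(0x1000, plaintext, seed)        # what a cold-boot dump would see
    assert descramble(0x1000, in_ram, seed) == plaintext
```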
In this thesis, we investigate possibilities of operating trusted systems within untrusted environments in the presence of strong attackers. We consider attackers with physical access to a system as well as root-level attackers which are able to execute code on the target system at the highest privilege level possible. Our proposed defense mechanisms range from hardware-based trusted computing architectures over hardware-assisted software solutions to pure software-based encryption schemes.
First, we design two hardware-based trusted computing architectures for embedded devices, called Soteria and Atlas, which both protect against root-level attackers and focus on code and data confidentiality. At its heart, Soteria is a lightweight program-counter based memory access control extension which provides integrity, authenticity, and confidentiality guarantees for software modules. To guarantee the confidentiality of code at any given point in time against all software attacks, we design a scheme consisting of initially encrypted software modules and loader modules. Mutual integrity checks between software modules and the loader ensure that a software module is only decrypted if the system behaves with integrity. Atlas protects software modules with the help of an encryption unit placed between the cache and main memory. Because Atlas alone only guarantees confidentiality, but no integrity, it is usually combined with traditional memory protection mechanisms managed by the operating system. In case of a compromised operating system, however, Atlas still maintains confidentiality for protected modules.
We then exploit existing hardware-based trusted computing architectures of general purpose devices by presenting two hardware-assisted software solutions built on top of them. However, we also demonstrate how secret information can be inferred from such an architecture if the attacker assumptions are violated. We leverage the Trusted Platform Module (TPM) to support the boot process of consumer devices in order to provide secure full disk encryption. In detail, we implement an authentication protocol which defends against traditional evil maid attacks with repeated physical access. Trust is bootstrapped from an external active USB drive which verifies the boot process by utilizing sealed nonces. The underlying principle is that the user and the computer mutually authenticate each other with the help of this active USB drive. While with the TPM the whole software stack is part of the Trusted Computing Base (TCB), Intel's Software Guard Extensions (SGX) make it possible to dynamically establish roots of trust on general purpose hardware. Although SGX is not designed to work in kernel mode, we found a way to deploy it to shield kernel components from each other and from user-mode applications, even in the event of a full system compromise. However, we also show that SGX is generally vulnerable to cache attacks by practically demonstrating an access-driven cache attack on AES running inside an SGX enclave. In fact, the attack surface for side-channels increases dramatically in the scenario of SGX due to the power of root-level attackers, for example, by exploiting the accuracy of Intel's Performance Monitoring Counters (PMC), which are usually restricted to kernel code.
Finally, we demonstrate that certain security guarantees can also be given when trust is not rooted in hardware by describing two software-based memory encryption solutions to mitigate memory disclosure attacks. RamCrypt targets the problem by providing mechanisms to transparently encrypt whole process address spaces and has been developed as an operating system kernel patch. It can be deployed and enabled on a per-process basis without recompiling user-mode applications. In every enabled process, data is only stored in cleartext for the moment it is processed, and otherwise stays encrypted in RAM. Because encrypting process address spaces is operating system specific, we also present HyperCrypt, a hypervisor-based solution which encrypts the entire kernel and user space while being transparent for the guest operating system and all applications running on top of it.
This thesis presents a method for the verification of existential and universal temporal logic formulas in extended finite state machines. Since the Turing completeness of the latter rules out systematic, automated verification methods, the approach taken in this thesis combines a formal with an informal, heuristics-based approach. To this end, the verification problem is transformed into a coverage problem: starting from the machine under investigation and the formula to be verified, a new extended finite state machine is first constructed. Subsequently, structural test coverage criteria are defined with respect to this new machine, for which it is proven that their satisfiability allows conclusions about the validity of the formula under consideration.
To actually satisfy the introduced coverage criteria, a heuristic method based on existing work is developed for the automated generation of corresponding criterion-satisfying test data.
To demonstrate its practicability, the method developed in this thesis was implemented as a tool and applied to the verification of a system consisting of autonomous, cooperating robots.
Die IT-Forensik gehört zu den Standard-Ermittlungsbereichen und Standardansätzen der Strafverfolgungsbehörden weltweit. Während sich klassische Wissenschaften, wie die Daktyloskopie, längst anhand von standardisierten Vorgehensweisen und Modellen zu forensischen Wissenschaften mit fundierten theoretischen Überlegungen auf der einen Seite und dem praktischen Einsatz auf der anderen Seite entwickelt haben, zeigen sich in der IT-Forensik offensichtliche Defizite.
Grundlegende wissenschaftliche Forschung konkurriert mit pragmatischen technischen Ansätzen. In vielen Teilbereichen werden theoretische Überlegungen ”auf dem Reißbrett“ entworfen, die in die Praxis nicht umgesetzt werden können, während in anderen Bereichen praktisch ohne eine fundierte forensische Grundlage gearbeitet wird.
Die vorliegende Arbeit fokussiert den Teilbereich der Sicherungsphase IT-forensischer Untersuchungen in Hinblick auf gezielte Selektionen im Bereich der Sicherung und des Löschens dedizierter Datenobjekte.
On the theoretical side, the origins of the forensic sciences in criminalistics as well as their scientific and legal foundations are examined. Building on this, the commonly used notions of authenticity and integrity are clearly defined across analog and digital evidence before being transferred to the special field of digital forensics. Legal frameworks, such as the StPO (German Code of Criminal Procedure) or the Elfes ruling, feed into the theoretical considerations on defining data sets that are suitable (permissible and relevant) in an investigation.
The main focus of this thesis is the theoretical definition of the conditions for a forensically sound selective acquisition and the examination of the practical applicability of such an alternative acquisition method. To this end, the currently available standard forensic tools were examined in a test setup and evaluated against the established criteria. Complementing selective acquisition, the topic of selective deletion is examined, contrasting legal requirements with technical possibilities. Using an implemented prototype and an existing forensic software product, it is shown that selective deletion is possible in theory and in practice.
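As a purely illustrative sketch of the selective-acquisition idea (invented names; this is neither one of the evaluated standard tools nor the thesis' prototype), selected data objects can be copied together with per-object hashes so that the integrity of each item remains individually verifiable:

# Selective acquisition sketch: copy only the chosen (permissible, relevant) data
# objects and record a per-object hash plus acquisition time in a manifest.
import hashlib, json, shutil, time
from pathlib import Path

def acquire_selected(objects, evidence_dir: Path) -> Path:
    evidence_dir.mkdir(parents=True, exist_ok=True)
    manifest = []
    for src in objects:
        digest = hashlib.sha256(src.read_bytes()).hexdigest()
        dst = evidence_dir / src.name
        shutil.copy2(src, dst)                    # copies content and file metadata
        manifest.append({"source": str(src), "copy": str(dst),
                         "sha256": digest, "acquired_at": time.time()})
    manifest_file = evidence_dir / "manifest.json"
    manifest_file.write_text(json.dumps(manifest, indent=2))
    return manifest_file

def verify(manifest_file: Path) -> bool:
    """Re-hash each acquired copy and compare it against the recorded digest."""
    entries = json.loads(manifest_file.read_text())
    return all(hashlib.sha256(Path(e["copy"]).read_bytes()).hexdigest() == e["sha256"]
               for e in entries)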
Cyber-physical systems (CPS) security, a prevalent concern in all digital industries, must be implemented on different levels of abstraction. For example, the development of top-down approaches, e.g., security models and software architectures, is as important as the development of bottom-up solutions such as the design of new protocols and languages. This thesis combines research in the field of CPS security from both directions and contributes to the security models of two lighthouse examples: automotive software engineering and general password security.
Most existing countermeasures against cyberattacks, e.g., the use of message cryptography, concentrate on concrete attacks and do not consider the complexity of the various access options offered by modern cyber-physical systems. This is mainly due to a solution-oriented approach to security problems. The model-based technique SAM (Security Abstraction Model) adds to the early phases of (automotive) software architecture development by explicitly documenting attacks and managing them with the appropriate security countermeasures. It additionally establishes the basis for comprehensive security analysis techniques, e.g., already available attack assessment methods. SAM thus contributes to an early, problem-oriented and solution-agnostic understanding that combines key stakeholder knowledge. This thesis provides a detailed overview of SAM, and the analyses in our evaluation show that SAM puts the security-by-design principle into practice by enabling collaboration between automotive system engineers, system architects and security experts. The application of SAM aims to reduce costs, improve overall quality and gain competitive advantages. Based on our evaluation results, SAM is sufficiently suitable, comprehensible and complete to be used in industry.
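To illustrate the kind of information such a model captures, here is a small, hypothetical data-model sketch in Python (the class and attribute names are invented for illustration and do not reproduce SAM's actual metamodel): attacks are documented explicitly, rated, and linked to the countermeasures that address them, so that gaps can be queried early in development.

# Hypothetical attack/countermeasure model; not SAM's metamodel.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str                      # e.g., "CAN bus", "OTA update key"

@dataclass
class Attack:
    name: str
    targets: list
    feasibility: int               # coarse rating, as used by attack assessment methods

@dataclass
class Countermeasure:
    name: str
    mitigates: list = field(default_factory=list)

def unmitigated(attacks, measures):
    """Attacks documented in the model that no countermeasure addresses yet."""
    covered = {a.name for m in measures for a in m.mitigates}
    return [a for a in attacks if a.name not in covered]

bus = Asset("CAN bus")
spoof = Attack("spoofed brake messages", [bus], feasibility=3)
replay = Attack("replayed unlock command", [bus], feasibility=2)
mac = Countermeasure("message authentication", mitigates=[spoof])
print([a.name for a in unmitigated([spoof, replay], [mac])])   # -> ['replayed unlock command']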
The bottom-up approach focuses on password hardening encryption (PHE) services as introduced by Lai et al. at USENIX 2018. PHE is a password-based key derivation protocol that involves an oblivious external crypto service for key derivation. PHE protects against offline brute-force attacks even when the attacker has full access to the data server. The natural evolution of PHE is to extend the protocol to multiple rate-limiters (guardians) in order to mitigate the single point of failure introduced by the original scheme.
In the second part of this thesis, a general overview of the motivation and use cases of PHE is given, along with a new formalization of the protocol that helps address these availability and scalability concerns. Moreover, an implementation of the resulting threshold-based protocol is briefly explained and evaluated. Our implementation is furthermore tested and evaluated in a novel use case featuring password-hardened encrypted email.
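The following Python sketch is a deliberately simplified, non-oblivious stand-in for the idea (the real protocol hides the password from the rate-limiter via blinding, and a real threshold construction uses secret sharing so that any t of n guardians yield the same key; all names here are illustrative): the key that wraps a record's data key depends on the password and on contributions from several external guardians, so a breach of the data server alone does not enable offline brute force.

# Simplified PHE-like key derivation with multiple guardians (toy illustration).
import hashlib, hmac, secrets

class Guardian:
    """External rate-limiter holding its own secret; it can throttle or refuse requests."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)
    def contribute(self, record_id, blinded_pw):
        # a real guardian would enforce per-record rate limits here
        return hmac.new(self._secret, record_id + blinded_pw, hashlib.sha256).digest()

def derive_key(password, record_id, guardians, threshold):
    pw = hashlib.sha256(password.encode()).digest()   # placeholder for the blinded value
    shares = [g.contribute(record_id, pw) for g in guardians[:threshold]]
    material = pw + b"".join(sorted(shares))
    return hashlib.sha256(material).digest()          # key used to wrap the record's data key

guardians = [Guardian() for _ in range(3)]
k1 = derive_key("correct horse", b"user-42", guardians, threshold=2)
k2 = derive_key("correct horse", b"user-42", guardians, threshold=2)
assert k1 == k2   # same password and guardians -> same wrapping key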
Machine learning, a core technique of artificial intelligence, has become a much-researched topic in recent years. Many everyday applications in a wide variety of fields make use of these powerful self-learning systems. Among such applications are safety-critical software systems, such as autonomous driving systems. However, like any computer system, machine learning systems are not immune to attacks by actors with malicious intentions.
To analyze how dangerous attacks are to safety-critical systems, we estimate the threat they pose to systems containing machine learning and to the humans affected by them, such as road users, if these systems are not secured. We evaluate attacks on machine learning systems and on subsystems in autonomous vehicles and combine both evaluations to assess the actual danger that attacks pose to autonomous vehicles. We find that many attacks are already mitigated by the distributed nature of embedded systems and the security measures in place today. The greatest threat is posed by attacks that require access only to the inputs and outputs of the machine learning system. These include adversarial example attacks, which manipulate inputs to provoke false outputs.
We also conduct interviews with industry experts to analyze how machine learning systems are currently developed in practice and to identify where improvement is possible and needed. As a result of this analysis, we compile a list of requirements that can help create more secure machine learning systems.
Machine learning systems are sensitive to small changes in the input data. For example, when images are slightly manipulated in a specific way, they are misclassified even though they were classified correctly before the manipulations were applied. These altered images are called adversarial examples and pose a serious threat. This work deals with this form of attack in more detail and analyzes how the computation of manipulated images can be sped up with the help of masks. We propose an algorithm that selects random pixels within the mask, manipulates them, and merges those changes that have the biggest influence on the output of the machine learning system with respect to the attacker's goal, thereby creating the adversarial example.
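A minimal sketch of such a mask-guided search (illustrative only; it assumes a hypothetical black-box score function for the attacker's target class and is not the exact algorithm evaluated in the thesis) could look as follows:

# Mask-guided random search: perturb random pixels inside the mask and keep only
# changes that move the model's score towards the attacker's goal.
import numpy as np

def masked_attack(image, mask, score_fn, step=0.05, pixels_per_round=32, rounds=200):
    """
    image:    H x W x C array in [0, 1]
    mask:     H x W boolean array; only True pixels may be changed
    score_fn: callable returning the model's score for the attacker's target class
              (e.g., a hypothetical lambda x: model.predict(x[None])[0, target_class])
    """
    adv = image.copy()
    best = score_fn(adv)
    ys, xs = np.nonzero(mask)
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        idx = rng.choice(len(ys), size=min(pixels_per_round, len(ys)), replace=False)
        candidate = adv.copy()
        signs = rng.choice([-1.0, 1.0], size=(len(idx), candidate.shape[2]))
        candidate[ys[idx], xs[idx]] = np.clip(candidate[ys[idx], xs[idx]] + step * signs, 0.0, 1.0)
        score = score_fn(candidate)
        if score > best:                  # merge only changes that help the attacker's goal
            adv, best = candidate, score
    return adv, best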
We run several experiments using different types and sizes of masks and find that masks can indeed have a positive impact on the effectiveness and efficiency of the attack. In addition, masks can be integrated into existing adversarial example attack algorithms and improve them as well; we show this by running experiments with other attack algorithms. We also discuss the prerequisites under which such an improvement of attack algorithms by using masks is possible.
We combine the various small perturbations that turn images into adversarial examples into a universal adversarial perturbation. This is a special modification that does not cause misclassification of only one image, as is the case with adversarial examples, but causes misclassification of multiple images. Our experiments show that the universal adversarial perturbations we compute cause misclassification for a large number of images, but the changes to the images need to be very strong, making them easy for a human to detect. Therefore, universal adversarial perturbations need to be obscured differently. For that we use masks, for example to perturb only the border of the image; such manipulations could pass as a decorative element. We also see that it is difficult to compute universal adversarial perturbations that cause misclassification for 100% of the images in a dataset.
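A hedged sketch of how a single border-constrained perturbation might be grown greedily over a whole set of images (again purely illustrative, assuming a hypothetical black-box prediction function) is given below:

# One perturbation, restricted to a border mask, optimized to fool as many images as possible.
import numpy as np

def border_mask(h, w, width=4):
    m = np.zeros((h, w), dtype=bool)
    m[:width, :], m[-width:, :], m[:, :width], m[:, -width:] = True, True, True, True
    return m

def universal_perturbation(images, labels, predict_fn, mask, step=0.1, rounds=100):
    """
    images:     N x H x W x C array in [0, 1]
    predict_fn: callable mapping a batch of images to predicted labels
    Returns a perturbation that is zero outside the mask, plus the fooling rate achieved.
    """
    delta = np.zeros_like(images[0])
    fooled = np.mean(predict_fn(np.clip(images + delta, 0, 1)) != labels)
    rng = np.random.default_rng(0)
    for _ in range(rounds):
        trial = delta + step * rng.standard_normal(delta.shape) * mask[..., None]
        rate = np.mean(predict_fn(np.clip(images + trial, 0, 1)) != labels)
        if rate > fooled:                  # keep the change if more images are misclassified
            delta, fooled = trial, rate
    return delta, fooled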