Cyber-physical systems (CPS) security, a prevalent concern across all
digital industries, must be implemented on different levels of
abstraction. For example, the development of top-down approaches such as
security models and software architectures is as important as the
development of bottom-up solutions such as the design of new protocols
and languages. This thesis combines research in the field of CPS security
from both directions and contributes to the security models of two
lighthouse examples: automotive software engineering and general password
security.
Most existing countermeasures against cyberattacks, e.g., the use of
message cryptography, concentrate on concrete attacks and do not consider
the complexity of the various access options offered by modern
cyber-physical systems. This is mainly due to a solution-oriented approach
to security problems. The model-based technique SAM (Security Abstraction
Model) supports the early phases of (automotive) software architecture
development by explicitly documenting attacks and managing them with
appropriate security countermeasures. It additionally establishes the
basis for comprehensive security analysis techniques, e.g., already
available attack assessment methods. SAM thus contributes to an early,
problem-oriented and solution-agnostic understanding that combines key
stakeholder knowledge. This thesis provides a detailed overview of SAM,
and the analyses of our evaluation show that SAM puts the
security-by-design principle into practice by enabling collaboration
between automotive system engineers, system architects and security
experts. Applying SAM aims to reduce costs, improve overall quality and
gain competitive advantages. Based on our evaluation results, SAM is
suitable, comprehensible and complete enough to be used in industry.
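To illustrate the kind of explicit attack documentation such a model-based
approach enables, the following minimal sketch links attacks to
countermeasures in a simple data model; the class and field names are
hypothetical and do not reproduce SAM's actual metamodel.

    # Hypothetical sketch of documenting attacks and linking them to
    # countermeasures in the spirit of a model-based approach such as SAM;
    # the names below are illustrative, not SAM's actual metamodel.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Countermeasure:
        name: str
        description: str

    @dataclass
    class Attack:
        name: str
        targeted_asset: str            # e.g. an ECU or a bus message
        feasibility: float             # assumed 0..1 rating from an assessment method
        countermeasures: List[Countermeasure] = field(default_factory=list)

        def is_mitigated(self) -> bool:
            # An attack counts as addressed once at least one countermeasure is documented.
            return len(self.countermeasures) > 0

    # Usage: document an attack early in architecture development and attach a countermeasure.
    spoofing = Attack("CAN message spoofing", targeted_asset="brake ECU", feasibility=0.7)
    spoofing.countermeasures.append(Countermeasure("Message authentication", "Add MACs to CAN frames"))
    print(spoofing.is_mitigated())     # True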
The bottom-up approach focuses on the area of password hardening
encryption (PHE) services as introduced by Lai et al. at USENIX Security
2018. PHE is a password-based key derivation protocol that involves an
oblivious external crypto service for key derivation. PHE protects
against offline brute-force attacks, even when the attacker has full
access to the data server.
The obvious evolution of PHE is the extension of the protocol to multiple
rate-limiters (guardians) to mitigate the single point of failure
introduced by the original scheme.
In the second part of this thesis, a general overview of the motivation
and use cases of PHE is given, along with a new formalization of the
protocol that addresses the scalability and availability issues arising
from this single point of failure. Moreover, an implementation of the
resulting threshold-based protocol is briefly explained and evaluated.
Our implementation is furthermore tested in a novel use case featuring
password-hardened encrypted email.
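The sketch below illustrates only the core idea of involving a threshold
of guardians in key derivation so that no single rate-limiter is a single
point of failure; it is not the actual (threshold) PHE protocol, which
additionally keeps the guardians oblivious to the password and enforces
rate limiting. All names and parameters are illustrative.

    # Conceptual sketch: a per-record secret is split among n guardians via
    # Shamir secret sharing, and any t of them suffice to reconstruct it during
    # key derivation. This is NOT the PHE protocol of Lai et al.; it only shows
    # how a threshold of rate-limiters removes the single point of failure.
    import hashlib
    import secrets

    P = 2**127 - 1  # Mersenne prime used as the field modulus for the sharing

    def share_secret(secret: int, n: int, t: int):
        """Split `secret` into n shares, any t of which reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over GF(P)."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    # Enrollment: the record-level secret is distributed to n = 3 guardians.
    record_secret = secrets.randbelow(P)
    shares = share_secret(record_secret, n=3, t=2)

    # Key derivation: the client combines the password with the secret
    # reconstructed from any t = 2 guardian responses.
    password = b"correct horse battery staple"
    recovered = reconstruct(shares[:2])
    key = hashlib.sha256(password + recovered.to_bytes(16, "big")).digest()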
Machine learning, a branch of artificial intelligence, has become a much-researched topic in recent years. Many everyday applications in a wide variety of fields make use of these powerful self-learning systems, among them safety-critical software systems such as autonomous driving systems. However, like any computer system, machine learning systems are not safe from attacks by organizations with malicious intentions.
To analyze how dangerous attacks are to safety-critical systems, we estimate the threat that attacks pose to systems that contain machine learning components, and to humans such as road users, if these systems are not secured against attacks. We evaluate attacks on machine learning systems and on subsystems in autonomous vehicles, and combine both evaluations to assess the actual danger that attacks pose to autonomous vehicles. We find that many attacks are already mitigated by the distributed nature of embedded systems and by the security measures in place today. The greatest threat is posed by attacks that require access to only the inputs and outputs of the machine learning system. These include adversarial example attacks, which manipulate inputs to provoke false outputs.
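One simple way to combine an attack-side rating with a safety-side rating into a coarse risk level is sketched below; the scales and thresholds (a CVSS-like exploitability score and an ASIL-like severity level) are assumptions for illustration and do not reproduce the assessment scheme used in the thesis.

    # Illustrative sketch only: combine a CVSS-like exploitability score with an
    # ASIL-like severity level into a coarse risk rating. Scales and thresholds
    # are assumptions, not the thesis's actual assessment scheme.
    def risk_level(exploitability: float, severity: int) -> str:
        """exploitability in [0, 10]; severity in 0..4 (QM, then A to D)."""
        score = exploitability / 10 * severity      # scale to 0..4
        if score >= 3:
            return "high"
        if score >= 1.5:
            return "medium"
        return "low"

    # Example: an attack needing only input/output access to a perception
    # function with high safety impact.
    print(risk_level(exploitability=8.5, severity=4))   # high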
We also conduct interviews with industry experts to analyze how machine learning systems are currently developed in practice and to identify areas where there is potential and need for improvement. As a result of this analysis, we compile a list of requirements that can help create more secure machine learning systems.
Machine learning systems are sensitive to small changes in the input data. For example, when images are slightly manipulated in a specific way, they are misclassified even though they were classified correctly before the manipulation. These altered images are called adversarial examples and pose a serious threat. This work deals with this form of attack in more detail and analyzes how the computation of manipulated images can be sped up with the help of masks. We propose an algorithm that selects random pixels inside the mask, manipulates them, and merges those changes that have the biggest influence on the output of the machine learning system with respect to the attacker's goal, thereby creating the adversarial example.
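A minimal sketch of this mask-restricted search is given below, assuming a black-box `model` that returns class probabilities for a single color image in [0, 1]; the concrete algorithm and hyperparameters in the thesis may differ.

    # Sketch of a mask-restricted black-box attack: perturb random pixels inside
    # the mask and keep only the changes that push the model towards the
    # attacker's target class. `model` is an assumed callable returning class
    # probabilities; hyperparameters are illustrative.
    import numpy as np

    def masked_random_attack(model, image, mask, target_class,
                             steps=1000, pixels_per_step=10, epsilon=0.1):
        adv = image.copy()                       # image as float array of shape (H, W, C) in [0, 1]
        best = model(adv)[target_class]          # probability of the attacker's goal
        ys, xs = np.nonzero(mask)                # candidate pixels: only inside the mask
        for _ in range(steps):
            idx = np.random.choice(len(ys), size=pixels_per_step, replace=False)
            candidate = adv.copy()
            noise = np.random.uniform(-epsilon, epsilon,
                                      size=(pixels_per_step, image.shape[-1]))
            candidate[ys[idx], xs[idx]] = np.clip(candidate[ys[idx], xs[idx]] + noise, 0, 1)
            score = model(candidate)[target_class]
            if score > best:                     # merge only changes that help the goal
                adv, best = candidate, score
        return adv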
We run several experiments using different types and sizes of masks and find that masks can indeed have a positive impact on the effectiveness and efficiency of the attack. In addition, masks can be added to existing adversarial example attack algorithms, which can improve them as well; we show this by running experiments with other attack algorithms. We also discuss the prerequisites under which masks can improve attack algorithms.
We combine the various small perturbations that turn images into adversarial examples into a universal adversarial perturbation. This is a special modification that does not cause misclassification of only a single image, as is the case with adversarial examples, but causes misclassification of multiple images. Our experiments show that the universal adversarial perturbations we compute cause misclassification for a large number of images, but the changes to the images need to be very strong, making them easy for a human to detect. Therefore, universal adversarial perturbations need to be obscured differently; for this we use masks, for example to perturb only the border of an image so that the manipulation could be mistaken for a decorative element. We also see that it is difficult to compute universal adversarial perturbations that cause misclassification for 100% of the images in a dataset.
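The following sketch illustrates the masked universal-perturbation idea, assuming float images of shape (H, W, C) in [0, 1], a black-box `model`, and an assumed per-image attack routine `single_image_attack`; it is a simplified illustration, not the exact procedure used in the thesis.

    # Sketch: accumulate per-image perturbations into one universal perturbation
    # that is applied only inside a mask, e.g. the image border, so it can pass
    # as a decorative frame. `model` and `single_image_attack` are assumptions.
    import numpy as np

    def border_mask(height, width, border=8):
        mask = np.zeros((height, width), dtype=bool)
        mask[:border, :] = mask[-border:, :] = True
        mask[:, :border] = mask[:, -border:] = True
        return mask

    def masked_universal_perturbation(model, images, mask, single_image_attack,
                                      epsilon=0.3, epochs=5):
        uap = np.zeros_like(images[0], dtype=float)
        for _ in range(epochs):
            for img in images:                   # images: float arrays (H, W, C) in [0, 1]
                perturbed = np.clip(img + uap, 0, 1)
                if model(perturbed).argmax() == model(img).argmax():
                    # not fooled yet: add this image's extra perturbation, masked
                    delta = single_image_attack(model, perturbed) - perturbed
                    uap = uap + delta * mask[..., None]     # keep it on the border only
                    uap = np.clip(uap, -epsilon, epsilon)   # keep the perturbation bounded
        return uap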