Automated decision-making algorithms are increasingly prevalent in consumer-facing industries, particularly in insurance risk assessments. The traceability of these decisions is crucial for trust, acceptance, and individual autonomy. While the General Data Protection Regulation (GDPR) grants individuals the right to information about such decisions, the implementation of this right remains under-researched from a usable privacy perspective. This study employs a qualitative exploratory approach with 12 participants exercising their right to be informed about automated decision-making with German household insurers. Through interviews and observations, we investigate consumer requirements and prevailing implementation practices. Our findings unveil actual process design practices that may undermine the usability and efficacy of this data subject right. By identifying these concerns and relating them to existing deceptive patterns, our research contributes to usable security by alerting process designers, data protection authorities, and enterprises to the significance of user-centric implementations. Furthermore, this study advances research on GDPR data subject rights, emphasizing the need for secure and usable interfaces in the context of automated decision-making systems. Our work highlights the practical challenges of ensuring a usable implementation of regulatory compliance in the realm of data protection.
Consumers rely heavily on user reviews when shopping online, and cybercriminals produce fake reviews to manipulate consumer opinion. Much prior research focuses on the automated detection of these fake reviews, but such detectors are far from perfect. Therefore, consumers must be able to detect fake reviews on their own. In this study, we survey the research examining how consumers detect fake reviews online.
Recently, there has been a growing trend of using generative AI systems and tools to foster and protect online collaborative communities. Yet, existing AI tools may introduce new risks and even harm to diverse communities’ online safety. How to maximize the novel opportunities of AI while mitigating its emerging risks and harms for our future online safety is a critically needed discussion for the HCI community. This panel, featuring experts from both industry and academia, aims to promote interdisciplinary, community-wide discussions and collective reflections on important questions and considerations at the unique intersection of AI and online communities, including but not limited to: how the design of AI systems may discourage existing online harm but also invite new online harm in various online spaces; how different populations, cultures, and communities may perceive and experience AI’s new roles for their online safety; and what new strategies, principles, and directions can be envisioned and identified to better design future AI technologies to protect rather than harm various online communities.
Much is known about social engineering (SE) strategies during the attack phase, but little is known about the post-attack period. To address this gap, we conducted 17 narrative interviews with victims of cyber fraud. We found that while it was seen as important for victims to act immediately and take countermeasures against the attack, they often did not do so. In this paper, we describe this "delay" in victims' responses as entailing a period of doubt and trust in good faith. The delay in victim response is a direct consequence of various SE techniques, such as exploiting prosocial behavior, with subsequent negative effects on emotional state and interpersonal relationships. Our findings contribute to shaping digital resistance by helping people identify and overcome delay techniques to combat their inaction and paralysis.
In the digital context, users increasingly encounter data protection topics, especially when asked to provide consent. At the same time, companies are heavily involved in handling data in a legally compliant manner and obtaining permission to process data. Entire industries have grown up around “managing” user consent. However, a significant part of the added value of such solutions is the promise of a high consent rate: that is, applying designs that are lawful but still nudge users to disclose as much data as possible. This does not have to happen through deceptive design [18, 37, 62] (see also the chapter “The Hows and Whys of Dark Patterns: Categorizations and Privacy”) but can also work by achieving transparency in terms of privacy and security safeguards (low risks) and presentation of added value (large benefit) [24, 51]. Nevertheless, it is usually done in the interest of the data processor, which in turn can be to the detriment of a free decision by the customer, aka the data subject. The data-driven economy fosters exactly such unbalanced relations between the entities that gather and process personal information and the individuals who are often unaware of the extent and the significance of the processing [20]. By and large, three factors influence this imbalance: First, the increasing business incentive for companies to collect and make use of (especially personal) data makes actors more likely to engage in ever more excessive data collection practices. Second, the increasing complexity and opacity of the algorithms used makes it more difficult to explain data processing, especially to non-tech-
While the recent discussion on Art. 25 GDPR often treats the approach of data protection by design as an innovative idea, the notion of making data protection law more effective by requiring the data controller to implement the legal norms in the processing design is almost as old as the data protection debate itself. However, there is another, more recent shift in establishing the data protection by design approach through law, which is not yet fully understood in the debate. Art. 25 GDPR requires the controller not only to implement the legal norms in the processing design but to do so in an effective manner. By explicitly declaring the effectiveness of the protection measures to be the legally required result, the legislator inevitably raises the question of which methods can be used to test and assure such efficacy. In our opinion, extending the legal compatibility assessment to the real effects of the required measures opens this approach to interdisciplinary methodologies. In this paper, we first summarise the current state of research on the methodology established in Art. 25 sect. 1 GDPR and pinpoint some of the challenges of incorporating interdisciplinary research methodologies. On this premise, we present an empirical research methodology and first findings that offer one approach to answering the question of how to specify processing purposes effectively. Lastly, we discuss the implications of these findings for the legal interpretation of Art. 25 GDPR and related provisions, especially with respect to a more effective implementation of transparency and consent, and provide an outlook on possible next research steps.
Gaps in knowledge lead to misunderstandings and uncertainty, including when dealing with IT. However, existing security and data protection advice has not yet been able to resolve these uncertainties. Even experts sometimes find it difficult to assess what is actually relevant for users. To raise awareness and understanding, relevant content from security and data protection advice materials should be identified and prioritized for users, taking away uncertainties and thus enabling successful usable security & privacy.
Data protection risks play a major role in data protection laws and have been shown to be suitable means for accountability in designing for usable privacy. Especially in the legal realm, risks are typically collected heuristically or deductively, e.g., by referring to fundamental rights violations. Following a user-centered design credo, research on usable privacy has shown that a user perspective on privacy risks can enhance system intelligibility and accountability. However, research on mapping the landscape of user-perceived privacy risks is still in its infancy. To extend the corpus of privacy risks as users perceive them in their daily use of technology, we conducted 9 workshops collecting 91 risks in the fields of web browsing, voice assistants, and connected mobility. The body of risks was then categorized by 11 experts from the legal and HCI domains. We find that, while existing taxonomies generally fit well, a societal …