Artificial Intelligence (AI) presents a new landscape for humanity. Both what we can do and the impact of our ordinary actions are changed by the innovation of digital and intelligent technology. In this chapter we postulate how AI impacts contemporary societies on an individual and collective level. We begin by teasing apart the current, actual impact of AI on society from the impact of our cultural narratives surrounding AI. We then consider the evolutionary mechanisms that maintain a stable society, such as heterogeneity, flexibility, and cooperation. Taking AI as a prosthetic intelligence, we discuss how, for better and worse, it enhances our connectivity, coordination, equality, distribution of control, and our ability to make predictions. We further give examples of how transparency of thoughts and behaviours influences call-out culture and behavioural manipulation, with consideration of group dynamics and tribalism. We next consider the efficacy and vulnerability of human trust, including the contexts in which blind trust in information is either adaptive or maladaptive in an age where the cost of information is decreasing. We then discuss trust in AI, and how we can calibrate trust so as to avoid both over-trust and mistrust, using transparency as a mechanism. We then explore the barriers to AI increasing the accuracy of our perceptions, focusing on fake news. Finally, we look at the impact of information accuracy and the battles of individuals against false beliefs. Where available, we use models drawn from scientific simulations to justify and clarify our predictions and analysis.
The prevalence of artificially intelligent agents making morally salient decisions is growing. The decisions made by agents such as autonomous cars or weapons systems may have life-and-death consequences. We argue that the decision-making algorithms of all agents whose decisions have high societal impact should be transparent [6], to ensure that human-agent interaction is fully informed, consensual, and of maximum benefit to society. Importantly, the literature also indicates we may perceive and respond to morally salient decisions made by a machine differently from the same decisions made by a human [5, 4, 3].
We present here a virtual reality simulation of a self-driving car that we developed, in which users experience moral dilemmas. In our two studies, we investigate the perceptions of a morally salient decision, first as moderated by the type of agent, artificial or natural (human), and then with the implementation of transparency. Specifically, inspired by the Moral Machine research programme [2, 1], we used social value as a moral framework. The agent chooses to hit a pedestrian on either the left or right side of a zebra crossing depending on the dimensions of occupation, body size, and gender. Participants gave feedback after each scenario.
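The abstract does not specify how the agent's choice is computed. As an illustration only, the sketch below assumes a simple additive "social value" score per pedestrian, with entirely hypothetical attribute weights; the names `Pedestrian`, `social_value`, and `choose_side` are our own and do not come from the study.

```python
from dataclasses import dataclass

# Hypothetical attribute weights for illustration only; the weights
# actually used in the simulation are not given in the abstract.
OCCUPATION_VALUE = {"doctor": 1.0, "executive": 0.6, "unemployed": 0.2}
BODY_SIZE_VALUE = {"athletic": 0.5, "average": 0.3, "large": 0.1}
GENDER_VALUE = {"female": 0.1, "male": 0.0}  # placeholder weighting

@dataclass
class Pedestrian:
    occupation: str
    body_size: str
    gender: str

    def social_value(self) -> float:
        """Aggregate a scalar 'social value' from the three attributes."""
        return (OCCUPATION_VALUE[self.occupation]
                + BODY_SIZE_VALUE[self.body_size]
                + GENDER_VALUE[self.gender])

def choose_side(left: Pedestrian, right: Pedestrian) -> str:
    """Return the side the car swerves into: under this rule, the
    pedestrian with the lower aggregate social value is the one hit."""
    return "left" if left.social_value() < right.social_value() else "right"

# Example dilemma: the car hits the side with the lower aggregate score.
left = Pedestrian("doctor", "average", "female")
right = Pedestrian("unemployed", "large", "male")
print(choose_side(left, right))  # -> "right" under these placeholder weights
```

An additive score is just one possible operationalisation of a social-value framework; any monotone comparison over the three dimensions would produce a similar left/right choice.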
Contrary to past findings, participants in the current study were distressed by the principle of decision-making based on attributes such as social value. In questionnaire responses and post-experiment conversation, the majority reported preferring such decisions to be made at random.
This yields important insights into how we implement moral frameworks. We suggest that the disparity between preferences in the current study and in past work is due to the virtual reality methodology we used. Specifically, we note a distinction between emotional and rational decision-making, which was supported by an extension survey we conducted. Consistent with expectations, the self-driving car was perceived as significantly less morally culpable and human-like than the human driver. The transparency implementation led to a further significant reduction in perceived human-likeness, and also to reduced perceptions of intentionality. The reduction in moral culpability has disturbing possible connotations, though it may also be helpful for the correct attribution of accountability. Promisingly, our transparency implementation significantly improved participants' understanding of the self-driving car's decision.
We suggest that companies implementing moral frameworks should not take crowd-sourced preferences at face value, but should examine the methodology used to elicit them. Additionally, our work supports transparency as a mechanism for calibrating our mental models of autonomous agents.