TY - JOUR A1 - Rauwolf, Paul A1 - Bryson, Joanna T1 - Expectations of Fairness and Trust Co-Evolve in Environments of Partial Information JF - Dynamic Games and Applications N2 - When playing one-shot economic games, individuals often blindly trust others, accepting partnerships without any information regarding the trustworthiness of their partner. Consequently, they risk deleterious pacts. Oddly, when individuals do have information about another, they reject partnerships that are not fair, despite the fact that such offers are profitable—that is, they engage in costly punishment. Why would one reject profitable partnerships on the one hand, but risk unknown offers on the other? Significant research has gone into explaining the contexts where blind trust or costly punishment provides an evolutionary advantage; however, both behaviours are rarely considered in tandem. Here we demonstrate that both behaviours can simultaneously be revenue maximizing. Further, given the plausible condition of partially obscured information and partner choice, trust mediates the generation of costly punishment. This result is important because it demonstrates that the evolutionary viability of trust, fairness, and costly punishment may be linked. The adaptive nature of fairness expectations can best be explained in concert with trust. Y1 - 2018 U6 - https://doi.org/10.1007/s13235-017-0230-x VL - 8 IS - 4 SP - 891 EP - 917 ER - TY - JOUR A1 - Bryson, Joanna T1 - Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics JF - Ethics and Information Technology N2 - The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive, ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the theoretical biology of sociality and autonomy to explain our moral intuitions. From this grounding I extend to consider possible ethics for maintaining either human- or artefact-centred societies. I conclude that while constructing AI systems as either moral agents or patients is possible, neither is desirable. In particular, I argue that we are unlikely to construct a coherent ethics in which it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI we are obliged to. Y1 - 2018 U6 - https://doi.org/10.1007/s10676-018-9448-6 VL - 20 IS - 1 SP - 15 EP - 26 ER - TY - RPRT A1 - Wilson, Holly A1 - Bryson, Joanna A1 - Theodorou, Andreas T1 - Perceptions of Moral Dilemmas in a Virtual Reality Car Simulation N2 - The prevalence of artificially intelligent agents carrying out morally salient decisions is growing. The decisions made by such agents as autonomous cars or weapon systems may have life-and-death consequences. We argue that the decision-making algorithms of all agents whose decisions have high societal impact should be transparent [6], to ensure human-agent interaction is fully informed, consensual, and of maximum benefit to society. Importantly, the literature also indicates we may perceive and respond to morally salient decisions made by a machine differently to the same decisions made by a human [5, 4, 3].
We present here a virtual reality simulation of a self-driving car we developed, in which users experience moral dilemmas. In our two studies, we investigate the perceptions of a morally salient decision: first as moderated by the type of agent, artificial or natural (human), and then with the implementation of transparency. Specifically, inspired by the Moral Machine research programme [2, 1], we used social value as a moral framework. The agent chooses to hit a pedestrian on either the left or right side of a zebra crossing depending on the dimensions of occupation, body size, and gender. Participants gave feedback after each scenario. Contrary to past findings, participants in the current study were distressed by the principle of decision-making based on attributes such as social value. In questionnaire responses and post-experiment conversation, the majority reported preferring such decisions to be made at random. This offers important insights into how we implement moral frameworks. We suggest that the disparity between preferences from the current study and past work is due to the virtual reality methodology we used. Specifically, we note a distinction between emotional vs. rational decision-making, which was supported by an extension survey we conducted. Consistent with expectations, the self-driving car was perceived as significantly less morally culpable and human-like than the human driver. The transparency implementation led to a further significant reduction in perceived human-likeness, and also to reduced perceptions of intentionality. The reduction in moral culpability has disturbing possible connotations, though it may also be helpful for correct attribution of accountability. Promisingly, our transparency implementation significantly improved participants’ understanding of the self-driving car’s decisions. We suggest companies implementing moral frameworks do not take crowd-sourced preferences at face value, but explore the methodology used. Additionally, our work supports transparency as a mechanism to calibrate our mental models of autonomous agents. Y1 - 2018 UR - https://researchportal.bath.ac.uk/en/publications/perceptions-of-moral-dilemmas-in-a-virtual-reality-car-simulation ET - Paper presented at IA Symposium ER - TY - JOUR A1 - Gaudl, Swen E. A1 - Bryson, Joanna T1 - The extended ramp model: A biomimetic model of behaviour arbitration for lightweight cognitive architectures JF - Cognitive Systems Research N2 - In this article, we present an idea for a more intuitive, low-cost, adjustable mechanism for behaviour control and management. One focus of current development in virtual agents, robotics and digital games is on increasingly complex and realistic systems that more accurately simulate intelligence found in nature. This development introduces a multitude of control parameters, creating high computational costs. The resulting complexity limits the applicability of AI systems. One solution to this problem is to focus on smaller, more manageable, and flexible systems which can be simultaneously created, instantiated, and controlled. Here we introduce a biologically inspired systems-engineering approach for enriching behaviour arbitration with a low computational overhead. We focus on an easy way to control the maintenance, inhibition and alternation of high-level behaviours (goals) in cases where static priorities are undesirable.
The models we consider here are biomimetic, based on neuro-cognitive research findings on dopaminergic cells responsible for controlling goal switching and maintenance in the mammalian brain. The most promising model we find is applicable to selection problems with multiple conflicting goals. It utilizes a ramp function to control the execution and inhibition of behaviours more accurately than previous mechanisms, allowing an additional layer of control over existing behaviour prioritization systems. Y1 - 2018 U6 - https://doi.org/10.1016/j.cogsys.2018.02.001 VL - 50 SP - 1 EP - 9 ER - TY - CHAP A1 - Wilson, Holly A1 - Rauwolf, Paul A1 - Bryson, Joanna ED - Shackelford, Todd K. T1 - Evolutionary Psychology and Artificial Intelligence: The Impact of Artificial Intelligence on Human Behaviour T2 - The SAGE Handbook of Evolutionary Psychology N2 - Artificial Intelligence (AI) presents a new landscape for humanity. Both what we can do and the impact of our ordinary actions are changed by the innovation of digital and intelligent technology. In this chapter we postulate how AI impacts contemporary societies on an individual and collective level. We begin by teasing apart the current actual impact of AI on society from the impact that our cultural narratives surrounding AI have. We then consider the evolutionary mechanisms that maintain a stable society, such as heterogeneity, flexibility and cooperation. Taking AI as a prosthetic intelligence, we discuss how—for better and worse—it enhances our connectivity, coordination, equality, distribution of control, and our ability to make predictions. We further give examples of how transparency of thoughts and behaviours influences call-out culture and behavioural manipulation, with consideration of group dynamics and tribalism. We next consider the efficacy and vulnerability of human trust, including the contexts in which blind trust in information is either adaptive or maladaptive in an age where the cost of information is decreasing. We then discuss trust in AI, and how we can calibrate trust so as to avoid over-trust and mistrust adaptively, using transparency as a mechanism. We then explore the barriers to AI increasing the accuracy of our perception, focusing on fake news. Finally, we look at the impact of information accuracy, and the battles of individuals against false beliefs. Where available, we use models drawn from scientific simulations to justify and clarify our predictions and analysis. Y1 - 2020 SN - 9781526489166 PB - SAGE Publications Ltd CY - London ER - TY - CHAP A1 - Bryson, Joanna A1 - Theodorou, Andreas ED - Toivonen, Marja ED - Saari, Eveliina T1 - How Society Can Maintain Human-Centric Artificial Intelligence T2 - Human-Centered Digitalization and Services N2 - Although not a goal universally held, maintaining human-centric artificial intelligence is necessary for society's long-term stability. Fortunately, the legal and technological problems of maintaining control are actually fairly well understood and amenable to engineering. The real problem is establishing the social and political will for assigning and maintaining accountability for artefacts when they are generated or used. In this chapter we review the necessity and tractability of maintaining human control, and the mechanisms by which such control can be achieved. What makes the problem both most interesting and most threatening is that achieving consensus around any human-centred approach requires at least some measure of agreement on broad existential concerns.
Y1 - 2019 SN - 978-981-13-7725-9 SN - 978-981-13-7724-2 SP - 305 EP - 323 PB - Springer ER - TY - CHAP A1 - Rotsidis, Alexandros A1 - Theodorou, Andreas A1 - Bryson, Joanna A1 - Wortham, Robert H. T1 - Improving Robot Transparency: An Investigation With Mobile Augmented Reality T2 - Paper presented at The 28th IEEE International Conference on Robot & Human Interactive Communication, New Delhi, India N2 - Autonomous robots can be difficult for even their developers to understand, let alone end users. Yet, as they become increasingly integral parts of our societies, the need for affordable, easy-to-use tools to provide transparency grows. The rise of the smartphone and the improvements in mobile computing performance have gradually allowed Augmented Reality (AR) to become more mobile and affordable. In this paper we review relevant robot systems architecture and propose a new software tool to provide robot transparency through the use of AR technology. Our new tool, ABOD3-AR, provides real-time graphical visualisation and debugging of a robot’s goals and priorities as a means for both designers and end users to gain a better mental model of the internal state and decision-making processes taking place within a robot. We also report on our ongoing research programme and planned studies to further understand the effects of transparency on naive users and experts. Y1 - 2019 UR - https://researchportal.bath.ac.uk/en/publications/improving-robot-transparency-an-investigation-with-mobile-augment ER - TY - JOUR A1 - Wortham, Robert H. A1 - Gaudl, Swen E. A1 - Bryson, Joanna T1 - Instinct: A Biologically Inspired Reactive Planner for Intelligent Embedded Systems JF - Cognitive Systems Research N2 - The Instinct Planner is a new biologically inspired reactive planner, based on an established behaviour-based robotics methodology and its reactive planner component—the POSH planner implementation. It includes several significant enhancements that facilitate plan design and runtime debugging. It has been specifically designed for low-power processors and has a tiny memory footprint. Written in C++, it runs efficiently on both Arduino (Atmel AVR) and Microsoft VC++ environments and has been deployed within a low-cost maker robot to study AI transparency. Plans may be authored using a variety of tools, including a new visual design language, currently implemented using the Dia drawing package. Y1 - 2019 U6 - https://doi.org/10.1016/j.cogsys.2018.10.016 VL - 57 SP - 207 EP - 215 ER - TY - CHAP A1 - Bryson, Joanna ED - Dubber, Markus ED - Pasquale, Frank ED - Das, Sunit T1 - The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation T2 - The Oxford Handbook of Ethics of AI N2 - Artificial intelligence (AI) is a technical term often referring to artifacts used to detect contexts for human actions, or sometimes also for machines able to effect actions in response to detected contexts. Our capacity to build such artifacts has been increasing, and with it the impact they have on our society. This does not alter the fundamental roots or motivations of law, regulation, or diplomacy, which rest on persuading humans to behave in a way that provides sustainable security for humans. It does, however, alter nearly every other aspect of human social behaviour, including making accountability and responsibility potentially easier to trace.
This chapter reviews the nature and implications of AI, with particular attention to how they impinge on possible applications to and of law. Y1 - 2019 SN - 9780190067397 U6 - https://doi.org/10.1093/oxfordhb/9780190067397.013.1 PB - Oxford University Press ER - TY - CHAP A1 - Bryson, Joanna T1 - The Past Decade and Future of AI’s Impact on Society T2 - Towards a New Enlightenment? A Transcendent Decade N2 - Artificial intelligence (AI) is a technical term referring to artifacts used to detect contexts or to effect actions in response to detected contexts. Our capacity to build such artifacts has been increasing, and with it the impact they have on our society. This article first documents the social and economic changes brought about by our use of AI, particularly but not exclusively focusing on the decade since the 2007 advent of smartphones, which contribute substantially to “big data” and therefore the efficacy of machine learning. It then projects from this the political, economic, and personal challenges confronting humanity in the near future, including policy recommendations. Overall, AI is not as unusual a technology as expected, but this very lack of expected form may have exposed us to a significantly increased urgency concerning familiar challenges. In particular, the identity and autonomy of both individuals and nations are challenged by the increased accessibility of knowledge. Y1 - 2019 SN - 9788417141219 VL - 11 PB - BBVA ER - TY - BOOK A1 - Theodorou, Andreas A1 - Bryson, Joanna A1 - Bandt-Law, Bryn T1 - The Sustainability Game: AI Technology as an Intervention for Public Understanding of Cooperative Investment T3 - IEEE CONFERENCE ON GAMES (COG)[8848058] IEEE N2 - Cooperative behaviour is a fundamental strategy for survival; it positively affects economies and social relationships, and makes larger societal structures possible. People vary, however, in their willingness to engage in cooperative behaviour in a particular context. Here we examine whether AI can be effectively used to alter individuals' implicit understanding of cooperative dynamics, and hence increase cooperation and participation in public goods projects. We developed an intervention, the Sustainability Game (SG), to allow players to experience the consequences of individual investment strategies on a sustainable society. Results show that the intervention significantly increases individuals' cooperative behaviour in partially anonymised public goods contexts, but enhances competition one-on-one. This indicates our intervention does improve transparency of the systemic consequences of individual cooperative behaviour. Y1 - 2019 SN - 9781728118840 ER - TY - CHAP A1 - Wortham, Robert H. A1 - Bryson, Joanna ED - Prescott (et al.), Tony J. T1 - Communication T2 - Living Machines: A Handbook of Research in Biomimetic and Biohybrid Systems N2 - From a traditional engineering perspective, communication is about effecting control over a distance, and its primary concern is the reliability of transmission. This chapter reviews communication in nature, describing its evolution from the perspective of the selfish gene. Communication in nature is ubiquitous and generally honest, and arises as much from collaboration as from manipulation. We show that context and relevance allow effective communication with little information transfer, particularly between organisms with similar capacities and goals.
Human language differs fundamentally from the non-verbal communication we share with other animals; robots may need to accommodate both. We document progress in AI capacities to generate synthetic emotion and to sense and classify human emotion. Communication in contemporary biomimetic systems occurs not only between robots, as in swarm robotics, but also between robot and human in both autonomous and collaborative systems. We suggest increased future emphasis on capacities to receive and comprehend signs, and on the pragmatic utility of communication and cooperation. Y1 - 2018 SN - 9780199674923 U6 - https://doi.org/10.1093/oso/9780199674923.003.0033 SP - 312 EP - 326 ER -