Artificial Driving Intelligence and Moral Agency: Examining the Decision Ontology of Unavoidable Road Traffic Accidents through the Prism of the Trolley Dilemma

Cunneen, Martin and Mullins, Martin and Murphy, Finbarr and Gaines, Seán (2019) Artificial Driving Intelligence and Moral Agency: Examining the Decision Ontology of Unavoidable Road Traffic Accidents through the Prism of the Trolley Dilemma. Applied Artificial Intelligence, 33 (3). pp. 267-293. ISSN 0883-9514

Full text (PDF, 2MB): Artificial Driving Intelligence and Moral Agency Examining the Decision Ontology of Unavoidable Road Traffic Accidents through the Prism of the.pdf - Published Version

Abstract

The question of whether artificial intelligence can make moral decisions has been a key focus of investigation in robotics for decades. It has now become pertinent to automated vehicle technologies, as a question of understanding the capacity of artificial driving intelligence to respond to unavoidable road traffic accidents. Artificial driving intelligence will make a calculated decision that could equate to deciding who lives and who dies. In calculating such important decisions, does the driving intelligence require moral intelligence and a capacity to make informed moral decisions? Artificial driving intelligence will be determined, at the very least, by state laws, driving codes, and codes of conduct relating to driving behaviour and safety. Does it also need to be informed by ethical theories, human values, and human rights frameworks? If so, how can this be achieved, and how can we ensure there are no moral biases in the moral decision-making algorithms? The question of moral capacity is complex and has become the ethical focal point of this technology. Research has centred on applying Philippa Foot’s famous trolley dilemma. We claim that before applications focus on moral theories, a necessary prior step is to utilise the trolley dilemma as an ontological experiment. The trolley dilemma is succinct in identifying important ontological differences between human driving intelligence and artificial driving intelligence. In this paper, we argue that when the trolley dilemma is focused on ontology, it has the potential to become an important elucidatory tool. It can act as a prism through which one can perceive different ontological aspects of driving intelligence and assess response decisions to unavoidable road traffic accidents. Identifying these ontological differences is integral to understanding the underlying variances that support human and artificial driving decisions. Ontologically differentiating between the two contexts allows for a more complete interrogation of the moral decision-making capacity of artificial driving intelligence.

Item Type: Article
Subjects: Research Scholar Guardian > Computer Science
Date Deposited: 30 Jun 2023 04:43
Last Modified: 23 Jan 2024 04:14
URI: http://science.sdpublishers.org/id/eprint/1193
