When own interest stands against the “greater good” – Decision randomization in ethical dilemmas of autonomous systems that involve their user’s self-interest
Autonomous systems (ASs) decide ethical dilemmas, and both their artificial intelligence and the situations they face are becoming increasingly complex. To study common-sense morality concerning ASs, however, abstracted dilemmas about autonomous vehicle (AV) accidents are a common tool. A special case of ethical dilemmas arises when the AS’s users themselves are affected. Many people want AVs to follow utilitarian programming (e.g., to save the larger group) or egalitarian programming (i.e., to treat every person equally), yet they want their own AV to protect them rather than the “greater good”. That people reject utilitarian programming as an AS’s users while supporting it from an impartial perspective has been termed the “social dilemma of AVs”. Meanwhile, preferences for another technical capability that would implement egalitarian programming have not been elicited for dilemmas involving self-interest: decision randomization. This paper investigates normative and descriptive preferences for a self-protective, self-sacrificial, or randomized choice by an AS in a dilemma in which people are the sole passenger of an AV and their survival stands against the survival of several others. Results suggest that randomization may mitigate the “social dilemma of AVs” by bridging between a societally accepted programming and AS users’ urge for self-protection.
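
The abstract contrasts three programming options for a one-versus-many AV dilemma: self-protective, self-sacrificial, and randomized. As a purely illustrative sketch (not code from the article; the function name, signature, and labels are assumptions made for this example), the options could be mocked up in Python as below. In a one-versus-N dilemma, a fair coin flip between the two outcomes gives every involved person the same 50% chance of survival, which is the sense in which randomization implements egalitarian programming.

```python
# Illustrative sketch only -- not code from the article. Function name,
# signature, and labels are assumptions made for this example.
import random

def av_decision(mode: str) -> str:
    """Pick which party a hypothetical AV protects in a 1-vs-N dilemma.

    "self_protective"  -> always protect the vehicle's own passenger
    "self_sacrificial" -> always protect the several others
    "randomized"       -> egalitarian option from the abstract: a fair coin
                          flip between the two outcomes, which gives every
                          involved person the same 50% chance of survival
    """
    if mode == "self_protective":
        return "passenger"
    if mode == "self_sacrificial":
        return "others"
    if mode == "randomized":
        return random.choice(["passenger", "others"])
    raise ValueError(f"unknown mode: {mode}")

# Example: the vignette setting of a sole passenger versus several others
print(av_decision("randomized"))
```
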
| Main Author: | Anja Bodenschatz |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2024-08-01 |
| Series: | Computers in Human Behavior: Artificial Humans |
| Subjects: | Autonomous systems; Ethical dilemmas; Decision randomization; Self-sacrifice; Vignette study; Gender differences |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2949882124000574 |
| author | Anja Bodenschatz |
|---|---|
| author affiliation | Faculty of Computer Science, Technische Hochschule Ingolstadt, Esplanade 10, 85049, Ingolstadt, Germany; TUM School of Social Sciences and Technology, Technical University of Munich, Richard-Wagner-Str. 1, 80333, Munich, Germany; Faculty of Management, Economics and Social Sciences, University of Cologne, Albertus-Magnus-Platz, 50923, Cologne, Germany |
| collection | DOAJ |
| format | Article |
| id | doaj-art-c4097b2d4f0942198f5b7575da98b4a6 |
| institution | Kabale University |
| issn | 2949-8821 |
| language | English |
| publishDate | 2024-08-01 |
| publisher | Elsevier |
| series | Computers in Human Behavior: Artificial Humans |
| topic | Autonomous systems; Ethical dilemmas; Decision randomization; Self-sacrifice; Vignette study; Gender differences |
| url | http://www.sciencedirect.com/science/article/pii/S2949882124000574 |