Jun 21, 2022
Shengkang Chen, Vidullan Surendran, Alan R. Wagner, Jason Borenstein, Ronald C. Arkin
Title: Towards Ethical Behavior in Human-Robot Interaction - A Tentative Approach
Developing an ethical architecture that allows robots to emulate human ethical decision-making is a difficult problem. This article looks at a study by Shengkang Chen and colleagues at Georgia Tech and Penn State University that aims to address this challenge.
The study primarily involved surveying both ordinary adults (denoted as 'folk') and ethics experts about what they considered ethical behavior. Two scenarios in which deception could play a role were analyzed: pill sorting with an older adult and game playing with a child. The surveys aimed not only at understanding human ethical decisions but also at informing the development of an ethical architecture for robots.
The prospect of robotic deception is a point of contention among researchers: some view it as harmful to the human user's experience and trust, while others see it as potentially beneficial. The survey results were expected to clarify which deceptive robot behaviors, if any, people find acceptable.
The study focused on two scenarios with different levels of risk. Pill sorting is a critical task for an older adult, where an error can lead to serious health consequences, making it a high-risk task. Conversely, a game of Connect Four with a child is considered low risk, as the worst likely consequence is the child's frustration.
The participants consisted of ordinary adults, who answered a set of questions based on their own (folk) moral intuitions, and ethics experts, who answered from the standpoint of formal ethical frameworks.
Analysis of the collected data revealed significant differences in the propensity to deceive between the two scenarios. When the human partner showed only minor performance deficiencies, only the Ethics of Care framework produced a significant difference between the two scenarios. When the partner's performance deficiencies were severe, every ethical framework except Social Justice Theory showed differences in responses across the scenarios.
The findings suggest that when performance deficiencies were substantial, respondents across the ethical frameworks were markedly more conservative about using deception in the high-risk task (pill sorting) than in the low-risk task (game playing). This gap widened as the deficiencies grew, indicating that the partner's task performance strongly influences whether deception is judged acceptable within each framework.
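To make the kind of scenario comparison described above concrete, the sketch below tabulates invented approval counts for deception under a single ethical framework and tests whether the scenario matters. The counts, the choice of a chi-square test, and the significance threshold are illustrative assumptions, not the study's actual data or analysis.

```python
# Hypothetical illustration: compare approval of deception between the
# high-risk (pill sorting) and low-risk (Connect Four) scenarios for one
# ethical framework, using a chi-square test of independence.
from scipy.stats import chi2_contingency

# Rows: scenario; columns: [approve deception, disapprove deception].
# These counts are made up for the example.
responses = [
    [4, 26],   # pill sorting (high risk)
    [17, 13],  # Connect Four (low risk)
]

chi2, p_value, dof, expected = chi2_contingency(responses)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Scenario appears to influence whether deception is judged acceptable.")
else:
    print("No significant scenario effect detected in this toy sample.")
```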
These results have potential implications for the development of ethical architectures for robots, especially for tasks that involve significant risk or pronounced performance deficiencies. Future research could explore additional factors, such as finer-grained risk levels and participant demographics, to further inform the design of ethical robot behavior.
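One way such survey findings might eventually feed into a robot's ethical architecture is as a simple gate that checks task risk and the partner's performance deficiency before permitting a deceptive action. The rule below is a speculative sketch with invented categories and thresholds; it is not the architecture proposed by the authors.

```python
# Speculative sketch: a deception gate driven by task risk and the severity
# of the partner's performance deficiency. The threshold and the hard rule
# for high-risk tasks are hypothetical and would need to be derived from
# data such as the survey responses described above.
from enum import Enum

class Risk(Enum):
    LOW = 1
    HIGH = 2

def deception_permitted(risk: Risk, deficiency: float) -> bool:
    """Return True if a deceptive response (e.g., encouraging but untrue
    feedback) is permitted, given task risk and a deficiency score in [0, 1]."""
    if risk is Risk.HIGH:
        # Respondents were far more conservative in high-risk tasks,
        # so this toy rule never deceives about safety-critical performance.
        return False
    # In low-risk tasks, mild deception (e.g., letting a child win) is only
    # considered when the partner is struggling noticeably.
    return deficiency >= 0.5

# Example: a struggling child at Connect Four vs. an older adult mis-sorting pills.
print(deception_permitted(Risk.LOW, 0.7))   # True  (toy rule)
print(deception_permitted(Risk.HIGH, 0.7))  # False (toy rule)
```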
Overall, the study is a useful step toward more ethically guided decision-making in human-robot interaction. However, much ground remains to be covered before robots can respond in ways consistent with human ethical decision-making.