A Decision-Making Model for Ethical (Ro)bots

Field                     Value
dc.contributor.author     Alaieri, Fahad
dc.contributor.author     Vellino, Andre
dc.date.accessioned       2018-02-12T16:38:29Z
dc.date.available         2018-02-12T16:38:29Z
dc.date.issued            2017
dc.identifier.uri         http://ieeexplore.ieee.org/document/8250122/
dc.identifier.uri         http://hdl.handle.net/10393/37240
dc.description.abstract   Autonomous bots and robots (which we label “(ro)bots”), ranging from shopping-assistant chatbots to self-driving cars, are already able to make decisions that have ethical consequences. As more such machines make increasingly complex and significant decisions, we need to know that their decisions are trustworthy and ethically justified, so that users, manufacturers and lawmakers can understand how these decisions are made and which ethical principles were brought to bear in making them. Understanding how such decisions are made is particularly important when a (ro)bot is a self-improving, self-learning machine whose choices and decisions are based on past experience, given that they may not be entirely predictable ahead of time or explainable after the fact. This paper presents a model that decomposes the stages of ethical decision making into their elementary components, with a view to enabling stakeholders to allocate responsibility for such choices.
dc.language.iso           en
dc.subject                Decision making
dc.subject                machine ethics
dc.subject                autonomy
dc.subject                trust
dc.subject                responsibility
dc.title                  A Decision-Making Model for Ethical (Ro)bots
dc.type                   Conference Proceeding
dc.identifier.doi         https://doi.org/10.1109/IRIS.2017.8250122
Collection                Science informatique et génie électrique - Publications // Electrical Engineering and Computer Science - Publications