Authors: Alaieri, Fahad; Vellino, Andre
Date deposited: 2018-02-12
Date issued: 2017
IEEE Xplore: http://ieeexplore.ieee.org/document/8250122/
Handle: http://hdl.handle.net/10393/37240
Repository DOI: https://doi.org/10.20381/ruor-21512

Abstract: Autonomous bots and robots (which we label "(ro)bots"), ranging from shopping-assistant chatbots to self-driving cars, are already able to make decisions that have ethical consequences. As more such machines make increasingly complex and significant decisions, we need to know that their decisions are trustworthy and ethically justified, so that users, manufacturers, and lawmakers can understand how these decisions are made and which ethical principles were brought to bear in making them. Understanding how such decisions are made is particularly important when a (ro)bot is a self-improving, self-learning machine whose choices and decisions are based on past experience, given that they may not be entirely predictable ahead of time or explainable after the fact. This paper presents a model that decomposes the stages of ethical decision making into their elementary components, with a view to enabling stakeholders to allocate the responsibility for such choices.

Language: en
Keywords: Decision making; machine ethics; autonomy; trust; responsibility
Title: A Decision-Making Model for Ethical (Ro)bots
Type: Conference Proceeding
Publisher DOI: https://doi.org/10.1109/IRIS.2017.8250122