Black Box Ethics: Why the Rights to Explanation and to be Forgotten are Ethically Critical Components for Vulnerable Populations

Description
Title: Black Box Ethics: Why the Rights to Explanation and to be Forgotten are Ethically Critical Components for Vulnerable Populations
Authors: Trites, Allison
Date: 2019-04-30
Abstract: Automated decision-making systems are increasingly prevalent in our society. Their developers often claim that these systems remove human bias from decision-making across a variety of sectors, from finance (e.g., credit card and mortgage applications) to Human Resources (e.g., screening resumes and matching competencies for employers). While bias reduction is a commonly cited benefit of automated decision-making systems, recent research and widely reported scandals demonstrate that it is not safe to assume these systems are free from bias. A growing body of evidence indicates that the algorithms informing automated decision-making systems are shaped by the fallible humans who develop them. The result is that a range of biases is being observed in automated decision-making, from feedback loops in which algorithms collect data that support existing theories, to racial discrimination, whether knowing (overt discrimination) or unknowing (embedded social assumptions). This creates a new reality in which biases are potentially just as prevalent in algorithm-made decisions as in human-made ones. The risks to citizens, however, may be heightened by the incorrect assumption that these algorithms are unbiased. The cloaked nature of algorithms, and their increasing power, adds a further level of complexity to this situation. Many algorithms are developed by private companies and individuals who cite proprietary rights, or 'trade secrets', as protection and justification for keeping their algorithms confidential. These rights enable developers to keep the decision-making rationales built into their algorithms in a 'black box' that shields the algorithms, and the decisions they produce, from scrutiny and criticism. As a result, citizens, governments and other advocates are unable to fully assess the algorithms and their impact on society.
This thesis proceeds as follows. First, the context of, and key concepts in, Artificial Intelligence (AI) and Automated Decision-Making Systems (ADMS) are introduced and defined, with a focus on black box algorithms and the problems surrounding these systems. A case study is then presented to ground specific concerns about black box algorithms; the reader is asked to keep it in mind for the remainder of the thesis, as it culminates in the analysis at the end of the paper. Next, a feminist bioethical framework on vulnerability and consent is outlined and reviewed. After the introduction of the case study and proposed framework, current Canadian legislation on data collection and privacy is reviewed, specifically the Personal Information Protection and Electronic Documents Act (PIPEDA) and a working paper from the Office of the Privacy Commissioner of Canada, providing the reader with a Canadian context. The Right to Explanation (RtE) and the Right to be Forgotten (RtbF) are then introduced and discussed in relation to the collection and storage of personal information. At this point, an analysis is conducted, bridging the RtE and the RtbF with the feminist bioethical framework on vulnerability and consent and culminating in a discussion that revisits the case study. Drawing on the case study, the thesis concludes by showing how the RtE and the RtbF can best support vulnerable populations in achieving autonomy, and how their application could be adapted to best accommodate this objective.
URL: http://hdl.handle.net/10393/39126
http://dx.doi.org/10.20381/ruor-23374
Collection: Thèses Saint-Paul // Saint Paul Theses