Acta Cogitata: An Undergraduate Journal in Philosophy


In this paper I explore a situation under-explored by AI researchers, in which those who deploy decision-making algorithms unintentionally launder their moral agency to those algorithms through anthropomorphic descriptions of their underlying architecture. Often, this kind of agency laundering occurs rather innocently, as an attempt to render an otherwise opaque system transparent through simplified and analogous explanations intended to enhance the decision subject's understanding. When unintentional agency laundering happens, however, the decision subject's capacity to seek recourse for adverse outcomes is undermined, because the data controller's moral agency has been laundered to a non-agent. This paper examines the situation in light of traditional philosophical accounts of responsibility, explanation, and knowledge, and engages with recent literature in AI ethics. It proposes that explanation can be a mechanism for closing responsibility gaps in AI, but only if explanations do not invoke unintentional agency laundering.
