Abstract: The application of novel artificial intelligence (AI) techniques is expected by many to have a revolutionary influence on modern healthcare, promising better, cheaper and more efficient care for patients. However, some remain skeptical, pointing to issues of unfairness, capriciousness, and a lack of transparency that would prohibit the use of such techniques in critical sectors. In particular, a lack of accountability for the actions suggested and taken by AI systems is troubling, as patients may incur great risks at the mercy of such systems. At the same time, overly zealous public accountability and oversight may stifle innovation and delay or even hinder the development of life-saving technology. Drawing on an empirical case study conducted at a Scandinavian AI company, this article situates accountability in the context of applied AI for healthcare and studies developers’ critical responses to, and reflections on, three widely discussed mechanisms for accountability in AI and the broader socio-technical systems in which it is embedded: (1) ethical principles; (2) certification standards; and (3) explanation methods. In doing so, we contribute to closing a gap in the literature on accountable AI between theoretical discussions of these mechanisms and their ability to produce and ensure accountability, and the practical level of AI design and development. We argue that the AI developers’ encounters with these mechanisms illustrate important flaws in the current operationalisation and implementation of accountability as an ethical and societal value which, if left unchecked, risk undermining the pursuit of accountable AI. By bringing attention to these flaws, the paper signals where further work is needed in order to build effective accountability systems for AI.
Biography: Anne Henriksen is a visiting PhD researcher at the Centre for Digital Governance, coming from Aarhus University in Denmark, where she studied media studies with a specialisation in data-driven media and digital sociology. Her research project examines entangled issues related to the development and use of AI as a knowledge-production method and decision-making technology in healthcare, as well as the governance of AI. In an ethnographic case study, she follows a group of developers at a Scandinavian AI company, focusing on the issues these developers face in the design and development process and their efforts to make AI applications workable and reliable. She uses these empirical data to study the governance of AI in the context of actual AI development practice, in order to contribute to the current debate on how to govern algorithmic AI systems.