
The public sector’s obligation to protect citizens from the potential harms of AI technology is at odds with its own interest in using algorithms to make public service delivery more efficient.
This dilemma and the power asymmetries it entails were discussed with Gianluca Misuraca in the first online session of the CIVICA PhD Seminar Series on Public Sector Digital Transformation on October 27.
Recent advances in artificial intelligence in fields such as automated decision-making and image, speech, and text recognition have led to the increased use of these technologies in everyday products and services. Consequently, demand for government regulation to prevent AI applications from harming citizens has become a central subject of public debate. However, the future of AI governance is not a one-way street, and these debates often overlook the fact that the public sector itself is increasingly experimenting with AI technologies to boost its own efficiency. This double bind – governing algorithms while governing by algorithms – might ultimately intensify existing power asymmetries in two ways.
First, the application of AI technologies in public sector institutions might intensify power asymmetries between government and citizens. One example is an AI profiling system implemented by the Polish Ministry of Labour and Social Policy to optimise the distribution of unemployment services. Although the system was originally intended to inform rather than replace the decisions of frontline workers, government officials ended up following its recommendations 99 percent of the time. This shifted the balance of power away from citizens: while civil servants can explain and justify their – and hence the state’s – decisions, the sheer complexity of algorithmic decision-making is often far less “explainable” to ordinary citizens, and the reasons why an individual is or is not granted a certain service may remain obscure. In light of these problems, the profiling system was ruled unconstitutional by the Polish Constitutional Tribunal and its use discontinued.
Second, the use of AI technologies in administrative processes could also reinforce power imbalances within the public sector itself. This might happen, for example, when algorithms support or even replace the decisions of administrative employees, reducing their discretion – an effect also observed in the Polish case. AI technologies could, moreover, be used to monitor or evaluate public servants’ work. In both situations, artificial intelligence would tilt the balance of power towards the designers and programmers of the AI system and leave less discretionary leeway for frontline workers. This is especially worrisome from a democratic perspective where civil servants or citizens (as in the example above) are unaware of such practices. The fact that governments are also the main regulators of AI technology only adds to the complexity of the issue.
Seminar participants discussed various strategies for handling this double bind. One suggestion was for governments to design their AI systems with a human-centred focus, ensuring that the perspectives and behaviours of citizens and civil servants are understood and that the system is implemented and used true to its intent. Furthermore, the state’s use of AI technologies should be made as transparent as possible – both about which services and decisions rely on such systems and about how and which data are processed within them. Finally, educating citizens about the advantages and limitations of AI technologies in general was seen as an important pillar for counterbalancing the current knowledge – and thus power – asymmetries between AI experts and laypersons. Strategies such as these could help governments deal responsibly with the double bind of AI governance, mitigating potential harm while seizing opportunities to improve the efficiency of public services.
This blog post is part of the CIVICA PhD Seminar Series on Public Sector Digital Transformation, organised by the Hertie School’s Centre for Digital Governance and Bocconi University’s Department of Social and Political Science. The insights highlighted here are based on the first session’s discussion of Gianluca Misuraca’s keynote “Shaping Digital Europe 2040 – Artificial Intelligence & Public Sector Innovation in a Data-Driven Society” and his paper “AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings”. We would like to thank the participants for sharing their views and ideas.
More literature on this topic:
- Misuraca G., Van Noordt C. (2020) AI Watch - Artificial Intelligence in public services, EUR 30255 EN, Publications Office of the European Union, Luxembourg, ISBN 978-92-76-19540-5 (online), doi:10.2760/039619 (online), JRC120399.
- Misuraca G., Viscusi G. (2020) AI-Enabled Innovation in the Public Sector: A Framework for Digital Governance and Resilience. In: Viale Pereira G. et al. (eds) Electronic Government. EGOV 2020. Lecture Notes in Computer Science, vol 12219. Springer, Cham. https://doi.org/10.1007/978-3-030-57599-1_9
- Bryson J.J., Theodorou A. (2019) How Society Can Maintain Human-Centric Artificial Intelligence. In: Toivonen M., Saari E. (eds) Human-Centered Digitalization and Services. Translational Systems Sciences, vol 19. Springer, Singapore. https://doi.org/10.1007/978-981-13-7725-9
Watch Gianluca Misuraca's keynote “Shaping Digital Europe 2040 – Artificial Intelligence & Public Sector Innovation in a Data-Driven Society” online.