The adoption of machine-learning algorithms in government decision-making is often seen as a potential source of unintended negative consequences, such as privacy violations or the introduction of bias. This can undermine citizens' trust in the system and in the government. On 17 November, the second online session of the CIVICA PhD Seminar Series on Public Sector Digital Transformation, featuring Albert Meijer, discussed ways to counteract these tendencies and why values play such an important role in doing so.
Whether in areas like predictive policing, tax fraud detection or even garbage collection, the use of machine- and deep-learning algorithms for government decision-making promises unprecedented gains in efficiency and effectiveness. Yet the introduction of these technologies remains risky, as they can be prone to delivering biased or simply wrong outcomes. The stakes are high: if citizens become aware of such undue practices, their trust in government – an important component of government legitimacy – will erode. It is therefore essential to look beyond the purely technical aspects of algorithm implementation and consider how government organisations adapt their structures and working routines to enable the use of algorithms.
It is important to realise that within this adaptation process, also referred to as ‘algorithmisation’, societal values function as both inputs and outputs. On the output side, it is rather self-explanatory that any algorithmically supported government decision will either uphold or violate certain values. The use of algorithms for predictive policing might, for example, reinforce police bias against certain minority groups and thus conflict with the value of non-discrimination. Conversely, algorithm-based tax fraud detection might help catch tax evaders and thus uphold the value of fairness in society. Whether or not these systems comply with important values, however, depends on how exactly they are designed and implemented in the organisational context. In this sense, values become an important input to algorithmisation: if governments are not aware of exactly which values their algorithmically supported decisions are meant to comply with, it is practically impossible to ensure that they ultimately will. Value-sensitive algorithmisation is thus key to maintaining citizens’ trust.
Deciding which values should be designed into and upheld by an algorithm-supported system is, of course, far from simple. Often there are sets of competing values: when surveillance systems foster security with the help of face-detection algorithms, for example, they may simultaneously violate citizens’ privacy. To deal responsibly with these value clashes and design systems citizens can trust, governments need to acknowledge that algorithmisation is not only a task for technology experts. Rather, responsible algorithmisation requires teams with different skill sets and backgrounds that critically, collaboratively, and reflectively make value-sensitive design choices. Furthermore, these choices should be informed by actively engaging users (i.e. citizens) in the design process. In some cases, it might even be necessary to hold a broad societal debate about which values the system should uphold. This might go as far as asking citizens to consider such fundamental questions as: ‘how do we ultimately want to organise our democracy and society?’
In sum, the potential of algorithms to enhance government decision-making should be harnessed carefully so as not to undermine citizens’ trust. Algorithmisation needs to be a value-sensitive process that embraces various types of expertise and actively involves users and stakeholders. This will help the public sector ensure that the design, implementation, and use of algorithms are responsible and continue to foster citizens’ trust in government.
This blog post is part of the CIVICA PhD Seminar Series on Public Sector Digital Transformation, organised by the Hertie School’s Centre for Digital Governance and Bocconi University’s Department of Social and Political Science. The insights highlighted here are based on the second seminar session’s discussion of Albert Meijer’s keynote “Algorithmic Government – Rethinking Bureaucracy in the Age of Algorithms” as well as his forthcoming book chapter “Responsible and accountable algorithmization: How to generate citizen trust in governmental usage of algorithms”. We would like to thank the participants for sharing their views and ideas.
More literature on this topic:
- Schuilenburg, M., & Peeters, R. (Eds.). (2020). The Algorithmic Society: Technology, Power, and Knowledge. Routledge.
- Lorenz, L. (2019). The algocracy: Understanding and explaining how public organizations are shaped by algorithmic systems. MSc Thesis, Utrecht University.
- Meijer, A., & Wessels, M. (2019). Predictive policing: Review of benefits and drawbacks. International Journal of Public Administration, 42(12), 1031-1039.
Watch Albert Meijer’s keynote “Algorithmic Government – Rethinking Bureaucracy in the Age of Algorithms”: