Blog
17.12.2020

Trustworthy AI starts with trustworthy data: Governing automation in the public sector

The increasing use of automation (machine learning) comes with a risk of failures that threatens to erode citizens’ trust in government.

One of the root causes of these failures is putting too much trust in the data that automated systems employ. The importance of avoiding blind trust in data, and of constantly checking data against reality in the public sector, was discussed during the fourth webinar of the CIVICA PhD Seminar Series on Public Sector Digital Transformation, which took place on November 26th with Marijn Janssen.

Public sector organisations are increasingly using big and open linked data (BOLD) combined with machine learning and other forms of Artificial Intelligence (AI) for decision-making. These technologies are being used across sectors, including criminal justice, higher education, employment and credit access. The automated processing of data through machine learning is not a new technology – it has been around for decades. What is new is the growing scale of its application and the complexity of its algorithms, which are prone to risks and failures.

Some of the risks of automation include violations of privacy, discrimination and bias, undesired outcomes, and decisions that adversely affect citizens. A specific example of automation failure discussed during this session of our seminar series was the discrimination against economically disadvantaged students in higher education admissions, caused by algorithms factoring in their parents’ low academic achievement and the reputations of their schools. Many similar examples of failures and biases exist across sectors and contexts.

The problem, however, is not only the complexity that machine learning algorithms add to decision-making, but also the danger of blindly trusting the data those algorithms employ. For citizens to trust the use of automation, organisations must therefore ensure that their data is trustworthy to begin with. Going forward, it is critical for public sector organisations to perform regular ‘reality checks’ rather than blindly trusting data. As keynote speaker Marijn Janssen pointed out, data is only a reflection of reality at one specific point in time and place, and it should not be taken to represent reality in general.
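To make the admissions example concrete, here is a minimal sketch of the kind of check an organisation could run before trusting such data: testing whether a candidate feature is merely a proxy for socio-economic status. The dataset, column names and threshold are hypothetical, chosen purely for illustration.

```python
# Minimal sketch: flag admissions features that act as proxies for a
# sensitive attribute before they reach a model. All names and the
# 0.4 threshold are illustrative assumptions, not a recommended setting.
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, sensitive_col: str,
                        threshold: float = 0.4) -> list:
    """Return features whose correlation with the sensitive attribute
    exceeds the threshold, so that a human can review them."""
    flagged = []
    for col in df.columns:
        if col == sensitive_col:
            continue
        corr = df[col].corr(df[sensitive_col])
        if abs(corr) >= threshold:
            flagged.append(col)
    return flagged

# Hypothetical data: 'school_reputation' closely tracks household income,
# so it is flagged for review rather than silently trusted.
applicants = pd.DataFrame({
    "household_income":  [20, 25, 60, 80, 95],   # in thousands
    "school_reputation": [2, 3, 6, 8, 9],        # rating 1-10
    "entrance_exam":     [70, 72, 69, 71, 70],   # score
})
print(flag_proxy_features(applicants, "household_income"))
# -> ['school_reputation']
```

A flagged feature is not automatically dropped; the point is to force a human review instead of blind trust in whatever the data happens to encode.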

These threats and failures of automation directly affect citizens’ trust in their government. To make data and e-services more trustworthy, data governance needs to be exercised at an appropriate level. Data governance has been defined as ‘the exercise of authority and control (planning, monitoring, and enforcement) over the management of data assets.’ Increasing the trustworthiness of Artificial Intelligence therefore starts with continuously checking, monitoring, questioning and contextualising data. Getting the level of governance right is easier said than done. Nonetheless, it is important to create awareness of these issues and to use generalised guidelines to inform governance policy; a starting point could be the EU Ethics guidelines for trustworthy AI.
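What such a recurring ‘reality check’ might look like in practice is sketched below: before a dataset feeds an automated decision, it is tested for staleness, missing values and implausible values, and the findings are surfaced to a person rather than silently ignored. The rules, column names and thresholds are illustrative assumptions, not prescriptions from the seminar or the guidelines.

```python
# Minimal sketch of a recurring 'reality check' on a dataset. Every rule
# below is an illustrative assumption; real checks would be set per context.
from datetime import datetime, timedelta
import pandas as pd

def reality_check(df: pd.DataFrame, timestamp_col: str,
                  max_age_days: int, bounds: dict) -> list:
    """Return human-readable warnings instead of silently trusting data."""
    warnings = []
    # Data reflects reality at one point in time: flag stale snapshots.
    age = datetime.now() - df[timestamp_col].max()
    if age > timedelta(days=max_age_days):
        warnings.append(f"Newest record is {age.days} days old.")
    # Missing values undermine downstream decisions.
    for col, n_missing in df.isna().sum().items():
        if n_missing:
            warnings.append(f"{col}: {n_missing} missing values.")
    # Out-of-range values suggest the data no longer matches reality.
    for col, (lo, hi) in bounds.items():
        bad = (df[col] < lo) | (df[col] > hi)
        if bad.any():
            warnings.append(f"{col}: {int(bad.sum())} values outside [{lo}, {hi}].")
    return warnings

# Hypothetical usage: a stale snapshot containing an implausible income value.
records = pd.DataFrame({
    "updated": pd.to_datetime(["2020-01-05", "2020-03-20"]),
    "income":  [32000, -500],
})
for w in reality_check(records, "updated", max_age_days=90,
                       bounds={"income": (0, 500_000)}):
    print("WARNING:", w)
```

Checks like these do not replace governance, but they operationalise the ‘planning, monitoring, and enforcement’ that the definition above calls for.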

In sum, although artificial intelligence has been used in the public sector for some time, convincing citizens to accept and trust its various applications and services remains a challenge. An essential part of building that trust is raising awareness of the possible threats and issues, and increasing data knowledge and transparency.

This blog post is part of the CIVICA PhD Seminar Series on Public Sector Digital Transformation, organised by the Hertie School’s Centre for Digital Governance and Bocconi University’s Department of Social and Political Science. The insights highlighted here are based on the fourth seminar session’s discussion of Marijn Janssen’s keynote “Trustworthy Digital Government” and his paper “Data governance: Organizing data for trustworthy Artificial Intelligence”. We would like to thank the participants for sharing their views and ideas.

More literature on this topic: 

  • Janssen, M., & van der Voort, H. (2016). Adaptive governance: Towards a stable, accountable and responsive government. Government Information Quarterly, 33(1), 1-5.
  • Brous, P., & Janssen, M. (2020). Trusted Decision-Making: Data Governance for Creating Trust in Data Science Decision Outcomes. Administrative Sciences, 10(4), 81.
  • Janssen, M., Hartog, M., Matheus, R., Ding, A., & Kuk, G. (2020). Will algorithms blind people? The effect of explainable AI and decision-makers' experience on AI-supported decision-making in government. Social Science Computer Review.