Event highlight
03.05.2021

Panel launches UK bias study in Germany, discusses the responsible use of algorithms in the public sector

Panelists examined the recommendations of the Centre for Data Ethics and Innovation’s publication on what role the public sector can or should play in tackling the harms, and promoting the benefits, of algorithmic systems.

On April 21st, the Centre for Digital Governance, the Centre for Data Ethics and Innovation (CDEI) and the British Embassy in Berlin co-hosted a presentation and panel discussion on the findings of the CDEI’s review into harms caused by bias in algorithmic decision-making.

For this event, Lara Macdonald (Senior Policy Advisor, CDEI) presented key recommendations from the report. Panelists included Roger Taylor (Chair, CDEI), Julia Gundlach (Project Manager, Ethics of Algorithms Project, Bertelsmann Stiftung) and Judith Peterka* (Analyst, Policy Lab Digital, Work & Society, Federal Ministry of Labour and Social Affairs). Hertie School Professor of Ethics and Technology Joanna Bryson moderated the panel discussion, and Frances Wood (Regional Director, Science & Innovation Network | Europe, Russia, Israel and Turkey, based at the British Embassy Berlin) introduced the event.

The CDEI report primarily examines algorithmic decision-making in four areas: two from the public sector, policing and local government, and two from the private sector, financial services and recruitment. The report explores how bias can enter algorithmic systems, for example through unrepresentative training data or through data that, while accurate, reflects historical biases.

Of the report’s many recommendations, Lara Macdonald highlighted the need for governments to support programmes that increase diversity in the technology sector and for the UK Government to introduce a mandatory transparency obligation on public sector use of algorithms. Rather than proposing a new regulatory authority to specifically oversee algorithmic decision-making, the report suggests drawing on, and clarifying where necessary, existing digital legislation, and building the competencies of existing sector-specific regulators to address the new challenges brought about by AI.

Panelists supported the conclusion that existing regulatory agencies should strengthen their capacity to oversee the use of algorithms in decision-making processes. The panel observed that as AI becomes ubiquitous, this competence needs to be widely distributed.

Roger Taylor noted that when regulating algorithmic decision-making, the emphasis should be placed on the decision-making itself, as the goal of reducing bias is the same whether or not an algorithm is used. Since the UK already has a horizontal regulatory body examining bias (the Equality and Human Rights Commission), an understanding of the context in which algorithmic decision-making is used matters more to AI regulation than the technical knowledge a new horizontal body could bring.

Providing insight into the use of algorithms in the public sector, Julia Gundlach emphasised the need to operationalise the values society wants to see in algorithmic decision-making; without clear definitions of these concepts, there is a high degree of variability between regulators. She pointed out that even basic algorithmic decision-making, such as allocating children to childcare facilities, has been contested in Germany, even though – or possibly even because – such systems foster fairness and inclusivity. In these cases, algorithmic systems are not so much a technical problem as a political and social one, since they make the underlying procedures more transparent. Roger Taylor added that international agreement is still needed on the mechanisms used to measure how algorithmic systems behave.

In the German context, panelist Judith Peterka shared that, according to data from the German AI Observatory, only 21% of AI talent in Germany is female, and that diversity in the workforce is essential to mitigating bias in algorithmic decision-making. She also noted that citizens often reach out to the labour and social administration when they are in crisis – after losing a job or becoming disabled, for example – and may share very sensitive personal information. Algorithmic systems must therefore maintain high data protection standards and high levels of transparency. They must also be embedded in human systems, supporting the staff in these departments who remain the point of human contact.

* Disclaimer: Judith Peterka expressed her personal views during this panel discussion.


If you would like to learn more about the panelists’ work on this issue, take a look at the following resources:

●    ExamAI - Auditing AI
●    AI indicators
●    “Automating Society” (2021)
●    CDEI blog posts on AI Assurance:
○    The need for effective AI assurance
○    User needs for AI assurance
○    Types of assurance in AI and the role of standards