The use of algorithmic systems in public service delivery has become the subject of recent criticism and offline protests.
From the unfair use of a grading algorithm in the UK to the errors of a "robodebt" algorithm in Australia, such systems are being challenged because they make citizens more visible to the State, but not the other way around.
Amid the COVID-19 pandemic, the year 2020 saw several instances of citizen activism challenging the role of automated decision-making (ADM) systems in public service. In June, Australian citizen Jenny Cao filed a lawsuit against the federal government over mistakes made by an automated debt notice program. Only two months later, Curtis Parfitt-Ford, an A-level student at a high school in London, created a petition against the UK's educational examination assessment agency over its unfair pandemic grading system. And in September 2020, widespread public outcry shut down the launch of a loosely defined "civility score" pilot application in Suzhou, a city in Eastern China.
Each of the ADM systems at the heart of these protests was initially meant to augment social protections, but ended up causing significant offline harm. While algorithmic bias is not a new phenomenon, there are two distinct aspects to consider in these examples. First, unlike with black box systems where harms may be obscured, the failures of these algorithms were visible to a large number of people, and the harms they caused were tangible. Second, unlike past worker-led protests against for-profit platforms like Uber or retail giants such as Target, these were algorithmic systems managed by the State for the purpose of public service.
These incidents represent a challenge to the so-called "digital welfare state" which has emerged in many countries over recent years. In an attempt to emulate the innovative approaches of private technology companies, digital welfare states can involve algorithm-based predictive policing, electronic voting, or surveillance through facial recognition programs. In India and Kenya, they have even taken the form of national identity systems employing biometric identification. While automation, and thus increased efficiency of public service delivery, may be the driving force behind such digital transformation initiatives, it isn't too far-fetched to imagine a future where administrative decision-making is also delegated to artificial intelligence. This has already been witnessed in New Zealand, where the AI chatbot "SAM" was programmed to simulate citizen interactions during the 2017 elections.
Though AI systems in public service delivery can take a multitude of forms, what they all have in common is that they make citizens more visible to their governments—but not the other way around. A 2016 White House report highlighted that Big Data analytics, if left unchecked to use personal information to determine access to public services in housing, credit, employment, education and the marketplace, could pose a serious threat to civil rights. As Virginia Eubanks, author of "Automating Inequality," observes, one of the great benefits of such tools for states is their veneer of neutrality and objectivity. In practice, however, these systems can be prone to bias and failure, and the resulting exclusion from public services can have severe consequences. For example, the UK's automated social welfare system "Universal Credit" faced criticism in 2019 from the UN Special Rapporteur because a poorly designed algorithm caused citizens to go hungry, fall into debt and experience psychological distress (as outlined in a recent report from Human Rights Watch).
Accountability for such algorithmic systems is therefore essential, as highlighted in a UN report on the use of digital technologies for welfare programs. In the incidents described above in Australia and the UK, the panoptic control of the State was successfully contested only once a larger audience managed to visualise the faulty outcomes that bore tangible social and human costs. However, there may be many such instances where the harms caused are not detectable by the public eye. Meaningful appeals and audit processes need to be written into the policies which allow for such critical computer-generated determinations, rather than leaving citizens to rely on activism and legislative escalation tactics.
Cathy O'Neil, author of "Weapons of Math Destruction," states that algorithms are opinions embedded in code. These opinions, in the form of Big Data processes, tend to codify the past rather than invent the future. Thus, when these opinions find currency in public service delivery, they can significantly impact citizens' lives, whether by jeopardizing students' education or imposing economic burdens on taxpayers. These processes therefore need to uphold fundamental societal values above all else, lest they perpetuate inequalities and hardships even further.
States thus need to devote substantial time and effort to ensuring that their digital welfare models uphold fundamental human rights through appropriate regulation. The design of algorithmic systems in public service requires transparency audits and broad-based inputs into the policymaking processes that determine their adoption. Mark Zuckerberg's famous motto "Move fast and break things" might have been effective in the field of private tech, but, as the recent protests have demonstrated, it is clear that States cannot afford to follow the mantra "compute first, ask questions later."