Brief
18.01.2024

How public organisations can use AI in anti-corruption: What we know so far and why we need to learn more about it

In light of the growing uptake of AI systems in the public sector, former visiting PhD scholar Carolina Gerli reviews the state of the art on the potential and challenges of AI in public organisations to tackle corruption and proposes pathways for future research on the topic.

The problem
It is no secret that corruption constitutes a severe threat to democracy. Corrupt practices – such as bribery, embezzlement and fraud – lead to economic losses, public and private sector dysfunctionality, and inequality, thus triggering a vicious cycle that negatively affects society and erodes public trust. What’s more, corruption mirrors the complexity of our evolving and intricately connected global landscape, adapting through increasingly entangled forms. The recent Qatargate scandal in the EU Parliament is an illustrative example, as it sheds light on the intricate interplay between bribes, fraud, and money laundering. That is why we need innovative ways to curb corruption.

AI-ACTs as a solution
Due to the evolving complexity of corrupt practices, using Artificial Intelligence (AI) in the public sector to combat corruption is gaining momentum. Indeed, public organisations are experiencing a real rush to AI. Their interest in experimenting with and adopting AI systems to improve their daily work is growing because AI streamlines operations and optimises information flows, thereby achieving greater efficiency and effectiveness. AI anti-corruption technologies (AI-ACTs) can be considered socio-technical assemblages based on the use of AI techniques; their ultimate purpose is to curb corruption, while their more immediate aim is to address various types of corrupt behaviour, ranging from petty corruption to grand corruption (Mattoni et al., forthcoming). AI-ACTs allow for the fast processing of large volumes of data and thus represent a game changer in anti-corruption: thanks to their features, these technologies can promote transparency, reduce the discretion of public officials, and mediate government-citizen interactions.

Research shows that AI can prove helpful both as a preventive tool in the fight against corruption and as a detective tool for misconduct. As a preventive tool, AI can predict corruption patterns with considerable accuracy. For instance, scholars have used a neural network approach to develop an early warning system that predicts corruption from political and economic factors (e.g., economic growth, political party endurance). Recent studies have also shown that Machine Learning (ML) can successfully predict conflicts of interest in public procurement and cartel behaviours. As a detective tool, AI can detect anomalies that are symptoms of corruption and related phenomena. While AI-ACTs were initially used only to detect fraud, scholars have increasingly deployed them in public procurement. Amongst others, studies show that deep image features can detect fake suppliers, and ML models can successfully detect collusive interactions between private companies and public authorities, as well as other corruption red flags.
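To make this more concrete, here is a minimal, illustrative sketch of the kind of unsupervised anomaly detection described above – not a reconstruction of any cited system. It uses scikit-learn's IsolationForest on synthetic procurement records, and the features (number of bidders, length of the bidding window, price-to-estimate ratio) are hypothetical stand-ins for the red flags the literature mentions.

```python
# Minimal, illustrative sketch of anomaly-based red-flag detection on
# procurement data. The features and values below are synthetic and
# hypothetical, not drawn from any of the studies cited above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic tenders: [number of bidders, bidding window in days, price/estimate ratio]
normal = np.column_stack([
    rng.integers(3, 12, 500),      # healthy competition
    rng.integers(20, 60, 500),     # standard bidding windows
    rng.normal(1.0, 0.05, 500),    # awards close to the estimate
])
suspicious = np.array([
    [1, 5, 1.45],                  # single bidder, rushed window, inflated price
    [2, 7, 1.38],
])
tenders = np.vstack([normal, suspicious])

# Unsupervised detection: no labelled corruption cases are needed, which
# matters because confirmed cases are rare and unevenly recorded.
model = IsolationForest(contamination=0.01, random_state=0).fit(tenders)
flags = model.predict(tenders)     # -1 = anomalous, 1 = normal

# Flagged indices are candidates for human review, not verdicts.
print("Tenders flagged for review:", np.where(flags == -1)[0])
```

In line with the "partner, not mind" view discussed below, a tool like this would only rank tenders for human auditors; it cannot establish wrongdoing.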

AI-ACTs in practice
Encouraging results in academic research have aroused the interest of policymakers. In recent years, experiments with AI-ACTs in public organisations have been growing both within and beyond the EU. In Brazil, applications of this kind have been proliferating. For instance, in 2015, the Office of the Comptroller General created “Alice”, a bot that mixes data mining, ML, and regular expression techniques to help curb corruption in public procurement, embezzlement, graft, and anti-competitive practices by analysing tenders, bid submissions, and contracts. Again in Brazil, the World Bank piloted the Governance Risk Assessment System (GRAS) tool – launched in November 2023 – which uses advanced data analytics to improve the detection of fraud, corruption, and collusion risks in government contracting. In China, public authorities developed the “Zero Trust” system, in which ML was used to predict the risk of public officials engaging in corrupt practices; unfortunately, the system was allegedly abandoned for being too efficient (Chen, 2019).
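As a rough illustration of the rule-based layer that a bot like Alice combines with ML, the toy sketch below scans tender text for clauses commonly treated as red flags. The patterns are hypothetical examples, not the actual rules used by the Comptroller General's bot.

```python
# Toy illustration of the regex layer a tool like "Alice" pairs with ML:
# scanning tender text for clauses treated as red flags in the
# anti-corruption literature. All patterns below are hypothetical.
import re

RED_FLAG_PATTERNS = {
    "brand_lock_in": re.compile(r"\bbrand\s+\w+\s+only\b", re.IGNORECASE),
    "rushed_deadline": re.compile(r"\bwithin\s+(24|48)\s+hours\b", re.IGNORECASE),
    "sole_source": re.compile(r"\b(sole|single)\s+supplier\b", re.IGNORECASE),
}

def scan_tender(text: str) -> list[str]:
    """Return the names of all red-flag rules the tender text matches."""
    return [name for name, pattern in RED_FLAG_PATTERNS.items()
            if pattern.search(text)]

tender = ("Bids must be submitted within 48 hours. "
          "Only products of brand Acme only will be accepted; "
          "a single supplier will be selected without competition.")
print(scan_tender(tender))  # ['brand_lock_in', 'rushed_deadline', 'sole_source']
```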

Within the EU, public organisations have been experimenting with and adopting AI-ACTs at the supranational, transnational, and national levels. At the supranational level, the European Anti-Fraud Office (OLAF) claimed to use natural language processing to identify potentially suspicious language in email exchanges by combining keywords that might signal corruption red flags. At the transnational level, the Datacros tool, developed by the Transcrime research centre, has been adopted by public authorities in several countries (e.g., Romania, France, Lithuania) to conduct risk assessments and detect anomalies in firms’ ownership structures that can flag risks of collusion, corruption, and money laundering. Lastly, one successful example at the national level is the ProZorro system in Ukraine. Launched as an e-procurement system in 2016 by the Ukrainian government, the tool was enhanced in partnership with international organisations, businesses, and civil society organisations, creating an AI monitoring system that helps detect violations in public procurement data and prevents the misuse of public funds. By contrast, a national application with controversial outcomes is SyRI, an algorithm used by the Dutch government to combat social welfare fraud by cross-referencing citizens’ personal data; it turned out, however, that SyRI was deployed primarily in low-income neighbourhoods and exacerbated bias and discrimination, which in 2020 led the District Court of The Hague to order its immediate halt.
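OLAF has not disclosed how its screening works, so the sketch below only illustrates the keyword-combination idea in the simplest possible terms; the risk categories and terms are hypothetical, and a production system would rely on far richer NLP than substring matching.

```python
# Minimal sketch of keyword-combination screening for emails. The
# categories and terms are hypothetical; OLAF's actual method is not
# public. Single keywords are weak signals, so an email is flagged
# only when terms from at least two risk categories co-occur.
RISK_TERMS = {
    "payment": {"cash", "commission", "fee", "transfer"},
    "secrecy": {"confidential", "off the record", "delete this", "between us"},
}

def flag_email(body: str) -> bool:
    """Flag an email when keywords from two or more categories co-occur."""
    text = body.lower()
    hit_categories = {category for category, terms in RISK_TERMS.items()
                      if any(term in text for term in terms)}
    return len(hit_categories) >= 2

emails = [
    "Please find attached the invoice for the consulting fee.",
    "The commission will be paid in cash - keep this between us.",
]
for email in emails:
    print(flag_email(email), "-", email)  # False, then True
```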

Challenges and risks
Nevertheless, the use of AI in public organisations comes with interdisciplinary challenges, which can lead to severe risks if underestimated. First, AI systems have minimum technical requirements – above all, access to large amounts of reliable and standardised data – and demand organisational resources, such as funding and expertise; these conditions should not be taken for granted in the public sector. Second, policies and regulations are needed to ensure AI is procured, designed, developed, and used in a responsible manner; AI regulation is, however, still in its early stages. Third, AI tends to be context-specific: how it is designed and implemented may reflect different socio-cultural backgrounds. Finally, by its very nature, AI challenges core ethical notions, such as human intervention, moral responsibility, and the trade-off between private and public interests. If public organisations overlook these challenges, they run the risks associated with the misuse of AI. Not only can the design, development, and implementation of AI-ACTs be corrupted themselves (especially in autocracies), but the need for these technologies in public organisations also constitutes a profitable business opportunity for private-sector IT actors.

Current debate over AI-ACTs
Due to the challenges surrounding the use of AI in the public sector, recent experiments with AI-ACTs in public organisations have sparked debate over the role of AI in anti-corruption. Generally speaking, scholars agree that AI can be a “positive shock” (Colonnelli et al., 2020) to the monitoring capacity of public organisations, especially concerning risk assessment and third-party due diligence. However, a broad consensus exists on the auxiliary role of AI in the fight against corruption: AI should complement existing anti-corruption efforts rather than replace them entirely. As Etzioni and Etzioni (2017) underline, AI systems should be “partners” rather than “minds”. Furthermore, scholars have argued that a critical success factor for AI-ACTs is the overall ecosystem in which they are designed, developed, and implemented, defined by the data input, the design of the algorithms, and the political and socio-cultural institutions in place.

Yet, the current theoretical debate on the topic suffers from scant, outdated, and fragmented information about existing AI-ACTs within the EU. Information about these tools is difficult to retrieve from official and unofficial sources, and details about their design, development, and implementation processes are almost entirely absent. Unavoidably, this theoretical and empirical gap increases the risk that public organisations adopt AI to curb corruption guided by the hopes of techno-solutionism rather than by empirical evidence.

Future steps
Taking stock of the state of the art is essential to understanding how AI can successfully contribute to the fight against corruption and related phenomena. Looking ahead, this blog post envisions two research pathways: one for immediate empirical investigation and another for long-term theoretical exploration. In the short term, sound empirical research must be conducted on how public organisations experiment with and adopt AI-ACTs. This will shed light on the drivers and roadblocks behind AI-ACT adoption and the changes that occur once these tools are in place. Over the long run, policymaking should formulate guidance on how to adopt AI to curb corruption responsibly, building upon the theoretical and empirical knowledge of AI-ACTs. To do that, research must delve into the possible side effects that may emerge once these tools are in place. What if public organisations’ outsourcing of AI-ACTs to private IT companies greases the wheels of further corruption? Undoubtedly, the relationship between AI and corruption calls for careful consideration in the years to come.

Teaser photo by champpixs on iStockphoto.