AI Governance: The ‘regulatory bottleneck’ in the Global South

Countries in the Global South are racing to adopt AI technologies, but too often, discussions on the regulatory and administrative challenges have taken a backseat. AI governance demands deliberation in the Global South, or else it could become an Achilles' heel.  

Artificial intelligence (AI) is being advertised as a solution to complex global challenges such as water, food, and health security, but the debate around AI governance, particularly the regulation of these systems, is not a priority in the developing world. As the number of AI-based applications grows, so does the need to account for the heterogeneity of local populations and for inevitable data drift, yet the critical question of AI regulation remains open. In these discussions, AI's potential solutions outshine the challenges of implementation, such as the need for legitimate institutions and regulatory frameworks to govern these technologies, or for adequate human capacity and infrastructure to monitor their impact on human rights. It is therefore essential to develop an AI governance knowledge framework for the Global South that sets out the key regulatory requirements for the successful adoption of AI systems and technologies.

The vast potential of AI solutions to address entrenched global challenges makes them too enticing for the developing world to pass up. Yet most of the regulation-focussed discussion about research and development (R&D) in AI technologies – algorithm design, data digitisation, digital infrastructure development, and the ethical and responsible use of AI – is happening only in the developed world. For example, all 193 UNESCO member states have adopted the organisation's Recommendation on the Ethics of Artificial Intelligence. However, it is mostly the developed economies, and particularly EU countries, that have spearheaded policy discussions and regulatory debates on the implementation of ethical and responsible AI.

AI Regulation: An Untouched Milestone 

The unethical use of AI systems risks exacerbating existing socioeconomic inequalities in society if the underlying datasets are biased. In addition, the unregulated use of AI applications without proper risk assessments can significantly affect citizens' lives. One potential risk involves the increasing volumes of data collected by these systems, as is currently happening with Real-Time Digital Authentication of Identity (RTDAI) in the Indian state of Telangana, which harnesses the power of facial recognition. The system, currently in its pilot phase, is used to authenticate pensioners and deliver their certificates, drawing on facial recognition and demographic information mapping. In the pilot phase, the self-authentication success rate was 93 percent. The problem is that the AI models and algorithms have not been tested or adapted to accommodate growing volumes of data; on the contrary, it is being suggested that the success rate would reach 96 percent once more data is fed into the algorithms, without analysing whether the algorithms are designed to handle as many as half a million pensioner records. The assumption that more datapoints will generate better and more accurate results for an AI system is baseless without verifiable evidence, and calls for a rigorous approach to AI algorithm development.

A further core issue is potential bias in AI systems, which could drive practices such as censorship, the unjust distribution of financial services, the incorrect identification or prosecution of individuals, and misuse in surveillance. The responsible use of AI and foundational ethical principles such as transparency, accountability, fairness, privacy, and security must be made essential to the adoption of any AI system. The absence of such regulatory and legal AI frameworks multiplies the risks associated with the use of AI. All policy discussions around AI regulatory frameworks must safeguard citizens' rights. For example, any AI regulation requires a specifically trained regulatory workforce that understands the deployment of AI systems and the thinking behind them.

Thus, the fundamental first step towards the adoption of any AI-based system in the public sphere is a robust and tested AI algorithmic framework that incorporates a national data protection regime, puts citizens' fundamental rights such as privacy and the security of datasets first, and addresses accountability as well as the problem of compounding data volumes. Most developing nations have no AI-specific legal, institutional, or administrative regulation in place, and debates remain largely siloed around data governance, data protection, and privacy. A debate specific to AI regulation, covering algorithmic bias and the ethical and responsible use of AI, is still largely missing. The next section presents the key components required for policy discussions around AI.

AI regulatory fundamentals for the Global South 

Governance of AI systems is a complex discussion, and developing nations must invest in basic conceptual and normative frameworks best suited to their individual requirements. Identifying the risks associated with AI and serving the needs of a modern heterogeneous society must be the driving factors in curating and discussing an AI governance framework.

The following investments in capacity building are key to creating a minimum viable AI regulatory framework:

  1. A team of AI system regulators who oversee and understand the overall nature of an AI solution or application, and who can monitor, evaluate, and assess its deployment.

  1. Sector-specific guidelines for best practices in machine learning. These should be adapted by developing countries, and sector-specific partnerships must be created to advance sectoral use of AI solutions. For example, the Good Machine Learning Practice for Medical Device Development: Guiding Principles could serve as a model for the Global South in developing its own guidelines.

  1. An AI ethics committee to weigh the risks and benefits to the local population in the development of AI algorithms and to prevent adverse impacts on minorities. A localised, continuous quality-oversight mechanism is needed throughout the life cycle of the AI application in question.

Furthermore, specific partnerships could help facilitate the developmental agenda of the Global South and support the development of an AI regulatory framework by setting up interoperable processes that underline the baseline benefits of these systems. One such partnership has been spearheaded by the SMART Africa Alliance. This alliance, through a flagship project led by South Africa in collaboration with the German Development Cooperation, is working on an artificial intelligence project with one key objective being to develop AI-ready policy frameworks. As AI requires investment and time in research, technical expertise, and infrastructure, a state must identify and prioritise its medium- and long-term AI goals, and should seek consultancy or advisory services from international development organisations, global philanthropies, global financial institutions, international charities, expert working groups, and tech giants to facilitate the process.

Similar partnerships would allow countries in the Global South to draw cross-sectoral lessons and exchange knowledge with developed nations that have already tested and implemented some form of AI governance framework. Multilateral developmental partnerships with the likes of Singapore, the US, Canada, the EU, and Japan would help integrate multi-layered practices for the ethical and responsible use of AI, and build an understanding of sector-specific AI system requirements.

To help overcome constraints in resources and capacity in understanding the fundamentals of AI governance, regional governance frameworks would be economically relevant for the Global South. A common minimum AI governance architecture that underlines the requirements of a regional bloc would be a good start. In some regions of the Global South, this is already happening and could provide a model for other areas. For example, the South Asian Association for Regional Cooperation (SAARC), the economic and political organisation of eight countries, presents a strong regional body that has created multiple areas of cooperation, and such specialised associations could take ownership of standardising AI practices and processes in their member countries. A partnership model led by regional associations would best serve regional interests and address aspirations aligned with the UN's Sustainable Development Goals (SDGs).

Image: Randall Bruder, source: Unsplash