Blog
02.08.2021

Risk-based approaches to AI governance, Part 1

This is the first blog post in a two-part series on risk-based approaches to AI governance. This part discusses recent advances in AI technologies that have led to new commercial applications with potentially adverse social implications.

Over the last five years, 117 initiatives worldwide have published AI ethics principles. Despite a skewed geographical scope (91 of these initiatives come from Europe and North America), the proliferation of such initiatives is paving the way for a global consensus on AI governance. Notably, the 38 OECD Member States have adopted the OECD AI Recommendation, the G20 has endorsed these principles, and the Global Partnership on AI is operationalising them. UNESCO, furthermore, is developing a Recommendation on the Ethics of AI that 193 countries may adopt in 2021.

An analysis of different principles revealed a high-level consensus around eight themes: (1) privacy, (2) accountability, (3) safety and security, (4) transparency and explainability, (5) fairness and non-discrimination, (6) human control of technology, (7) professional responsibility, and (8) the promotion of human values. However, these ethical principles are criticised for lacking enforcement mechanisms. Companies often commit to AI ethics principles to improve their public image yet follow up little on implementing them, an exercise termed “ethics washing”. Evidence also suggests that knowledge of ethical tenets has little or no effect on whether software engineers factor ethics into the development of their products or services.

Defining principles is certainly essential, but it is only a first step towards ethical AI governance. There is a need for mid-level norms, standards and guidelines at the international level that can inform regional or national regulation and translate principles into practice. This two-part blog series will discuss the need for AI governance to evolve past the “ethics formation” stage through concrete and tangible steps, such as developing technical benchmarks and adopting risk-based regulation.

Recent Advances in AI Technologies
Artificial Intelligence is developing rapidly. The 2021 AI Index report notes four crucial technical advances that hastened the commercialisation of AI technologies:

  • AI-Generated Content: AI systems can generate text, audio and visual content of such high quality that it is difficult for humans to distinguish between synthetic and non-synthetic content.
  • Image Processing: Computer vision has seen immense progress in the past decade and is rapidly being industrialised in applications that include autonomous vehicles.
  • Language Processing: Natural Language Processing (NLP) has advanced such that AI systems with language capabilities now have economic value through live translations, captioning, and virtual voice assistants.
  • Healthcare and Biology: DeepMind’s AlphaFold solved the decades-old protein-folding problem using machine learning techniques.

These technological advances have social implications as well as economic value. The technology for generating synthetic faces, for instance, has improved rapidly: as shown in Figure 1, AI systems produced grainy faces in 2014, but by 2017 they were generating realistic synthetic faces. Such systems have fuelled the proliferation of ‘deepfake’ pornography, which overwhelmingly targets women, and threaten to erode people’s trust in the information and videos they encounter online. Some actors misuse deepfake technology to spread online disinformation, with adverse implications for democracy and political stability. Such developments have made AI governance a pressing matter.

Figure 1: The rapid improvement in AI-generated synthetic faces, 2014–2017 (source: https://arxiv.org/pdf/1802.07228.pdf)

Challenges of AI Governance
These rapid advancements in the field of AI technologies have brought the need for better governance to the forefront. In thinking about AI governance, many governments worldwide are concerned with enacting regulation that does not stifle innovation yet also provides adequate safeguards to protect human rights and fundamental freedoms.

Technology regulation is complicated because, until a technology has been extensively developed and widely used, its impact on society is difficult to predict. Yet once a technology is deeply entrenched and its effects on society are better understood, it becomes much harder to change and regulate. This tension between leaving technology development unimpeded and regulating its adverse implications is termed the “Collingridge dilemma”.

David Collingridge, the author of The Social Control of Technology, notes that when regulatory decisions must be made before a technology’s social impact is known, continuous monitoring can help mitigate unexpected consequences. Collingridge’s guidelines for decision-making under ignorance can inform AI governance as well. These include choosing technology options with (1) low costs of failure, (2) short response times to unanticipated problems, (3) low costs of remedying unintended errors, and (4) cost-effective and efficient monitoring.
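
To make these guidelines a little more tangible, the sketch below scores two hypothetical deployment options for the same AI system against Collingridge’s four criteria. The options, the 1–5 ratings and the equal weighting are all illustrative assumptions, not a method Collingridge himself prescribes.

```python
# Purely illustrative: screening technology options against Collingridge's
# four criteria for decision-making under ignorance. Ratings run from
# 1 (poor) to 5 (good) and are hypothetical.
CRITERIA = (
    "low cost of failure",
    "short response time to unanticipated problems",
    "low cost of remedying unintended errors",
    "cost-effective and efficient monitoring",
)

# Hypothetical deployment choices for the same underlying AI system.
options = {
    "limited pilot with human review": (4, 5, 4, 4),
    "full automated rollout": (2, 2, 2, 3),
}

for name, ratings in options.items():
    score = sum(ratings) / len(ratings)  # equal weights, for illustration only
    print(f"{name}: average score {score:.2f}")
    for criterion, rating in zip(CRITERIA, ratings):
        print(f"  {criterion}: {rating}")
```

On this toy scoring, the limited pilot dominates the full rollout on every criterion, which is the kind of comparison Collingridge’s guidelines are meant to make explicit before a technology becomes entrenched.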

Technical benchmarks for evaluating AI systems
Quantitative benchmarks are also necessary to address the ethical problems related to bias, discrimination, lack of transparency, and accountability in algorithmic decision-making. The Institute of Electrical and Electronics Engineers (IEEE), through its Global Initiative on Ethics of Autonomous and Intelligent Systems, is developing technical standards to address bias in AI systems. Similarly, in the United States, the National Institute of Standards and Technology (NIST) is developing standards for explainable AI based on principles that call for AI systems to provide reasons for their outputs in a manner understandable to individual users, to explain the process used for generating those outputs, and to deliver a decision only when the system is sufficiently confident in it.
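
The last of these principles, that a system should withhold its output when it is not confident enough, can be illustrated with a small sketch. The classifier wrapper below abstains whenever its predicted probability falls under a threshold; the 0.9 cut-off and the wrapper itself are illustrative assumptions, not part of any NIST specification.

```python
# A minimal sketch of confidence-gated decisions: the model returns a label
# only when sufficiently confident, and abstains otherwise.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

def predict_or_abstain(model, X, threshold=0.9):
    """Return class labels, or -1 where confidence falls below the threshold."""
    proba = model.predict_proba(X)
    labels = proba.argmax(axis=1)
    labels[proba.max(axis=1) < threshold] = -1  # -1 marks an abstention
    return labels

preds = predict_or_abstain(model, X_test)
print(f"abstained on {np.mean(preds == -1):.0%} of test cases")
```

In a deployed system, abstentions would be routed to a human reviewer rather than silently dropped, which is what connects this mechanism to accountability rather than mere accuracy.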

Going back to our earlier example, there is already significant progress in introducing benchmarks for the regulation of facial recognition technology. Facial recognition systems have a large commercial market and are used for various tasks, including law enforcement and border control. These tasks include verifying visa photos, matching photos against criminal databases, and detecting and removing child abuse images online.

However, facial recognition systems have been the cause of significant concern due to their high error rates in detecting faces and their potential to impinge on human rights. Biases in such systems have adverse consequences for individuals, such as being denied entry at borders or being wrongfully incarcerated. In the United States, NIST’s Face Recognition Vendor Test (FRVT) provides a benchmark for comparing the performance of different commercially available facial recognition systems by running their algorithms on different image datasets.
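
To give a flavour of what such a benchmark measures, the sketch below computes two standard face-verification error rates, the false match rate (FMR) and the false non-match rate (FNMR), at different decision thresholds. The similarity scores here are synthetic stand-ins drawn from toy distributions; a real FRVT-style evaluation would use a vendor’s comparison scores on curated image datasets.

```python
# A simplified sketch of error-rate benchmarking for face verification.
import numpy as np

rng = np.random.default_rng(0)
# Similarity scores for image pairs of the SAME person (genuine pairs)
# and of DIFFERENT people (impostor pairs), drawn from toy distributions.
genuine_scores = rng.normal(loc=0.8, scale=0.1, size=10_000)
impostor_scores = rng.normal(loc=0.3, scale=0.1, size=10_000)

def fmr_fnmr(genuine, impostor, threshold):
    """FMR: impostor pairs wrongly accepted; FNMR: genuine pairs wrongly rejected."""
    fmr = np.mean(impostor >= threshold)
    fnmr = np.mean(genuine < threshold)
    return fmr, fnmr

for t in (0.5, 0.6, 0.7):
    fmr, fnmr = fmr_fnmr(genuine_scores, impostor_scores, t)
    print(f"threshold={t:.1f}  FMR={fmr:.4f}  FNMR={fnmr:.4f}")
```

Raising the threshold trades false matches for false non-matches, and benchmarks such as FRVT report these rates separately, including across demographic groups, which is precisely where biased error rates become visible.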

Defining benchmarks for ethical principles is an important step. However, in line with the Collingridge dilemma, it needs to be complemented by risk assessments to mitigate adverse social impacts. Risk assessments would allow for risk-proportionate AI regulation instead of a reliance on blanket rules that may hinder technological development with unnecessary compliance burdens. The next blog post in this two-part series will engage with some potential risk-based approaches to AI regulation.
 

This is a revised version of a post first published on the Centre for Communication Governance, National Law University Delhi’s blog. Its content is an outcome of ongoing research at the Centre for Communication Governance on AI and emerging tech.

The author would like to thank Jhalak Kakkar, Nidhi Singh and Moana Packo for their helpful feedback.

Image: CC Maxim Hopman, source: Unsplash