Risk-based approaches to AI governance, Part 2

This is the second post in a two-part series on risk-based approaches to AI governance. It engages with potential risk-based approaches to AI regulation.

The previous post discussed the recent advancements in AI technologies that have led to new commercial applications with potentially adverse social implications. It also considered the challenges of AI governance and discussed the role of technical benchmarks for evaluating AI systems. This post will explore the different AI risk assessment approaches and will conclude with a discussion of what the next steps for national AI governance initiatives might entail.

AI Risk Assessment Frameworks
Risk assessments can help identify which AI systems need to be regulated. Risk is determined by the severity of the impact of a problem and the probability of its occurrence. For example, the risk profile of a facial recognition system that unlocks a personal mobile phone would differ from that of a facial recognition system used by law enforcement. The former may be beneficial overall, as it adds a privacy-protecting security feature. In contrast, the latter could have chilling effects on freedom of expression and privacy. The risk score for facial recognition systems is therefore relative to their use and deployment context. This section will discuss some of the approaches followed by various bodies in developing risk assessment frameworks for AI systems.
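As an illustrative sketch only (not drawn from any of the frameworks discussed below), the severity-times-probability idea can be made concrete; the 1-5 scales and the example values here are assumptions chosen to mirror the facial recognition example:

```python
# Illustrative only: a toy severity-times-likelihood risk score; the 1-5
# scales and the example values are assumptions, not from any framework.

def risk_score(severity: int, likelihood: int) -> int:
    """Risk as the product of impact severity and probability (1-5 scales)."""
    return severity * likelihood

# The same underlying technology can score differently by deployment context.
phone_unlock = risk_score(severity=2, likelihood=2)     # privacy-protecting use
law_enforcement = risk_score(severity=5, likelihood=4)  # public surveillance use

assert law_enforcement > phone_unlock
```

The point of the sketch is not the numbers themselves but that the score is a function of context, so the same technology lands in different regulatory tiers depending on where it is deployed.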

The EU
The European Commission’s legislative proposal on Artificial Intelligence classifies AI systems by four levels of risk and outlines risk proportionate regulatory requirements. The categories proposed by the EU include:

  1. Unacceptable Risk: The EC has proposed a ban on applications like social credit scoring systems and real-time remote facial recognition systems in public spaces.
  2. High Risk: AI systems that could adversely affect the safety or fundamental rights of people are categorised as high-risk. The proposal prescribes mandatory requirements for such systems.
  3. Limited Risk: When the risks associated with the AI systems are limited, only transparency requirements are prescribed.
  4. Minimal Risk: When the risk level is identified as minimal, there are no mandatory requirements, but the developers of such AI systems may voluntarily choose to follow industry standards.
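The four tiers above amount to a lookup from risk level to regulatory response. A minimal sketch, where the obligation descriptions are paraphrases of the proposal rather than statutory language:

```python
# Illustrative mapping of the EU proposal's four risk tiers to the broad
# regulatory response described above; the wording is a paraphrase.
EU_RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social credit scoring, real-time "
                    "remote facial recognition in public spaces)",
    "high": "mandatory requirements",
    "limited": "transparency requirements only",
    "minimal": "no mandatory requirements; voluntary industry standards",
}

def regulatory_response(tier: str) -> str:
    """Return the broad regulatory response for a given risk tier."""
    return EU_RISK_TIERS[tier]
```

The design choice the proposal makes is that obligations scale with the tier, so most AI systems (minimal risk) face no mandatory requirements at all.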

In Germany, the Data Ethics Commission has proposed a five-layer criticality pyramid, ranging from no regulation at the lowest criticality level to a complete ban at the highest (see Figure 2). The EU approach is similar to the German approach but differs in the number of levels.

Figure 2: Criticality pyramid and risk-adapted regulatory system for the use of algorithmic systems (Source: Opinion of the Data Ethics Commission)

The UK
The AI Barometer Report of the Centre for Data Ethics and Innovation identifies some common risks associated with AI systems and some sector-specific risks. The common risks include:

  1. Algorithmic bias and discrimination
  2. Lack of explainability of AI systems
  3. Regulatory capacity of the State
  4. Breaches of data privacy due to failures in obtaining user consent
  5. Loss of public trust in institutions due to problematic AI and data use

The report identified that the severity of common risks varies across different sectors like criminal justice, financial services, health & social care, digital & social media, and energy & utilities. For example, algorithmic bias leading to discrimination is considered high-risk in criminal justice, financial services, health & social care, and digital & social media, but medium risk in energy & utilities. The risk assignment, in this case, was done through expert discussions. The UK's approach has a strong sector-specific focus: the overall sector-level risk is ascertained based on a combination of multiple AI risk criteria.

The OECD
The OECD Network of Experts' working group on AI classification has developed a preliminary classification of AI systems along four dimensions:

  1. Context includes stakeholders that deploy an AI system, the stakeholders impacted by its use and the sector in which an AI system is deployed.
  2. Data and inputs to an AI system influence the system's outputs based on the data classifiers used, the source of the data, its structure, scale, and how it was collected.
  3. The type of algorithms used in AI systems has implications for transparency, explainability, autonomy and privacy.
  4. The kind of task to be performed and the type of output expected range from forecasting and content personalisation to the detection and recognition of voice or images.

Applying this classification framework to different cases, from facial recognition systems and medical devices to autonomous vehicles, allows us to understand the risks under each dimension and design appropriate regulation. For autonomous vehicles, the transportation context and the significant risk of accidents raise the risk associated with their AI systems; they are therefore considered a high-risk category requiring robust regulatory oversight.
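The four dimensions can be thought of as a structured record that is filled in per use case. A minimal sketch, assuming hypothetical field names and example values (the OECD framework does not prescribe a data format):

```python
from dataclasses import dataclass

# Hypothetical encoding of the OECD working group's four dimensions; the
# field names and the example values are illustrative assumptions.

@dataclass
class AIClassification:
    context: str      # deploying and impacted stakeholders, and the sector
    data_inputs: str  # data source, structure, scale, collection method
    model_type: str   # algorithm family and its transparency implications
    task_output: str  # forecasting, personalisation, detection, etc.

autonomous_vehicle = AIClassification(
    context="transportation; significant accident risk",
    data_inputs="real-time sensor streams",
    model_type="deep neural networks (limited explainability)",
    task_output="object detection and driving decisions",
)
```

Filling in such a record for each application makes the per-dimension risks explicit before any overall tier is assigned.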

Next steps in Risk-Adaptive Regulation for AI
The four approaches to risk assessment discussed above are systematic attempts to understand AI-related risks and develop a foundation for downstream regulation that can address risks without being overly prescriptive. With these examples in mind, national level initiatives could improve their AI governance by focusing on the following:

  1. AI Benchmarking: Technical benchmarks for assessing the performance of AI systems against AI ethics principles in different contexts need continuous development and updating.
  2. Risk Assessments of AI applications: Risk assessments of AI systems require the development of use cases for different AI applications under different combinations of contexts, data and inputs, AI models and outputs.
  3. Systemic Risk Assessments: There is a need for systemic risk assessment in contexts where AI systems interact with one another. For example, in financial markets, different AI algorithms interact with each other, and in certain situations, their interactions could cascade into a market crash.

Once AI risks are better understood, proportional regulatory approaches should be developed and subjected to Regulatory Impact Analysis (RIA). The OECD defines RIA as a "systematic approach to critically assessing the positive and negative effects of proposed and existing regulations and non-regulatory alternatives". RIAs can guide governments in understanding whether proposed regulations are effective and efficient in achieving the desired objective. Such impact assessments are good regulatory practice and will become increasingly relevant as more countries work towards developing their own national AI legislation.

Given the globalised nature of different AI services and products, countries should also develop their national level regulatory approaches to AI in conversation with one another. Importantly, these dialogues at the global and national level must be multistakeholder driven to ensure that different perspectives inform any ensuing regulation. Collectivised knowledge and coordination will lead to overall benefits by ensuring AI develops in a manner that is both ethically aligned and provides a stable environment for innovation and interoperability.


This is a revised version of a post first published on the Centre for Communication Governance, National Law University Delhi’s blog. Its content is an outcome of ongoing research at the Centre for Communication Governance on AI and emerging tech.

The author would like to thank Jhalak Kakkar, Nidhi Singh and Moana Packo for their helpful feedback.

Image: CC Sašo Tušar, source: Unsplash