Generative AI is fast becoming the primary way for governments to deliver services and make decisions. But without public approval and acceptance – a “social license” – the use of AI could harm public trust in government. Setting up a framework for accountability will help ensure that AI underpins democratic values rather than undermining them.
Imagine a world where algorithms make life-altering decisions that affect you personally. This is rapidly becoming our reality as artificial intelligence (AI) permeates public administration, raising critical questions about trust and accountability. In 2022, the UK's Department for Work and Pensions faced intense scrutiny when its AI-driven fraud detection system wrongly accused thousands of benefit claimants of cheating. This incident underscores a crucial challenge: how can governments harness AI's power without undermining public trust?
One solution is to establish and maintain a "social license" for AI-driven administrative processes. This requires stakeholders and the general public to accept and approve of AI use in governance in exchange for commitments to responsible practice and shared benefits – in effect, a social contract. Securing this license is crucial for the legitimate use of AI in public service delivery and decision-making.
To achieve this, policymakers must address four key dimensions of AI governance: transparency, incentives, regulation, and participation. Transparency means making algorithmic decision-making understandable to administrators and citizens alike. The UK’s Algorithmic Transparency Recording Standard exemplifies this approach, helping public sector bodies provide clear information about the algorithmic tools they use and why they’re using them. On incentives, aligning institutional priorities with ethical AI development requires performance metrics that reward ethical practices and penalise unethical ones. On regulation, the legal landscape must evolve to accommodate AI-driven processes while preserving due process, as seen in the EU's proposed AI Act.
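To make the transparency dimension concrete, here is a minimal sketch of what a public record for an algorithmic tool might contain, expressed as a Python dictionary. The tool, department, and field names are invented for illustration and do not reproduce the official ATRS schema.

```python
# Hypothetical transparency record for a public-sector algorithmic tool,
# loosely inspired by the kind of information the UK's Algorithmic
# Transparency Recording Standard asks for. All names and fields below
# are illustrative assumptions, not the official ATRS schema.
transparency_record = {
    "tool_name": "Benefit Claim Triage Assistant",   # hypothetical tool
    "owning_body": "Example Department",             # hypothetical owner
    "purpose": "Prioritise incoming benefit claims for manual review",
    "decision_role": "advisory",  # a human caseworker makes the final decision
    "data_sources": ["claim forms", "payment history"],
    "human_oversight": "Every flagged claim is reviewed by a trained officer",
    "appeals_contact": "appeals@example.gov.uk",     # hypothetical address
}
```

The point is not the particular fields but the habit: every algorithmic tool in use should have a public, plain-language record of what it does, what data it draws on, and who remains accountable for its decisions.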
The digital ethics researchers Jakob Mökander and Luciano Floridi propose ethics-based auditing (EBA) – a governance mechanism for assessing whether an entity's behaviour is consistent with moral principles – as a way to operationalise ethical commitments in AI systems and automated decision-making processes. Their 2022 industry case study reveals that the main challenges in implementing EBA mirror classical governance issues, such as harmonising standards across decentralised organisations and driving internal communication and change management.
Bias audits, diverse training data, and explainable AI methods are key
While the challenges highlighted by Mökander and Floridi emphasise the complexities of ethics-based auditing in decentralised organisations, Estonia's success in using AI to analyse patient data and support clinical decision-making within its nationwide electronic health record system offers an encouraging example. But Estonia has a small population and a relatively homogeneous society. Experiences in the US, with its much larger and more diverse population, highlight how context shapes AI adoption in public administration.
Larger countries face more significant challenges in implementing AI systems. The experiences of US cities with predictive policing algorithms illustrate these difficulties. The Los Angeles Police Department, for example, discontinued its predictive policing program in 2020 amid concerns about racial bias and a lack of transparency. This underscores the importance of addressing potential biases and implementing robust oversight mechanisms when deploying AI in diverse urban environments.
To mitigate such biases, public agencies should adopt strategies that include regular bias audits, diverse training data, and explainable AI methods. Mökander and Floridi argue that the US Algorithmic Accountability Act of 2022 offers a pragmatic approach to balancing the benefits and risks of automated decision systems, though further revisions are needed.
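As a rough illustration of what a "regular bias audit" can involve, here is a minimal sketch in Python that applies the four-fifths rule, one common (and contested) heuristic for detecting disparate impact. The data, group labels, and 0.8 threshold are illustrative assumptions, not a prescribed standard.

```python
# Minimal bias-audit sketch: compare how often an automated system flags
# members of different groups for an adverse outcome (e.g. a fraud review),
# and report groups flagged disproportionately often relative to the
# least-flagged group. The 0.8 threshold echoes the "four-fifths rule",
# a heuristic, not a legal or statistical guarantee of fairness.
from collections import defaultdict

def flag_rates(decisions):
    """decisions: iterable of (group, was_flagged) pairs from the system's logs."""
    flagged, totals = defaultdict(int), defaultdict(int)
    for group, was_flagged in decisions:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {group: flagged[group] / totals[group] for group in totals}

def disparate_impact_report(decisions, threshold=0.8):
    """Mark groups whose adverse-outcome rate far exceeds the lowest group's."""
    rates = flag_rates(decisions)
    baseline = min(rates.values())  # rate for the least-flagged group
    return {
        group: {
            "flag_rate": round(rate, 3),
            "ratio_to_baseline": round(baseline / rate, 3) if rate else 1.0,
            "possible_disparity": rate > 0 and baseline / rate < threshold,
        }
        for group, rate in rates.items()
    }

# Toy usage with made-up data: group_b is flagged twice as often as group_a,
# so the audit marks it for closer human investigation.
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]
print(disparate_impact_report(decisions))
```

An audit like this is only a first screen: a flagged disparity is a prompt for human investigation of the underlying data and model, not proof of discrimination by itself.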
Effective AI governance must also address the tension between technical complexity and public accountability. This involves creating AI ethics committees, developing clear guidelines, and engaging stakeholders from diverse backgrounds. The city of Barcelona's AI strategy, for example, outlines seven governing principles for local AI projects, including human supervision, technical robustness, and data privacy.
Balancing innovation with regulation
As AI continues to transform public administration, securing and maintaining a social license is critical for harnessing its potential while also safeguarding democratic values. Policymakers must prioritise transparency, accountability, and public engagement in AI governance. Technologists should focus on developing explainable AI methods and conducting rigorous bias audits. Civil society organisations play a crucial role in mobilising grassroots efforts to push for global norms and influence national regulations.
To put this framework into practice, public agencies need comprehensive AI impact assessments, clear transparency protocols, ethics review boards, regular algorithmic audits, channels for ongoing public feedback, AI literacy training, performance metrics aligned with ethical practices, and regular reviews of governance policies.
However, challenges remain, particularly in the Global South. Many developing countries face a regulatory bottleneck due to a lack of AI-specific legal and institutional frameworks. The "Compute North" versus "Compute South" divide highlighted by Mökander and Floridi underlines the need to consider global disparities in AI governance capabilities. Balancing innovation with regulation is crucial, as overly restrictive policies can stifle progress, while inadequate oversight can lead to unethical practices.
Looking ahead, key areas for further research and policy development include robust frameworks for algorithmic auditing, international standards for AI governance in the public sector, and long-term studies of societal impact. Mechanisms for cross-sector collaboration on AI ethics are also essential. The tension between public engagement and technical expertise warrants careful consideration: striking the right balance between democratic oversight and technical proficiency will be an ongoing challenge.
Emerging technologies such as large language models and quantum computing are poised to further transform public administration. Some experts predict that by 2025 generative AI will be the primary way of delivering government services globally, promising higher quality, greater accuracy, and lower costs. However, this rapid advancement also raises new ethical concerns and governance challenges that governments, policymakers and AI developers must address proactively in order to maintain public trust. Developing a social license for AI in public administration is essential for ensuring that AI-driven governance is ethical, transparent, and accountable.
The time for action is now – only through collaborative, inclusive, and adaptive approaches can we build AI systems that truly serve the public interest and reflect our shared societal values. Ultimately, the goal is not just to implement AI in public administration, but to do so in a way that strengthens the relationship between citizens and their government, fostering a more responsive, efficient, and trustworthy public sector.