Why the Public Sector can't do without Explainable and Responsible AI
The public sector stands at a turning point with the rapid rise of artificial intelligence. As AI systems increasingly shape how public services are delivered, the question is not whether we use AI, but how we do so responsibly. This article outlines the importance of building an ethical infrastructure around AI, with a central role for explainable AI. By focusing on transparency, accountability, and monitoring, public organizations can ensure that AI serves democratic values and reinforces trust between government and society.

The public sector is undergoing a major transformation due to the growing adoption of artificial intelligence (AI). AI is becoming increasingly prominent in public service delivery and internal operations: it is used to optimize processes, to support data-driven policymaking, as AI assistants for citizen support, and as machine learning models that identify and predict risks. In his recent book Our Artificial Future, AI ethicist Joris Krijger describes this as a “churning artificial current” in which society finds itself. The key question he raises is: “What do we want from AI, and what does AI want from us?” That question is highly relevant now that advanced AI systems are set to transform public services and government operations.
These “prediction machines” (AI systems) cannot be understood outside the social and political context in which they are developed and deployed: think, for example, of the ethical risks to stakeholders once a system is in use, or of the invisible human labor that precedes ready-to-use AI. Precisely for that reason, Krijger argues, we must actively shape the “AI society” we are heading toward. We can and must confront the ethical risks of AI use: to maximize its benefits in service of citizens and public values, to protect those who face new risks (citizens and consumers), and to hold those who benefit (companies and institutions) accountable.
Krijger therefore advocates building and strengthening the “ethical infrastructure” within organizations that enables “moral resilience.” What does this mean? It means shaping internal processes and the organizational ecosystem in such a way that responsible AI has a permanent place in them. This includes:
- Training AI-literate staff who can recognize ethical dilemmas;
- Establishing communication channels around AI;
- Involving the right stakeholders in the evaluation and development of AI systems;
- And most importantly for Krijger: publicly justifying ethical choices around AI deployment so that individuals and organizations can hold one another accountable.
For public institutions, the responsible implementation of AI and the design of procedures to guide and enforce responsible use are not optional — they are a democratic obligation.
The need for Explainable AI in the Public Sector
One of the fundamental issues this “churning artificial current” raises concerns transparency, accountability, and trust. How can we deploy AI systems if we cannot understand how they work or be accountable for their outcomes? In this context, explainable AI becomes a critical success factor in building the ethical infrastructure to support responsible AI implementation.
Public institutions are responsible for upholding legality, equality, and legitimacy in their services. AI systems operating without adequate accountability or oversight can undermine these core values and damage the social contract between citizens and government. That’s why it is essential to explain what AI systems do and how they are embedded in service delivery and internal operations.
Machine learning models — whether task-specific deep learning models or large language models (LLMs) such as ChatGPT or Claude — are complex and inherently imperfect. When can we trust AI outputs? What do we do when errors occur? And how can we explain the functioning of these systems to stakeholders in an understandable way?
The field of “explainable AI” addresses these challenges by offering insight into how AI systems arrive at specific outcomes. This must go beyond technical transparency, take into account the intended audience, and dismantle the illusion of objectivity in AI systems.
Highberg’s three principles of Explainable AI
According to the AI specialists at Highberg, explainable AI — as part of an organization’s ethical infrastructure — should be based on three practical principles:
1. Transparency
This means providing understandable access to basic information about the system, the logic it follows or the outcomes it generates, and how these relate to the system’s inputs. For public institutions, transparency is essential to fulfill their duty to inform stakeholders. It allows organizations to give stakeholders, from individual citizens to regulators, insight into how AI works, and it makes the rationale for using AI more comprehensible.
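As a purely illustrative sketch of what such transparency can look like at the system level (the toy model, feature names, and weights below are assumptions, not taken from any real public-sector system), an outcome can be accompanied by a breakdown of how each input contributed to it:

```python
# Illustrative sketch only: a toy linear "risk score" whose outcome can be
# explained by listing each input's contribution. Feature names and weights
# are hypothetical, not taken from any real public-sector system.

HYPOTHETICAL_WEIGHTS = {
    "months_since_last_contact": 0.02,
    "number_of_open_cases": 0.15,
    "missed_appointments": 0.30,
}

def score_with_explanation(applicant: dict) -> tuple[float, list[str]]:
    """Return the model score plus a human-readable contribution per feature."""
    contributions = {
        name: weight * applicant.get(name, 0.0)
        for name, weight in HYPOTHETICAL_WEIGHTS.items()
    }
    score = sum(contributions.values())
    explanation = [
        f"{name}: {value:+.2f} (weight {HYPOTHETICAL_WEIGHTS[name]}, input {applicant.get(name, 0.0)})"
        for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    return score, explanation

score, explanation = score_with_explanation(
    {"months_since_last_contact": 3, "number_of_open_cases": 2, "missed_appointments": 1}
)
print(f"score = {score:.2f}")
for line in explanation:
    print(" ", line)
```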
2. Accountability
Transparency alone does not build trust — in fact, an overload of information can cause distrust. That’s why proactive accountability is required. Organizations must justify how decisions stem from AI outcomes. Public organizations, in particular, must take responsibility for the consequences of system results. This principle enables democratic and human oversight over automated processes and ensures public services remain subject to internal and external checks that can enforce responsible use.
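One way to support such accountability in practice, sketched here under our own assumptions rather than as a prescribed standard, is to record every AI-supported decision together with the model output, the human decision, and its justification, so that internal and external reviewers can later trace how AI outcomes were used:

```python
# Illustrative sketch: an auditable record of AI-supported decisions.
# Field names and the storage approach (a JSON-lines file) are assumptions,
# not a prescribed standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str
    model_output: str     # what the AI system produced
    human_decision: str   # what the organization actually decided
    justification: str    # why the decision does (not) follow the output
    decided_by: str       # accountable official or team
    timestamp: str

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record so internal and external reviewers can audit it later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record), ensure_ascii=False) + "\n")

log_decision(DecisionRecord(
    case_id="2024-00123",
    model_version="risk-model-0.3",
    model_output="elevated risk",
    human_decision="no action; request additional documents first",
    justification="Model flag based on outdated address data; verified manually.",
    decided_by="team-benefits-review",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```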
3. Monitoring
Even with transparency and accountability, how can we ensure things don’t go wrong? And how do we deal with system failures? That’s why the third principle is monitoring. Monitoring involves continuously analyzing system outputs and related decisions so that organizations can detect, mitigate, and prevent undesirable effects like bias and discrimination. For public organizations, this is vital to safeguard equal treatment and avoid systematic disadvantage of specific groups.
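As a minimal illustration of such a recurring check (the metric, the 10-percentage-point tolerance, and the group labels are assumptions for this sketch, not a recommended norm), positive-outcome rates can be compared across groups and escalated for human review when the disparity grows too large:

```python
# Illustrative sketch: a recurring fairness check on logged system outputs.
# The 0.10 tolerance and the group labels are assumptions for this example.
from collections import defaultdict

def positive_rate_by_group(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, received_positive_outcome) pairs from the output log."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    return {group: positives[group] / totals[group] for group in totals}

def check_disparity(decisions: list[tuple[str, bool]], tolerance: float = 0.10) -> None:
    """Flag the gap between the best- and worst-treated group if it exceeds the tolerance."""
    rates = positive_rate_by_group(decisions)
    disparity = max(rates.values()) - min(rates.values())
    print(f"positive-outcome rates: {rates}, disparity = {disparity:.2f}")
    if disparity > tolerance:
        print("ALERT: disparity exceeds tolerance; escalate for human review.")

check_disparity([("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", False), ("group_b", False), ("group_b", True)])
```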
These three principles ensure that explainable AI shapes both technical implementation and strategic direction. It preserves the human-centered and ethically accountable nature of AI use.
It’s time for the public sector to work together on building the ethical infrastructure around AI. Explainable AI systems are a conditio sine qua non for linking an organization’s moral resilience to the benefits AI can offer.
This summary was initially created by Claude 4 Sonnet, then fully rewritten and given a new introduction by the author.
In Summary: Public Sector — Responsible & Explainable AI
AI is increasingly being used in the public sector, for example in policy-making, service delivery, and risk management. This brings ethical risks with it. According to AI ethicist Joris Krijger, it is therefore crucial that organizations build a strong ethical infrastructure, with explainable AI at its core. Explainable AI enables transparency, accountability, and monitoring, and is essential for safeguarding trust, equality, and democratic oversight.
Want to know more?
Contact AI consultant Thomas Mollema.