AI ethics for public organizations: An introduction series
In the public debate, artificial intelligence (AI) is seen as a means to increase worker productivity and to provide more and better health care. AI could also speed up and automate administrative processes, distribute energy and resources more efficiently, and improve the detection of natural hazards. AI is simultaneously a technology and a social promise. Public organizations are asking themselves questions about AI. Should they do something with it? How does AI relate to the problems they must solve?
Ethical use of AI in the public sector
Exploring the possibilities of AI is something public organizations can start doing right now. But they must do so responsibly: for AI in the public sector, ethical development and deployment is a ‘must have’ rather than a ‘could have.’
Using technology and data responsibly is of course something organizations operating in the digital sphere already strive to do. But in the case of AI, the risks and benefits involved are not to be underestimated. Ethics can be sidelined in organizations as ‘vague,’ ‘impractical’ or ‘superfluous as long as the relevant legislation is adhered to.’ Such misconceptions should be avoided: numerous ethical aspects are inherent to the use of AI systems. Think of the stakeholders involved in or affected by data collection, and of whose choices are factored into the design phase. Blindness to these ethical dimensions leads to harmful outcomes and to benefits that remain unreaped.
This article kickstarts an introduction series by Highberg on the topic of AI ethics, aimed at public organizations.
Making AI ethics accessible
The past few years have seen a proliferation of toolkits, guidelines, and principles regarding values such as explainability, transparency, justice, and accountability.1 But for governmental and public organizations, not all of these values are equally relevant. The new AI Act prohibits certain applications of AI, such as social scoring, because they pose an ‘unacceptable risk’, and subjects others, such as facial recognition and resumé selection, to strict requirements as ‘high-risk’ systems.2 Still, legally acceptable AI systems come with ethical challenges of their own. These challenges range from algorithmic bias and discrimination to misrepresentation of information and the harmful consequences of misprediction.
The problem we see is that practitioners lack the time to determine which principles, values, and guidelines apply to their AI use case. ‘Taking the ethical perspective’ is therefore easier said than done. Ethics must be practiced; it cannot be reduced to the formulation of codes of conduct or resolutions alone. Luckily, these difficulties can be overcome. This series of introductions aims to make the ethics of AI accessible to public stakeholders.
What we cover in the introduction series
In the coming weeks, we will present the basics of AI ethics. We establish why, how, where, and when ethical reflection should be practiced for any AI use case:
- In the first introduction, we discuss how to understand ‘AI ethics’.
- The second introduction discusses the interplay between the mitigation of ethical risks and harnessing AI’s potential.
- In the final introduction, we prescribe when and where ethical reflection can come into play in the process of designing, deploying, and evaluating AI systems.
Since establishing AI ethics is challenging, Highberg can support your organization in realizing responsible, accountable, and ethical AI. Can’t wait for the next installment of this series? Contact one of our consultants in AI ethics or responsible AI to learn more about the workshops and talks we provide on the ethics of AI, and get your organization’s employees up to speed.
References
1. Tomasz Hollanek, “The ethico‑politics of design toolkits: responsible AI tools, from big tech guidelines to feminist ideation cards,” AI and Ethics (2024): 1–10. See for example: OECD, “Recommendation of the Council on Artificial Intelligence” (2024), https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449; and UNESCO, “Consultation Paper on AI Regulation: Emerging Approaches Across the World” (16 August 2024), https://unesdoc.unesco.org/ark:/48223/pf0000390979. For a discussion of general risks and risks particular to the situation of the Netherlands, see: Autoriteit Persoonsgegevens, “AI & Algorithmic Risks Report Netherlands – Summer 2024” (18 July 2024), https://autoriteitpersoonsgegevens.nl/en/documents/ai-algorithmic-risks-report-netherlands-summer-2024.
2. European Commission, “AI Act,” https://artificialintelligenceact.eu/the-act/.