
Responsible AI

"Making an impact with AI within the boundaries of what is permissible, feasible, and desirable."

Make AI worth it: AI should be designed and used responsibly, in a way that respects human rights and values, without the need for technical compromises.

We support organizations in the responsible deployment of artificial intelligence (AI). We do this by offering clarity and guidance on legal, technical, and ethical considerations, and by providing concrete guidelines for the design, development, deployment, and use of AI systems according to the 'sensible-by-design' principles. Our expertise enables our clients to fulfill their legal obligations and meet their ethical responsibilities by making conscious decisions at every stage of the AI lifecycle. Together we improve transparency and accountability for responsible AI systems.

Laura Natrop

Get in touch

Sabine Steenwinkel-den Daas

Get in touch


Full circle implementation of responsible AI

We guide organizations in establishing and implementing a systematic approach to managing algorithms, commonly known as 'algorithm governance.' Within this governance framework, clear agreements are made to use algorithms consciously and responsibly across all levels of the organization. These agreements cover the working definition of AI in your organization, minimum documentation requirements, the decisions to be made at the various stages of the AI lifecycle, and the conditions under which use is acceptable.

It is crucial for employees within an organization to have a good understanding of what AI is and to be aware of their own role in the responsible use of AI. This awareness is essential for the successful implementation of algorithm governance. We support organizations in increasing this awareness, fostering enthusiasm, and facilitating the necessary changes.


"Highberg has been actively assessing algorithms for over five years."

An integral part of algorithm governance is the assessment of risks according to the requirements of the AI Act. These requirements include additional measures for algorithms deemed 'high-risk.' Such measures involve conducting a Fundamental Rights Impact Assessment (FRIA; Dutch: IAMA), registering the algorithm in a dedicated registry, implementing an effective AI management system, and performing a specific evaluation of the quality and functioning of the AI system. Highberg has been actively involved in assessing algorithms for over five years, including code validation and advisory services. We have qualified FRIA facilitators and specialists familiar with ISO 42001, the international standard for AI management systems.

Our Responsible Use of AI Propositions


A Highberg customer story

Highberg supports the municipality of Rotterdam in the responsible use of algorithms and AI systems.

Highberg (formerly VKA) has supported the municipality of Rotterdam in using algorithms responsibly and transparently by managing the algorithm register, implementing algorithm governance in practice, and providing suggestions for its improvement.

Customized advice and implementation

Together with our clients, we determine exactly what they need, so that we only carry out the tasks your organization truly requires. Our goal is to collaborate with you on the responsible deployment of AI, and we aim for your organization to eventually implement, or at least manage, the products and services we offer independently.


Gaining control over responsible AI deployment with ISO 42001

In this whitepaper, we explain how setting up a management system based on ISO 42001 helps your organization stay in control of AI and comply with the AI Act.

Related insights
