Transparency with an algorithm registry?

In an era where algorithms exert increasing influence on our daily lives, staying on top of the latest developments and being transparent about them is more important than ever. This leads us to a crucial question: when are the openness and transparency surrounding these algorithms sufficient? The answer is not simple, because transparency goes beyond merely registering an algorithm. In this article, we take a closer look at the steps needed to work towards transparent and responsible use of algorithms, across five maturity levels.


Transparency: More than just providing information

To start, we need to explore the concept of transparency. It is not merely about keeping an algorithm registry and making that information available. Transparency goes beyond that: it involves comprehensibility, accountability, impact assessment, and the quality of information.

Firstly, it is about clearly explaining the context in which algorithms are used and why (comprehensibility). It is also about demonstrating that an algorithm does what it was developed for (accountability). Furthermore, it requires that the decision-making process around its use is clear (impact assessment) and that all aspects are auditable (quality of information).

This is not a one-time job: it also involves periodic evaluation and monitoring to ensure that algorithms still do what they are supposed to do and that the information about them is complete, current, and correct.

In short, it's about "doing what you say and saying what you do."

Level 1: Inventory and risk classification

The foundation for transparency lies in knowing which algorithms are used within your organization and who is responsible for the processes in which they are employed. This requires a clear definition of algorithms (the scope) so that you can conduct a thorough inventory of all algorithms in use and under development. A framework for managing your AI responsibly can be found in ISO/IEC 42001.

The AI Act imposes stricter rules for 'high-risk' AI applications. With a thorough inventory, you can accurately assess where AI is involved and what the risk level is. This reduces the chance of missing 'high-risk' applications.

The inventory captures the processes in which each algorithm is used and what the algorithm 'calculates.' This gathered information forms the basis for both registration and risk classification.

Risk classification makes it possible to focus transparency efforts on 'high-risk' AI applications and helps you take appropriate measures where needed, such as collecting additional information and creating insight. A sketch of how such an inventory record could look follows below.
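
To make this concrete, here is a minimal sketch of how an inventory record with a risk classification could be captured. The field names, risk categories, and the example entry are illustrative assumptions on our part, not a format prescribed by the AI Act or by any registry standard.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Illustrative risk categories, loosely following the AI Act's tiers."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AlgorithmRecord:
    """One entry in the internal inventory; all field names are hypothetical."""
    name: str
    business_process: str   # process in which the algorithm is used
    purpose: str            # what the algorithm 'calculates' and why
    owner: str              # who is responsible for the process
    risk_level: RiskLevel
    measures: list[str] = field(default_factory=list)  # mitigations tied to the risk level


# Example (fictional): a 'high-risk' application gets extra documentation and review measures.
example_record = AlgorithmRecord(
    name="case-prioritisation-score",
    business_process="Benefits intake",
    purpose="Prioritise case files for manual review",
    owner="Department of Social Affairs",
    risk_level=RiskLevel.HIGH,
    measures=["FRIA / IAMA", "bias audit", "human-in-the-loop decision"],
)
```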

Purpose of algorithm registry

"The Dutch government wants the government to use algorithms responsibly. People must be able to trust that algorithms comply with society's values and norms. And there must be an explanation of how algorithms work. The government does this by checking algorithms before use for how they work and for possible discrimination and arbitrariness. When the government is open about algorithms and their application, citizens, organizations and media can follow it critically and check whether it follows the law and the rules." - algoritmes.overheid.nl

Level 2: Maintain insight

With information about the context, purpose, impact, risk classification, and associated measures in an algorithm registry, you have reached level 1 - the foundation. However, transparency is an ongoing process. Periodic checks and monitoring of algorithms are essential to ensure that they continue to meet the specified requirements.

To achieve level 2 - insight, it is necessary to establish periodic checks and monitoring. Periodic checks on the accuracy of information, the assumptions made, and the appropriateness of the approach help you to continuously improve and make adjustments where necessary. It is recommended to determine the minimum expected performance for each AI system and the point at which an algorithm is considered to be underperforming.

Performance requirements must be continuously monitored so that deviations from the norm can be addressed adequately - including stopping an algorithm if necessary. It is also advisable to periodically check the performance requirements themselves – do the requirements still align with our core values?
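A minimal sketch of what such a periodic check could look like, assuming agreed minimum values per metric; the metric names and thresholds are illustrative, and in practice the response to a violation (alerting, pausing the model, triggering a review) would follow your own governance process.

```python
from dataclasses import dataclass


@dataclass
class PerformanceRequirement:
    """Minimum expected performance for one metric; values are illustrative."""
    metric: str
    minimum: float


def check_performance(measured: dict[str, float],
                      requirements: list[PerformanceRequirement]) -> list[str]:
    """Return the metrics that fall below their agreed minimum."""
    return [req.metric for req in requirements
            if measured.get(req.metric, float("-inf")) < req.minimum]


# Example periodic check: escalate (or stop the algorithm) when any requirement is violated.
requirements = [PerformanceRequirement("precision", 0.80),
                PerformanceRequirement("recall", 0.75)]
latest_run = {"precision": 0.83, "recall": 0.71}

violations = check_performance(latest_run, requirements)
if violations:
    # In practice: raise an alert, pause the model, and trigger a review.
    print(f"Performance below agreed minimum for: {', '.join(violations)}")
```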

Level 3: Create verification

Internal and external assessments of both algorithm governance and the underlying technology are required to achieve level 3 - verifiable. At this level, you can demonstrate that your organization takes responsibility for its algorithms and that every 'high-risk' application is deployed under control.

Algorithm governance must be evaluated periodically for its effectiveness in practice, and there must be periodic assessments by the second and even third line to ensure that AI applications still perform as intended and that the underlying assumptions are still correct. In other words: does our governance make it easy to do the right things, to stay in conversation, and to be transparent about the algorithms we use? And do these algorithms technically perform as well as possible within our constraints and requirements?

For example, a Fundamental Rights Impact Assessment (FRIA, IAMA in Dutch) is a recommended measure for periodically monitoring - from the development phase through to production - whether systems remain in line with public values and with laws and regulations. There are also various frameworks and tools for assessing algorithms; see here for how we assess AI systems and algorithms.

Level 4: Public access to the code

Publishing the code of the algorithms you use is a crucial step towards complete transparency: level 4 out of 5. It allows independent experts to evaluate how an algorithmic system functions and how it changes over time. It also gives the public confidence that nothing is being kept secret. A published algorithm registry should include the option to link to the code.

However, an additional risk arises here. You do not, for example, want to unintentionally disclose personal data or information about the organization's servers. Therefore, establish guidelines on how, when, and which elements of the code can be made public.
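
As a sketch of what such guidelines could include on the automated side, the check below scans a code base for a few sensitive patterns before publication. The patterns and file selection are purely illustrative assumptions; real guidelines would combine established secret-scanning tooling with manual review.

```python
import re
from pathlib import Path

# Illustrative patterns only; no substitute for dedicated scanners and manual review.
SENSITIVE_PATTERNS = {
    "api key or token": re.compile(r"(api[_-]?key|secret|token)\s*[:=]", re.IGNORECASE),
    "internal hostname": re.compile(r"\b[\w-]+\.internal\b", re.IGNORECASE),
    "possible citizen service number (BSN)": re.compile(r"\b\d{9}\b"),
}


def scan_for_sensitive_content(repo_path: str) -> list[tuple[str, str]]:
    """Flag files that may contain material which should not be published."""
    findings = []
    for path in Path(repo_path).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                findings.append((str(path), label))
    return findings
```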

Level 5: Full transparency

Being fully transparent in all aspects - level 5 - gives external parties the fullest opportunity to understand the algorithms, also known as public control. In addition to providing access to (part of) the code, this also requires sharing data.

There are several challenges here if an algorithm was not developed on the basis of open data. The data often cannot be made public because it contains sensitive information and personal data, or because individuals could be identified in a few steps.

An alternative is to generate synthetic data, with the condition that the AI application must be able to work with it; such a set must fully safeguard the privacy of the original data and not expose systems to external threats.
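
One simple way to sketch the idea: sample synthetic records column by column from the distributions observed in the original data, so that no original row is reproduced as a whole. This is a naive illustration of the concept, not a privacy guarantee; real projects would use dedicated synthetic-data tooling and a disclosure-risk assessment.

```python
import numpy as np
import pandas as pd


def synthesize_by_marginals(original: pd.DataFrame, n_rows: int,
                            seed: int = 42) -> pd.DataFrame:
    """Naive synthetic set: sample each column independently from its own
    observed values, so no original row is copied as a whole.
    Note: this loses correlations between columns and is NOT a privacy
    guarantee by itself; dedicated tooling and review remain necessary."""
    rng = np.random.default_rng(seed)
    synthetic = {
        col: rng.choice(original[col].to_numpy(), size=n_rows, replace=True)
        for col in original.columns
    }
    return pd.DataFrame(synthetic)
```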

Willy Tadema's 5-star model focuses on the level of transparency per algorithm: "A description in the algorithm registry is a good first step – or star. This model provides insight into the context in which the algorithm is used. If additional data is also published, such as the results of an audit or the monitoring data of an algorithm, more stars are added. This additional data allows experts, journalists, and other interested parties to truly investigate how the algorithm works." To achieve these stars for each algorithm, you must be prepared for the maturity levels described in this opinion article.

Conclusion

Striving for public control (complete openness and transparency) over algorithm use, including AI systems, is a noble goal, but it requires effort and dedication from all parties involved. It starts with understanding your algorithms and regularly assessing the information about them, their performance, and the validity of the assumptions made; it continues with setting up internal and external assessments of your AI systems and, finally, with sharing the code and data.

At Highberg, we understand the complexity of this challenge and are ready to support you at every step of this journey. We have a structured approach to support the responsible use of AI and transparency. We would love to collaborate with you to understand and manage the impact of AI systems on your organization.


Want to know more?

Read here how we supported the municipality of Rotterdam on its journey towards AI transparency.

Related insights
