Written Evidence Submitted by Leonardo Tessler [1]  [2]





The notion of AI explainability has been misunderstood by law. We propose to replace the Principle of AI Explainability with a Principle of AI Understandability.




1. Introduction

We believe that the legal regulation of AI through the Principle of AI Explainability is misplaced. This principle aims to make AI more understandable, but its content – providing useful information (explanations) to users – is not aligned with the other AI principles or, more importantly, with the purpose of the AI norm. We therefore propose replacing the Principle of AI Explainability with a Principle of AI Understandability, which would not aim to provide explanations but to ensure the technical means necessary to make AI more understandable to users.

2. The Principle of AI Explainability

We start from the notion of the Principle of AI Explainability, as established by Principle 1.3 of the OECD Recommendation on Artificial Intelligence (OECD, 2022), to which the UK is an adherent.

The Principle of AI Explainability is part of a system of principles whose overarching purpose is to ensure responsible and trustworthy AI.

Within this system of principles, the Principle of AI Explainability has the role of guiding AI actors to provide users with meaningful information about the AI system and, in particular, about automated decision-making, allowing users to challenge such decisions where necessary.

3. Meaningful information and explanations

Although it is provided for in Principle 1.3, the term “meaningful information” is not defined by the OECD Recommendation. However, the guidance prepared by the OECD.AI Policy Observatory on the justification of this principle clarifies that the Principle of AI Explainability aims to provide explanations to AI users (OECD.AI Policy Observatory, 2022b):

Therefore, when AI actors provide an explanation of an outcome, they may consider providing – in clear and simple terms, and as appropriate to the context – the main factors in a decision, the determinant factors, the data, logic or algorithm behind the specific outcome, or explaining why similar-looking circumstances generated a different outcome. This should be done in a way that allows individuals to understand and challenge the outcome while respecting personal data protection obligations, if relevant.

Thus, we assume that, for the OECD Recommendation, providing meaningful information to AI users means providing them with explanations about how the AI system works in general and how automated decisions are made in specific cases.

4. The inadequacy of the Principle of AI Explainability

4.1. The duty to provide explanations

The OECD Recommendation has acknowledged several legal principles related to AI and established a normative system: all the principles acknowledged therein must work together to achieve responsible and trustworthy AI.

However, we note that the content of the Principle of AI Explainability, unlike the other AI principles, is not directly related to any characteristic or particularity of AI, nor to the purposes of the AI norm.

This duty of providing explanations is, in fact, a general contractual obligation, applicable to any area of knowledge, arising from the contractual principles of good faith and information: the parties must provide each other with the information necessary for the contract and its consequences to be well understood. In this sense, for example, every product and service provider has a duty to provide information, explanations, and advice to the consumer; the same applies to professionals, such as doctors in relation to their patients, and so on. Therefore, it is not necessary to acknowledge a new legal principle, in a specific AI regulation, in order to impose upon AI actors a legal obligation that already exists.

It should be noted that the other principles acknowledged by law, such as the principles of robustness, safety, security, and transparency[3], are linked to some technical specificity of AI models. Even the most general principles such as accountability, inclusive growth, sustainable development, well-being, and human-centered AI are directly linked to the effects of AI development and technical implementation.

Ultimately, we could even say that this contractual obligation to provide explanations to users is already included in the Principle of Accountability, which imposes on AI actors duties of preventive action to reduce or eliminate their liability, including the obligation to provide users with all explanations concerning automated decision-making[4].

The OECD.AI observatory clarifies the content of the Principle of Accountability (OECD.AI Policy Observatory, 2022a):

Generally speaking, “accountability” implies an ethical, moral, or other expectation (e.g., as set out in management practices or codes of conduct) that guides individuals’ or organisations’ actions or conduct and allows them to explain reasons for which decisions and actions were taken. In the case of a negative outcome, it also implies taking action to ensure a better outcome in the future [emphasis added].

Therefore, in our understanding, the Principle of AI Explainability should impose upon AI actors duties that guarantee users certain technical characteristics of AI models, so that it is aligned with the other AI principles and contributes to the purpose of the AI norm.

4.2. AI trustworthiness

It is also claimed that explanations contribute to greater AI trustworthiness[5].

However, we will see below that this premise does not hold: the obligation to provide explanations, aimed at ensuring AI understanding, should not be addressed alongside the issues concerning AI trustworthiness.

First of all, we should distinguish the notion of “AI reliability” from the notion of “AI trustworthiness”. AI reliability depends on verifying certain technical attributes that make an AI model reliable. In this sense, a safe, robust, and accurate model, which comes as close as possible to correctness, is a reliable AI model.

AI trustworthiness has to do with users’ adherence to the technology[6]. This adherence requires the user to make a decision. More recent theories of trust linked to decision-making, such as those inspired by Bayesian decision theory, have addressed this issue in an objective way, moving away from the idea that the bond of trust is established by the subjective feeling of someone who adheres to the technology (Beisbart, 2012). According to this view, trust is established through an objective judgment: faced with a scenario of uncertainty and risk, a person calculates, even if intuitively, the probabilities of the advantages and disadvantages that could be obtained by adhering to the technology.
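The Bayesian account of trust described above can be sketched as a simple expected-utility calculation. The following is an illustrative sketch only; the probabilities and payoffs are hypothetical placeholders and are not drawn from the OECD texts or from Beisbart’s work:

```python
# Illustrative sketch of the adherence decision: a person weighs the
# probability-weighted advantages and disadvantages of adhering to a
# technology. All numbers below are hypothetical.

def expected_utility(p_success: float, gain: float, loss: float) -> float:
    """Expected utility of adhering to a technology that succeeds with
    probability p_success (yielding `gain`) and otherwise costs `loss`."""
    return p_success * gain + (1 - p_success) * loss

def adhere(p_success: float, gain: float, loss: float,
           baseline: float = 0.0) -> bool:
    """Adhere only if the expected utility of adhering exceeds the
    baseline utility of not adhering."""
    return expected_utility(p_success, gain, loss) > baseline

# The more reliable (accurate) the model, the higher p_success and the
# stronger the objective case for trusting it.
print(adhere(p_success=0.99, gain=10.0, loss=-100.0))  # True
print(adhere(p_success=0.60, gain=10.0, loss=-100.0))  # False
```

The sketch makes the submission’s point concrete: the decision to adhere turns on the probability of a correct outcome, i.e. on reliability, rather than on any explanation of the system’s inner workings.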

Thus, we can say that it is AI reliability that allows a relationship of trust to be established with the user: the more accurate AI models are, the lower the risks, and the more users will trust the technology. Someone who decides to travel by plane, for instance, makes that decision based on the degree of correctness of the technology, not because someone explained how the plane flies. The explanation, in this context, is complementary information that can contribute to decision-making, but it is not an indispensable component of adherence.

Thus, we can state that AI trustworthiness can be achieved through legal provisions that require AI actors to demonstrate degrees of AI correctness proportionate to the risks involved in its use.

We therefore conclude that AI trustworthiness depends on the degree of correctness of the AI and not on explanations. Indeed, it would even be paradoxical to claim that explanations could ensure AI trustworthiness: for explanations to do so, they would need to be certain and the AI model would need to be reliable; but if the AI model is reliable, it will already be trustworthy.

Thus, if we want to discuss clearly the AI understanding issue involved in the Principle of AI Explainability, we should adopt at least three measures:

a) acknowledge that the issue is not of explanations, but of AI understanding – this is the value that the AI norm intends to ensure;

b) separate the discussion about AI understanding from the discussion about AI reliability and AI trustworthiness;

c) and discuss AI understanding assuming that AI is reliable and trustworthy (it would be useless to demand AI understanding from an AI model that is not reliable and trustworthy).


5. Understanding and the Principle of AI Understandability

We start from the premise that AI understanding is a value that the AI norm aims to ensure: the reason why we want to understand AI is that by understanding this technology we can control it and make it evolve.

We believe that the Principle of Human-Centered AI is the principle of the AI norm that seeks to materially ensure AI understanding. The OECD.AI observatory clarifies the scope of this principle (OECD.AI Observatory, 2022):

AI should be developed consistent with human-centred values, such as fundamental freedoms, equality, fairness, rule of law, social justice, data protection and privacy, as well as consumer rights and commercial fairness.

(…) It is therefore important to promote “values-alignment” in AI systems (i.e., their design with appropriate safeguards) including capacity for human intervention and oversight, as appropriate to the context.

We can see that the Principle of Human-Centered AI is concerned with protecting the fundamental rights of users, but it also seeks to guarantee human intervention in the technology, as well as supervision of its behavior. Understanding AI is therefore an interest of the entire AI audience – that is, the set of users who use this technology with the most varied interests (Barredo Arrieta et al., 2019).

On the other hand, the Principle of AI Understandability would then have the instrumental function of ensuring the technical means for understanding AI models: as we know, not all AI models are understandable. Thus, this principle would require AI actors not only to use understandable AI models, but also to develop technologies that render certain AI models understandable. When we speak here of understandable models, we mean models whose decision-making processes are understandable; incomprehensible models are those whose decision-making processes are not.

Explainable AI is the current initiative dedicated to making models understandable through model explainability techniques. However, although explainability is a characteristic of explainable AI models, understandability has a broader connotation and is much closer to the value sought by the norm. AI understanding is not limited to Explainable AI techniques: it can also be achieved through interpretation of AI models that, in many situations, need neither explainability techniques (decision tree models, for example) nor contractual explanations to be understandable. Explanation is, in fact, only one of the means of achieving understanding.
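To illustrate what an explicit decision-making process looks like, here is a minimal sketch of a hand-written decision tree classifier. The loan-approval scenario, rules, and thresholds are entirely hypothetical; the point is only that each prediction can be traced as a readable chain of conditions, with no separate explainability technique required:

```python
# Hypothetical decision tree whose decision-making process is explicit:
# every prediction carries the exact path of rules that produced it.

def classify_loan(income: float, debt_ratio: float) -> tuple[str, list[str]]:
    """Return a decision and the explicit path of rules that produced it."""
    path = []
    if income >= 50_000:
        path.append("income >= 50000")
        if debt_ratio <= 0.4:
            path.append("debt_ratio <= 0.4")
            return "approve", path
        path.append("debt_ratio > 0.4")
        return "review", path
    path.append("income < 50000")
    return "deny", path

decision, path = classify_loan(income=60_000, debt_ratio=0.3)
print(decision)            # approve
print(" AND ".join(path))  # income >= 50000 AND debt_ratio <= 0.4
```

A user (or a court) can read the returned path directly; this is the kind of model the submission treats as understandable without recourse to post-hoc explainability techniques.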

It should be noted that understanding is the result of a process that requires interpretive movements from those who want to understand a given phenomenon. In some circumstances, this movement is not enough for a satisfactory understanding, and explanations are needed to help them in their interpretative tasks (Gadamer, 2004).

For these reasons, acknowledging a Principle of AI Understandability seems more appropriate than a Principle of AI Explainability.

6. The duty to make explicit the decision-making process of AI models

We said above that AI understanding is justified because we want to control the technology and make it evolve. For that to happen, however, we need to understand the decision-making process of the model in a specific case.

The development of an AI model’s architecture and its decision-making processes depends on several aspects. Some models are built in such a way that their decision-making process is explicit, such as decision tree models. In several cases, however, it is preferable to use models whose decision-making process is not explicit, such as models based on neural networks, which harms users’ interest in an understandable AI[7].

The Principle of AI Understandability would then establish the AI actors’ commitment to:

a) give preference, whenever possible, to AI models with explicit decision-making processes[8];

b) develop technical means to make explicit the decision-making process of other AI models;

c) technically justify to the user the reasons why the AI actor chose a model whose decision-making process is not explicit, and why there were no means to make it explicit.

We observe that, in many cases, even an explicit decision-making process is not understandable by all types of users. A model that exposes its decision function as a mathematical formula may allow a programmer to understand the decision-making process, but it would not be understandable by, for example, a doctor who seeks to understand how the AI reached a certain decision in the analysis of a medical image.

This task of adjusting explanations to the user’s profile is also an obligation of the AI actor. However, it is an obligation that stems not from the Principle of AI Understandability, but from the general contractual obligation to provide explanations.

Thus, in our view, there would be coordination within the AI norm between the Principle of Human-Centered AI, which imposes the material need to understand AI; the Principle of AI Understandability, which instrumentally imposes the duty to make the decision-making processes of AI models explicit; and the general contractual obligation to provide explanations, which imposes on AI actors the duty to adapt the explanations obtained from AI models to the user’s profile, so that users obtain a correct understanding of AI.

As stated above, there is no room in the regulation of the Principle of AI Understandability for discussions about the certainty of explanations, nor about the relationship between explanation and AI trustworthiness.

What matters for the Principle of AI Understandability is that AI actors always act in order to ensure the technical means to make the decision-making process of AI models explicit, allowing users to understand AI.

7. Conclusion

We have aimed to demonstrate that AI explainability has been misunderstood by law and that the content of the Principle of AI Explainability has proved inadequate.

This situation has caused confusion and enormous difficulty in regulating the behavior of AI actors in a way that allows the law to effectively ensure AI understanding among users.

Finally, we propose replacing the Principle of AI Explainability with the Principle of AI Understandability, which, by imposing on AI actors the duty of ensuring the technical means for making the decision-making process of AI models explicit, could effectively ensure the interest of users in understanding AI.

8. Bibliography

Barredo Arrieta, A., Díaz-Rodríguez, N., del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2019). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Beisbart, C. (2012). A Rational Approach to Risk? Bayesian Decision Theory. In S. Roeser (Ed.), Handbook of Risk Theory: Epistemology, Decision Theory, Ethics, and Social Implications of Risk (p. 1183). Springer Berlin Heidelberg. https://doi.org/10.1007/978-94-007-1433-5

Doshi-Velez, F., Kortz, M., Budish, R., Klein, B., Bavitz, C., Gershman, S., O’Brien, D., Shieber, S., Waldo, J., Weinberger, D., & Wood, A. (2017). Accountability of AI under the law: The role of explanation. ArXiv, December 2016. https://doi.org/10.2139/ssrn.3064761

Gadamer, H.-G. (2004). Truth and method (3rd ed.). Continuum.

Hamon, R., Junklewitz, H., & Sanchez, I. (2020). Robustness and explainability of Artificial Intelligence: From technical to policy solutions. European Commission, Joint Research Centre.

Liu, H., Wang, Y., Fan, W., Liu, X., Li, Y., Jain, U., Liu, Y., Jain, A. K., & Tang, J. (2022). Trustworthy AI: A Computational Perspective. IEEE International Conference on Program Comprehension, 2022-March, 36–47.

OECD. (2022). Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449. http://legalinstruments.oecd.org

OECD.AI Observatory. (2022). Human-centred values and fairness (Principle 1.2).

OECD.AI Policy Observatory. (2022a). Accountability (Principle 1.5). https://oecd.ai/en/dashboards/ai-principles/P9

OECD.AI Policy Observatory. (2022b). Transparency and explainability (Principle 1.3). https://oecd.ai/en/dashboards/ai-principles/P7

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 13-17-August-2016, 1135–1144. https://doi.org/10.1145/2939672.2939778

Rudin, C., Chen, C., Chen, Z., Huang, H., Semenova, L., & Zhong, C. (2021). Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges. http://arxiv.org/abs/2103.11251

The Royal Society. (2019). Explainable AI: The Basics (Issue November). https://royalsociety.org/topics-policy/projects/explainable-ai/



(November 2022)

[1] PhD candidate in Law at the University of Montreal. Master’s degree in Intellectual Property Law from the University of Lisbon, Portugal.

[2] By submitting this document, I would like to contribute to the international debate on AI regulation and, in particular, on the effectiveness of the Principle of AI Explainability.

[3] We note in the justifications of the OECD.AI Policy Observatory on the Principle of Transparency that part of the interpretation of its content also refers to the contractual relationship between the parties, and not to the transparency characteristics of AI models. There is no need to establish a Principle of AI Transparency to compel the parties to act transparently towards each other in fulfilling their contractual obligations.

[4] Doshi-Velez et al., for example, discussed the relation between Responsible AI, explanations, and AI Accountability before the adoption of the OECD AI Principles (Doshi-Velez et al., 2017).

[5] For example: (Ribeiro et al., 2016); (The Royal Society, 2019).

[6] Some authors propose different definitions, as such: technical trustworthy vs user trustworthy vs social trustworthy (Liu et al., 2022).

[7] In the JRC Report, for example, the European Commission discusses the types of AI models, the level of AI model transparency, the relation between AI interpretability and AI understandability, and the problem of technical definitions in the AI academic field (Hamon et al., 2020, p. 14).

[8] Cynthia Rudin et al. propose this preference of use as a Principle 5 of the fundamental principles of AI Interpretability: “Principle 5 - For high stakes decisions, interpretable models should be used if possible, rather than “explained” black box models”. (Rudin et al., 2021).