Explainable Artificial Intelligence needs Human Intelligence


Modern Artificial Intelligence (AI) models often work as opaque decision-making engines (black boxes), reaching conclusions with little transparency or explanation of how a given result is obtained. In an era where AI has become an integral part of our lives, and where recruiters, healthcare providers and professionals in many other fields rely on this tool to make decisions impacting individuals, understanding the way AI works is essential.

Could Explainable Artificial Intelligence, or XAI, be a way forward, a potential solution? But what is XAI, and how does it work in practice? What are its benefits, but also its risks? What is its relationship with data protection? And what impact may XAI have in the years to come?

These are some of the questions that our distinguished guest speakers and experts tackled during the EDPS’ Internet Privacy Engineering Network (IPEN) hybrid event, which I had the pleasure of chairing on 31st May.

IPEN, established almost 10 years ago by the EDPS, brings together data protection and technology experts, as well as other relevant actors, to discuss the challenges of embedding data protection and privacy requirements into the development of technologies. The forum generates thought-provoking views and fascinating exchanges, which, as for many others, inform and feed my own reflections on the close relationship between privacy and technology.

This IPEN event on XAI was no exception.


Why XAI?

XAI focuses on developing AI systems that can not only provide accurate predictions and decisions, like other AI systems, but can also offer explanations of how a certain decision or conclusion is reached. In other words, XAI should be able to explain what it has done, what will happen next, and what information has been used to make a decision. With this information, individuals using XAI would be able to understand the reasoning behind an automated decision, and to take the appropriate, informed course of action. With XAI, the dynamic changes: users would not simply rely on AI systems to make decisions for them, but would play an integral part in making, or verifying, a decision. To this end, XAI - coupled with human cognition - could play an important role in fostering trust in AI systems, as well as in increasing their transparency and accountability.
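To make this concrete, one common family of XAI techniques is feature attribution: alongside its decision score, the system reports how much each input contributed to the outcome. The sketch below is purely illustrative - the loan-scoring scenario, feature names and weights are all hypothetical, and real XAI tooling for complex models is far more sophisticated - but it shows the basic idea of a prediction that comes with its own explanation.

```python
# Minimal, hypothetical sketch of feature attribution for a linear
# scoring model. Feature names and weights are invented for illustration.

def predict_with_explanation(features, weights, bias=0.0):
    """Return a decision score together with a per-feature breakdown
    showing how much each input contributed to that score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical loan-scoring example: the user can see not just the
# score, but *why* it came out as it did.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, explanation = predict_with_explanation(applicant, weights)
# The explanation shows, for instance, that debt pulled the score
# down (-1.6) while income pushed it up (+2.0).
```

For opaque models, widely used tools such as LIME or SHAP approximate this kind of per-feature breakdown after the fact; the principle - a decision accompanied by its reasons - is the same.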

XAI, accountability, transparency and data protection: how does it all add up?

How does this all work in practice? How can transparency and accountability really be achieved? It will not be enough if the explanations given by an AI system are highly technical, understandable only by a handful of experts.

Effective transparency and accountability, and therefore trust in AI systems, can only really be achieved if information about the underlying behaviour of a system can be explained with truthful and sincere simplicity, and in a clear and concise manner, so that this knowledge can be passed on from the provider to the users of AI systems. Obtaining clear information about the behaviour of AI also affects the ability of its users, such as data controllers and processors, to evaluate the risks that this tool may pose to individuals' rights to data protection and privacy, and to protect them and their personal data.

XAI and its risks

Is it easy to explain AI in a simple, clear and concise way? Well, not really. Making AI understandable and meaningful to everyone is difficult to achieve without compromising the predictive accuracy of AI. As some participants at the IPEN event pointed out, one of the risks is that explanations could become subjective, convincing rather than informative, open to interpretation, or context-dependent. Cultural filters can also play a role. There is probably not one single way to explain what an AI system does, but there are certainly many wrong ways to do so.

Risks to individuals’ privacy and personal data should be taken seriously as well. With XAI, the results produced by AI systems may reveal personal information about individuals.

Other risks, shared by our panellists, include the possibility that explanations of AI-assisted decisions may reveal commercially sensitive material about how AI models and systems work. Furthermore, AI models may be gamed by individuals who know too much about the logic behind their decisions.

XAI needs humans

Now that we have examined some of XAI’s possibilities, its potential impact on data protection, and examples of its benefits, but also of its risks, how may this field progress?

To advance the field of AI, human-AI collaboration is important. Moreover, interdisciplinary collaboration is essential. Experts in computer science, cognitive psychology, human-computer interaction, and ethics must work together to develop robust methodologies, standards and safeguards that promote a fair AI ecosystem, empowering individuals by giving them control over their information and respecting their privacy.

In this sense, XAI is more likely to succeed if researchers, experts and practitioners in relevant fields adopt, put into practice, and improve AI models with their unique and creative knowledge. Above all, evaluation of these models should focus on people more than on technology.

As highlighted by the European Data Protection Supervisor, Wojciech Wiewiórowski, during the Annual Privacy Forum following the IPEN event: when it comes to XAI, and Artificial Intelligence in general, appropriate rules and existing EU Regulations, such as the General Data Protection Regulation (GDPR), must be enforced. Protecting individuals’ fundamental rights must come first.

When confronted with powerful AI systems, all human beings, even the cleverest ones, become somewhat vulnerable in relation to the power of the machine. We must therefore shape AI to our human values. Just as Recital 4 of the GDPR provides for the processing of personal data, AI should also be designed to serve humankind.