
It’s hatched: our plan for Artificial Intelligence in the EU institutions

Leonardo CERVERA NAVAS

It’s happening. The EU’s Artificial Intelligence Act is entering into force in the coming weeks. 

AI tools have potential, bringing new opportunities and enhancing productivity in sectors and fields such as healthcare, sustainability and scientific research, and across private companies and public organisations, including the EU institutions. But AI tools also present risks: if misused, they may be turned against vulnerable people, for example. 

Taking on the role of AI supervisor of the EU institutions, the EDPS must ensure that they are prepared. 

Every EU institution, its developers and its users need to know the boundaries of these tools; in other words, when to use them, and when not to. 

The now adopted AI Act focuses on delineating the main principles of AI, offering a degree of flexibility so that the regulation remains workable in light of the challenges and developments that lie ahead, some of which remain unknown. How to put this law into practice, however, remains to be worked out. 

The EDPS has hatched a plan for AI. It focuses on three key components: governance, risk management and supervision.

Whilst AI is not new, the recent surge of AI tools spans many disciplines, from computer graphics and videography to text-to-speech engines, for example; these tools go well beyond the IT departments of the EU institutions. It is therefore necessary for their governance to follow a multilateral and inter-institutional approach: we need to work together to get the use of AI in the EU’s public administration right. 

In the EDPS’ experience, close collaboration with the EU institutions is instrumental to embedding data protection and privacy into the development, deployment and use of technologies, so that people’s information is kept safe. This is why I proactively organised a meeting on AI Preparedness with the other Secretaries-General of the EU institutions on 14 May 2024, to present the EDPS’ plan for AI and to discuss the possibilities of a collective approach to the use of AI. Overall, the meeting contributed to nourishing the EDPS’ own analysis of AI. 

During our exchanges, the Secretaries-General shared that their EU institutions are putting in place internal mechanisms, such as AI boards, specialised task forces, focus groups and fora, to steer the development of AI. This fed my own reflection on the topic. In my opinion, it is essential to cement these mechanisms by establishing a solid governance network of “AI correspondents”, composed of diverse people: not just legal experts, data protection experts or AI experts, but also experts in human rights, ethics, intellectual property and risk management, to name just a few examples. AI tools amass information to work; we need to make sure they reflect the diverse society we live in, to limit biases. 

The next issue the EDPS’ plan tackles is risk management. Here, my view is that the EU institutions and the EDPS need to remain pragmatic: if everything is deemed high risk, our actions will be paralysed; but if real risks are not addressed and mitigated, the price to pay will be high. 

The parameters and thresholds of the risks linked to AI tools developed, deployed or used by the EU institutions need to be identified, assessed and categorised. A system of checklists, guidelines and instructions common to all EU institutions could be put in place, for example. The obligation for external providers of AI tools to demonstrate and ensure a high level of compliance with the AI Act should be a given. 

Although challenging, quantifying the financial resources to manage risks presented by AI tools is also part and parcel of sound risk management. When considering IT investments, it is generally accepted that 10 percent of resources are allocated to cybersecurity. Similar calculations and estimations should be made for the risk management and internal compliance of AI tools. 

Rules, policies and commitments are nothing without accountability. And accountability is in vain without supervision. So, how can the EDPS ensure effective supervision of the development, deployment and use of AI tools in the EU institutions? 

The EDPS plans to set up basic procedures to handle complaints; establish processes for individuals to assert their data protection rights; and develop mechanisms for the supervision, prohibition and sanctioning of the use of AI for biometric categorisation, facial recognition, the inferring of emotions, and other uses banned by the AI Act, as we recommended during its drafting. 

But, for this, it is clear that the EDPS needs additional resources to support its plan for AI. With the evolution and revolution that AI brings, the EU institutions, supported by the EDPS as AI supervisor, have the opportunity to do what is right: to set an example for the EU/EEA countries on how to embrace the benefits of AI tools whilst keeping individuals and their privacy safe. Just as when the GDPR entered into force in 2016, the world is watching us again as a role model. Let us set the standards for regulating AI in a human-centric way.