IPEN event on "Human oversight of automated decision-making"
The EDPS and Karlstad University are hosting an Internet Privacy Engineering Network (IPEN) event on "Human oversight of automated decision-making" on 3 September 2024.
When: 3 September 2024 - 14:00-18:00 CEST
Where: Eva Eriksson lecture hall, Universitetsgatan 2, 651 88 Karlstad, Sweden (registration required) and online.
Programme and video: the programme and the video recording of the event are available on the event's dedicated page.
The aim of the IPEN event is to promote discussion on questions such as the following:
- Don't the requirements for human oversight shift the burden of responsibility from the systems and their providers to the people who operate them?
- Could the operator face unavoidable liability? Suppose a human operator chooses to follow the system's suggestion and the suggestion turns out to be wrong. Wouldn’t that be seen as a failure of the operator to understand the system's limitations? Conversely, if the operator decides against the system's suggestion and also proves wrong, wouldn’t that lead to an even worse outcome for the operator, who had clear indicators pointing the other way?
- Article 14(2) of the AI Act (AIA), on human oversight, provides that “human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used”. Are the provisions of Article 14 clear enough about what oversight measures are expected from humans/providers and what their responsibilities should be?
- If human oversight is a risk mitigation control, how can we measure its impact?
- What does “appropriate” human oversight mean? What characteristics should be taken into account to assess whether a human oversight procedure is appropriate?
- Could regulations requiring human oversight be paving the way for the production of defective systems?
- How should this oversight happen? In the testing and monitoring of the system? Through escalation procedures, as in a call centre?
- What skills should humans have? Are we talking about engineers who know how an AI system works, or about humanists?
- What would be the legal implications if the AI system ultimately causes harm? Who would be accountable, legally and morally: the user of the system, the provider of the system, or the overseer of the system?
- Incorporating humans into the process is costly, may not scale, and could slow systems down, so AI deployers might not be inclined to use human oversight. Where should the line be drawn?
For more information on the event, please check the dedicated page: IPEN event on “Human oversight of automated decision-making”
About IPEN: The purpose of IPEN is to bring together developers and data protection experts with a technical background from different areas in order to launch and support projects that build privacy into everyday tools and to develop new tools that can effectively protect and enhance our privacy. More information is available on the dedicated page: IPEN - Internet Privacy Engineering Network