Kemal Kumkumoğlu
Selin Çetin Kumkumoğlu
Ayşegül Avcı
The draft text of the European Union Artificial Intelligence Act (“AIA”) was first announced by the European Commission in April 2021[1]. In June 2023, the European Parliament announced its amendments to the draft AIA[2].
One of the amendments introduced by the Parliament was Article 29a, entitled “Fundamental Rights Impact Assessment for High-Risk Artificial Intelligence Systems”. On 9 December 2023, the trilogue among the Council of the European Union, the European Parliament, and the European Commission reached a provisional agreement on the AIA[3]. The agreed text has not yet been made public, as technical meetings are still ongoing.
Fundamental Rights Impact Assessment for High-Risk Artificial Intelligence Systems
Under the amendments proposed by the Parliament, Article 29a provides that users (deployers) of high-risk artificial intelligence systems, as defined in the AIA, are obliged to conduct an assessment of the system’s impact in the specific context of use. It should be noted that the users of high-risk artificial intelligence systems subject to this obligation are those who use the artificial intelligence system for professional purposes.
What are High-Risk AI Systems?
Under the draft AIA, the categories of AI systems classified as high-risk include:
AI systems that are used in products falling under the EU’s product safety legislation,
Biometric identification and classification of individuals,
Management and operation of critical infrastructure,
Education and vocational training,
Employment, personnel management, and access to liberal professions,
Access to and use of essential private services and public services and assistance,
Law enforcement,
Migration, asylum, and border control management,
Support in the interpretation and application of the law.
In this respect, users (deployers) who utilize the high-risk AI systems mentioned above will be obliged to conduct a fundamental rights impact assessment for these systems under Article 29a.
Key Issues to be Included in a Fundamental Rights Impact Assessment
The purpose of the fundamental rights impact assessment is to identify the risks that may arise from the use of high-risk artificial intelligence systems and to ensure that appropriate measures are taken to minimize those risks. Users of high-risk artificial intelligence systems must complete this assessment before putting such systems into use, and it should include, at a minimum, the following elements:
a clear outline of the intended purpose for which the system will be used;
a clear outline of the intended geographic and temporal scope of the system’s use;
categories of natural persons and groups likely to be affected by the use of the system;
verification that the use of the system is compliant with relevant Union and national law on fundamental rights;
the reasonably foreseeable impact on fundamental rights of putting the high-risk AI system into use;
specific risks of harm likely to impact marginalized persons or vulnerable groups;
the reasonably foreseeable adverse impact of the use of the system on the environment;
a detailed plan as to how the harms and the negative impact on fundamental rights identified will be mitigated;
the governance system the deployer will put in place, including human oversight, complaint handling, and redress.
Considerations Regarding the Fundamental Rights Impact Assessment
Users (deployers) of high-risk artificial intelligence systems should refrain from putting the system into use if the impact assessment does not address the elements listed above or if there is no detailed plan to mitigate the risks identified during the assessment. It is also stipulated that, in such cases, the user (deployer) must inform the provider and the national supervisory authority immediately.
The obligation to conduct a fundamental rights impact assessment applies to high-risk artificial intelligence systems that are put into use for the first time. At this point, users (deployers) may rely on fundamental rights impact assessments already carried out for high-risk AI systems in similar situations, or on assessments conducted by providers. However, if the users (deployers) of high-risk artificial intelligence systems believe that the fundamental rights impact assessment conducted for the relevant system no longer meets the requirements, they must conduct a new one.
Users (deployers) of high-risk AI systems conducting a fundamental rights impact assessment are required to notify the national supervisory authority and relevant stakeholders during the assessment. In addition, they should involve, in particular, representatives of the persons or groups of persons likely to be affected by the high-risk AI system, as well as other relevant organizations, such as equality bodies, consumer protection bodies, and data protection bodies, so that these can provide input to the impact assessment. It is important to note that SMEs that are users of high-risk AI systems are exempted from the obligation to conduct a fundamental rights impact assessment, although they may carry out this assessment voluntarily.
Where the user (deployer) of a high-risk AI system is also required to conduct a data protection impact assessment under Article 35 of the General Data Protection Regulation, the fundamental rights impact assessment under Article 29a must be conducted in conjunction with the data protection impact assessment.
Conclusion
To become EU law, the agreed text will have to be formally adopted by both the Parliament and the Council. Once the AIA becomes applicable, companies using high-risk artificial intelligence systems in their operations, products, and services in the European Union will need to fulfill the obligation to conduct a fundamental rights impact assessment. We will continue to keep you informed about the latest developments on this subject.
[1] European Commission, Proposal for the AIA, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
[2] European Parliament, Draft Compromise Amendments, https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/CJ40/DV/2023/05-11/ConsolidatedCA_IMCOLIBE_AI_ACT_EN.pdf
[3] Press release of the Council of the European Union on the agreement after the trilogue, https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/; press release of the European Parliament on the agreement after the trilogue, https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai