
The European Union Artificial Intelligence Act Has Entered into Force!

Selin Çetin Kumkumoğlu

Ayşegül Avcı

The European Union Artificial Intelligence Act ("AI Act") was published in the Official Journal of the European Union ("EU") on July 12, 2024, and entered into force in all 27 EU Member States on August 1, 2024. The AI Act, which will be implemented gradually, will have consequential impacts on actors in the EU market and beyond.

Background

The AI Act is the first comprehensive AI regulation aiming to address the risks to health, safety, and fundamental rights posed by artificial intelligence ("AI") systems. The European Commission announced the first draft regulation on AI in April 2021. Following consultations with concerned stakeholders, the European Parliament publicly announced its proposed amendments to the draft regulation in June 2023. Subsequently, after trilateral negotiations between the Council of the EU, the European Parliament, and the European Commission, a consensus on the regulation was reached on December 9, 2023. The agreed text was adopted by the European Parliament in March 2024 and by the Council of the EU in May 2024. Finally, the AI Act was published in the Official Journal of the EU on July 12, 2024 and entered into force on August 1, 2024.

Purpose and Scope of the Act

Article 3 of the Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

The Act aims to ensure a high level of protection of health, safety, and fundamental rights within the EU, including democracy, the rule of law, and environmental protection, against the harmful effects of AI systems. It also seeks to support innovation, improve the functioning of the internal market, and promote the adoption of human-centric and trustworthy AI.

In this context, the Act will apply to:

  • Providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country,

  • Providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output produced by the AI system is used in the Union,

  • Deployers of AI systems that have their place of establishment or are located within the Union,

  • Importers and distributors of AI systems,

  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark,

  • Authorized representatives of providers that are not established in the Union,

  • Affected persons that are located in the Union.

Classification of AI Systems

Prohibited AI Systems:

Prohibited AI systems may not be placed on the market, put into service, or used. The Act lists the following prohibited practices:

  • AI systems that manipulate human behavior in ways that impair a person's ability to make a free and informed decision,

  • AI systems that exploit the vulnerabilities of individuals (due to their age, disability, or social or economic situation),

  • AI systems used for social scoring purposes,

  • Real-time remote biometric identification systems used by law enforcement in public areas (with some exceptions),

  • Biometric categorization systems that categorize individuals based on biometric data to infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation,

  • Predictive policing applications,

  • Emotion recognition systems used in workplaces and educational institutions for reasons other than medical or safety purposes,

  • AI systems that create or expand facial recognition databases through the non-targeted collection of facial images from the internet or CCTV footage.

High-Risk AI Systems:

High-risk AI systems comprise (i) AI systems used as safety components of products covered by the EU product safety legislation listed in Annex I and (ii) AI systems used in the areas listed in Annex III of the Act:

High-Risk AI Systems (Annex I)

In accordance with Section A:

  • Machinery,

  • Safety of toys,

  • Lifts and safety components for lifts,

  • Radio equipment,

  • Personal protective equipment,

  • Medical devices,

  • In-vitro diagnostic medical devices,

  • Appliances burning gaseous fuels,

  • Cableway installations,

  • Pressure equipment,

  • Equipment and protective systems intended for use in potentially explosive atmospheres,

  • Recreational craft and personal watercraft.

In accordance with Section B:

  • Civil aviation security,

  • Two- or three-wheel vehicles and quadricycles,

  • Agricultural and forestry vehicles,

  • Marine equipment,

  • Interoperability of rail systems,

  • Motor vehicles and motor vehicle trailers, and systems, components, and separate technical units intended for such vehicles,

  • Unmanned aircraft and their engines, propellers, parts, and equipment to control them remotely.

High-Risk AI Systems (Annex III)

  • Biometric systems,

  • Critical infrastructure,

  • Education and vocational training,

  • Employment and human resources,

  • Access to essential private and public services,

  • Law enforcement,

  • Migration, asylum, and border control,

  • Administration of justice and democratic processes.

High-risk AI systems may be placed on the market only if they fulfill the requirements laid down in Section 2 of Chapter III of the Act. To avoid duplication and minimize additional burdens, providers of high-risk AI systems may integrate the necessary testing and reporting processes, information, and documentation for their products into the documents and procedures that already exist and are required under the Union harmonization legislation listed in the Act.

Requirements for High-Risk AI Systems

  • Risk Management System

  • Data Governance

  • Technical Documentation

  • Record Keeping

  • Transparency Obligation

  • Human Oversight

  • Accuracy, Robustness and Cybersecurity

General Purpose AI Systems:

A number of obligations are also stipulated in the Act for general-purpose AI models, including generative AI systems such as ChatGPT. Providers of such models must, for example, draw up and keep up to date the technical documentation of the general-purpose AI model, put in place a policy to comply with Union law on copyright and related rights, and draw up and make publicly available a sufficiently detailed summary of the content used to train the model. In addition, Article 51 of the Act identifies general-purpose AI models that pose a systemic risk and sets out special provisions for them.

Obligations for General-Purpose AI Models

All general-purpose AI models:

  • Drawing up technical documentation,

  • Providing information and instructions for use to downstream providers,

  • Complying with Union copyright regulations,

  • Publishing a summary of the training content.

General-purpose AI models that pose systemic risk, in addition:

  • Performing model evaluations,

  • Assessing and mitigating potential systemic risks at Union level,

  • Documenting and reporting serious incidents,

  • Ensuring an adequate level of cybersecurity protection.

Low Risk AI Systems:

AI systems other than those described above are low-risk AI systems and may be used freely. Examples include AI-enabled video games and spam filters. Providers of such systems may nevertheless voluntarily choose to apply the codes of conduct envisaged in the Act.
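The risk tiers described above form a simple decision hierarchy: prohibited practices first, then the Annex I/III high-risk categories, and everything else in the low-risk tier. A minimal sketch of that hierarchy follows; the boolean flags and function name are my own illustration, and determining whether a real system actually falls under a prohibited practice or an Annex category requires legal analysis, not a flag.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # may not be placed on the market at all
    HIGH = "high"               # Chapter III requirements apply
    LOW = "low"                 # free use; voluntary codes of conduct

def classify(uses_prohibited_practice: bool,
             annex_i_safety_component: bool,
             annex_iii_use_case: bool) -> RiskTier:
    """Illustrative mapping of the Act's broad tiers onto hypothetical flags."""
    if uses_prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_i_safety_component or annex_iii_use_case:
        return RiskTier.HIGH
    return RiskTier.LOW

# A spam filter: no prohibited practice, no Annex I/III category.
print(classify(False, False, False))  # RiskTier.LOW
```

The ordering matters: a system that falls under a prohibited practice is banned outright, even if it would also match a high-risk category.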

Obligations of Actors

The Act regulates the obligations of providers, importers, distributors, and deployers of high-risk AI systems. It is important to emphasize that the Act applies not only to AI systems placed on the market by parties located in the EU, but also to AI systems developed in third countries and placed on the EU market, to products with embedded AI, and to AI systems located in third countries whose outputs are used in EU countries. Obligations will therefore also arise, for example, for providers in Türkiye who offer such AI systems to the EU market, or for those in the EU who import such systems from Türkiye. Meeting these obligations requires technical and legal assessments and therefore the cooperation of different business units.

Organizational Structure

At EU level, the AI Board will support the implementation of the AI Act and the legislation enacted pursuant to this regulation, including the drafting of codes of practice for general purpose AI models.

The AI Office will have oversight responsibilities for general purpose AI models. It will contribute to the development of standards and test practices, coordinate with national competent authorities and assist in the implementation of rules in Member States.

A scientific panel of independent experts will be established to support the activities of the AI Office. The panel will contribute to the development of methodologies for assessing the capabilities and subsequent classification of general-purpose AI models and will monitor potential safety risks. In addition, an advisory forum comprising representatives of industry and civil society will be established to provide technical expertise to the AI Board.

At national level, national competent authorities in the Member States are given oversight powers. Depending on the Member State, these take the form of a notifying authority and a market surveillance authority.

A Holistic Company Compliance with the EU AI Act

To ensure that AI systems are developed and put into use securely, quickly, and effectively in accordance with the Act, companies should maintain a step-by-step compliance management process. Taking into account each company's legal, economic, and technical situation and needs, the critical steps for holistic compliance include raising employee awareness, determining the priority goals and objectives required for compliance, assigning responsibilities internally, establishing a division of labor within the company, and continuously monitoring these measures.


Compliance of Companies Located in Türkiye with the Act

Companies based in a third country such as Türkiye must also comply with the Act if they place AI systems on the EU market and/or the outputs of their AI systems are used in the EU.

For example, a provider of a high-risk AI system must fulfill the stipulated obligations if the output of the system it has developed is used within the EU or if it offers the system directly on the EU market. Likewise, a company headquartered in Türkiye that uses such systems at its locations in the EU, or places such systems on the market through those locations, must meet the stipulated obligations.

Sanctions

For violations of the Act, significant administrative fines are established. The Act provides for the following sanctions depending on the severity of the violation:

  • Violation of the provisions on prohibited AI systems: a fine of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher,

  • Failure to comply with the obligations imposed on providers, authorized representatives, importers, distributors, or deployers of high-risk AI systems, on notified bodies, or on providers of general-purpose AI models: a fine of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher,

  • Providing incorrect or misleading information to notified bodies or national competent authorities in reply to a request: a fine of up to EUR 7.5 million or 1% of total worldwide annual turnover, whichever is higher.
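Each tier's cap is the higher of a fixed amount and a percentage of worldwide annual turnover, so for large companies the percentage dominates. A minimal sketch of that arithmetic (the function name, the integer-percent representation, and the turnover figure are illustrative, not from the Act):

```python
def fine_cap_eur(fixed_cap_eur: int, turnover_pct: int, annual_turnover_eur: int) -> int:
    """Upper bound of the fine for a tier: the fixed cap or the given
    percentage of total worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_pct // 100)

# Prohibited-practice tier (EUR 35 million or 7%), for a hypothetical
# company with EUR 1 billion worldwide annual turnover:
print(fine_cap_eur(35_000_000, 7, 1_000_000_000))  # 70000000
```

For a company with EUR 100 million turnover, 7% would be only EUR 7 million, so the EUR 35 million fixed cap applies instead.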

Implementation of the Act

The AI Act entered into force on the twentieth day following its publication in the Official Journal of the EU. The prohibitions on prohibited AI practices apply six months after entry into force, and the obligations for general-purpose AI models apply after twelve months. After twenty-four months, the provisions on the high-risk use cases listed in Annex III, including the obligations for high-risk systems, become applicable; the obligations for high-risk AI systems covered by the Annex I product legislation follow after thirty-six months.
