The European Union Artificial Intelligence Act is on the Way!

Kemal Kumkumoğlu

Selin Çetin Kumkumoğlu

The European Commission announced the draft of the Artificial Intelligence Act (AIA) in April 2021, as the outcome of its work on a regulatory framework for artificial intelligence (AI) within the scope of the European Union's (EU) digital strategy[1].

The draft of the AIA takes a risk-based approach. Within this context, it establishes harmonized rules for the placing on the market, putting into service, and use of AI systems in the EU. Accordingly, it includes (1) the prohibition of certain AI practices, (2) specific requirements for high-risk AI systems and the obligations of the parties addressed in the AIA, such as providers, importers, and distributors, (3) harmonized transparency rules for limited-risk AI systems used to generate or manipulate content such as images, sound, or video, and (4) rules on market monitoring and surveillance.

As stated in Article 2, the AIA applies to:

  1. Providers placing on the market or putting into service AI systems in the EU, irrespective of whether providers are established in the EU or in a third country.

  2. Users of AI systems located in the EU.

  3. Providers and users of AI systems located in a third country where the output produced by the system is used in the EU.

Accordingly, even if an AI system is developed in a third country, it must comply with the requirements stipulated in the AIA if it is placed on the EU market or if its outputs are used in the EU. For example, if a high-risk AI system developed and put into service in Türkiye produces outputs that are also used by EU citizens, that system must meet the requirements outlined in the AIA.

The draft of the AIA sets out a risk-based approach under which the obligations attached to an AI system are proportionate to the level of risk it poses. Accordingly, AI systems are categorized into the following risk classes: prohibited AI systems, high-risk AI systems, and limited-risk AI systems.

Prohibited AI Systems

Article 5 lists prohibited AI practices; applications of such AI systems will not be allowed. These include:

  • Cognitive behavioral manipulation of individuals or specific vulnerable groups, such as voice-activated toys encouraging dangerous behavior in children.

  • Social scoring, which involves classifying people based on their behavior, socio-economic status, or personal characteristics.

  • Real-time and remote biometric identification systems like facial recognition.[2]

High-Risk AI Systems

According to Article 6, AI systems used in products covered by the EU’s product safety legislation (such as toys, cars, and medical devices) and those listed in Annex III of the AIA are considered high-risk. In this context, high-risk AI systems include:

  • Biometric identification and classification of individuals.

  • Management and operation of critical infrastructure.

  • Education and vocational training.

  • Employment, personnel management, and access to self-employment.

  • Access to and enjoyment of essential private services and public services and benefits.

  • Law enforcement.

  • Migration, asylum, and border control management.

  • Support in legal interpretation and application of the law.

Limited-Risk AI Systems

For AI systems that generate or modify content such as images, sound, or video, the AIA includes certain transparency obligations. For example, there are requirements such as informing individuals when they are interacting with an AI system.

Obligations of Providers, Users, Importers, and Distributors of High-Risk AI Systems

The Regulation establishes various obligations for providers, users, importers, and distributors of high-risk AI systems. It is crucial to emphasize that the AIA will apply not only to AI systems placed on the market by entities established in the EU but also to AI systems developed in third countries and placed on the EU market. Additionally, if the outputs of these systems are used in EU countries, the AIA will apply to AI systems, and products embedded with AI, located in third countries. Therefore, providers in Türkiye, for example, who place such AI systems on the EU market, or entities in the EU importing these systems from Türkiye, will also need to meet the obligations stipulated in the AIA. These obligations involve technical and legal assessments, requiring the collaborative efforts of different work teams.

In June 2023, the European Parliament published its compromise amendments to the draft of the AIA[3], and discussions with the European Council are ongoing. It is anticipated that an agreement will be reached by the end of this year or the beginning of the next. Companies located in Türkiye that put, or are preparing to put, AI systems and/or products embedded with AI into service on the EU market will need to comply with the AIA. In this context, as KECO Legal, we will continue to keep you informed about developments related to the AIA.


[2] Such systems are allowed in several exceptional cases, for instance "post" (retrospective) remote biometric identification systems, but only after obtaining judicial approval and for the prosecution of certain crimes.

[3] See the European Parliament's compromise amendments text on the draft of the AIA:

