Introduction

On 21st May 2024, the Council of the European Union (EU) gave the green light to the EU Artificial Intelligence Act (hereinafter "the Act"), which the European Parliament had approved earlier this year on 13th March, with 523 votes in favour, 46 against, and 49 abstentions. The Act will come into force 20 days after its publication in the Official Journal of the EU.

The Act will become fully applicable 24 months after its entry into force, though certain provisions will apply and require compliance sooner. Given the Act's extra-territorial application, businesses around the globe, including those based in India, that "provide" or "deploy" "AI systems" or "general-purpose AI (GPAI) models" in the EU market will attract its provisions, except where such provision or deployment is carried out exclusively for military, defence or national security purposes.

Much like how the EU shaped global privacy rules with the General Data Protection Regulation, which became applicable in 2018, its AI Act is likely to set a global standard for AI trust and safety for businesses and regulators alike. Businesses should therefore use the grace period afforded under the Act to ramp up their compliance measures, so that they can compete effectively and retain their presence in the EU and other domestic and regional markets.

Where to Begin

It is important that you understand how the Act defines “AI” for the purpose of regulating it, especially given that the term still lacks a universally accepted definition. You can then locate the compliance obligations for your business at the intersections of your role in the AI value chain and the risk levels of your AI system.

Step 1: Determine whether what you are providing or deploying in the EU market is AI and, if so, which category it falls under in the Act.

The Act defines an AI system as "A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments." Note that this definition aligns with the OECD's latest definition and with the definition used in the White House Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

The Act defines a GPAI model as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market."

The Act also makes a distinction between GPAI models and GPAI systems. Under the Act, a GPAI system is an AI system that is built on top of a GPAI model; therefore, if such a system meets the criteria for "high-risk" AI systems laid down under the Act, the corresponding obligations under Article 16 and Article 26 will apply to your business (refer to Step 3.2 in this section).

Step 2: Determine what role your business plays in the AI value chain. Amongst the several roles that the Act defines in Article 3, the following key roles should cover the wide gamut of functions carried out by most businesses operating in the EU market:

  1. Providers. If your business develops an AI system or a GPAI model and/or supplies it in the EU market for distribution or use (for payment or free of charge), the Act will deem you a “provider” of that AI system. If your business engages in fine-tuning of a pre-existing GPAI model and supplies it in the EU market for distribution or use (for payment or free of charge), the Act will deem you a “provider” of that model.
  2. Deployers. If your business uses an AI system under its authority in the course of a professional activity in the EU market, the Act will deem you a "deployer" of that AI system.

Note: Even if your business is established or located outside of the EU, you as a provider or deployer of an AI system or GPAI model will attract the provisions of the Act if the outputs generated by your system or model are used within the EU.

Step 3: Determine where your AI system sits on the risk spectrum. The Act lays down four risk levels depending on (a) the sensitivity of the data used for developing the AI system and (b) the purported use of the AI system. (An illustrative triage sketch follows the list below.)

  1. Unacceptable risk. The Act prohibits your business from providing or deploying an AI system in the EU market that does any one or more of the following:
    1. manipulates a person's behaviour using subliminal techniques
    2. exploits a person's vulnerabilities
    3. applies social scoring, rating individuals based on their social behaviour or personal characteristics in ways that lead to unjustified or disproportionate detrimental treatment
    4. enables real-time remote biometric identification in publicly accessible spaces for law enforcement (except for narrowly defined purposes with prior authorisation from a judicial or independent administrative authority)
    5. enables emotion recognition in the workplace or educational institutions (except where it is intended for a medical or safety purpose)
    6. enables prediction of recidivism (or repeat offence) in persons based solely on profiling or their personality traits
    7. enables untargeted scraping of facial images from the web or CCTV footage

    You can find the full descriptions of these prohibitions in Article 5 of the Act.

  2. High risk. If you provide an AI system in the EU market that (a) poses a significant risk to the health, safety, or fundamental rights of people, or (b) serves as a safety component in a product already regulated in the EU, the Act considers such a system "high-risk" and requires you to comply with the following obligations:
    1. ensure that the AI system passes the conformity assessment before it enters the EU market
    2. maintain comprehensive technical documentation and risk management throughout the lifecycle of the AI system
    3. implement data governance measures to ensure data quality and mitigation of bias in the AI system
    4. ensure transparency and provide instructions for deployers
    5. disclose the capabilities and limitations of the AI system, along with your contact information and user instructions
    6. ensure post-market monitoring of the AI system, with incident reporting
    7. implement human oversight, accountability, cybersecurity, and other measures to ensure safe and responsible use of the AI system
    8. cooperate with national competent authorities to demonstrate compliance

    You can find the complete list of obligations for providers of high-risk AI systems in Article 16 of the Act.

    If you deploy a high-risk AI system in the EU market, the Act requires you to comply with the following obligations:

    1. follow the user instructions shared by the provider of the AI system
    2. if you control the input data, ensure that such data are suitable for the AI system's intended purpose
    3. disclose the use of the AI system to individuals or groups set to be impacted by the system's outputs
    4. implement human oversight to ensure safe and responsible use of the AI system
    5. in sensitive contexts, provide meaningful explanations for decisions augmented by the AI system's outputs
    6. conduct a fundamental rights impact assessment of the AI system (if you are a public body or a private entity providing public services, or if the system is used for credit scoring or life and health insurance underwriting)

    You can find the complete list of obligations for deployers of high-risk AI systems in Article 26 of the Act.

  3. Limited risk. If you provide or deploy an AI system in the EU market which could potentially manipulate or deceive individuals in the absence of adequate transparency measures, the Act considers such a system to pose "limited risk" and requires you to ensure that users are made aware that they are interacting with a machine. To this end, your business may voluntarily commit to complying with industry codes of conduct.
  4. Minimal risk. If you provide or deploy an AI system that fits into none of the risk categories above, the Act considers such a system to pose "minimal risk" and imposes no mandatory compliance obligations.
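To make the four-tier structure concrete, here is a minimal, illustrative triage sketch in Python. It is not a legal determination tool: the tier names mirror the Act, but the input flags (e.g., uses_prohibited_practice, is_safety_component) and the ordering of checks are simplifying assumptions for illustration only.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk (prohibited, Article 5)"
    HIGH = "high risk (Article 16 / Article 26 obligations)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk (no mandatory obligations)"

def triage(uses_prohibited_practice: bool,
           poses_significant_risk_to_rights: bool,
           is_safety_component: bool,
           could_deceive_without_transparency: bool) -> RiskTier:
    """Illustrative first-pass triage; the flags are hypothetical
    simplifications of the Act's actual legal tests."""
    if uses_prohibited_practice:                      # Article 5 practices
        return RiskTier.UNACCEPTABLE
    if poses_significant_risk_to_rights or is_safety_component:
        return RiskTier.HIGH
    if could_deceive_without_transparency:            # e.g., chatbots
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-service chatbot that is clearly not high risk
print(triage(False, False, False, True).value)  # limited risk (...)
```

In practice, each flag above stands in for a detailed legal test (Article 5, Article 6 and Annex III, Article 50), so the sketch shows only the order in which the tiers are checked, not how each test is satisfied.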

General-Purpose AI (GPAI) Models

The Act treats GPAI models as a specific regulatory category. If you provide a GPAI model in the EU market, the Act requires you to comply with the following obligations:

  1. prepare technical documentation of the model
  2. prepare documentation for third-party users of the model who may build their AI systems on top of it
  3. implement a policy to ensure compliance with EU copyright law
  4. publish summaries of the content used to train the model

You can find the complete list of obligations for providers of GPAI models in Article 53 of the Act.

These obligations will not apply to you if your model is "released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available". However, note that this is a limited exception which will not apply if you provide a GPAI model that poses "systemic risk".

If the cumulative amount of compute used for training such a model, measured in floating-point operations (FLOPs), is greater than 10^25, the Act will consider the model as having high-impact capabilities that could pose systemic risk and require you to comply with the following additional obligations (a rough estimation sketch appears below):

  1. conduct continuous risk management
  2. conduct model evaluations, including adversarial testing
  3. report serious incidents to the AI Office in the EU
  4. ensure adequate cybersecurity measures

You can find the complete list of obligations for providers of GPAI models that could pose systemic risk in Article 55 of the Act.
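For a rough sense of where the 10^25 threshold sits, the sketch below estimates training compute with the widely used 6 × parameters × training-tokens approximation for dense transformer models. That heuristic is an assumption of this sketch, not something the Act prescribes; the Act only sets the FLOPs threshold itself.

```python
# Back-of-the-envelope check against the Act's 10^25 FLOPs threshold.
# The 6 * N * D estimate of training compute is a common community
# heuristic (an assumption here), not the Act's measurement method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption of systemic risk

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    return 6 * n_parameters * n_tokens

def presumed_systemic_risk(flops: float) -> bool:
    return flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(flops)}")
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs, which falls below the 10^25 threshold
```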

Additionally, providers of GPAI models that are established in third countries (outside the EU) must appoint an authorised representative established in the EU before placing their GPAI models in the EU market. See Article 54 of the Act for further information.

Where Compliance is Expected Sooner

Six months after the Act enters into force, the prohibitions on AI practices that pose unacceptable risk will come into effect. Twelve months after entry into force, the compliance obligations for GPAI models, including those that pose systemic risk, will come into effect. However, no fines will be imposed for violations of GPAI-related compliance obligations for a further 12 months, creating an additional grace period for your business to prepare for compliance.
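To see how these milestones line up, here is a small Python sketch that derives the key compliance dates from the entry-into-force date. The date used is a placeholder assumption; substitute the actual date of entry into force (20 days after publication in the Official Journal).

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    return date(year, month, min(d.day, 28))  # clamp the day for safety

# Placeholder assumption: replace with the actual entry-into-force date.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Prohibitions on unacceptable-risk practices apply": add_months(entry_into_force, 6),
    "GPAI model obligations apply": add_months(entry_into_force, 12),
    "Act fully applicable": add_months(entry_into_force, 24),
    "Pre-existing GPAI models must comply": add_months(entry_into_force, 36),
}

for label, deadline in milestones.items():
    print(f"{deadline.isoformat()}: {label}")
```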

Retrospective Application

The Act will not apply retrospectively to an AI system already on the market, except where a substantial modification is made to the system after the Act becomes fully applicable (i.e., 24 months after it comes into force). However, GPAI models already on the market must comply with the Act within 36 months of its coming into force, whether or not they have been substantially modified.

Penalties for Non-Compliance

Providing or deploying AI systems that pose unacceptable risk could attract fines of up to 35 million EUR or up to 7% of your business's annual worldwide turnover, whichever is higher. Non-compliance with obligations imposed on high-risk AI systems under the Act could attract fines of up to 15 million EUR or up to 3% of annual worldwide turnover, whichever is higher.

Note: If you are a startup or a small or medium-sized enterprise (SME), the fine is instead capped at the lower of the fixed amount and the percentage of turnover, as sketched below.
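The two-part cap is easy to misread, so here is a minimal sketch of the calculation. The function name and inputs are illustrative assumptions; the fixed amounts and percentages are those stated above.

```python
def fine_cap(worldwide_turnover_eur: float,
             fixed_cap_eur: float,
             turnover_pct: float,
             is_sme: bool) -> float:
    """Maximum fine under the Act's two-part cap.

    Larger undertakings face the higher of the two figures;
    startups and SMEs face the lower (per the note above).
    """
    pct_cap = worldwide_turnover_eur * turnover_pct
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Unacceptable-risk violation: 35M EUR or 7% of worldwide turnover
print(fine_cap(2_000_000_000, 35e6, 0.07, is_sme=False))  # 140,000,000.0
print(fine_cap(100_000_000, 35e6, 0.07, is_sme=True))     # 7,000,000.0
```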

  • Author: Raj Shekhar, Lead - Responsible AI, nasscom
  • Research Support: Simrandeep Singh, Associate - Responsible AI, nasscom

* The author wishes to thank Vibhav Mithal, Associate Partner, Anand and Anand and Fellow, ForHumanity, for his inputs.

* This primer is for informational purposes only; its contents should not be construed as legal advice.

Connect
nasscom Responsible AI: responsibleai@nasscom.in