On 21 May 2024, the Council of the European Union (EU) gave the green light to the EU Artificial Intelligence Act (hereinafter “the Act”), which the European Parliament had approved earlier this year, on 13 March, with 523 votes in favour, 46 against, and 49 abstentions. The text of the approved version is publicly accessible. The Act will now come into force 20 days after its publication in the Official Journal of the EU.
The Act will become fully applicable 24 months after its entry into force, though certain provisions will apply, and require compliance, sooner. Given the Act's extra-territorial application, businesses around the globe, including those based in India, that "provide" or "deploy" "AI systems" or "general-purpose AI (GPAI) models" in the EU market will fall within its scope, except where such provision or deployment is carried out for defence, military or national security purposes.
Much like how the EU shaped global privacy rules with the General Data Protection Regulation, which became applicable in 2018, its AI Act is likely to set a global standard for AI trust and safety for businesses and regulators alike. Businesses should therefore proactively use the grace period afforded under the Act to ramp up their compliance measures, so that they can compete effectively and retain their presence in the EU and other domestic and regional markets.
It is important to understand how the Act defines “AI” for the purpose of regulating it, especially given that the term still lacks a universally accepted definition. You can then locate the compliance obligations for your business at the intersection of your role in the AI value chain and the risk level of your AI system.
Step 1: Determine whether what you are providing or deploying in the EU market is AI, and if so, which category under the Act it falls into.
The Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.” Note that this definition aligns with the OECD's latest definition and with the definition used in the White House Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
The Act defines a GPAI model as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market."
The Act also makes a distinction between GPAI models and GPAI systems. Under the Act, a GPAI system is an AI system built on top of a GPAI model; if such a system meets the criteria for “high-risk” AI systems laid down under the Act, the corresponding obligations under Article 16 and Article 26 will apply to your business (refer to Step 3.2 in this section).
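To make Step 1 concrete, here is a minimal Python sketch that encodes the two definitions as a checklist. It is an illustrative simplification, not a legal test: the attribute names (for example, displays_significant_generality) are assumptions introduced for this example, and real classification calls for case-by-case legal analysis.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Illustrative attributes drawn from the Act's definitions in Article 3."""
    machine_based: bool
    operates_with_autonomy: bool
    infers_outputs_from_input: bool      # predictions, content, recommendations, decisions
    displays_significant_generality: bool
    performs_wide_range_of_tasks: bool
    pre_release_research_only: bool      # research/development/prototyping before market release

def is_ai_system(c: Candidate) -> bool:
    # High-level mirror of the Act's AI-system definition.
    return c.machine_based and c.operates_with_autonomy and c.infers_outputs_from_input

def is_gpai_model(c: Candidate) -> bool:
    # High-level mirror of the GPAI-model definition; models used only for
    # pre-release research, development or prototyping are carved out.
    return (c.displays_significant_generality
            and c.performs_wide_range_of_tasks
            and not c.pre_release_research_only)

# Hypothetical example: a chatbot product built on a large foundation model.
chatbot = Candidate(True, True, True, True, True, False)
print(is_ai_system(chatbot), is_gpai_model(chatbot))  # True True
```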
Step 2: Determine what role your business plays in the AI value chain. Amongst the several roles that the Act defines in Article 3, the following two key roles should cover the wide gamut of functions carried out by most businesses operating in the EU market:
- Provider: a natural or legal person, public authority, agency or other body that develops an AI system or GPAI model (or has one developed) and places it on the market, or puts the AI system into service, under its own name or trademark, whether for payment or free of charge.
- Deployer: a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.
Note: Even if your business is established or located outside the EU, the Act's provisions will apply to you as a provider or deployer of an AI system or GPAI model if the output generated by your system or model is used within the EU.
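This territorial scope can likewise be sketched as a simple condition. The following is a simplified, assumed flattening of the Act's scope provision (Article 2), not a substitute for reading it:

```python
def act_applies(places_on_eu_market: bool,
                output_used_in_eu: bool,
                exclusively_defence_military_or_national_security: bool) -> bool:
    # Simplified reading of the Act's reach: it covers providers and deployers
    # established anywhere, so long as the AI system or GPAI model is placed on
    # the EU market or its output is used in the EU, unless the provision or
    # deployment is carried out for defence, military or national-security purposes.
    in_scope = places_on_eu_market or output_used_in_eu
    return in_scope and not exclusively_defence_military_or_national_security

# Hypothetical: an India-based provider whose model's outputs are used in the EU.
print(act_applies(places_on_eu_market=False,
                  output_used_in_eu=True,
                  exclusively_defence_military_or_national_security=False))  # True
```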
Step 3: Determine where your AI system sits on the risk spectrum. The Act lays down four risk levels (unacceptable, high, limited, and minimal), depending on (a) the sensitivity of the data used to develop the AI system and (b) the intended use of the AI system.
AI systems that pose unacceptable risk are prohibited outright. You can find the full descriptions of these prohibitions in Article 5 of the Act.
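As a quick aide-mémoire, the four tiers and the regimes this primer walks through can be summarised in a small mapping. The tier labels and article references follow the Act, but the one-line summaries are this primer's own shorthand:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "Prohibited practices (Article 5)"
    HIGH = "Provider and deployer obligations (Articles 16 and 26)"
    LIMITED = "Transparency obligations (Article 50)"
    MINIMAL = "No additional obligations under the Act"

for tier in RiskTier:
    print(f"{tier.name:>12}: {tier.value}")
```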
If you provide a high-risk AI system in the EU market, the Act subjects you to a detailed set of compliance obligations. You can find the complete list of obligations for providers of high-risk AI systems in Article 16 of the Act.
If you deploy a high-risk AI system in the EU market, the Act requires you to comply with a corresponding set of obligations.
You can find the complete list of obligations for deployers of high-risk AI systems in Article 26 of the Act.
The Act treats GPAI models as a distinct regulatory category. If you provide a GPAI model in the EU market, the Act requires you to comply with a dedicated set of obligations.
You can find the complete list of obligations for providers of GPAI models in Article 53 of the Act.
These obligations will not apply to you if your models are “released under a free and open license that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available”. However, note that this is a limited exception which will not apply if you provide a GPAI model that poses "systemic risk".
If the cumulative amount of compute used for training such a model, measured in floating-point operations (FLOPs), is greater than 10^25, the Act will presume the model to have high-impact capabilities that could pose systemic risk, and will require you to comply with additional obligations.
You can find the complete list of obligations for providers of GPAI models that could pose systemic risk in Article 55 of the Act.
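The systemic-risk trigger and the open-source carve-out discussed above lend themselves to a worked example. The sketch below estimates training compute using the common rule of thumb of roughly 6 FLOPs per parameter per training token for dense transformer models; that estimator is an assumption for illustration only, while the 10^25 FLOPs trigger comes from the Act:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's presumption of high-impact capabilities

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # Rule of thumb for dense transformers: ~6 FLOPs per parameter per token.
    # This estimator is an illustrative assumption, not the Act's methodology.
    return 6.0 * n_params * n_tokens

def poses_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

def open_source_exemption_applies(free_open_licence: bool,
                                  weights_public: bool,
                                  architecture_public: bool,
                                  usage_info_public: bool,
                                  systemic_risk: bool) -> bool:
    # The exemption is lost entirely for GPAI models that pose systemic risk.
    return (free_open_licence and weights_public
            and architecture_public and usage_info_public
            and not systemic_risk)

# Hypothetical example: a 1-trillion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(1e12, 15e12)
print(f"{flops:.1e}", poses_systemic_risk(flops))  # 9.0e+25 True
```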
Additionally, providers of GPAI models established in third countries (outside the EU) must appoint an authorised representative established in the EU before placing their GPAI models on the EU market. See Article 54 of the Act for further information.
Six months after entry into force, the prohibitions on (a) AI systems that pose unacceptable risk and (b) certain uses of GPAI models will come into effect. Twelve months after entry into force, the compliance obligations for GPAI models, including those that pose systemic risk, will come into effect. However, no fines will be imposed for violations of GPAI-related compliance obligations for a further 12 months, creating an additional grace period for your business to prepare for compliance.
The Act will not apply to an AI system retrospectively, except where a substantial modification is made to the system after the Act becomes fully applicable (i.e., 24 months after its entry into force). However, the Act will apply retrospectively to GPAI models 36 months after its entry into force, whether or not they have been substantially modified.
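Because every milestone above is counted from the date of entry into force (itself 20 days after publication in the Official Journal), you can derive your compliance calendar mechanically. The entry-into-force date below is a placeholder assumption; substitute the actual date once the Act is published:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Simple month arithmetic; assumes the resulting day exists (safe for day 1).
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

# Placeholder assumption: substitute the actual entry-into-force date,
# i.e. 20 days after publication in the Official Journal of the EU.
ENTRY_INTO_FORCE = date(2024, 7, 1)

milestones = [
    (6,  "Prohibitions on unacceptable-risk AI practices take effect"),
    (12, "GPAI model obligations (including systemic-risk models) take effect"),
    (24, "Act becomes fully applicable; GPAI fines become enforceable"),
    (36, "Act applies retrospectively to pre-existing GPAI models"),
]

for months, event in milestones:
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {event}")
```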
Providing or deploying AI systems that pose unacceptable risk could attract fines of up to 35 million EUR or up to 7% of your business's annual worldwide turnover, whichever is higher. Non-compliance with the obligations imposed on high-risk AI systems under the Act could attract fines of up to 15 million EUR or up to 3% of annual worldwide turnover, whichever is higher.
Note: If you are a startup or a small or medium-sized enterprise, fines for non-compliance with the Act are capped at the lower of the applicable percentage and fixed amount.
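A short worked example may help. The sketch below applies the two-limb cap, taking the higher of the fixed amount and the turnover percentage for large undertakings and the lower of the two for startups and SMEs; the turnover figures are hypothetical:

```python
def fine_cap_eur(annual_worldwide_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_pct: float,
                 is_sme: bool) -> float:
    """Maximum administrative fine under the Act's two-limb cap."""
    pct_cap = annual_worldwide_turnover_eur * turnover_pct
    # Large undertakings: whichever is higher; startups/SMEs: whichever is lower.
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Hypothetical examples for the prohibited-practices tier (35 million EUR / 7%).
print(fine_cap_eur(2_000_000_000, 35_000_000, 0.07, is_sme=False))  # 140000000.0
print(fine_cap_eur(20_000_000, 35_000_000, 0.07, is_sme=True))      # 1400000.0
```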
* The author wishes to thank Vibhav Mithal, Associate Partner, Anand and Anand and Fellow, ForHumanity, for his inputs.
* This primer is for informational purposes only; its contents should not be construed as legal advice.