The AI Act - The Agreement and What It Means

As Ursula von der Leyen, President of the European Commission, put it: “This is a historic moment.” On 8 December 2023, after a three-day negotiating marathon, the European regulatory efforts were rewarded with a preliminary political agreement on the first comprehensive European act on artificial intelligence: the AI Act. It is not, however, the first regulation on AI worldwide, as even the EU itself claims from time to time. With an executive order in the United States and an act on automated decision-making in China, corresponding regulations already exist in other parts of the world.

The AI Act aims to ensure that only AI systems that are safe and that respect the fundamental rights and values of the EU are placed on the European market and used in the EU. Still, the outcome of the negotiations was announced even though there is not yet a consolidated text to present. The political agreement will be turned into a final text over the coming months; to this end, a number of technical negotiation meetings have been scheduled through the end of February. As some of the information currently available varies widely, we will have to wait until the final text is published in the first quarter of 2024. The overview below presents the most significant provisions known so far.

Definition of an AI System

Compared with the European Parliament's last proposal, the new AI Act will include an amended definition of AI systems that is close to the OECD's definition. The OECD's definition is as follows:

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

This definition is criticised above all for being extremely broad: it even covers simple autocorrect or Excel functions, for example. In this respect, we must await the final wording of the Act, including its recitals, which, as we know, are often used to fine-tune certain terms.

Risk-based Approach

The Regulation, which is based on a proposal by the EU Commission in April 2021 and is part of the EU's digital strategy, follows a risk-based approach. It categorises AI systems into different groups depending on the risk they pose: from minimal risk to high-risk and even banned AI systems.

According to this approach, six principles apply to all AI systems. AI systems should (1) enable human agency and oversight, (2) be technically robust and safe, (3) comply with privacy and data protection regulations, (4) be transparent, (5) ensure diversity, non-discrimination and fairness and (6) ensure social and environmental well-being.

Banned AI Systems

AI systems involving an unacceptable risk will be banned in the EU. These include, for example, systems using manipulative techniques, systems that exploit weaknesses or vulnerabilities, social scoring and facial recognition databases built through untargeted mass scraping of facial images.

Until the very end there was an intense debate as to whether AI for facial recognition should be permitted. While individual countries, such as France, supported the use of AI for facial recognition, arguing that it could help ensure security around major events such as the 2024 Olympic Games, most remained sceptical. Despite a tough battle over several days of negotiations, a complete ban on real-time biometric identification could not be achieved in the face of massive resistance from the member states. After lengthy discussions, the bans on emotion recognition and remote biometric identification were adjusted.

The ban on AI systems for emotion recognition now only applies in the workplace and in education. It will, however, still be permissible to use such systems for medical or safety reasons, for example to monitor pilot fatigue.

The regulations on remote biometric identification have also been amended. Both 'real-time' and 'post' identification will remain banned. Exceptions are expected for the prosecution of criminal offences in clearly defined cases. The AI Act also includes a catalogue of protective measures to prevent potential misuse.

High-risk Systems

A large part of AI systems will be categorised as high-risk AI systems. These include, for example, AI-based medical devices and autonomous vehicles. In general, education, critical infrastructure, migration, asylum and border control are considered high-risk areas.

High-risk AI systems will have to comply with a number of requirements and obligations to be approved in the EU. For example, conformity assessments must be carried out and a quality and risk-mitigation system must be put in place. High-risk AI systems will also have to be registered in the corresponding EU database. The requirement to carry out a fundamental rights impact assessment remains unchanged, although it will now only apply to public organisations and to private bodies that provide essential public services, such as hospitals or banks. Further changes have been made to the responsibilities and roles of the various players.

Further, a filter system has been introduced. An AI system can lose its classification as a high-risk system if it fulfils one of four conditions: if it (1) is intended to monitor or improve a human activity, (2) is only used to recognise decision-making patterns or deviations from previous decision-making patterns, (3) is only used to carry out preparatory tasks for a human activity relevant to critical applications, or (4) is only intended to carry out procedural tasks.

AI Systems with Limited Risk

Specific transparency obligations will apply to AI systems with limited risk. Above all, these will include the obligation to disclose that certain content has been generated by AI. The aim is to ensure that users can make informed decisions about its further use.

Generative AI

Unsurprisingly, there will also be some changes in the field of generative AI, as these provisions were the most controversial. While the European Parliament, not least in view of ChatGPT, supported regulation of generative AI systems, the Commission had recently been of the opinion that these AI systems did not require regulation; it considered voluntary commitments by manufacturers to be sufficient.

These AI systems are now called 'general purpose AI' instead of 'foundation models'. The definition of general purpose AI was amended so that it now only includes large generative AI systems.

“‘General purpose AI model’ means an AI model, including when trained with a large amount of data using self-supervision at scale, that is capable of competently performing a wide range of distinct tasks regardless of the way the model is released on the market.”

Developers of general purpose AI models will have to comply with certain minimum requirements, such as creating technical documentation, providing information to downstream providers and disclosing information about training and testing procedures. They must also comply with copyright regulations, and generated content must be labelled with a watermark.

Large AI systems posing systemic risks, so-called 'systemic risk AI', i.e. top-tier models that exceed a certain computing power (10²⁵ FLOPs) during training, must fulfil additional obligations. These include, for example, setting up a risk-mitigation system and maintaining an appropriate level of cyber security. Details have not yet been finalised. Still, it is assumed that OpenAI's GPT-4 and Google's Gemini will be considered systems with systemic risk. Models of the European developers Aleph Alpha and Mistral, on the other hand, will most likely not be categorised as AI models with systemic risk based on their computing power. Honi soit qui mal y pense.

Linking transparency requirements to computing power is heavily criticised, as computing power alone says little about the risks of an AI system. In order to be able to make adjustments as technology develops, the Commission will be able to adjust the current threshold and define additional criteria.
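To illustrate how such a compute threshold plays out in practice, here is a minimal sketch that estimates training compute using the commonly cited rule of thumb of roughly 6 FLOPs per model parameter per training token. This heuristic and the model figures below are illustrative assumptions for the purpose of this sketch, not figures from the AI Act or from any provider.

```python
# Rough estimate of total training compute against the 10^25 FLOP
# threshold mentioned in the political agreement, using the common
# approximation: FLOPs ≈ 6 × parameters × training tokens.
# All model figures below are hypothetical, for illustration only.

THRESHOLD_FLOPS = 1e25  # systemic-risk compute threshold

def training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs (6·N·D heuristic)."""
    return 6 * n_parameters * n_tokens

# Hypothetical examples: (name, parameters, training tokens)
models = [
    ("large frontier model", 1.0e12, 10.0e12),  # 1T params, 10T tokens
    ("mid-size model",       70e9,    2.0e12),  # 70B params, 2T tokens
    ("small model",          7e9,     1.0e12),  # 7B params, 1T tokens
]

for name, params, tokens in models:
    flops = training_flops(params, tokens)
    status = "above" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status} the 1e25 threshold")
```

Under this heuristic, only very large models trained on very large datasets cross the 10²⁵ FLOP line, which suggests the threshold is meant to capture only a handful of frontier systems.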

Open Source Models

The previous proposal excluded models based on open source licences from the AI Act. According to the recent agreement, only those open source general purpose AI systems that are categorised as systemic risk AI systems are to fall within the scope of the AI Act. Nor are open source models exempt from the requirements for high-risk AI systems. This is to be welcomed, as whether or not an AI model is released under an open source licence says little about the risk it poses.

Copyright Requirements

According to the regulation, AI developers will have to publish a copyright policy and a detailed summary of the content they have used to train their general purpose AI models. This transparency requirement is intended to enable authors to determine whether their works have been used. It has not yet been specified what exactly 'detailed' means. If the information provided so far is to be believed, the text and data mining limitation is explicitly recognised for generative AI. This had been controversial. The background: interference with copyright exploitation rights is only permitted if it is covered by a statutory limitation. The limitations for text and data mining, i.e. the automated analysis of works to obtain information about patterns, trends and correlations, are relevant to the reproduction that takes place when AI is trained. Under the text and data mining limitation, the reservations of rights declared by rights holders must in particular be observed.

Penalties

The penalties under the AI Act have been amended again but remain graduated in proportion to the seriousness of the infringement. For example, a breach of the ban on certain systems or non-compliance with data requirements may be penalised with a fine of up to 7% of the company's global annual turnover or EUR 35 million.

Enforcement and Authorities

An AI Office is (already) being set up within the European Commission to enforce the rules on general purpose AI systems. All other AI systems will be monitored by the competent national authorities. In order to ensure the uniform application of the legislation, these authorities will meet regularly in a European Artificial Intelligence Board.

Right to Lodge a Complaint

Another new addition is the possibility for natural and legal persons to lodge a complaint with the competent national authority about non-compliance with the requirements of the AI Act.

Entry into Force of the AI Act

In principle, the majority of the AI Act's catalogue of obligations will apply 24 months after its entry into force. However, the current draft provides for certain obligations to apply earlier. For example, the ban on certain systems will take effect just six months after the Act comes into force, which means it is expected to apply in the course of 2024. The requirements for general purpose AI systems will apply just 12 months after the AI Act comes into force.

It is therefore highly advisable to review the provisions of the AI Act at an early stage, not least because adapting existing systems may well take some time.

Next Steps

As mentioned at the beginning of this post, the AI Act is not yet final, and it may still be a while before it is. Although a political agreement on the key points has been reached, the technical aspects of the legal text still have to be negotiated in detail over the next few weeks, and many of the details will only be settled in that process. Finally, the EU bodies must approve the final text of the regulation. As it is a regulation, it will apply directly in all Member States and does not need to be transposed into national law.

Conclusion

With the AI Act, the EU intends to strike what it calls an 'extremely delicate balance' between boosting innovation and the uptake of AI in Europe on the one hand and respecting the fundamental rights of EU citizens on the other. However, the final document, which runs to more than 250 pages, comes across as a bureaucratic nightmare, imposing high documentation requirements on many companies. Due to the vagueness of many provisions, there will be a range of grey areas that could lead to uncertainty and, in the worst case, to companies considering avoiding the use of AI until the competent authorities have established a uniform application practice. Nonetheless, the EU's attempt to address this major contemporary issue and to take account of dynamic developments by continuously adapting the provisions, as explicitly envisaged, is to be welcomed in principle.

Dr Peggy Müller

TAGS

AI Act, Artificial Intelligence, KI, Künstliche Intelligenz

Contact us

Dr Peggy Müller
T +49 69 756095-582
E Peggy.Mueller@advant-beiten.com