After the political agreement on the AI Act was announced with much media attention in December 2023, the now provisionally final version was adopted on 13 March 2024. The AI Act was approved by the European Parliament with an overwhelming majority of 523 votes to 46. All that now remains is for the legal and linguistic experts to review the text and for the Council to formally adopt the Regulation. This is expected to happen before the end of the current legislative term (i.e. by July 2024).
There is no doubt that manufacturers of AI systems will have to comply with the provisions of the AI Act and will therefore certainly be keeping a close eye on this European Regulation. However, companies that "only" use AI should do the same. In the following, we have compiled ten practical questions that companies should ask themselves if they are using or planning to use AI.
Most of the provisions of the AI Act deal with prohibited and high-risk AI systems and the resulting obligations for providers, as well as for importers and distributors of such AI systems. However, this does not mean that users of an AI system can now sit back and relax. On the contrary: users (referred to as "deployers" in the AI Act) of AI systems are also covered by the AI Act and must comply with extensive obligations.
The AI Act applies not only to companies based in the EU, but also to providers and deployers based outside the EU, provided that the output generated by the AI systems is used in the EU.
First, the AI Act excludes from its scope the use of AI by natural persons for purely personal, non-professional purposes. AI systems that are developed and used exclusively for scientific research and development are also excluded from the scope of application.
The provision that the AI Act does not apply to certain AI released under free and open-source licenses may become significant in the future. The AI Act provides for another significant exception for the use of AI systems for military, defense and national security purposes. In addition, Member States have the option to provide for further exceptions in specific areas. For example, they may introduce additional legal and administrative provisions on the use of AI systems by employers in order to strengthen the protection of employees.
AI systems that pose an unacceptable risk are completely prohibited under Art. 5 AI Act. This includes AI systems in the following eight areas:
The rules on so-called high-risk AI systems form the core of the AI Act. In principle, AI systems are classified as high-risk if they pose a significant risk to the health, safety or fundamental rights of natural persons.
A high-risk AI system exists if an AI system is used as a safety component for a product that falls under the EU harmonisation legislation listed in Annex I or is itself such a product. This includes, for example, machinery, toys and medical devices.
An AI system is also considered to be a high-risk AI system if it falls into one of the following areas of Annex III:
However, the AI Act provides for an important exception: AI systems falling within the aforementioned Annex III categories can be exempted from classification as high-risk AI systems under certain conditions. The prerequisite is that they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Examples include AI systems that are intended to perform a narrow procedural task. The same applies if the AI system is used to improve the result of an activity previously carried out by humans. The assessment of whether such an exemption applies must be carried out by the company itself as part of a risk evaluation and documented accordingly.
Companies that use high-risk AI systems as deployers must fulfill a comprehensive catalog of obligations. These include the following, for example:
The fundamental rights impact assessment for high-risk AI systems, which was originally to be required of all deployers, is now only provided for in the current text of the Regulation for state institutions and private companies performing public services, as well as for high-risk AI systems used for credit assessments or for risk-based pricing of life and health insurance (Art. 27 AI Act).
Under certain conditions, deployers may themselves become providers of a high-risk AI system and are then subject to the stricter provider obligations, such as establishing a risk management system, carrying out a conformity assessment procedure and registering in an EU database. This change of role occurs if the deployer places a high-risk AI system on the market or puts it into service under its own name or trademark, or if it makes a substantial modification to a high-risk AI system.
While the comprehensive list of obligations outlined above applies to deployers of high-risk AI systems, deployers of low-risk AI systems are generally only subject to certain transparency obligations (Art. 50 AI Act). For example, they must disclose if image, video or audio content constituting a deep fake has been artificially generated or manipulated by an AI. The same obligation applies when an AI generates or manipulates text that is published with the purpose of informing the public on matters of public interest.
The declared aim of the AI Act is to create an innovation-friendly regulatory framework. Accordingly, the legislator has introduced regulatory relief for micro, small and medium-sized enterprises (SMEs) - including start-ups - based in the EU. For example, SMEs can benefit from non-material and financial support. In addition, under certain conditions, SMEs are to be given priority and free access to so-called regulatory sandboxes. Finally, fines for SMEs can be capped.
Exact dates cannot yet be given, as the final text of the AI Act needs to be published in the Official Journal of the EU before it can enter into force. The prohibitions on certain AI systems will take effect six months after the Regulation comes into force. The majority of the provisions of the AI Act will apply 24 months after entry into force. However, the obligations stipulated for high-risk AI systems will only apply after 36 months.
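Purely for illustration, these staggered transition periods can be expressed as simple date arithmetic. The following sketch assumes a hypothetical entry-into-force date; the binding dates will only follow from publication in the Official Journal.

```python
# Illustrative sketch only: the entry-into-force date below is a hypothetical
# placeholder; the binding dates follow from publication in the Official Journal.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole months to a date (day-of-month clamped so the result stays valid)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

entry_into_force = date(2024, 8, 1)  # hypothetical assumption

milestones = {
    "Prohibitions on certain AI practices": add_months(entry_into_force, 6),
    "Majority of the provisions": add_months(entry_into_force, 24),
    "Obligations for high-risk AI systems": add_months(entry_into_force, 36),
}
for rule, applies_from in milestones.items():
    print(f"{rule}: applies from {applies_from.isoformat()}")
```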
Non-compliance with the requirements of the AI Act can result in exorbitant fines. These vary depending on the violation and the size of the company. Violations of the prohibitions on certain AI practices can result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher, while other violations of obligations under the AI Act can result in fines of up to EUR 15 million or 3% of global annual turnover. Fines of up to EUR 7.5 million or 1% of global annual turnover may be imposed for the provision of false information.
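As a back-of-the-envelope illustration of how these ceilings interact, the following sketch computes the maximum possible fine for one violation category. It assumes that the higher of the fixed amount and the turnover-based amount applies, and that for SMEs the lower of the two applies (the cap mentioned above); the function and its parameters are hypothetical simplifications, not a legal calculation.

```python
# Illustrative sketch only - not a legal calculation.
# Assumption: for large companies the higher of the two amounts applies,
# for SMEs the lower of the two (the cap mentioned above).

def max_fine(annual_turnover_eur: float, flat_cap_eur: float,
             turnover_share: float, is_sme: bool = False) -> float:
    """Return the maximum possible fine for one violation category."""
    turnover_based = annual_turnover_eur * turnover_share
    return min(flat_cap_eur, turnover_based) if is_sme else max(flat_cap_eur, turnover_based)

# Example: violation of the prohibitions (up to EUR 35 million or 7% of turnover)
# for a company with EUR 1 billion in global annual turnover.
print(max_fine(1_000_000_000, 35_000_000, 0.07))        # 70000000.0
print(max_fine(1_000_000_000, 35_000_000, 0.07, True))  # 35000000.0
```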
Several national and EU-wide authorities are involved in enforcement, resulting in a complex structure of responsibilities and coordination procedures. In Germany, it is not yet clear which authority will ensure compliance with the requirements of the AI Act. The Federal Network Agency and the Federal Office for Information Security are being discussed.
First of all, each company should determine the risk class to which the AI systems it uses belong. The requirements for their proper use are then derived from this classification. Especially for future projects, it is important to involve the departments responsible for AI within the company at an early stage in order to ensure sufficient testing and compliance with the rules. This is highly recommended, not least in view of the high fines.
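As a rough sketch of what such an initial classification might look like internally, the decision logic described above could be captured as follows. The function and its inputs are hypothetical simplifications; the actual assessment must be made against Art. 5, Annexes I and III and Art. 50 AI Act and documented accordingly.

```python
# Illustrative sketch of a simplified internal risk-class triage.
# The inputs are hypothetical simplifications of the criteria discussed above.
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH_RISK = "high-risk (Annex I / Annex III)"
    TRANSPARENCY = "limited risk - transparency obligations (Art. 50)"
    MINIMAL = "minimal risk - no specific obligations"

def classify(uses_prohibited_practice: bool,
             safety_component_under_annex_i: bool,
             annex_iii_use_case: bool,
             annex_iii_exemption_documented: bool,
             generates_content_or_interacts_with_persons: bool) -> RiskClass:
    if uses_prohibited_practice:
        return RiskClass.PROHIBITED
    if safety_component_under_annex_i or (annex_iii_use_case and not annex_iii_exemption_documented):
        return RiskClass.HIGH_RISK
    if generates_content_or_interacts_with_persons:
        return RiskClass.TRANSPARENCY
    return RiskClass.MINIMAL

# Example: an Annex III use case for which no documented exemption exists.
print(classify(False, False, True, False, True))  # RiskClass.HIGH_RISK
```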
Another article on this topic can be found via this link.