    09.04.2024

    Artificial intelligence: what is more important than the AI Act?


    The EU recently passed the Artificial Intelligence Act (AI Act) with much fanfare.

     

    The Act is a milestone (see our blog post for more details). It is primarily relevant for providers and deployers of AI systems, especially high-risk ones. However, most of the practical and legal issues associated with the use of AI are not regulated or even addressed in the Act. They remain to be negotiated between the parties.

     

    1. Clear internal rules

     

    Wherever employees have access to the Internet, they tend to at least experiment with AI, in particular to see what ChatGPT, Copilot, Claude, DALL-E, Midjourney and others can do. It has also become widely known that using these tools can create risks for the company; this has already cost some people their jobs. It is therefore all the more surprising that many companies still have no internal guidelines on the correct use of artificial intelligence in the workplace. It is essential to regulate the handling of sensitive information and the use of the results of AI work, but ideally also responsibilities, accountability, and documentation requirements. One thing is certain: these systems will be used. A total ban would be impossible to enforce and would also hamper productivity.

     

    2. Reliable contracts

     

    Many organizations buy AI solutions from third parties or license software that includes AI. This should be governed by contracts that address the specific issues associated with the use of AI, not just by outdated standard IT terms and conditions that are still silent on the subject. Of course, there is nothing wrong with adapting existing standard IT terms and conditions to the many new practical and legal requirements.

     

    Some important challenges:

     

    No licensee or user should simply rely on the legal compliance of generative AI systems (such as ChatGPT). In particular, it is questionable whether the data used for training has been obtained and used legally, especially with regard to data protection, personal rights and third-party copyrights. This does not mean that a company should generally refrain from using such systems. But the distribution of risk must be properly regulated.

     

    Artificial intelligence also sometimes produces undesirable results or exhibits strange behavior. For example, AI-generated work can infringe the rights of third parties. Rights clearance can be much more difficult here than with human-generated work, because the AI does not or cannot disclose which authors' works it has drawn on in the first place (a particular problem: this makes proper attribution of open source code almost impossible, and its use therefore inadmissible).

     

    There have also been reports of chatbots on company websites that have literally fallen flat on their faces, because the chatbot granted customers rights they would not have had under the contract.

     

    Finally, AI also makes mistakes, which can have unexpected consequences. With this in mind, some systems deliberately accept a certain level of error tolerance. If the settings of an AI system, for example for fraud prevention, are so strict that it only approves a transaction if fraud can be ruled out with 100% certainty, it is unlikely to ever approve a transaction. At the same time, a more "tolerant" setting means consciously accepting wrong decisions, which can, for example, invalidate insurance cover that would exist for wrong human decisions.
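    The trade-off described above can be sketched in a few lines of Python. The function, threshold values and scores below are purely illustrative assumptions, not taken from any real fraud-prevention product:

    ```python
    # Illustrative sketch of the error-tolerance trade-off (hypothetical values).
    # A fraud filter approves a transaction only if its estimated fraud
    # probability falls below a chosen threshold.

    def approve(fraud_probability: float, threshold: float) -> bool:
        """Approve the transaction only if the estimated fraud risk is below the threshold."""
        return fraud_probability < threshold

    # Estimated fraud probabilities for a batch of (mostly legitimate) transactions.
    scores = [0.01, 0.02, 0.05, 0.10, 0.40, 0.90]

    # A threshold of 0.0 demands that fraud be ruled out completely;
    # since no model ever outputs exactly zero risk, nothing is approved.
    strict = [s for s in scores if approve(s, 0.0)]

    # A tolerant threshold approves most transactions, but knowingly
    # accepts that some risky ones will slip through.
    tolerant = [s for s in scores if approve(s, 0.5)]

    print(len(strict), len(tolerant))  # prints: 0 5
    ```

    The contractual question is precisely where on this spectrum the threshold may be set, and who bears the consequences of the errors that the chosen setting knowingly accepts.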

     

    In general, the point is this: AI is effective, but it often operates opaquely and will sooner or later produce errors. It is therefore necessary to regulate contractually how this lack of transparency is dealt with, who bears the risk if it is impossible to determine where an error originated, and what level of error probability is still acceptable.

     

    The usual standards of intent and gross negligence found in most standard contracts are of little use here: both parties know that errors will occur. It is therefore necessary to regulate which errors are attributable to which party. This can be done, for example, in provisions on data quality, service levels and indemnity clauses. Of course, there is no boilerplate solution for every use of AI. However, it is important that the issue is considered and regulated appropriately.

     

    It is also important to regulate the extent to which the AI can be 'trained' using the licensee's data, and whether other customers can also benefit from what the AI learns in this way. In the worst case, the data used for training could be disclosed to other customers or their end users of the AI, which could constitute a violation of privacy rights, intellectual property rights or trade secrets. If the licensee's dataset includes personal data, it generally must not be used to train the AI for other customers anyway.

     

    In connection with the AI Act, the European Commission has also presented draft standard contractual clauses for the procurement of AI systems by public authorities (AI SCC). The requirements set out in the AI SCCs are intended to ensure that the contract terms comply with the requirements of the AI Act, with one version of the AI SCCs published for high-risk AI systems and one for non-high-risk AI systems.

    The AI SCCs cannot be used as the sole contractual basis for the use of AI, as many issues relevant to contract law (e.g. liability, intellectual property) are not addressed or are inadequately addressed. Nevertheless, the AI SCCs can provide useful points of reference for negotiating contractual terms, even between private companies.

     

    3. HR software

     

    As mentioned above, the EU's legislation on artificial intelligence will not apply across the board, but will impose specific obligations on providers and deployers of AI systems. However, one area of application deserves special mention: software in the HR sector is often considered a high-risk system, in particular recruitment tools (for the recruitment and selection of candidates or the placement of targeted job advertisements) and personnel management tools. High-risk systems are subject to particularly strict requirements.

     

    Dr Andreas Lober

    Lennart Kriebel

     
