How AI is being regulated by the new EU AI Act
By Vanessa Van Copenhagen, Partner, Spoor & Fisher | 18 April 2024
The law is a border collie, and it's currently herding something very different to sheep, and far more challenging. It's herding AI systems, in all their magnificent blue node-ness splendour: from their genius applications to their copyright heists, personality violations and questionable ethical leanings. Luckily, the border collie now has a shepherdess, and she has a staff - the European Union, with its AI Act, has at last arrived on the scene. The EU AI Act is, in the EU's own words, "European regulation on artificial intelligence (AI) – the first comprehensive regulation on AI by a major regulator anywhere" (see https://artificialintelligenceact.eu/).
The shepherdess has categorised the AI herd according to how dangerous the various beasts are:
- There are the predator not-sheep who will be banned from the herd, although some are allowed to stay if they are used by law enforcement for serious crimes;
- There are the dangerous not-sheep who will have to be assessed before being allowed to stay, "marked" and controlled carefully;
- There are the not-sheep who might just be wolves in sheep's clothing, so they will have to reveal who they really are by satisfying information and transparency requirements; and
- There are the friendly ones who can graze with minimal interference.
The Predators: Banned
The predators are AI systems that pose an unacceptable risk because they use any of the following prohibited practices:
- subliminal or purposefully manipulative or deceptive techniques to materially distort behaviour;
- exploiting the vulnerabilities of a person or group due to specific characteristics, such as age, disability, or social or economic situation;
- biometric categorisation of people based on sensitive characteristics;
- facial recognition databases based on untargeted scraping;
- inferring emotions in workplaces or educational institutions, except for medical or safety reasons;
- real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except for narrowly defined purposes such as the prevention of human trafficking, sexual exploitation and terrorist attacks; and
- social scoring systems leading to unjustified, disproportionate or contextually unrelated detrimental or unfavourable treatment of natural persons or groups.
The ban takes effect six months after the Act enters into force.
The Dangerous Ones: Assessed, Marked, and Scrutinised
There are two categories of dangerous ones:
- AI intended to be used as a product, or as a safety component of a product, covered by specific EU legislation, such as aviation, cars, marine equipment, toys, medical devices, lifts, pressure equipment and personal protective equipment; and
- AI systems falling into specific areas that will have to be registered in an EU database, such as remote biometric identification systems; AI used as a safety component in critical infrastructure; AI used in education and vocational training (admission, evaluation); AI used in employment, worker management and access to self-employment (recruitment, selection, work allocation, performance evaluation, termination); access to and enjoyment of essential private services and public services and benefits (including credit scoring); law enforcement; migration, asylum and border control; the democratic process; and assistance in legal interpretation and application of the law.
The AI Act places onerous obligations on providers of these dangerous, or "high-risk", AI systems around the implementation of risk management systems, dataset quality criteria, technical documentation, record keeping (logs), transparency in design and development, human oversight, accuracy, robustness, cybersecurity and consistency in performance.
The dangerous ones will be assessed before being put on the market and throughout their lifecycle.
The Ones Who Could Be Wolves: Market Notification and Disclosure Obligations
These are the general-purpose AI models (GPAI). Providers of GPAI models, such as the models underlying ChatGPT, will have to:
- prepare and maintain up-to-date technical documentation and make information and documentation available to downstream providers of AI systems to demonstrate that the model is designed to prevent it from generating illegal content;
- put a policy in place to respect EU copyright law, including the use of state-of-the-art technologies to identify and comply with rights reservations under the lawful text-and-data mining exceptions envisaged in the EU Copyright Directive;
- publish detailed summaries of the content (including copyrighted data) used to train the GPAI models, according to a template provided by the AI Office; and
- if located outside the EU, appoint a representative in the EU.
AI models made accessible under a free and open-source licence will be exempt from some of these obligations (such as disclosure of technical documentation), given that they have, in principle, positive effects on research, innovation and competition.
More stringent rules are imposed on GPAI models with 'high-impact capabilities' that could pose a systemic risk. These models must undergo thorough evaluations and any serious incidents will have to be reported to the European Commission.
Content that is generated or modified with the help of AI - images, audio or video files (for example, deepfakes) - will need to be clearly labelled as AI-generated, so that users are aware when they encounter such content.
These transparency requirements apply from twelve months after the Act enters into force.
The Friendly Ones: Minimal Interference
The rest of the herd are considered friendly if they:
- perform a narrow procedural task;
- improve the result of a previously completed human activity;
- detect decision-making patterns or deviations from prior decision-making patterns, and are not meant to replace or influence a previously completed human assessment without proper human review; or
- perform a preparatory task to an assessment.
The friendly ones in the herd merely have to toe the gentle line of informing users about their AI system and labelling content in a machine-readable way so that it can be identified as artificially generated or manipulated content. Even then, there are exceptions for law enforcement, or when the AI system is used for artistic, satirical, creative or similar purposes.
The Shepherdess and Her Staff
Under the AI Act, each Member State must establish or designate at least one market surveillance authority and at least one notifying authority to ensure the implementation of the AI Act, as “national competent authorities”.
The notifying authority will be responsible for establishing and implementing procedures for the assessment, designation, and notification of conformity assessment bodies and for their monitoring. Such assessment and monitoring may be carried out by accreditation bodies.
Fines for non-compliance are steep: up to €35 million or 7% of the offender's total worldwide annual turnover for the preceding financial year (whichever is higher) for infringements of the prohibited practices or non-compliance with the requirements on data. For an offender with, say, €1 billion in worldwide annual turnover, the 7% cap would allow a fine of up to €70 million.
Do the shepherdess and her staff reach herds that are not in EU pastures?
The AI Act has a wide scope and extends to you or your organisation ("you") if:
- you are a provider of an AI system and are placing that AI system on the market in the EU or putting AI systems into service in the EU, whether or not it is placed on the market or put into service together with a product manufactured by you and bearing your trade mark;
- you are a provider or deployer of an AI system and the output is used in the EU;
- you are located in the EU and deploy an AI system; or
- you are an "importer" or a "distributor" of an AI system.
Under the AI Act, the term:
- "provider" is defined as "a natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge";
- "deployer" is defined as "a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a person's non-professional activity";
- "importer" is defined as "a natural or legal person located or established in the EU that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union"; and
- "distributor" means "any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market".
The EU, with its AI Act, is a front-runner in shepherding AI systems and shaping AI regulation globally. It does this in a way that removes AI systems that are predators, tightly controls those that are dangerous, demands transparency and disclosure from those that are potentially harmful, and allows the rest to perform their splendid applications with gentler disclosure requirements. The age-old scales of justice are again at work: striking a balance between promoting innovation on the one side and stopping it from going rogue on the other.