ICQ - Data Protection & Information Security

Developments in AI Regulation

 

Last Updated: 10 September 2021

Ian Duffy, Associate at Arthur Cox, discusses relevant developments in AI regulation and explains why organisations need to start considering the significant regulatory requirements around the development, sale and use of AI, and assessing how those future requirements might impact their business.

On 21 April 2021, the European Commission (the “Commission”) published its proposal for a Regulation on Artificial Intelligence (the “AI Regulation”). The proposal is the result of several years of work by the Commission including the publication of a “White Paper on Artificial Intelligence”.

1. European Commission’s Legislative Proposals in Respect of AI

The AI Regulation proposes to introduce a comprehensive regulatory framework for artificial intelligence (“AI”) in the EU. The aim is to establish a legal framework that provides the certainty necessary to facilitate innovation and investment in AI, while also safeguarding fundamental rights and ensuring that AI applications are used safely. The main provisions of the AI Regulation are the introduction of:

(a) Binding rules for AI systems that apply to providers, users, importers, and distributors of AI systems in the EU;

(b) A list of certain prohibited AI systems;

(c) Extensive compliance obligations for high-risk AI systems; and

(d) Fines of up to EUR 30 million or up to 6% of annual turnover, whichever is higher.

The Commission proposes a risk-based approach, under which the level of risk presented by an AI system determines the compliance requirements that apply. The risk categories are: (i) unacceptable risk (these AI systems are prohibited); (ii) high risk; (iii) limited risk; and (iv) minimal risk.

2. The scope of the AI Regulation

2.1 Application to Providers and Users

The AI Regulation proposes a broad regulatory scope, covering all aspects of the lifecycle of the development, sale and use of AI systems. The AI Regulation will apply to:

(a) providers that place AI systems on the market or put AI systems into service, regardless of whether those providers are established in the EU or in a third country;  

(b) users of AI systems in the EU; and  

(c) providers and users of AI systems that are located in a third country where the output produced by the system is used in the EU. 

Therefore, the AI Regulation will apply to actors both inside and outside the EU as long as the AI system is placed on the market in the EU or its use affects people located in the EU.

2.2 Definition of AI System

The AI Regulation defines “AI systems” broadly as software that is developed with machine learning, logic- and knowledge-based, or statistical approaches, and that “can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. Examples of commonly used tools that might fall under the definition of “AI systems” include certain autocorrect functions, email-monitoring tools (e.g. spam filters) and information search tools. The level of regulation that might apply to such tools under the AI Regulation will depend on the categorisation of the tools by reference to the level of risk that they present. This categorisation is considered further below.

3. Prohibited AI Systems

The AI Regulation lists a number of AI systems which the Commission believes pose an unacceptable risk because they contravene EU values and violate fundamental rights, and which are therefore explicitly prohibited. These AI systems include:

(a) AI systems that deploy subliminal techniques to exploit the vulnerabilities of a specific group of persons to materially distort the behaviour of a person belonging to the group in a manner that causes physical or psychological harm.

(b) The use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons based on their social behaviour or characteristics where the social score generated leads to the detrimental or unfavourable treatment of certain groups of persons.

(c) AI systems used for real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement, unless strictly necessary for a targeted crime search or the prevention of substantial threats. This prohibition has likely been introduced to address concerns raised by both the European Parliament and the Commission in 2020 in connection with a facial recognition app developed by Clearview AI, which allows clients such as US law enforcement authorities to match photos of unknown people to images of them found online.

4. High-Risk AI Systems

The AI Regulation contains specific requirements for so-called “high-risk” AI systems.

4.1 The definition of a high-risk AI system

The term “high-risk AI” is not defined, but Articles 6 and 7 of the AI Regulation indicate the criteria used to determine whether a system should be considered high risk.

(a) Article 6 refers to AI systems intended to be used as a safety component of products (or which are themselves a product). This includes products or components that are covered by existing EU product safety legislation that are listed in Annex II to the AI Regulation.

(b) Article 7 refers to stand-alone AI systems whose use may have an impact on the fundamental rights of natural persons. These systems are listed in Annex III and include, for example, real-time and “post” remote biometric identification systems, and systems used in education and vocational training, employment, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes.

4.2 General requirements applicable to high-risk AI systems

The AI Regulation imposes the following general requirements on high-risk AI systems:

(a) Transparency: High-risk AI systems must be designed and developed to ensure that the system is sufficiently transparent to enable users to interpret its output and use it appropriately;

(b) Human oversight: High-risk AI systems must be designed and developed in such a way that there is human oversight of the system, aimed at minimising risks to health, safety and fundamental rights;

(c) Risk management system: A risk management system must be established and maintained throughout the lifetime of the system to identify and analyse risks and adopt suitable risk management measures;

(d) Training and testing: Data sets used to support training, validation and testing must be subject to appropriate data governance and management practices and must be relevant, representative, accurate and complete;

(e) Technical documentation: Complete technical documentation that demonstrates compliance with the AI Regulation must be in place before the AI system is placed on the market and must be maintained throughout the lifecycle of the system; and

(f) Security: A high level of accuracy, robustness and security must consistently be ensured throughout the lifecycle of the high-risk AI system.

4.3 Requirements applicable to providers of high-risk AI

The AI Regulation imposes the following specific requirements on the provider of a high-risk AI system:

(a) Compliance: Ensure compliance with the requirements for high-risk AI systems (outlined above);

(b) Conformity assessment: Ensure the system undergoes the relevant conformity assessment procedure (prior to placing the system on the market/putting the system into service);

(c) Corrective action and notification: Immediately take corrective action to address any suspected non-conformity and notify the relevant authorities of such non-conformity;

(d) Quality management system: Implement a quality management system, including a strategy for regulatory compliance, and procedures for design, testing, validation, data management, and record keeping;

(e) Registration: Register the AI system in the AI database before placing a high-risk AI system on the market; and

(f) Post-market monitoring: Implement and maintain a post-market monitoring system, by collecting and analysing data about the performance of the high-risk AI system throughout the system’s lifetime.

4.4 Requirements applicable to users of high-risk AI

The AI Regulation imposes more limited but notable obligations on users of high-risk AI systems, including:

(a) use the systems in accordance with the instructions of the provider and implement all technical and organisational measures stipulated by the provider to address the risks of using the high-risk AI system;

(b) ensure all input data is relevant to the intended purpose;

(c) monitor operation of the system and notify the provider about serious incidents and malfunctioning; and

(d) maintain logs automatically generated by the high-risk AI system, where those logs are within the control of the user.

5. All Other AI Systems

Other AI systems which do not qualify as prohibited or high-risk AI systems are not subject to any specific requirements. In order to facilitate the development of “trustworthy AI”, the Commission has stated that providers of “non-high-risk” AI systems should be encouraged to develop codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. For certain AI systems which pose a limited risk, transparency requirements are imposed. For example, AI systems which are intended to interact with natural persons must be designed and developed in such a way that users are informed they are interacting with an AI system, unless it is “obvious from the circumstances and the context of use.”

This transparency obligation would apply in the context of the use of chatbots, for example. All other “minimal risk” AI systems can be developed and used subject to existing legislation without additional legal obligations. The vast majority of AI systems currently used in the EU fall into this category. Providers of those systems may choose to apply the requirements for trustworthy AI on a voluntary basis and adhere to voluntary codes of conduct.

6. Enforcement

(a) European Artificial Intelligence Board (“EAIB”): The AI Regulation provides for the establishment of the EAIB to advise and assist the Commission in connection with the AI Regulation. The EAIB will facilitate effective cooperation between the national supervisory authorities and the Commission, coordinate and contribute to guidance issued by the Commission, and assist the national supervisory authorities and the Commission in ensuring the consistent application of the AI Regulation.

(b) National competent authorities: Member States must designate national competent authorities and a national supervisory authority responsible for providing guidance and advice on the AI Regulation.

(c) Enforcement: Member State authorities are required to conduct market surveillance of AI systems. If an authority believes that an AI system presents a risk to health, safety or fundamental rights, the authority must carry out an evaluation of the AI system and, where necessary, impose corrective action.

(d) Sanctions: Infringement of the AI Regulation is subject to financial sanctions of up to €10m – €30m or 2% – 6% of global annual turnover, whichever is higher. The level of fine depends on the nature of the infringement, with the highest tier reserved for the most serious breaches, such as engaging in prohibited AI practices.

The AI Regulation will be enforced by supervisory authorities and does not provide for a complaint system or direct enforcement rights for individuals. It is unclear whether Member States will appoint data protection supervisory authorities, national standards agencies or other agencies to perform the “competent authority” role. Notably, the AI Regulation does not replicate the “one-stop-shop” system under the GDPR, which may lead to concerns about consistency and cooperation across the 27 Member States.

7. Next Steps

The proposal now goes to the European Parliament and the Council of the European Union for further consideration and debate. Once adopted, the Regulation will come into force 20 days after its publication in the Official Journal. It will apply 24 months after that date, although some provisions may apply sooner.

8. Conclusion

The AI Regulation is being widely heralded as the new “GDPR for AI”, and it certainly represents a comprehensive and bold move by the Commission to lead the way in one of the most rapidly developing areas of technology since the creation of the Internet. The rapid evolution and deployment of AI into Internet of Things devices, vehicles, mobile devices, retail, medical and other spheres creates huge opportunities to advance the state of the art. In recognition of this, the Irish Government recently published its “AI - Here for Good: A National Artificial Intelligence Strategy for Ireland”.

However, AI also has the potential to cause enormous harm to a broad range of human rights, including personal safety, privacy and equality. By introducing a risk-based approach for producers and users of AI systems and giving the rules extra-territorial effect, the EU has provided a welcome legal and ethical framework that, as with the GDPR, may well become the global standard. While this framework may evolve further before the AI Regulation becomes law, it is important for relevant organisations to start considering the significant regulatory requirements around the development, sale and use of AI that the AI Regulation is likely to introduce, and to start assessing how these regulatory requirements might impact their business.


Author: Ian Duffy

Senior Associate, Technology & Innovation at Arthur Cox

ICQ Autumn Edition 2021

This article was taken from the ACOI's ICQ Autumn Edition 2021