Regulation of Artificial Intelligence

Agreement on Rules for Reliable Artificial Intelligence Systems in the European Union

Prepared by Jorge Castaño

In today’s digital era, artificial intelligence systems (“AI systems”) have emerged as a transformative force driving innovations across various sectors. While many of us have heard about the risks associated with artificial intelligence, few have managed to concretely identify the underlying threats. The rapid evolution of this technology has led to widespread concerns about job displacement, algorithmic biases, lack of transparency, and other ethical challenges. However, a significant measure has recently been introduced to address these issues: the implementation of specific regulations for artificial intelligence by the European Parliament. These regulations aim to establish ethical guidelines and ensure responsible use of artificial intelligence for the benefit of society.

Therefore, this document briefly explains the provisional agreement reached by the European Parliament and the Council establishing harmonized rules on artificial intelligence (the “Rules”). It is essential to specify that this document is limited to listing the AI systems that the European Parliament has identified as prohibited or high-risk. This is done to provide our clients with information that will likely serve as a reference for legislation worldwide (with the highly probable exception of the United States). This document does not intend to undertake a comprehensive analysis of the Rules, which regulate numerous aspects related to the use of AI systems.

The Rules

Artificial intelligence extends beyond commercial, productive, or economic aspects; it involves a system that behaves similarly to humans. This perspective emphasizes the need to consider not only the practical and economic benefits of AI but also its implications in terms of ethics, human values, and the relationship between technology and humanity. The growing sophistication of AI, particularly in areas such as social interaction and decision-making, raises fundamental questions about the nature of intelligence and consciousness, as well as how we should address and regulate these emerging technologies.

A detailed examination of the Rules immerses us in a lexicon that goes beyond mere technicalities. Terms such as trust, emotions, values, respect for human dignity, freedom, democracy, rule of law, fundamental rights, non-discrimination, data protection, privacy, children’s rights, autonomous behavior of AI systems, manipulation, exploitation, and social control, among others, reflect not only the diversity of aspects to be addressed in the development and implementation of AI but also highlight the profound interconnection between technology and society’s fundamental values. This analysis underscores that the interaction between humans and artificial intelligence goes beyond efficiency and productivity, directly impacting ethical, emotional, social, and cultural aspects that define our coexistence with technology in the contemporary era.

In simple terms, the Rules aim to identify the risks that artificial intelligence poses to the human species and, based on that identification, to regulate its use. To this end, the Rules identify prohibited AI systems and high-risk AI systems. Hence our assertion at the beginning of this document that there is finally a text that sets out, in black and white, the risks posed by artificial intelligence. Certainly, there are risks not covered by the Rules, some of which we may not even be able to imagine, but at least the Rules serve as a starting point.

What is an Artificial Intelligence System (AI System)?

The first thing to establish is what is meant by an AI system. The Rules define AI systems as software developed using one or more machine learning techniques, logic-based strategies, or statistical strategies. For a given set of human-defined objectives, these systems can generate outputs such as content, predictions, recommendations, or decisions that influence the environments with which they interact.

Prohibited Systems

The Rules state that the following are prohibited AI systems:

  • AI systems intended to alter human behavior in a manner likely to cause physical or psychological harm. These systems deploy subliminal techniques imperceptible to humans, or exploit the vulnerabilities of minors or of individuals due to their age or physical or mental incapacity, in order to substantially alter a person’s behavior in a way that harms (or is likely to harm) that person or another. The use of such systems is prohibited.
  • AI systems used by public authorities, or on their behalf, to assign social scores to individuals for general purposes. Such scoring may produce discriminatory results, lead to the exclusion of certain groups, and result in harmful or unfavorable treatment of individuals or entire collectives. Therefore, the use of these systems is prohibited.
  • The use of AI systems for remote “real-time” biometric identification of individuals in public spaces for law enforcement purposes is considered to unduly infringe on the rights and freedoms of the affected individuals. This use may affect the privacy of a significant portion of the population, create a sense of constant surveillance, and indirectly discourage citizens from exercising their freedom of assembly. Consequently, the use of these systems is prohibited. However, these systems may be used in targeted searches for victims of crime, in certain life-threatening situations, or for the prosecution of certain serious crimes.

High-Risk Systems

High-risk AI systems are those that may have significant consequences for the health, safety, and fundamental rights of individuals in the European Union. The Rules therefore aim to define clearly what constitutes a high-risk system, to regulate such systems, and thereby to avoid unnecessary restrictions on international trade.

The Rules consider the following AI systems as high-risk:

  • Systems that function as components of products subject to conformity assessment procedures by an external conformity assessment body.
  • AI systems intended for “deferred” remote biometric identification of individuals, which may result in biased outcomes and discriminatory consequences. Therefore, these systems should be considered high-risk.
  • AI systems intended as components of security in the management and operation of road traffic and the supply of water, gas, heating, and electricity. Their failure or defect can endanger the lives and health of people on a large scale and disrupt the normal course of human activities.
  • AI systems used to determine access to, or the assignment of individuals among, different educational institutions. These are considered high-risk because they can determine a person’s educational and professional trajectory and affect their ability to earn a livelihood.
  • AI systems used in employment, worker management, and access to self-employment, especially for hiring and personnel selection; decision-making regarding promotion and contract termination; and task assignment and monitoring or evaluation of individuals in contractual employment relationships. These are high-risk because they can significantly affect individuals’ future employment prospects and livelihoods or be discriminatory.
  • AI systems used to assess the creditworthiness or solvency of individuals, as they decide whether these individuals can access financial resources or essential services such as housing, electricity, and telecommunications. These are high-risk because they can discriminate against individuals or groups and perpetuate historical patterns of discrimination, for example, based on racial or ethnic origin, disability, age, or sexual orientation, or generate new forms of discriminatory effects.
  • AI systems used by law enforcement authorities for individual risk assessments; as polygraphs or to detect an individual’s emotional state; to detect deepfakes; to assess the reliability of evidence; and for assessments of personality traits, recidivism risk, and profiling. These are considered high-risk.
  • Finally, AI systems for migration management, asylum, border control, justice administration, and democratic processes are considered high-risk.

The prerequisites for the use of a high-risk AI system include:

  • Risk management system.
  • Data quality and data governance.
  • Technical documentation.
  • Records.
  • Transparency and communication of information to users.
  • Human oversight.
  • Accuracy, robustness, and cybersecurity.

What is needed for this proposal to become European Union regulation?

As indicated, the regulation described in this document is a provisional agreement that requires formal adoption by the European Parliament and the Council to become binding regulation in the European Union.

Please do not hesitate to contact Jorge Castaño at jcastano@brickabogados.com if you have any questions or would like further information on the topic discussed above.

This document has been prepared by Brick Abogados especially for its clients, for informational purposes only, and therefore does not constitute legal assistance or advice.