How will the new EU regulation on AI affect cybersecurity?

By Alba Huerga
31 Jul 2024

On July 12, 2024, the world’s first general regulation on Artificial Intelligence was published. It is a pioneering regulatory framework created by the European Union that will shape economic and social development in the coming years and have a direct impact on the field of cybersecurity.

Regulation (EU) 2024/1689 of the European Parliament and of the Council establishes a set of harmonized rules on Artificial Intelligence to ensure the safe and ethical development and use of this emerging technology.

This regulation is framed within the European Commission’s European Artificial Intelligence Strategy, a plan that aims to make the EU a world reference for AI and a driving force for the industry.

The main objective of Regulation (EU) 2024/1689 (RIA) is to promote a human-centric approach to AI, making it sustainable, safe, secure, inclusive and trustworthy. Another objective of the regulation is to ensure respect for fundamental rights, democracy and the rule of law.

The document published in the Official Journal of the European Union (OJEU) consists of 180 recitals, 113 articles and 13 annexes focused on ensuring the safe and ethical development and use of AI.

What is the scope of the new European regulation on AI?

The RIA sets out the scope of application of the regulation, and it also establishes the definition of AI and of related concepts such as ‘general-purpose AI models’, ‘AI systems’ and ‘biometric data’.

The scope of the regulation is quite broad and affects companies, public entities, SMEs and e-commerce businesses that use AI both in their internal processes and in their business models.

According to the legal text, the regulation applies to all providers of AI systems or general-purpose AI models that place them on the market or put them into service in European territory, regardless of whether those providers are established or located in the EU or in a third country.


Likewise, all product manufacturers who place on the market or put into service an AI system together with their product and under their own name or trademark will have to comply with the RIA.

In addition, deployers of AI systems that are established or located in the Union will also be subject to the regulation, even when they provide services outside the European area.

Regulation (EU) 2024/1689 (RIA) does not apply where private individuals use AI systems in the course of a purely personal, non-professional activity.

Regulation (EU) 2024/1689 has a risk-based approach

One of the key issues addressed by the new regulatory framework on Artificial Intelligence is the question of risk. Under Regulation (EU) 2024/1689, ‘risk’ is defined as the combination of the probability of an occurrence of harm and the severity of that harm.
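
To make the definition concrete, here is a minimal sketch in Python of how the combination of likelihood and severity is commonly operationalised in risk assessments; the ordinal scales and scoring scheme are our own illustration, not something the regulation prescribes:

```python
# Illustrative scales; the RIA defines risk qualitatively and does not
# prescribe any particular scoring scheme.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "serious": 2, "critical": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Combine the likelihood of a harm and its severity into one score."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

print(risk_score("likely", "critical"))  # 9 -> highest-priority risk
```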

The RIA approaches Artificial Intelligence from the perspective of the risks it may pose to individuals, and tailors the type and content of its obligations to the intensity and scope of those risks. These risks include, for example, manipulation, identity theft, and breaches of data protection, intellectual property rights and privacy, affecting both individuals and legal entities.

Following compliance methodology, the RIA establishes a clear distinction and classification of the different levels of risk relevant to the fields of cybersecurity and digital law.


Unacceptable-risk AI systems

“Unacceptable risk” covers all systems that may pose a direct threat to public safety, privacy or fundamental rights, and which are therefore completely prohibited.

The RIA identifies as ‘unacceptable risk’ subliminal, manipulative or deceptive techniques that operate beyond people’s consciousness, causing them to take decisions they would not otherwise take, as well as the creation of mass facial recognition databases.

The latter practice involves creating or expanding a facial recognition database through the untargeted scraping of facial images from the Internet or from closed-circuit television (CCTV) footage.

Another practice considered an ‘unacceptable risk’ is the exploitation through AI of the vulnerabilities of individuals or groups of individuals. Here, the weaknesses of people who are vulnerable or at risk of social exclusion due to their age, disability, or social or economic situation are exploited in order to distort their behavior in a way that causes, or is likely to cause, them harm.

High-risk AI systems

High-risk AI systems include all those that may have a significant impact on the fundamental rights of individuals. This covers critical infrastructure, education and vocational training, employment, essential public and private services (e.g. healthcare or banking), certain law enforcement systems, migration and border management, and justice and democratic processes (such as influencing elections).

The RIA sets out a number of requirements that all AI systems considered high risk must meet to ensure the security and ethics of their operation. At A2SECURE we have compiled some of the most important ones below; all of them imply changes or adaptation programs for companies in terms of cybersecurity:

  1. Establish a risk management system to identify and analyze known and reasonably foreseeable risks, and to adopt appropriate measures to address them.
  2. Undertake data governance and management actions that include fit-for-purpose practices for training, validation and test data sets, taking into account, among other things, potential biases and contexts of use (see the sketch after this list).
  3. Develop technical documentation before placing a system, application or technology on the market or making it accessible to the public. The documentation must be kept up to date and written in a way that demonstrates that the AI system complies with the requirements of the AI Regulation.
  4. Include human oversight in order to prevent or reduce the risks that may arise in the use of the AI system. One way to achieve this is to equip the AI system with appropriate human-machine interface tools, or to have a Security Operations Center (SOC) monitor and neutralize potential threats.
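
As an illustration of the data governance point above, here is a minimal Python sketch, using hypothetical records and field names of our own, of the kind of representation check that can be a first step toward spotting the biases the requirement targets; a real bias audit would go far beyond a frequency count:

```python
from collections import Counter

def representation_report(records, group_key="group"):
    """Compute each group's share of a data set: a crude first signal
    of the representation biases the data-governance duty targets."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training records carrying a demographic attribute.
training_data = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "A", "label": 1},
    {"group": "B", "label": 0},
]

print(representation_report(training_data))
# {'A': 0.75, 'B': 0.25} -> group B is underrepresented and worth reviewing
```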

Limited-risk AI systems

Limited-risk systems include general-purpose systems such as chatbots. In these cases, the main obligation is to clearly inform users that they are interacting with a machine, so that they can make an informed decision about whether to continue or step away.


Providers will also have to ensure that AI-generated content is identifiable and indicate this visibly on their platforms, digital tools or apps. In parallel, media outlets, magazines and news agencies, whether digital or not, that publish AI-generated texts for informational purposes will have to indicate that the content was generated “artificially”. These precautions also apply to audio and video content that constitutes a deepfake.
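
As a sketch of what this transparency obligation can look like in practice, the following Python example attaches a visible disclosure to AI-generated replies; the class, flag and wording are hypothetical, since the RIA requires visible disclosure but does not prescribe any particular API or form of words:

```python
from dataclasses import dataclass

# Hypothetical wording; the regulation does not prescribe a form of words.
AI_DISCLOSURE = "This content was generated by an AI system."

@dataclass
class ChatbotReply:
    text: str
    ai_generated: bool = True  # machine-readable provenance flag

    def render(self) -> str:
        """Attach the visible disclosure whenever the content is AI-generated."""
        if self.ai_generated:
            return f"{self.text}\n\n[{AI_DISCLOSURE}]"
        return self.text

print(ChatbotReply("Your order has shipped.").render())
```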

Minimal-risk AI systems

Systems categorized as “minimal risk” are not specifically regulated by the RIA. These include all systems that individuals choose to use independently and freely (e.g., AI-enabled video games, ChatGPT or spam filters).

“The RIA contemplates a sequence of risk levels and, depending on the diagnosis in each case, applies a management system for these risks, preserving legal guarantees and the proper functioning of the institutions,” notes one specialized analysis of the regulation.

That’s why it is important to bear in mind that the higher the risk, the greater the obligations, and the greater the need to establish specific protocols with the help of partners or suppliers specialized in AI and legislation.
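
This escalation is easy to picture as a lookup from risk tier to compliance burden. The Python sketch below summarizes the four tiers as this article describes them; the obligation strings paraphrase the text above, not the statute itself:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Paraphrases of this article's summary, not the legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited outright",
    RiskTier.HIGH: "risk management, data governance, technical documentation, human oversight",
    RiskTier.LIMITED: "transparency: disclose machine interaction and label AI-generated content",
    RiskTier.MINIMAL: "no specific obligations under the RIA",
}

def obligations_for(tier: RiskTier) -> str:
    """Look up the compliance burden associated with a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```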


The RIA establishes a phased implementation of the regulatory framework, without prejudice to the transitional obligations set out in Article 112 RIA, “Evaluation and review”.

According to Article 113 RIA, the regulation enters into force 20 days after its publication in the OJEU, i.e. on August 1, 2024, and will become generally applicable two years later, on August 2, 2026.

In Spain, the market surveillance authority in charge of ensuring compliance with this regulation is the Spanish AI Supervisory Agency (AESIA), which is linked to the Ministry for Digital Transformation and Public Function.

 

Would you like to know in more detail how Regulation (EU) 2024/1689 will affect your organization and what steps you must take to comply with it? Contact our Digital Law department, pioneers in AI, and start your adaptation process.
