
EU AI Regulation 2024/1689: Key Legal Impacts

October 11, 2024

On June 13, 2024, the European Parliament and the Council adopted Regulation (EU) 2024/1689.

What is the purpose of Regulation (EU) 2024/1689? The aim of this Regulation is to improve the functioning of the internal market by establishing a uniform legal framework for the development and use of artificial intelligence (AI) systems within the European Union, in line with the Union’s fundamental values. It promotes the adoption of trustworthy and human-centered AI, ensuring the protection of health, safety, and fundamental rights, as well as democracy, the rule of law, and the environment. The Regulation includes harmonized rules regarding the market introduction and use of AI systems, prohibits certain high-risk practices, regulates high-risk systems, and supports innovation and start-ups.

To understand the provisions of the Regulation, it is essential to familiarize ourselves with key concepts such as:

"AI system" – a machine-based system capable of operating autonomously, adapting post-deployment, and generating predictions, recommendations, decisions, or content based on received data that may influence physical or virtual environments;

"Provider" – any natural person, legal entity, or organization that develops or commissions the development of an AI system and introduces it to the market or operates it under their own name or brand, whether for a fee or free of charge;

"Deployer" – any natural person, legal entity, or organization that uses an AI system under their authority, except for personal use without professional purposes.

To whom does it apply? Regulation (EU) 2024/1689 applies to a wide range of entities involved in the development, marketing, and use of artificial intelligence (AI) systems in the European Union. It targets:

  • providers who place AI systems on the market or put them into service in the Union, including general-purpose AI models, regardless of whether they are established in the Union or in a third country;
  • deployers of AI systems established in the Union, as well as those located outside the Union where the output produced by the AI system is used in the Union;
  • importers, distributors, manufacturers of products incorporating AI, and authorized representatives of providers from outside the Union;
  • individuals affected who are located within the Union.

To whom does it not apply? Regulation (EU) 2024/1689 does not apply to AI systems used exclusively for scientific research and development, nor to research, testing, or development activities carried out before market introduction. It also does not apply to individuals using AI for strictly personal, non-professional purposes.

How will these entities ensure the implementation of the regulation? To ensure the effective implementation of Regulation (EU) 2024/1689 by the subjects mentioned above, the Regulation highlights the importance of AI literacy. AI literacy is crucial for the responsible use of these technologies. Providers and deployers must ensure that personnel and users of AI systems possess adequate knowledge and skills, tailored to their level of training and the context in which AI is used. Consideration should also be given to the individuals and groups affected by the use of these systems.

Why was this Regulation necessary? While artificial intelligence (AI) brings numerous benefits, it can also be misused, offering powerful means for manipulation, exploitation, and social control. These practices are extremely harmful and conflict with the fundamental values of the European Union, such as human dignity, freedom, equality, democracy, and the rule of law, as well as fundamental rights, including non-discrimination, data protection, privacy, and children's rights. In particular, the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement is prohibited, except in clearly defined situations where it is strictly necessary for a major public interest that outweighs the associated risks. For this reason, comprehensive regulation of AI system use is required to prevent these abuses and ensure the protection of citizens' rights.

What practices are prohibited in the field of AI? Prohibited AI practices are those deemed dangerous and abusive, conflicting with the fundamental values and rights protected by the European Union. These include:

  • Manipulation and exploitation: Using AI systems to psychologically manipulate or exploit people’s vulnerabilities, potentially affecting their free will or behavior;
  • Surveillance and social control: AI systems used for large-scale biometric surveillance in public spaces or social control, outside of specific, strictly regulated public interest cases;
  • Automatic classification based on sensitive criteria: Using AI to classify individuals based on characteristics such as race, gender, sexual orientation, or political affiliation, in ways that may lead to discrimination;
  • Social scoring: AI systems used to evaluate or score individuals’ behavior based on state-imposed criteria, affecting access to rights and services based on such scores.

In this context, Regulation (EU) 2024/1689 includes the classification of high-risk AI systems, their requirements, risk management systems, the importance of human oversight, and the obligations of providers and deployers of high-risk AI systems.

How do we combat risks and prohibited practices in AI? To combat the risks and prohibited practices mentioned above, Regulation (EU) 2024/1689 provides for codes of good practice and measures in support of innovation, designed to encourage the ethical and responsible use of AI while promoting safe technological development.

Codes of good practice establish voluntary guidelines for AI developers and users, promoting adherence to European values, the protection of fundamental rights, and avoidance of the risks associated with AI system use. These codes encourage transparency, safety, and accountability in AI development and implementation.

Innovation support measures include facilities and incentives for small and medium-sized enterprises and start-ups, supporting research and development of innovative AI solutions. These measures aim to reduce barriers to market entry, promoting responsible innovation and compliance with regulations without hindering technological progress. One example is the establishment of AI regulatory sandboxes: member states must ensure that, by August 2, 2026, their competent authorities establish at least one AI regulatory sandbox at national level. These sandboxes may also be established jointly with other member states, and the Commission may provide technical support and advice for their creation and operation.

How is the monitoring of the Regulation’s provisions carried out? After the introduction of AI systems to the market, continuous monitoring mechanisms are established to ensure their compliance and safety:

  • AI providers and deployers are required to monitor system performance, report incidents, and implement corrective measures if necessary;
  • Information sharing between competent authorities in member states and the European Commission is essential for effective market surveillance. This includes sharing data on incidents, risks, and best practices to prevent abusive or risky AI use;
  • The activity of national competent authorities, which verify AI systems’ compliance with regulatory requirements and apply sanctions in cases of non-compliance, helps maintain a high level of protection for citizens and promotes responsible AI use within the European Union.

Lawyers' role in implementing Regulation (EU) 2024/1689. The Regulation provides a uniform legal framework for the use and development of artificial intelligence (AI) in the European Union, promoting ethical, human-centered AI that respects fundamental rights, safety, and human dignity. Lawyers will play a crucial role in ensuring compliance with these regulations, advising companies and AI providers on legal obligations, associated risks, and the protection of users' rights. The Regulation will also influence other jurisdictions globally, setting strict standards for preventing AI-related abuses and promoting responsible innovation. In the long term, there will be increased market oversight and provider responsibility, along with a strengthening of protection mechanisms for users and consumers.

 
