This is not a thing of the future. Although the AI Regulation will not be fully applicable until 2026, we have to start preparing now. It is time to adapt our operations and structures to comply with the requirements of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024.
It is true that some rules already regulated certain aspects of AI, but their dispersion made a kind of “harmonization” necessary, to serve as a general regulatory framework.

🤝 What does the AI Regulation regulate?
As with data protection, the Regulation classifies AI systems by risk level, and each risk level carries its own set of obligations. It is a very comprehensive text, as it includes definitions of the most relevant terms.
In short, what it regulates is certain uses of AI that are, or could be, contrary to current EU values.
📝 Who is affected by the AI Regulation?
- AI system providers
- Users of AI systems
- Specific sectors (healthcare, financial, etc.)
In relation to them, it could be said that, depending on their role, they have:
- An active duty (providers and deployers): they must ensure the AI literacy of the people involved.
- An active right (workers, collaborators and other personnel working on behalf of providers and deployers): to receive AI literacy training.
📌 The most important thing so far?
- It establishes a duty of AI literacy for the actors involved in its deployment. The aim is to develop the capabilities, knowledge and understanding of those who provide or interact with AI, and to make society aware not only of its benefits but also of the potential harms it may entail. It seeks to establish a basic framework of AI competencies, and to this end the prior technical knowledge, experience and training of those involved in deployment will have to be taken into account. In addition, the people or groups of people who are expected to use such systems will also have to receive AI literacy training.
- On the other hand, Article 5 lists certain practices that, given their characteristics, are considered prohibited. In general, AI systems may not be placed on the market, put into service or used if they involve, or may involve:
- Manipulative AI: AI systems that use hidden or deceptive techniques to manipulate people’s behavior, leading them to make harmful decisions they would not otherwise make.
- Abuse of vulnerable people: AI systems that take advantage of people’s weaknesses due to age, disability or socioeconomic status in order to manipulate them and cause them harm.
- Citizen scoring: AI systems that evaluate and classify people according to their social behavior, where this leads to unfair or prejudicial treatment in unrelated situations.
- Predictive risk assessment: the use of AI to predict whether someone will commit a crime based solely on their profile or personality.
- Indiscriminate facial recognition databases: the creation of facial recognition databases through the mass extraction of images from the internet or surveillance cameras.
- Inferring emotions in work and educational environments: the use of AI to infer people’s emotions at work or at school, for example.
- Discriminatory biometric categorization: systems that classify people according to their race, political opinions, religion, etc., based on their biometric data.
- Real-time remote biometric identification: strictly regulated; the use of facial identification in public spaces by the police is limited, being allowed only in specific and serious cases.

👀 Conclusion.
Phew, regulation is a dense subject, isn’t it? We have tried to summarize the main points of Article 5 to make them easier to understand, but as you can see, it is complex and technical material. Remember that each point has its exceptions and nuances, so if you want to go deeper, feel free to take a look at our next post, where we will give you more examples and details.
And of course, if you have any questions, we are here to help! Contact us and we will be happy to assist you.
We move forward, as a team 🤝✨ Will you join us?