
Subliminal Techniques and Manipulation (Article 5a)

(…) using subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques (…)

Example: subliminal advertising on social networks:

Imagine an app that uses AI to personalize advertising. The AI could analyze user data (tastes, preferences, behavioral patterns) to identify psychological vulnerabilities. It could then subtly insert subliminal messages into ads to influence the user’s subconscious, creating a positive association with a specific product or service, or generating a sense of need, urgency or scarcity. It could even exploit the fear of missing out (FOMO) to increase the likelihood that the user makes an impulse purchase or takes an impulsive action.
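To make the mechanism concrete, here is a minimal, purely hypothetical Python sketch of an ad selector keyed to an inferred vulnerability. Every signal name, weight and threshold is invented for illustration; it is the pattern, not the arithmetic, that matters.

```python
# Hypothetical sketch only: an ad selector that targets an inferred
# susceptibility to FOMO. All signal names, weights and thresholds are
# invented for illustration.

def pick_ad_variant(signals: dict[str, float]) -> str:
    # Estimate FOMO susceptibility from behavioral data the user never
    # consciously disclosed (signal values assumed normalized to 0..1).
    fomo = (0.6 * signals.get("late_night_scrolling", 0.0)
            + 0.4 * signals.get("flash_sale_clicks", 0.0))
    if fomo > 0.5:
        # Urgency/scarcity framing aimed at the detected vulnerability.
        return "Only 3 left! Offer ends in 10 minutes."
    return "New arrivals this season."  # neutral framing

print(pick_ad_variant({"late_night_scrolling": 0.9, "flash_sale_clicks": 0.7}))
```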

Exploitation of Vulnerabilities (Article 5b)

(…) that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation (…)

Example: discrimination in credit granting:

Imagine an AI system used by a bank to analyze the data of credit applicants, including their credit history, income level and neighborhood. The system relies on biased algorithms that discriminate against low-income people or ethnic minorities, denying them credit or offering less favorable terms. This would perpetuate economic inequality and limit opportunities for vulnerable people, which runs counter to European values.
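A minimal sketch of how such bias can hide in an apparently neutral scoring rule, assuming invented weights, thresholds and postal codes: the postal code acts as a proxy for protected characteristics.

```python
# Hypothetical sketch only: a credit-scoring rule that encodes bias through
# a proxy feature (postal code). All names, weights and thresholds are
# invented for illustration; no real bank system is being described.

def credit_score(income: float, credit_history: float, postal_code: str) -> float:
    """Return a score in [0, 1]; credit_history is a 0..1 repayment record."""
    base = 0.5 * min(income / 50_000, 1.0) + 0.4 * credit_history
    # The biased rule: certain postal codes, acting as a proxy for ethnic or
    # economic background, are penalized regardless of actual finances.
    penalty = 0.3 if postal_code in {"28053", "28041"} else 0.0
    return max(base - penalty, 0.0)

APPROVAL_THRESHOLD = 0.6

# Two applicants with identical finances, different neighborhoods.
for postal_code in ("28001", "28053"):
    score = credit_score(income=40_000, credit_history=0.9, postal_code=postal_code)
    print(postal_code, round(score, 2),
          "approved" if score >= APPROVAL_THRESHOLD else "denied")
```

Run it and the two identical applicants diverge: 0.76 (approved) versus 0.46 (denied), purely because of where they live.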

Citizen Scoring and Ranking (Article 5c)

(…) to evaluate or classify people (…) with the resulting citizen score leading to one or both of the following situations:

(i) detrimental or unfavourable treatment of certain persons (…) in social contexts that are unrelated to the contexts in which the data were originally generated (…),

Example: behavior-based insurance:

Insurance companies are starting to base rates not only on driving history, but also on analysis of social media, exercise apps (what you eat and how you exercise) or dating apps. A healthy person with a good diet and social life could get better rates than someone who uploads pictures of unhealthy food to social networks or shows little physical activity. This new “pricing” would classify citizens as healthy or unhealthy, which can create privacy issues and discrimination based on non-objective information.
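A minimal sketch of such behavior-based pricing, with invented field names and multipliers: data generated in unrelated contexts (fitness apps, social networks) changes the price despite an identical driving record.

```python
# Hypothetical sketch only: an insurance premium adjusted with lifestyle
# data taken from contexts unrelated to driving. Field names and
# multipliers are invented for illustration.

BASE_PREMIUM = 600.0  # euros per year

def adjusted_premium(driving_claims: int, gym_sessions_per_week: float,
                     junk_food_posts_per_month: int) -> float:
    multiplier = 1.0 + 0.15 * driving_claims           # in-context signal
    multiplier -= 0.02 * min(gym_sessions_per_week, 5)  # fitness-app data
    multiplier += 0.03 * min(junk_food_posts_per_month, 10)  # social-media inference
    return round(BASE_PREMIUM * multiplier, 2)

# Same spotless driving record, different online behavior -> different price.
print(adjusted_premium(driving_claims=0, gym_sessions_per_week=4,
                       junk_food_posts_per_month=0))  # 552.0
print(adjusted_premium(driving_claims=0, gym_sessions_per_week=0,
                       junk_food_posts_per_month=8))  # 744.0
```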

(ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity;

Example: employee evaluation:

A company uses AI to monitor employee behavior (social media activity, emails, office conversations) and assigns them a “productivity” or “loyalty” score. Employees with low scores are fired, demoted or excluded from promotion opportunities, even if their job performance is adequate. This system creates an environment of distrust and can penalize employees for activities outside the work environment.
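A minimal sketch of such a score, with invented weights and signals, showing how off-duty behavior can outweigh actual job performance:

```python
# Hypothetical sketch only: a "loyalty" score in which off-duty signals
# outweigh actual job performance. Field names and weights are invented.

def loyalty_score(performance: float, off_duty_posts: int,
                  flagged_chat_minutes: int) -> float:
    score = 0.5 * performance             # actual job performance, 0..1
    score -= 0.05 * off_duty_posts        # social-media activity outside work
    score -= 0.01 * flagged_chat_minutes  # monitored office conversations
    return round(score, 2)

# Two equally competent employees; only their monitored behavior differs.
print(loyalty_score(performance=0.9, off_duty_posts=6, flagged_chat_minutes=20))  # -0.05
print(loyalty_score(performance=0.9, off_duty_posts=0, flagged_chat_minutes=0))   # 0.45
```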

Predictive Risk Assessment (Article 5d)

(…) AI systems to make risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics. (…)

Example: recidivism prediction system (pure Minority Report style)

An AI system analyzes personal data of ex-convicts (age, family history, neighborhood, etc.) to predict the likelihood that they will commit new crimes. High “risk” individuals are subjected to police surveillance, release restrictions or mandatory “rehabilitation” programs, even if they have not committed any subsequent crimes. This system perpetuates discrimination and stigmatization of ex-convicts, based on profiles rather than concrete facts.
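A minimal sketch of this pattern, with invented traits and weights: note that every input is a profiling attribute, and none describes anything the person has actually done.

```python
# Hypothetical sketch only: a recidivism "risk" score computed solely from
# profiling attributes, with no reference to any actual conduct. This is
# the pattern Article 5(d) targets; traits and weights are invented.

PROFILE_WEIGHTS = {
    "age_under_25": 0.30,
    "unemployed": 0.25,
    "high_crime_neighborhood": 0.35,
    "prior_family_convictions": 0.10,
}

def risk_score(profile: dict[str, bool]) -> float:
    # Sum the weights of matching traits; none of the inputs is a fact
    # about the person's own behavior.
    return sum(w for trait, w in PROFILE_WEIGHTS.items() if profile.get(trait))

person = {"age_under_25": True, "high_crime_neighborhood": True}
score = risk_score(person)
print(f"risk = {score:.2f} ->",
      "flag for surveillance" if score > 0.5 else "no action")
```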

In this context, how can you prepare for the AI Regulation?

Any organization that is going to develop, support or use AI should consider the following aspects before doing so:

  • Risk identification and assessment (a minimal register sketch follows this list).
  • Implementation of compliance and mitigation measures to address the identified risks.
  • Training and awareness for staff who will use or develop AI, based on the European ethical guidelines, on guidance developed internally by the organization, and on the risks identified.
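As a starting point for the first item above, here is a minimal sketch of an internal AI register that tags each use case with an AI Act risk tier. The four-level scheme follows the Regulation; the use cases and the mapping are illustrative assumptions, and each real system needs its own legal analysis.

```python
# Hypothetical sketch only: a minimal internal register of AI use cases
# tagged with AI Act risk tiers. The tier names follow the Regulation's
# four-level scheme; the systems and the mapping are invented examples.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH = "high risk (Annex III)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"

AI_REGISTER = {
    "social-scoring engine": RiskTier.PROHIBITED,
    "credit-scoring model": RiskTier.HIGH,
    "customer-support chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in AI_REGISTER.items():
    print(f"{system}: {tier.value}")
```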

What comes next? The sanctioning regime takes effect

Attention: the key dates are here! Although some RIA provisions already took effect on February 2, mark your calendar for August 2. As of that day, the authorities will be able to start sanctioning companies that do not comply with the RIA.

Member States also have duties. They must follow the guidelines issued by the European Commission and prepare the ground so that by 2026 all rules, penalties and fines are correctly applied.

In Spain, the Spanish Artificial Intelligence Supervisory Agency (AESIA) is in charge of overseeing all of this; it will monitor that AI systems comply with the regulation.

So let’s be good! At The Lighthouse Team we have a team of consultants with expertise in new technologies ready to hit the ground running.

If you have any questions, please contact us at https://thelighthouse.team/en/contact/

Mélany Resa

Lawyer admitted to the ICAM bar, specializing in IP, Competition and New Technologies. An expert in contract law, with extensive experience managing IP portfolios and leading mergers and acquisitions. I am defined by approachable communication, strategic thinking and the ability to streamline legal processes in commercial and corporate law.
