Prohibited AI Practices Under the EU AI Act: What Is Now Strictly Forbidden
An in-depth explanation of the AI practices classified as unacceptable risk under Article 5 of the EU AI Act, including manipulation, social scoring, biometric surveillance, and predictive policing.
The EU AI Act has established a definitive legal boundary for artificial intelligence development and deployment. Unlike high-risk systems that require strict mitigation, prohibited AI practices are subject to an absolute ban. There is no middle ground, no proportionality test, and no grace period for compliance.
As of February 2, 2025, organizations operating within or targeting the European Union must ensure their systems do not fall into the "unacceptable risk" category. Any system that does is illegal outright.
The Definition of Unacceptable Risk in AI
The European Union's risk hierarchy places certain technologies in a category deemed inherently harmful to fundamental rights, human autonomy, and democratic safeguards. Under Article 5 of the EU AI Act, the following practices are strictly forbidden.
1. Manipulative and Subliminal AI Systems
The ban on manipulative AI is one of the most significant compliance hurdles for modern software. The law prohibits any system that uses subliminal, deceptive, or manipulative techniques to distort a person’s behavior in a way that causes significant harm.
- Covert Influence: AI that steers decision-making without the user's conscious awareness.
- Harmful Nudging: Systems that encourage self-destructive behavior or coercive financial decisions.
- The Threshold: If your AI nudges behavior through methods a user cannot reasonably detect or resist, it is likely a prohibited practice.
2. Exploitation of Vulnerable Groups
The AI Act explicitly forbids systems that leverage specific vulnerabilities to influence behavior. This applies regardless of the developer's intent.
- Targeted Vulnerabilities: Age (children), physical or mental disability, and specific socio-economic circumstances.
- Predatory Logic: Examples include AI-driven encouragement of unsafe behavior in minors or manipulating the economically vulnerable into predatory contracts.
- Legal Reality: Good intentions do not provide a "safe harbor." If the system leverages a structural vulnerability, it is illegal.
3. Social Scoring Systems
The EU has prohibited social scoring to prevent the creation of "digital reputations" that limit human rights. This ban extends beyond government use to include private sector mechanisms.
- Generalized Evaluation: Ranking individuals based on social behavior or inferred personal traits.
- Disproportionate Disadvantage: AI that limits access to employment, housing, or financial services based on unrelated social data.
- Compliance Warning: If your model creates a composite trustworthiness or risk profile across unrelated domains, you are at high risk of violating Article 5.
4. Individual Predictive Policing
AI systems designed to identify individuals as future offenders based solely on profiling or statistical inference are banned.
- Prohibited Criteria: Predicting criminal behavior based on location history, appearance, or behavior patterns rather than verifiable evidence of a specific act.
- Private Sector Impact: This logic also applies to private security, fraud prevention, and insurance risk modeling that relies on probabilistic profiling instead of concrete actions.
5. Biometric Data Scraping and Categorization
Biometric data is a "red line" under the new framework. Many legacy datasets used for training are now considered legal liabilities.
- Illegal Scraping: Creating facial recognition databases by scraping images from the internet, social media, or CCTV footage.
- Sensitive Inference: Systems that categorize individuals based on race, religion, political beliefs, or sexual orientation are prohibited.
- Traceability: If you cannot verify the lawful origin and purpose of your biometric data, your system is non-compliant. A minimal provenance check is sketched below.
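The example assumes a hypothetical dataset manifest and flags records whose origin or legal basis cannot be demonstrated. The field names ("source", "lawful_basis", "consent_record") and the list of scraped sources are illustrative assumptions, not a legal standard.

```python
# Illustrative only: a minimal provenance check over a hypothetical dataset
# manifest. Field names and source categories are assumptions for the sketch.
from typing import Iterable

SCRAPED_SOURCES = {"web_scrape", "social_media", "cctv"}

def flag_untraceable_records(manifest: Iterable[dict]) -> list[dict]:
    """Return manifest entries whose origin or legal basis cannot be shown."""
    flagged = []
    for record in manifest:
        scraped = record.get("source") in SCRAPED_SOURCES
        missing_basis = not record.get("lawful_basis")
        missing_consent = not record.get("consent_record")
        if scraped or missing_basis or missing_consent:
            flagged.append(record)
    return flagged

# Example manifest: the second record would be quarantined for review.
sample = [
    {"id": 1, "source": "consented_upload", "lawful_basis": "consent", "consent_record": "C-102"},
    {"id": 2, "source": "social_media", "lawful_basis": None, "consent_record": None},
]
for entry in flag_untraceable_records(sample):
    print(f"Review before use: record {entry['id']} ({entry['source']})")
```

In practice, a check like this belongs in the ingestion pipeline itself, so that untraceable records are quarantined before training rather than discovered during an audit.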
6. Real-Time Biometric Identification (RBI)
The use of "live" facial recognition in publicly accessible spaces is broadly prohibited, with extremely narrow exceptions for law enforcement regarding terrorism or serious crime.
- Commercial Ban: For private actors, there is effectively no legal path to use real-time biometric identification in public environments.
- Public Spaces: This includes any environment accessible to the public, regardless of whether it is privately or publicly owned.
Enforcement: The Cost of Non-Compliance
Violating Article 5 triggers the most severe penalties within the EU AI Act framework.
| Penalty Type | Maximum Exposure |
|---|---|
| Administrative fine | Up to €35 million |
| Turnover-based fine | Up to 7% of total worldwide annual turnover |
| Market action | Immediate withdrawal and decommissioning |
For undertakings, whichever of the two fine ceilings is higher applies.
Unlike other regulatory hurdles, there is no remediation path for prohibited systems. Once a system is identified as an Article 5 violation, it must be removed from the market immediately.
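Because the two fine ceilings are alternatives, the real exposure for an undertaking is whichever figure is higher, not the €35 million headline. A minimal worked example, assuming turnover is already expressed in euros:

```python
# Maximum fine exposure for an Article 5 violation: the higher of
# EUR 35 million or 7% of total worldwide annual turnover.
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# An undertaking with EUR 2 billion in turnover faces a ceiling of
# EUR 140 million, because the turnover-based cap exceeds EUR 35 million.
print(f"{max_article5_fine(2_000_000_000):,.0f}")  # 140,000,000
```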
Why Modern Organizations Are Exposed
Most Article 5 violations are not the result of malice, but of technical debt and opaque supply chains. Organizations are often exposed through the vectors below; a simple self-assessment sketch follows the list.
- Model Reuse: Utilizing third-party LLMs or agents that contain hidden manipulative logic.
- Legacy Datasets: Training sets containing scraped biometric data that is now illegal.
- Cross-Domain Scoring: Repurposing a risk model from one industry (e.g., retail) for another (e.g., insurance).
- Opaque Personalization: AI "nudging" that crosses the line into subliminal manipulation.
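The sketch below turns these exposure vectors into a rough internal inventory check. It is illustrative only: the record fields and flag names are assumptions, and a triggered flag signals the need for a deeper review, not a legal finding.

```python
# Illustrative self-assessment over an internal AI-system inventory.
# The fields mirror the exposure vectors above; they are assumptions
# for the sketch, not a legal test under the EU AI Act.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    uses_undocumented_third_party_model: bool  # model reuse with hidden logic
    trains_on_scraped_biometrics: bool         # legacy scraped biometric data
    reuses_score_across_domains: bool          # cross-domain scoring
    personalization_is_opaque: bool            # nudging users cannot detect

def article5_exposure(system: AISystemRecord) -> list[str]:
    """Return the exposure vectors that warrant a deeper Article 5 review."""
    checks = {
        "model reuse": system.uses_undocumented_third_party_model,
        "legacy biometric data": system.trains_on_scraped_biometrics,
        "cross-domain scoring": system.reuses_score_across_domains,
        "opaque personalization": system.personalization_is_opaque,
    }
    return [vector for vector, triggered in checks.items() if triggered]

# Example: a retail risk model repurposed for insurance pricing.
pricing_engine = AISystemRecord("pricing-engine", False, False, True, True)
print(article5_exposure(pricing_engine))  # ['cross-domain scoring', 'opaque personalization']
```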
Risk Mitigation with Scalevise
At Scalevise, we view Article 5 not as a hurdle, but as a critical design constraint for sustainable innovation. We help organizations:
- Workflow Audits: Identifying hidden Article 5 exposure in existing tools.
- Data Provenance: Ensuring training sets do not rely on prohibited biometric scraping.
- Architectural Redesign: Re-engineering AI use cases to ensure they remain within the "safe" risk categories.
The smartest move today is not to wait for a regulatory audit. It is to remove legal ambiguity before it becomes a corporate liability.