Is it AI under EU Law?

Navigate the EU AI Act with confidence

Determine whether your system qualifies as an AI system and understand your compliance obligations under the European AI Act and French regulations.

Understanding AI Regulation in Europe

The EU AI Act

The world's first comprehensive AI law (Regulation (EU) 2024/1689), in force since August 1, 2024, with obligations phasing in through 2026. It establishes harmonized rules for AI systems across the EU using a risk-based approach. The regulation applies to providers placing AI systems on the EU market and to deployers (the Act's term for users) located within the EU.

GDPR & AI

The General Data Protection Regulation (Regulation (EU) 2016/679) continues to apply to AI systems that process personal data. Such systems must comply with both GDPR requirements (lawful basis, data minimization, transparency) and AI Act obligations.

GDPR Guide by CNIL

French Implementation

France must designate its national competent authorities by August 2, 2025. The CNIL (the French data protection authority) provides specific guidance on AI and personal data. Depending on the infringement, administrative fines under the Act range from €7.5M (or 1% of worldwide annual turnover) to €35M (or 7%).

CNIL AI Guide

What is an AI System under EU Law?

Understanding the Legal Definition

Article 3(1) EU AI Act

According to Article 3(1) of the AI Act, an 'AI system' is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

Key Elements of the Definition

Machine-based System

The system operates on machines (computers, servers, embedded systems). This excludes purely biological intelligence but includes software running on hardware.

Varying Levels of Autonomy

The system can operate with different degrees of independence from human control, from minimal automation to fully autonomous operation.

Potential Adaptiveness

The system may change its behavior after deployment based on experience, though not all AI systems need to be adaptive to qualify.

Inference Capability

The system draws conclusions or makes deductions from input data to generate outputs, distinguishing it from traditional deterministic software.

Specific Output Types

The system must generate predictions, content, recommendations, or decisions that can influence physical or virtual environments.
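
Read together, the definition works as a five-part test. The sketch below encodes the elements above as a checklist; the class and function names are illustrative, and a real qualification analysis is a legal judgment, not a boolean.

    from dataclasses import dataclass

    @dataclass
    class DefinitionCheck:
        machine_based: bool           # runs on computers, servers, or embedded hardware
        some_autonomy: bool           # any degree of independence from human control
        infers_from_input: bool       # derives outputs from inputs rather than replaying fixed rules
        regulated_output: bool        # produces predictions, content, recommendations, or decisions
        influences_environment: bool  # outputs can affect physical or virtual environments

    def may_qualify_as_ai_system(check: DefinitionCheck) -> bool:
        # Adaptiveness after deployment is deliberately absent: Article 3(1)
        # says a system "may" exhibit it, so it is not a required element.
        return all([
            check.machine_based,
            check.some_autonomy,
            check.infers_from_input,
            check.regulated_output,
            check.influences_environment,
        ])

    # Example: a fraud-detection model trained on transaction data
    fraud_model = DefinitionCheck(True, True, True, True, True)
    print(may_qualify_as_ai_system(fraud_model))  # True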

✓ Qualifies as AI System

  • Machine learning models that classify or predict
  • Recommendation systems (Netflix, Amazon)
  • Chatbots and virtual assistants
  • Computer vision systems for object recognition
  • Fraud detection algorithms
  • Automated decision-making systems
  • Generative AI (text, image, code generation)

✗ Does NOT qualify as AI System

  • Traditional deterministic software
  • Simple rule-based systems without inference
  • Basic calculators or spreadsheet formulas
  • Static databases or file storage systems
  • Pre-programmed industrial machines
  • Traditional statistical software
  • Simple automation scripts
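
The dividing line between the two lists is the inference element. The sketch below contrasts a pre-programmed rule (no inference, outside the definition) with a model that learns its decision logic from data; the dataset and threshold are invented for illustration.

    # Requires scikit-learn; the toy data and threshold are illustrative only.
    from sklearn.linear_model import LogisticRegression

    def rule_based_fraud_flag(amount_eur: float) -> bool:
        # Deterministic, human-authored rule: no inference, no learning.
        return amount_eur > 10_000

    # A learned classifier: the decision logic is inferred from training examples.
    X_train = [[50.0], [120.0], [9_500.0], [15_000.0], [22_000.0], [80.0]]
    y_train = [0, 0, 0, 1, 1, 0]  # 1 = flagged as fraudulent in this toy dataset
    model = LogisticRegression().fit(X_train, y_train)

    print(rule_based_fraud_flag(12_000.0))   # True, by the fixed rule
    print(model.predict([[12_000.0]])[0])    # output inferred from data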

Borderline Cases Requiring Analysis

Some systems may be difficult to classify and require detailed technical assessment:

  • Advanced rule-based systems with complex logic
  • Statistical analysis tools with predictive elements
  • Optimization algorithms in logistics or scheduling
  • Simulation software with adaptive parameters
  • Expert systems with inference engines

Key Compliance Dates

February 2, 2025

Prohibited Systems Ban

Article 5 prohibitions apply: social scoring, emotion recognition in workplaces and schools, and biometric categorization based on sensitive attributes, among other banned practices

August 2, 2025

GPAI Rules Apply

General-purpose AI model obligations become effective

August 2, 2026

Full Application

Requirements for Annex III high-risk systems fully applicable (rules for high-risk AI embedded in regulated products follow on August 2, 2027)
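
Encoded as data, the timeline above can drive a simple compliance check. A minimal sketch; the structure and function name are illustrative.

    from datetime import date

    MILESTONES = {
        date(2025, 2, 2): "Article 5 prohibitions apply",
        date(2025, 8, 2): "General-purpose AI model obligations apply",
        date(2026, 8, 2): "High-risk system requirements fully applicable",
    }

    def obligations_in_force(on: date) -> list[str]:
        # Every milestone whose deadline has passed is already binding.
        return [text for deadline, text in sorted(MILESTONES.items()) if on >= deadline]

    print(obligations_in_force(date(2025, 9, 1)))
    # ['Article 5 prohibitions apply', 'General-purpose AI model obligations apply']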

AI Risk Categories

Prohibited

Description

Systems banned under Article 5 that manipulate human behavior, exploit vulnerabilities, or violate fundamental rights. Includes subliminal techniques, social scoring (by public or private actors), real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions), emotion recognition in workplaces and schools, and biometric categorization based on sensitive attributes.

Examples

Social credit systems, workplace emotion detection, subliminal advertising AI, real-time facial recognition in public

Maximum Penalty

Up to €35M or 7% of global turnover

High Risk

Description

Systems listed in Annex III or embedded in regulated products. Requires conformity assessment, CE marking, registration, human oversight, accuracy testing, robustness measures, and documentation. Covers critical infrastructure, education, employment, essential services, law enforcement, migration, and democratic processes.

Examples

CV screening systems, credit scoring, medical diagnostic AI, autonomous vehicles, critical infrastructure management

Maximum Penalty

Up to €15M or 3% of global turnover

Limited Risk

Description

AI systems that interact directly with humans must inform users they are interacting with AI (Article 50). Includes chatbots, deepfakes, emotion recognition systems, and biometric categorization systems not otherwise prohibited or high-risk.

Examples

Customer service chatbots, content generation tools, image/video manipulation software, voice assistants

Maximum Penalty

Up to €15M or 3% of global turnover (Article 50 transparency breaches fall under the Article 99(4) tier; the €7.5M/1% tier covers supplying incorrect information to authorities)
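
In practice, the Article 50 duty for a chatbot can be as simple as a disclosure shown before the conversation starts. A minimal sketch; the wording and function name are illustrative, not prescribed by the Act.

    def open_chat_session(user_name: str) -> str:
        # Article 50 requires that people know they are interacting with AI,
        # unless this is obvious from the circumstances.
        disclosure = (
            "You are chatting with an AI assistant, not a human agent. "
            "Ask at any time to be transferred to a person."
        )
        return f"Hello {user_name}. {disclosure}"

    print(open_chat_session("Marie"))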

Minimal Risk

Description

All other AI systems not falling under prohibited, high-risk, or limited risk categories. No specific AI Act obligations but must comply with general EU laws (GDPR, consumer protection, product liability). Companies may voluntarily adopt codes of conduct.

Examples

Spam filters, video game AI, inventory management systems, basic recommendation engines

Maximum Penalty

No AI Act fines; penalties under other EU law (e.g., GDPR) may still apply
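
The four tiers and their ceilings can be summarized as a lookup table. A minimal sketch with figures per Article 99; the enum and field names are illustrative.

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "prohibited"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass(frozen=True)
    class PenaltyCap:
        fixed_eur: int | None       # absolute ceiling in euros; None = no AI Act fine
        turnover_pct: float | None  # ceiling as a % of worldwide annual turnover

    PENALTY_CAPS = {
        RiskTier.PROHIBITED: PenaltyCap(35_000_000, 7.0),
        RiskTier.HIGH:       PenaltyCap(15_000_000, 3.0),
        RiskTier.LIMITED:    PenaltyCap(15_000_000, 3.0),  # Art. 50 breaches, per Art. 99(4)
        RiskTier.MINIMAL:    PenaltyCap(None, None),       # other EU law (e.g., GDPR) still applies
    }

    print(PENALTY_CAPS[RiskTier.HIGH])  # PenaltyCap(fixed_eur=15000000, turnover_pct=3.0)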