Field note · Implementation guide · Updated May 2026

The EU AI Act, in practice.

This is the companion to the Visual Guide. The practical "what to ship" page. It assumes you already know what the Act says and gets straight to implementation. Fourteen sectors, mapped to the Articles, with the steps your legal team will not write for you. The orientation strip below is a 30-second self-test. If anything in it is news, you are already behind.

The Act regulates AI by intended purpose and deployment context, not by how clever the model is. Your obligations are decided by what the system does to people, not by how it was built.
Synthesis of Recitals 7, 11, 26 and Article 6 · Regulation (EU) 2024/1689
§ Orientation · The 30-second self-test

Three things to know before anything else.

If any of the three tiles below is news to you, you are behind on the Act. This page assumes you have read Regulation (EU) 2024/1689, or someone on your team has. From here, everything is implementation.

§ Live · already enforceable

Article 5 prohibitions and Article 4 AI literacy have applied since 2 February 2025. GPAI obligations (Arts. 53, 55) since 2 August 2025. No transition. No SME exemption from prohibitions.

§ Deadline · 2 Aug 2026

All Annex III high-risk obligations and Article 50 transparency become enforceable. Fines up to €15M / 3% of global turnover; €35M / 7% for prohibited use. SMEs pay the lower of the fee or the percentage.

§ The role question

You are a provider (you developed it, put your name on it, or substantially modified it; Art. 25) or a deployer (you use someone else's system professionally). Most teams calling an LLM API and shipping it as a product are providers, not deployers.

§ Sectors · 14 implementation guides

How it applies to your work.

Each card states what the Act requires for that sector, the numbered implementation steps, the most common mistake (the "Don't"), the key Articles, and the deadline. Sectors are deliberately broad. Most organisations will recognise themselves in two or three.

1 · Frontier AI model providers
OpenAI · Anthropic · Google DeepMind · Meta · Mistral · xAI · Cohere
GPAI · Systemic risk if >10²⁵ FLOPs

You are the first link in every downstream compliance chain. The Act treats you as a special category in Chapter V, with obligations that flow along the value chain: you must give downstream providers and deployers the documentation they need to comply.

→ What to ship
1
Sign the GPAI Code of Practice or publish a written justification for not signing. Signatories get a rebuttable presumption of conformity. 26 majors had signed by early 2026. Refusing now is a public regulatory signal.
2
Publish your training-data summary using the AI Office template. Categories of data, source types, scale, copyright handling, opt-out compliance. Even partial disclosure under the template beats silence; silence is the audit trigger.
3
Maintain a frontier safety framework with enforceable thresholds. RSP / Preparedness Framework / Frontier Safety Framework. The AI Office expects evaluations against your own published thresholds; shipping a model that should have triggered a pause is the worst scenario.
4
Build a downstream documentation pack as a product. Banks, hospitals, recruiters integrating your model need usage policies, capability cards, evaluation results, restricted-use lists. Ship a developer portal section, not a 200-page PDF.
5
Set up the 15-day incident reporting workflow. Art. 73 requires reports to the AI Office within 15 days of confirming a causal connection. Build the triage / legal review / submission pipeline before you need it.
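The 15-day window in step 5 can be sketched as a small deadline helper. The workflow stages and function names here are illustrative assumptions; only the 15-day figure comes from Art. 73.

```python
# Sketch of the Art. 73 incident-reporting clock: once a causal connection
# between the system and the incident is confirmed, the 15-day window runs.
# Function names are illustrative; the deadline is the Act's.
from datetime import date, timedelta

REPORT_WINDOW = timedelta(days=15)  # Art. 73: report within 15 days of confirmation

def report_deadline(causal_link_confirmed: date) -> date:
    """Last day to submit the report to the AI Office."""
    return causal_link_confirmed + REPORT_WINDOW

def days_remaining(confirmed: date, today: date) -> int:
    """Days left before the Art. 73 deadline (negative means overdue)."""
    return (report_deadline(confirmed) - today).days

confirmed = date(2026, 8, 3)
print(report_deadline(confirmed))                    # 2026-08-18
print(days_remaining(confirmed, date(2026, 8, 10)))  # 8
```

Wiring this into the triage / legal review / submission pipeline before an incident occurs is the point: the clock starts at confirmation, not at detection.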
Don't

Don't claim "no systemic risk" without a paper trail. If your training compute approaches 10²⁵ FLOPs, document the calculation. The Commission can designate models below the threshold as systemic-risk if capabilities warrant; retrofitting that documentation under an open investigation is brutal.
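The paper trail can start with the calculation itself. A minimal sketch, assuming the common ~6 × parameters × training-tokens FLOP approximation for dense transformer training; the function names, margin, and verdict strings are illustrative, not the Act's prescribed method.

```python
# Rough training-compute estimate against the 10^25 FLOP systemic-risk
# presumption threshold. Uses the common 6 * N * D approximation; this is an
# illustrative sketch, not the Commission's designation methodology.

SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold in the Act

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

def systemic_risk_posture(n_params: float, n_tokens: float,
                          margin: float = 0.5) -> str:
    """Classify against the threshold, with a documentation margin below it."""
    flops = training_flops(n_params, n_tokens)
    if flops >= SYSTEMIC_RISK_FLOPS:
        return "over threshold: systemic-risk obligations presumed"
    if flops >= margin * SYSTEMIC_RISK_FLOPS:
        return "near threshold: document the calculation now"
    return "below threshold: keep the calculation on file"

# Example: a 70B-parameter model trained on 15T tokens
print(f"{training_flops(70e9, 15e12):.2e}")   # 6.30e+24
print(systemic_risk_posture(70e9, 15e12))     # near threshold
```

The output is the artefact: a dated record of parameters, token count, and the resulting estimate is exactly the paper trail this "Don't" is about.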

Key · Arts. 53, 55, 56, 88, 101 · Annex XI–XIII
Live since 2 Aug 2025
2 · Building on third-party LLM APIs
SaaS startups · enterprise product teams · agencies wrapping GPT / Claude / Gemini / Llama
Limited risk default · Provider if you put your brand on it

"We're just calling an API" sounds like a deployer position. It almost never is. The moment you ship a product that uses AI to make consequential decisions under your name, Article 25 promotes you to provider with the full obligation set. Provider vs deployer is the single most misclassified question of 2026.

→ What to ship
1
Inventory every AI feature, every embedded AI, every shadow AI. Copilot, the AI inside your CRM, AI meeting summaries, that ChatGPT tab Marketing uses every day. Per the Cloud Security Alliance (March 2026), most enterprises have deployed high-risk systems without realising it.
2
For each system, write down the intended purpose in one sentence. The same chatbot supporting general users is limited-risk. The same chatbot triaging benefit applications is high-risk (Annex III·5(a)). Intended purpose decides classification.
3
Pick your role honestly per system. Internal-only with a third-party model = deployer. Product feature under your brand = provider. Custom fine-tune of a model = provider. The provider designation isn't bad; pretending you're a deployer when you're not is.
4
Renegotiate your model-provider contract. You need pass-through compliance commitments, technical documentation access, incident notification, training-data clarity, data-residency terms. Vendors who won't give you these aren't viable EU partners post-August 2026.
5
Ship Article 50 disclosure UI this quarter. Chatbot banner, "AI-generated" labels on output, deepfake markers. This is the cheapest, most visible compliance work and it ships now, not in August.
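Steps 1–3 above can be captured in a minimal system register. The field names and the simplified classification rules below are assumptions for illustration; Annex III and Art. 25 are the real tests, and edge cases need legal review.

```python
# Minimal AI-system register: inventory each system, record its one-sentence
# intended purpose, and derive a provisional role and risk tier. The rules
# here are deliberately simplified sketches of Art. 25 and Annex III.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    intended_purpose: str   # one sentence; this decides classification
    under_our_brand: bool   # shipped as our product feature?
    fine_tuned_by_us: bool  # custom fine-tune of the base model?
    annex_iii_use: bool     # matches an Annex III high-risk category?

    @property
    def role(self) -> str:
        # Product under your brand, or your own fine-tune: provider (Art. 25)
        if self.under_our_brand or self.fine_tuned_by_us:
            return "provider"
        return "deployer"

    @property
    def risk_tier(self) -> str:
        return "high-risk" if self.annex_iii_use else "limited/minimal"

register = [
    AISystem("support-chatbot", "answers general product questions",
             under_our_brand=True, fine_tuned_by_us=False, annex_iii_use=False),
    AISystem("benefits-triage", "triages benefit applications",
             under_our_brand=True, fine_tuned_by_us=False, annex_iii_use=True),
]
for s in register:
    print(f"{s.name}: {s.role}, {s.risk_tier}")
```

The same chatbot appears twice with different purposes and lands in different tiers, which is the whole point of step 2.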
Don't

Don't outsource your compliance to your vendor's terms of service. Deployer obligations under Art. 26 cannot be assigned away. If your fraud-detection vendor's model is a black box you can't explain, you are still the one telling regulators why a customer was denied service.

Key · Arts. 3, 4, 25, 26, 50 · Annex III
By 2 Aug 2026
3 · HR, recruitment & workforce management
ATS · CV screening · interview AI · performance management · gig allocation
Emotion AI banned · High-risk · Annex III·4

One of the most exposed sectors. The Act explicitly lists employment AI as high-risk in Annex III, and explicitly bans emotion recognition in workplaces. Both employers (deployers) and HR-tech vendors (providers) are in scope.

→ What to ship
1
Audit the entire employee lifecycle. Targeted job ads, CV parsers, interview tools, scheduling AI, performance dashboards, productivity monitors. Most HR stacks have 4–8 AI systems; most HR teams have inventoried zero.
2
Turn off emotion AI today. Video-interview "soft skills" scoring, call-centre voice analytics, webcam engagement tools. The ban has been live since February 2025; any continued use is a documented aggravating factor in unrelated enforcement.
3
Demand vendor documentation. Bias audit results, conformity assessment status, technical documentation (Art. 11), usage logs (Art. 12). Vendors that won't provide these get a termination clause and a six-month migration plan.
4
Notify workers before deployment. Art. 26(7) requires worker notification before a high-risk system goes live. Add it to your works-council consultation and your employee handbook, not after the fact.
5
Designate human reviewers with real override authority. "Human in the loop" only counts if the human can and does meaningfully overturn the AI. Document one case per month where the human ruled against the model. If there are zero, you don't have oversight, you have rubber-stamping.
Don't

Don't re-label emotion AI as "stress detection" or "wellness analytics". The Commission's February 2025 guidelines make clear that commercial relabeling doesn't exit the ban. The test is whether the system infers emotion from biometric data, not what the marketing page calls it.

Key · Arts. 5(1)(f), 9–15, 26(7), 27, 86 · Annex III(4)
Emotion AI: live · Rest: 2 Aug 2026
4 · Banks, lenders & insurers
Credit scoring · AML monitoring · insurance pricing · transaction risk · onboarding
High-risk · Annex III·5

Credit scoring and life/health insurance pricing are named high-risk. So is access to essential public services. Fraud detection is carved out but only when it's solely for fraud, not when the same model also triages access. A mature model-risk-management programme covers 60–70% of what the Act requires; the documentation burden is the gap.

→ What to ship
1
Map every model to a risk tier and a role. Bank-built credit scorecard: provider, high-risk. Bank using a vendor scoring API: deployer of high-risk. Both routes carry obligations; the documentation responsibilities differ sharply. The Harvard Data Science Review piece (Summer 2025) is the best peer-reviewed walk-through.
2
Run Art. 10 data-governance evidence on every training set. "Relevant, representative, free of errors to the best extent possible". Credit data that historically under-represents women or minorities is a documented Art. 10 problem, regardless of current performance.
3
Generate decision rationales at the moment of decision. Affected individuals have a right to explanation under Art. 86 and (separately) under GDPR Art. 22. Retrofit explainability for opaque models has been ruled inadequate in multiple GDPR cases; expect the same here.
4
Build one control matrix for AI Act + DORA + GDPR + CRD/CRR. Risk management, incident reporting, third-party oversight, data governance, recordkeeping: the four regimes share 70%+ of controls. Don't run four parallel programmes.
5
Run a Fundamental Rights Impact Assessment. Art. 27 applies to private deployers providing essential services; retail banks and insurers qualify. The assessment must precede deployment, not follow it.
Don't

Don't lean on the fraud-detection carve-out without scoping it tightly. If your "fraud" model also influences whether a customer keeps their account, gets a card limit increase, or qualifies for a product, it's no longer solely fraud detection. The dual use is what pulls it back into high-risk credit scoring.

Key · Arts. 9, 10, 12, 14, 15, 26, 27, 86 · Annex III(5)
By 2 Aug 2026
5 · Healthcare & medical devices
Diagnostic AI · clinical decision support · imaging · digital therapeutics · LLM clinical reasoning
High-risk · Annex I + III

Healthcare AI sits under two EU frameworks simultaneously: MDR/IVDR and the AI Act. Compliance with one creates no presumption of compliance with the other. Fragmented compliance is now the single largest source of avoidable liability exposure in European healthcare, per the peer-reviewed literature.

→ What to ship (manufacturers)
1
Build a unified technical file satisfying MDR Annex II and AI Act Article 11 in one document, indexed for both notified-body reviewers and AI Office requests. Nature Medicine 2024 and BMJ Health 2025 both describe this as the cheapest single compliance investment available to MedTech.
2
Extend your ISO 13485 QMS with AI-specific risk management per ISO 14971. The harmonised AI standards from CEN-CENELEC will plug in here; build the QMS to accept them when published.
3
Route the AI-specific conformity assessment through the same notified body handling MDR review where possible. Single-notified-body assessment is explicitly contemplated and materially reduces documentation burden.
4
Design system-level explainability into the architecture. Hospitals owe patients meaningful explanations under GDPR Art. 22, and that explanation has to come from your model design, not a post-hoc PDF generated by the hospital.
→ What to ship (hospitals as deployers)
1
Run local clinical validation on your actual patient population. The absence of documented local validation is the most exploited litigation gap in EU medical-AI cases (Nagel / Tactical Management, 2026).
2
Train physicians on each system's known limits. Required by Art. 26, not optional. Document attendance.
3
Set up dual-channel incident reporting to both the manufacturer and the national competent authority. Same event, two reports.
Don't

Don't assume GPAI in clinical reasoning is exempt. An LLM used for clinical decision-making must satisfy both Chapter V GPAI rules and high-risk medical-device requirements. The fact that the underlying model is general-purpose doesn't remove the medical-device classification of your deployment.

Key · Arts. 6, 9–15, 26, 27, 43, 72 · Annex I + III(5)(d)
Stand-alone: 2 Aug 2026 · Embedded: 2 Aug 2027
6 · Chatbots, voice agents & AI-generated content
Customer support · AI companions · voice assistants · ad creative · synthetic media · deepfakes
Limited risk · Art. 50

Article 50 is the rule almost every business will hit. Four duties: chatbot disclosure, synthetic content marking, emotion/biometric notice, deepfake labelling. Fines reach €15M / 3%: not trivial despite this being the "lightest" tier.

→ What to ship
1
Map every AI-facing surface and pick a disclosure pattern for each. Chatbots, voice IVR, AI agents in apps, AI replies in support tooling, AI-generated ad creative. Document the pattern per surface; Art. 50 enforcement will inspect the surface, not the policy.
2
Ship a visible, persistent chatbot disclosure. "You're chatting with an AI assistant; I can connect you to a person any time." Air Canada lost a 2024 case over its chatbot's hallucinated answer; European courts are likely to take a similar line.
3
Adopt C2PA-style provenance + visible deepfake labels. The Article 50 Code of Practice (second draft March 2026; final June 2026) endorses a multi-layered approach: machine-readable metadata and visible labels. Open-source watermarking (Resemble PerTH and similar) demonstrates 98%+ recovery rates; the technical bar is achievable.
4
Build a human escalation path with logged decisions. Legally or commercially sensitive replies must be reviewable by a person. Logs retained at least six months: required by Art. 26(6) for high-risk integrations, and good practice everywhere.
5
Update Terms of Service and pass-through vendor contracts. Customer agreements should reflect AI use; vendor agreements should pass through watermarking and disclosure obligations from your foundation-model provider.
Don't

Don't rely casually on the "obvious from context" exemption. The Act allows skipping disclosure when AI use is obvious to a "reasonably well-informed, observant and circumspect natural person". If your chatbot has a human name, a friendly avatar, and writes in first person, "obvious" is hard to defend. JIPITEC's peer-reviewed Article 50 analysis (2025) makes this point in detail.

Key · Art. 50 (and Art. 5(1)(f) if emotion AI)
By 2 Aug 2026
7 · Marketing, advertising & personalisation
Programmatic ad tech · personalisation engines · generative ad creative · automated bidding · attribution
Mostly limited risk · Three trapdoors

Marketing AI sits mostly outside high-risk territory but three traps pull specific use cases up. Subliminal manipulation (Art. 5(1)(a)) is banned. Exploiting vulnerabilities based on age or socio-economic status (Art. 5(1)(b)) is banned. Personalisation feeding Annex III decisions (e.g. risk-based pricing of insurance, eligibility for services) is high-risk.

→ What to ship
1
Tier your marketing AI by impact on individuals. High (decision-influencing), medium (personalising), low (analytics). Bid optimisation, audience expansion, creative gen, brand safety, attribution: each on its own row of the matrix.
2
Set vulnerability red lines in your media-plan templates. No targeting based on financial distress, addiction recovery, age vulnerability, comparable conditions. Encode the rules where media planners actually work, not only in legal review.
3
Label AI-generated creative. A small "AI-generated" disclosure on the asset, plus C2PA-style provenance metadata. TikTok, Meta and YouTube already require the equivalent. Don't wait for the platform to demand it after launch.
4
Vendor-vet every ad-tech partner. If your DSP retrains models on your data, your data is feeding a provider's high-risk system and the contract needs to address that. Procurement owns this, not just legal.
5
Pre-write an incident playbook for generative-creative failures. When (not if) your generative ad tool produces something harmful, you need a "pause campaign / preserve logs / notify stakeholders" workflow that runs in hours, not weeks.
Don't

Don't deploy deepfake endorsements without explicit labelling and rights. Even satirical AI-generated celebrity content triggers Art. 50(4) disclosure. The proposed Digital Omnibus (May 2026) signals additional restrictions on nudification and identity-impersonation apps; the regulatory direction is one-way.

Key · Arts. 5(1)(a)(b), 26, 50 · DSA + GDPR interplay
Prohibitions: live · Art. 50: 2 Aug 2026
8 · Education & EdTech
Admissions AI · proctoring · adaptive learning · automated grading · vocational assessment
Emotion AI banned · High-risk · Annex III·3

The second sector alongside workplaces where emotion recognition is banned outright. Also a named high-risk domain for admissions, evaluation and assessment. Several major proctoring vendors have already restructured their EU offerings to comply.

→ What to ship
1
Distinguish admissions and assessment AI from learning AI. Annex III·3 is explicit: admissions decisions and learning-outcome evaluation are high-risk. A study-planner that doesn't gate access or grade work is typically minimal risk.
2
Replace facial-emotion proctoring with conduct-based detection. Eye-tracking for "engagement" is dead under EU law. Behaviour-based proctoring (looking-away patterns, multi-person detection) remains possible but is high-risk and demands full Art. 9–15.
3
Inform students before each high-risk AI is deployed. Document the disclosure. Right to explanation under Art. 86 applies; affected students will exercise it.
4
Build meaningful human override into automated grading. Auto-grading cannot be final without a real review path. Art. 14 oversight applies.
Don't

Don't claim a "safety" exemption for general engagement monitoring. The Art. 5(1)(f) carve-out for medical/safety reasons is narrow: pilot fatigue, accessibility tools for special-needs therapy. It does not cover "we want to know if students are paying attention".

Key · Arts. 5(1)(f), 9–15, 26, 86 · Annex III(3)
Emotion AI: live · Rest: 2 Aug 2026
9 · Legal, consulting & professional services
Contract review AI · research copilots · client deliverables · white-labelled AI tools
Mostly limited risk · Provider trap on white-labels

Most professional-services AI is limited risk productivity tools. Two scenarios pull it higher: AI used in the administration of justice (Annex III·8) is high-risk, and consultancies/firms that white-label third-party AI tools become providers under Art. 25.

→ What to ship
1
Adopt a firm-wide AI usage policy: allowed tools, data handling, disclosure to clients, citation discipline for AI research output. Most bar and professional bodies expect this by 2026.
2
Treat engagement letters as AI Act artefacts. Document the AI you'll use, data flows, disclosure terms, and the provider/deployer allocation if you're customising tools for a client.
3
Ship role-based AI literacy training. The Commission's May 2025 Q&A treats contractors and clients as "other persons on whose behalf" the firm operates AI; proportionate training is expected for all of them.
4
If you white-label an AI compliance / research / drafting tool to clients, accept that you are now the provider under Art. 25(1)(a). Full Art. 16 obligations follow. Most firms haven't yet realised this; the white-labelled "Firm Name AI Assistant" is the textbook trap.
Don't

Don't cite hallucinated case law. Multiple US courts have sanctioned firms for unverified AI-generated legal citations (2023–2025). EU courts are watching. Build mandatory verification into the workflow, not a CLE module.

Key · Arts. 4, 25, 50 · Annex III(8) for justice
Literacy: live · Disclosure: 2 Aug 2026
10 · Public sector & government agencies
Benefit eligibility · service triage · law enforcement · migration · border control · justice
High-risk across the board · FRIA mandatory

Public bodies face the strictest obligations. Almost every consequential AI use case sits in Annex III's high-risk list, and public deployers have a unique duty: a Fundamental Rights Impact Assessment under Article 27, performed before deployment.

→ What to ship
1
Inventory before procurement closes another deal. Most agencies are deploying AI through SaaS and don't know it. The procurement file becomes the Art. 27 starting evidence.
2
Build FRIA into the procurement template. Tendering authorities can require providers to supply Annex IV technical documentation and bias-audit evidence; use the procurement leverage while you have it.
3
Register every high-risk system in the EU database before use (Arts. 49, 71). The database opened in stages through 2025–2026; the registration record is itself audit evidence.
4
Join the AI Pact. The Commission's voluntary pre-compliance pathway costs nothing, demonstrates good faith, and opens a line to the AI Office.
Don't

Don't defer the FRIA to "when we have time". Article 27 makes the FRIA a precondition: no FRIA, no lawful deployment. The first high-profile public-sector enforcement actions in 2027 will almost certainly turn on a missing or perfunctory FRIA.

Key · Arts. 26, 27, 49, 71, 86 · Annex III(5)–(8)
By 2 Aug 2026 (some until 2030)
11 · Critical infrastructure & utilities
Energy grid · water · transport routing · digital infrastructure · telecoms
High-risk · Annex III·2 · NIS2 + CER + AI Act

AI as a safety component in critical digital infrastructure, road traffic, or water/gas/electricity supply is named high-risk. The cybersecurity bar (Art. 15) is calibrated to safety of persons. NIS2, CER and the AI Act point at the same set of controls; run them as one programme.

→ What to ship
1
Unify AI Act, NIS2 and CER programmes. Treat the regulators as overlapping audiences for the same control set. Don't run three documentation stacks.
2
Adversarial-test what could fail visibly. Power-grid AI under data poisoning is a safety incident, not a privacy incident. Testing budget here should be 10× the documentation budget.
3
Build human-on-the-loop with hardware override separation. Art. 14 oversight is meaningful only if the human can stop the system independently of the AI itself.
4
Set up dual-channel incident reporting: AI Office (Art. 73) and national CSIRT (NIS2). Same event, two notifications, both within tight windows.
Key · Arts. 9, 14, 15, 26, 73 · Annex III(2)
By 2 Aug 2026
12 · Agentic AI & automated workflows
Multi-step agents · RPA + LLM hybrids · autonomous business-process AI · agent frameworks
Multi-classification · Law has gaps here

The Act regulates AI by intended purpose and deployment context, not by autonomy level. A single agent taking actions across credit, HR and customer-service hits multiple Annex III categories at once. Most agents in production today have not been classified.

→ What to ship
1
Classify by action, not by agent. If your agent can take a high-risk action even rarely, the whole flow inherits high-risk obligations. Map each tool the agent can call, and tag the high-risk ones.
2
Hard-gate consequential actions behind human confirmation. Termination, denial, financial commitments above a threshold, customer-data deletion, escalations to law enforcement: each gets an explicit human click.
3
Log the full reasoning trace. Tool calls, inputs, outputs, intermediate reasoning. Art. 12 requires records sufficient to reconstruct decisions; for an agent, that means the chain, not just the final answer.
4
Test the kill switch. A single hallucinated tool call can trigger thousands of downstream actions. The kill switch has to be tested under load, not theoretical.
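Steps 1–3 above can be sketched in one gate. The tool names, the confirm hook, and the trace format are illustrative assumptions; the idea is that high-risk tools never execute without a human click, and every call lands in a reconstructable log.

```python
# Sketch of an agent action gate with an Art. 12-style trace: tag the tools
# that can take high-risk actions, block them pending human confirmation,
# and append every call (executed or not) to the log. All names are
# illustrative assumptions, not a real agent framework's API.
import json
import time

HIGH_RISK_TOOLS = {"terminate_employee", "deny_credit", "delete_customer_data"}

def run_tool(tool: str, args: dict, trace: list,
             confirm=lambda tool, args: False) -> str:
    """Execute one agent tool call, gating high-risk actions on a human click."""
    entry = {"ts": time.time(), "tool": tool, "args": args}
    if tool in HIGH_RISK_TOOLS and not confirm(tool, args):
        entry["outcome"] = "blocked: awaiting human confirmation"
    else:
        entry["outcome"] = "executed"  # real dispatch would happen here
    trace.append(entry)                # full chain, not just the final answer
    return entry["outcome"]

trace: list = []
run_tool("lookup_account", {"id": 42}, trace)          # low-risk: runs
result = run_tool("deny_credit", {"id": 42}, trace)    # no human click: blocked
print(result)
print(json.dumps(trace[-1]["args"]))
```

The same `trace` list is what the kill-switch test in step 4 replays: if the chain can't reconstruct what the agent did, neither can the regulator.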
Don't

Don't assume the Digital Omnibus will clarify your liability. The proposed Omnibus does touch agentic AI but is still in trilogue. Plan for the current law as written; treat any future clarification as upside, not relief.

Key · Arts. 3, 12, 14, 26 · plus Annex III categories per action
By 2 Aug 2026
13 · Open-source AI & model weights
Llama-style releases · Mistral · open-weight foundation models · research-licence models
GPAI carve-out · Disappears at systemic-risk threshold

Genuinely open AI gets a meaningful carve-out but with two conditions. First, the carve-out is narrow: it relieves you of technical-documentation and downstream-provider information duties, not of copyright policy and training-data summary. Second, the carve-out disappears entirely if your model has systemic risk.

→ What to ship
1
Pick a truly open licence (Apache 2.0, MIT, or equivalent). Custom commercial-use restrictions may forfeit the carve-out. "Open weights with non-compete clauses" probably doesn't qualify.
2
Publish the copyright policy and training-data summary anyway. No carve-out applies to Art. 53(1)(c) and (d). The training-data summary template is the operational standard.
3
If you're approaching 10²⁵ FLOPs, plan systemic-risk obligations early. Notification to the AI Office, frontier safety framework, adversarial testing: none of this is last-minute work.
Key · Arts. 2(12), 53, 55 · Annex XI
Live since 2 Aug 2025
14 · Startups & SMEs
Pre-revenue · seed-stage · sub-50 employees · regulatory sandbox candidates
Inverted fines · Simplified documentation

The Act has more genuine SME relief than headlines suggest, but it's procedural rather than substantive. The obligations are the same; what changes is how documentation is structured, how fines are calculated, and what support is available. The single most valuable provision is Art. 99(6), which inverts the fine calculation: SMEs pay the lower of the fixed fee or percentage of turnover, not the higher.

→ What to ship
1
Apply to a regulatory sandbox now. Spain (AESIA), France (CNIL/ARCEP), Netherlands (RDI), Finland (Traficom), Germany (BNetzA from mid-2026). Priority SME access, free participation, protection from enforcement during good-faith compliance work.
2
Build compliance into product from v0.1. Logging, transparency UI, human-override hooks are product features that double as Articles 12, 14 and 50 evidence. Cheaper than retrofitting at Series B.
3
Use the simplified SME documentation forms. Designed to be filled in by a non-specialist; more SMEs should claim them.
4
Sign the AI Pact. Free, voluntary, public signal. Access to AI Office support and the Pact's literacy repository.
5
Treat compliance as sales enablement. Enterprise buyers in 2026 want to know you fulfil deployer obligations. Documented compliance shortens enterprise sales cycles materially.
Don't

Don't assume Article 5 prohibitions don't apply to startups. They do. The "lower of" fine inversion applies to high-risk and information-supply infringements, not to prohibited practices. A seed-stage company doing emotion AI in workplaces faces the same headline ban as a Fortune 500.

Key · Arts. 11(1), 55(2), 57–62, 99(6)(7)
By 2 Aug 2026
§ Artefacts · The documents you'll actually need

What goes in the files.

The Act's obligations land as documents: Annex IV technical files, FRIAs, training-data summaries, contract clauses. Most teams underestimate how concrete these are. Here's what each actually contains, in operational terms.

Provider · Art. 11 + Annex IV

The technical file for a high-risk AI system

The single document that demonstrates Article 9–15 compliance. Kept current; reviewed by your notified body (where applicable) and by market surveillance authorities. Annex IV is the table of contents:

  • General description: intended purpose, version, provider, downstream user instructions
  • Detailed system description: architecture, training methodology, computational resources, validation
  • Risk management system: identified risks, mitigation, residual risk
  • Data governance: data sources, labelling, cleaning, bias evaluation, gaps
  • Human oversight measures: who, how, with what authority
  • Accuracy & robustness: metrics, including cybersecurity testing
  • Conformity-assessment route taken: internal control (Annex VI) or notified body (Annex VII)
  • Post-market monitoring plan: what you track and how you act on it
Deployer · Art. 27

Fundamental Rights Impact Assessment

Required before first use of certain high-risk systems by public bodies and by private deployers in essential services. Not a privacy DPIA; this is the rights assessment. Notify your market surveillance authority on completion.

  • Deployment context: what, where, by whom, for what decisions
  • Categories of natural persons likely affected: including vulnerable groups
  • Specific risks to fundamental rights: not generic risk; tied to actual use
  • Human oversight measures: the people, training, override authority
  • Measures if risks materialise: remediation, complaints handling
  • Internal governance: who owns this, how it's reviewed, update triggers
GPAI · Art. 53(1)(d)

Training-data summary

Published using the AI Office template. Granular enough to enable rights-holders to exercise opt-outs; not so granular that it forces trade-secret disclosure. The AI Office has been clear that silence is the audit trigger, not partial disclosure.

  • Categories of data: text, image, code, scientific, synthetic, etc.
  • Source types: public web, licensed corpora, user-contributed, synthetic
  • Scale: orders of magnitude per category
  • Copyright handling: Art. 4(3) DSM Directive opt-out compliance, licensing terms
  • Data-quality measures: deduplication, filtering, safety classifiers
Both · Art. 25 + procurement

Vendor / model-provider contract terms

The pass-through clauses you need when you integrate someone else's AI into your product. Cheaper to negotiate at contract signing than to retrofit during enforcement.

  • Compliance representation: vendor confirms the model meets applicable Art. 53/55 obligations
  • Documentation access: you can obtain Annex IV-equivalent technical documentation
  • Incident notification: vendor notifies you within agreed timeframes (synced to Art. 73's 15 days)
  • Material change notice: vendor flags model swaps, fine-tunes, capability changes
  • Data handling: whether your data trains the model, retention, residency
  • Audit / inspection rights: at reasonable intervals, on reasonable notice
  • Indemnification: for IP claims, fundamental-rights claims, regulatory findings caused by the vendor's system
Provider + Deployer · Art. 50

The disclosure pattern library

Each AI-facing surface needs a disclosure, but it looks different per surface. Build one pattern library, reuse it, document each instance.

  • Chatbots: persistent banner + first-message disclosure + on-demand "talk to a person" path
  • Voice agents: spoken disclosure on connect; explicit "this is an automated assistant" phrasing
  • AI-generated text in public-interest content: visible label + machine-readable provenance metadata
  • Deepfakes: visible label at first exposure + C2PA-style metadata + non-removable embedded mark
  • Emotion / biometric categorisation systems: notice at the point the system runs, in plain language
  • AI-generated marketing creative: small "AI-generated" mark on the asset + provenance metadata for platforms
All organisations · Art. 4

The AI literacy programme

No prescribed format. The Commission's May 2025 Q&A makes clear that proportionate, role-based and documented is what regulators expect. Internal record-keeping is enough; no certification required.

  • Base level (all staff): 4–6 hours covering what AI is, hallucinations, bias, escalation rules, your AI usage policy
  • Functional level (AI users): training on the specific systems they operate, including known limits
  • Technical level (devs, MLOps): Articles 9–15, conformity assessment, technical documentation
  • Executive level (leadership): AI Act roles, fine exposure, governance accountability
  • Refresh: annually, plus every new tool deployment
  • Evidence: attendance, content version, role mapping, signed acknowledgement
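The evidence bullet above reduces to one record per staff member per session. A minimal sketch of such a record, with the annual-refresh rule attached; field and class names are ours, assumed for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LiteracyRecord:
    """One evidence row: attendance, content version, role mapping,
    signed acknowledgement -- mirroring the Evidence bullet above."""
    staff_id: str
    level: str            # "base" | "functional" | "technical" | "executive"
    content_version: str  # which version of the training content was delivered
    attended_on: date
    acknowledged: bool    # signed acknowledgement on file

    def refresh_due(self, today: date) -> bool:
        """Annual refresh rule from the programme above."""
        return (today - self.attended_on).days >= 365

rec = LiteracyRecord("emp-042", "base", "v1.3",
                     date(2025, 3, 1), acknowledged=True)
print(rec.refresh_due(date(2026, 5, 1)))
```

A "new tool deployment" refresh would be a second trigger alongside the date check; it is omitted here to keep the sketch small.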
§ Self-test · Five questions in order

The fastest way to know where you stand.

A field-ready flow for the question every team asks: "Are we in scope, at what tier, and how urgent is it?" Not a substitute for legal advice on edge cases, but enough to know if you need to be acting this quarter.

1
Do you develop, deploy, import or distribute AI in the EU or is the output of your AI used in the EU?
No on all four: out of scope (for now; enterprise customers may still apply EU standards by contract). Yes to any: proceed.
2
Does any use case match the eight prohibited practices in Article 5?
Yes: stop using the system today. No transition, no SME exemption. Live since Feb 2025.
3
Does your AI fall in any Annex III high-risk category? (Recruitment, credit, insurance pricing, education, biometrics, critical infrastructure, essential services, law enforcement, migration, justice.)
Yes: 2 Aug 2026 binding. Full Art. 9–15 + 26/27 programme required. Start now.
4
Is your system a chatbot, or does it generate content, recognise emotion, or create deepfakes?
Yes: Article 50 transparency by 2 Aug 2026. Ship disclosure UI and content marking this quarter.
5
Otherwise minimal risk. No specific obligations except Article 4 AI literacy for staff using the system (live since Feb 2025). Document the classification decision so you can defend it later.
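The five questions above are a strict, ordered decision flow, so they can be sketched as a single function. The tier labels in the return values are ours; the ordering and consequences come from the flow above:

```python
def classify(in_eu_scope: bool,
             prohibited: bool,
             annex_iii_high_risk: bool,
             art50_transparency: bool) -> str:
    """Five-question self-test, evaluated in order. Labels are illustrative."""
    if not in_eu_scope:
        return "out of scope"
    if prohibited:
        return "prohibited -- stop today (live since Feb 2025)"
    if annex_iii_high_risk:
        return "high-risk -- Arts. 9-15 + 26/27 by 2 Aug 2026"
    if art50_transparency:
        return "limited risk -- Art. 50 disclosures by 2 Aug 2026"
    return "minimal risk -- Art. 4 literacy only; document the classification"

# Example: an EU-facing recruitment screening tool.
print(classify(True, False, True, False))
```

The order matters: a system can match more than one question, and the earliest match wins, which is exactly why the flow says to check prohibitions before risk tier.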

Sources for this section: Regulation (EU) 2024/1689 (EUR-Lex, ELI: data.europa.eu/eli/reg/2024/1689/oj) · European Commission AI Act page, updated May 2026 · GPAI Code of Practice (final, 10 July 2025) · Article 50 Code of Practice (second draft, March 2026; final June 2026 expected) · Future of Life Institute AI Act Explorer · Future of Privacy Forum · Center for Security and Emerging Technology (Georgetown) · sector-specific peer-reviewed work cited in Research §.

§ Research · Peer-reviewed & institutional sources

The paper trail behind every claim.

All sources below are the official Act text, EU Commission output, peer-reviewed publications, or major institutional analysis with documented methodology. Direct links open in new tabs.

EUR-Lex · Official text

Regulation (EU) 2024/1689 · Artificial Intelligence Act

The official consolidated text of the Act, in 24 EU languages, with Official Journal references and ELI persistent URL. The primary source for every Article and Annex cited on this page.

OJ L · 12.7.2024 · 144 pp.
Read official →
European Commission · Operational

AI Act · Shaping Europe's Digital Future

The Commission's living implementation hub. Updated guidelines, codes of practice, timeline announcements and links to the AI Office. First place to check for any operational interpretation question.

EC · Updated May 2026
Read →
European Commission · Code of Practice

General-Purpose AI Code of Practice

Voluntary tool finalised 10 July 2025; signed by OpenAI, Anthropic, Google, Microsoft, Amazon, IBM, Mistral and others. Creates a rebuttable presumption of conformity for Article 53 and 55 obligations. The operational standard for foundation-model providers.

AI Office · July 2025
Read →
Future of Life Institute · Reference

EU Artificial Intelligence Act · Explorer

The most comprehensive non-governmental reference: Article-by-Article navigation, plain-English summaries, AI literacy programme directory, implementation timeline, and a high-level summary kept current with Commission guidance.

FLI · 2024–2026
Read →
Harvard Data Science Review · Peer-reviewed

The Future of Credit Underwriting and Insurance Under the EU AI Act

Peer-reviewed analysis (MIT Press / HDSR, Summer 2025) of how Annex III high-risk classification reshapes credit scoring and life/health insurance pricing. Identifies overlaps, blind spots, and frictions between the AI Act and existing EU financial-services law.

HDSR · Issue 7.3 · 2025
Read →
PMC · Peer-reviewed

Medicine, healthcare and the AI Act: gaps, challenges and future implications

Peer-reviewed assessment of how the AI Act interacts with the Medical Device Regulation (MDR) for clinical AI. Argues that the dual framework creates a non-trivial documentation burden but is workable for manufacturers with a mature ISO 13485 QMS.

PMC · 2024
Read →
Nature Medicine · Peer-reviewed

Navigating the European Union Artificial Intelligence Act for Healthcare

Practitioner-oriented peer-reviewed piece on the practical interplay between AI Act Articles 9–15 and the MDR conformity-assessment regime for Class IIa+ medical devices. Proposes single-notified-body assessment as the lowest-burden compliance route.

Nature Medicine · 2024
Read →
arXiv · Legal analysis

Subject Roles in the EU AI Act: Mapping and Regulatory Implications

Systematic analysis of how the Act allocates obligations between providers, deployers, importers, distributors, and downstream providers. Documents the cascading information obligations under Articles 13 and 53, and the multi-operator dimensions of Articles 20 and 73.

arXiv:2510.13591 · Oct 2025
Read →
arXiv · Standardisation

Analysis of the EU AI Act and a Proposed Standardisation Framework for ML Fairness

Identifies the absence of quantifiable fairness metrics and the interchangeable use of "transparency", "explainability" and "interpretability" in the AI Act as sources of compliance ambiguity. Argues for tailored standardisation alongside the regulation.

arXiv:2510.01281 · Sep 2025
Read →
PMC · Peer-reviewed

Balancing Innovation and Control: The EU AI Act in an Era of Global Uncertainty

Peer-reviewed analysis citing the European Commission's own impact assessment that compliance costs for a single AI unit could reach €29,277 annually, with certification adding €16,800–23,000. Documents the disproportionate burden on smaller organisations.

PMC · 2025
Read →
PMC · Peer-reviewed

Simplifying software compliance: AI for technical documentation under the AI Act

Empirical study evaluating ChatGPT and DoXpert against expert legal review for AI Act technical documentation. Finds partial alignment, notable shortcomings in ChatGPT (3.5 and 4), and a moderate, statistically significant correlation between DoXpert output and expert judgments. Implications for how SMEs can practically discharge Article 11 obligations.

PMC · 2025
Read →
arXiv · Industry analysis

Mapping Industry Practices to the GPAI Code of Practice

Systematic comparison of OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, Amazon and other frontier-model providers' published safety frameworks against the Code of Practice's Safety & Security commitments (II.1–II.16). Shows where industry self-governance and EU regulatory expectations already align, and where they diverge.

arXiv:2504.15181 · April 2025
Read →
Georgetown CSET

AI Safety under the EU AI Code of Practice · A New Global Standard?

Center for Security and Emerging Technology analysis of the GPAI Code of Practice's safety provisions, the 10²³ FLOP GPAI threshold and the 10²⁵ FLOP systemic-risk threshold. Identifies the Code as a candidate global benchmark.

CSET · Dec 2025
Read →
Future of Privacy Forum

Red lines: emotion recognition in the workplace and education

Detailed analysis of Article 5(1)(f), the Commission's February 2025 guidelines on prohibited practices, and the practical edge cases. Includes the unresolved questions around mood-detection, intention-inference, and the line between emotion and pain/fatigue states.

FPF · March 2026
Read →
JIPITEC · Peer-reviewed

Article 50 AI Act: Do the Transparency Provisions Improve Upon the Commission's Draft?

Peer-reviewed legal analysis (Journal of Intellectual Property, Information Technology and E-Commerce Law) of how Article 50 evolved through trilogue. Identifies the territorial-scope mechanics (Art. 2(1)(c)), the personal scope, and the strengthening of deepfake disclosure during the legislative process.

JIPITEC · 2025
Read →
European Commission · Code of Practice

Code of Practice on Marking and Labelling of AI-Generated Content

Article 50 implementation Code. Second draft published March 2026; final expected June 2026. Endorses a multilayered approach combining visible disclosures with machine-readable metadata or watermarking. The operational standard for deepfake and synthetic-content disclosure obligations.

AI Office · 2026
Read →
§ The other half · Overview

Want the foundational material? Start with the Visual Guide.

Risk pyramid, prohibited practices, GPAI obligations, Annex III, the timeline, the penalty regime. Everything this page assumes you already know.

Read the Overview →