§ Orientation · The 30-second self-test
Three things to know before anything else.
If any of the three tiles below is news to you, you are behind on the Act. This page assumes you have read Regulation (EU) 2024/1689, or someone on your team has. From here, everything is implementation.
§ Live · already enforceable
Article 5 prohibitions and Article 4 AI literacy have applied since 2 February 2025. GPAI obligations (Arts. 53, 55) since 2 August 2025. No transition. No SME exemption from prohibitions.
§ Deadline · 2 Aug 2026
All Annex III high-risk obligations and Article 50 transparency become enforceable. Fines reach €15M / 3% of global turnover, and €35M / 7% for prohibited practices. SMEs pay the lower of the fee or the percentage.
§ The role question
You are a provider (you developed it, put your name on it, or substantially modified it; Art. 25) or a deployer (you use someone else's system professionally). Most teams calling an LLM API and shipping it as a product are providers, not deployers.
§ Sectors · 14 implementation guides
How it applies to your work.
Each card states what the Act requires for that sector, the numbered implementation steps, the most common mistake (the "Don't"), the key Articles, and the deadline. Sectors are deliberately broad. Most organisations will recognise themselves in two or three.
1 · Frontier AI model providers
OpenAI · Anthropic · Google DeepMind · Meta · Mistral · xAI · Cohere
GPAI
Systemic risk if >10²⁵ FLOPs
You are the first link in every downstream compliance chain. The Act treats you as a special category in Chapter V, with obligations that flow down the value chain: you must give downstream providers and deployers the documentation they need to comply.
→ What to ship
1
Sign the GPAI Code of Practice or publish a written justification for not signing. Signatories get a rebuttable presumption of conformity. 26 majors had signed by early 2026. Refusing now is a public regulatory signal.
2
Publish your training-data summary using the AI Office template. Categories of data, source types, scale, copyright handling, opt-out compliance. Even partial disclosure under the template beats silence; silence is the audit trigger.
3
Maintain a frontier safety framework with enforceable thresholds. RSP / Preparedness Framework / Frontier Safety Framework. The AI Office expects evaluations against your own published thresholds; shipping a model that should have triggered a pause is the worst scenario.
4
Build a downstream documentation pack as a product. Banks, hospitals, recruiters integrating your model need usage policies, capability cards, evaluation results, restricted-use lists. Ship a developer portal section, not a 200-page PDF.
5
Set up the 15-day incident reporting workflow. Art. 73 requires reports to the AI Office within 15 days of confirming a causal connection. Build the triage / legal review / submission pipeline before you need it.
Don't claim "no systemic risk" without a paper trail. If your training compute approaches 10²⁵ FLOPs, document the calculation. The Commission can designate models below the threshold as systemic-risk if capabilities warrant; retrofitting that documentation under an open investigation is brutal.
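A quick way to build that paper trail: the widely used heuristic puts training compute at roughly 6 × parameters × training tokens for a dense transformer. A minimal sketch, with placeholder figures rather than any real model's:

```python
# Rough training-compute check against the Art. 51 systemic-risk
# threshold of 10^25 FLOPs, using the common heuristic
# FLOPs ~= 6 * parameters * training tokens (dense transformer,
# forward + backward pass). All figures below are placeholders.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

# Hypothetical run: 70B parameters trained on 15T tokens.
estimate = training_flops(70e9, 15e12)
print(f"Estimated training compute: {estimate:.2e} FLOPs")
print(f"Fraction of threshold: {estimate / SYSTEMIC_RISK_THRESHOLD_FLOPS:.0%}")
# ~6.3e24 FLOPs, ~63% of the threshold: close enough that the
# calculation, and its assumptions, belong in the documentation pack.
```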
2 · Building on third-party LLM APIs
SaaS startups · enterprise product teams · agencies wrapping GPT / Claude / Gemini / Llama
Limited risk default
Provider if you put your brand on it
"We're just calling an API" sounds like a deployer position. It almost never is. The moment you ship a product that uses AI to make consequential decisions under your name, Article 25 promotes you to provider with the full obligation set. The decision of provider vs deployer is the single most-misclassified question of 2026.
→ What to ship
1
Inventory every AI feature, every embedded AI, every shadow AI. Copilot, the AI inside your CRM, AI meeting summaries, that ChatGPT tab Marketing uses every day. Cloud Security Alliance, March 2026: most enterprises have deployed high-risk systems without realising it. (A structured record for this inventory is sketched at the end of this card.)
2
For each system, write down the intended purpose in one sentence. The same chatbot supporting general users is limited-risk; the same chatbot triaging benefit applications is high-risk (Annex III·5(a)). Intended purpose decides classification.
3
Pick your role honestly per system. Internal-only with a third-party model = deployer. Product feature under your brand = provider. Custom fine-tune of a model = provider. The provider designation isn't bad; pretending you're a deployer when you're not is.
4
Renegotiate your model-provider contract. You need pass-through compliance commitments, technical documentation access, incident notification, training-data clarity, data-residency terms. Vendors who won't give you these aren't viable EU partners post-August 2026.
5
Ship Article 50 disclosure UI this quarter. Chatbot banner, "AI-generated" labels on output, deepfake markers. This is the cheapest, most visible compliance work and it ships now, not in August.
Don't outsource your compliance to your vendor's terms of service. Deployer obligations under Art. 26 cannot be assigned away. If your fraud-detection vendor's model is a black box you can't explain, you are still the one telling regulators why a customer was denied service.
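To make steps 1–3 concrete, here is a sketch of one possible per-system inventory record; the field names and enumerations are illustrative, not any prescribed format:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"    # your brand, your fine-tune, or an Art. 25 modification
    DEPLOYER = "deployer"    # internal professional use of someone else's system

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Art. 5
    HIGH = "high"               # Annex I or Annex III
    LIMITED = "limited"         # Art. 50 transparency duties
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    upstream_model: str          # which vendor API or model the feature calls
    intended_purpose: str        # one sentence -- this decides classification
    role: Role
    risk_tier: RiskTier
    annex_iii_category: str | None = None    # e.g. "III.5(a) essential services"
    rationale: str = ""                      # why this tier: your audit evidence

inventory = [
    AISystemRecord(
        name="Support chatbot",
        upstream_model="third-party LLM API",
        intended_purpose="Answer product questions for general users",
        role=Role.PROVIDER,          # shipped under your own brand
        risk_tier=RiskTier.LIMITED,  # Art. 50 disclosure applies
        rationale="No consequential decisions; general-purpose support only.",
    ),
]
```

The point of the `rationale` field: when the classification is challenged, the one-sentence intended purpose and the reasoning behind the tier are the first things a market surveillance authority will ask for.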
3 · HR, recruitment & workforce management
ATS · CV screening · interview AI · performance management · gig allocation
Emotion AI banned
High-risk · Annex III·4
One of the most exposed sectors. The Act explicitly lists employment AI as high-risk in Annex III, and explicitly bans emotion recognition in workplaces. Both employers (deployers) and HR-tech vendors (providers) are in scope.
→ What to ship
1
Audit the entire employee lifecycle. Targeted job ads, CV parsers, interview tools, scheduling AI, performance dashboards, productivity monitors. Most HR stacks have 4–8 AI systems; most HR teams have inventoried zero.
2
Turn off emotion AI today. Video-interview "soft skills" scoring, call-centre voice analytics, webcam engagement tools. The ban has been live since February 2025; any continued use is a documented aggravating factor in unrelated enforcement.
3
Demand vendor documentation. Bias audit results, conformity assessment status, technical documentation (Art. 11), usage logs (Art. 12). Vendors that won't provide these get a termination clause and a six-month migration plan.
4
Notify workers before deployment. Art. 26(7) requires worker notification before a high-risk system goes live. Add it to your works-council consultation and your employee handbook, not after the fact.
5
Designate human reviewers with real override authority. "Human in the loop" only counts if the human can and does meaningfully overturn the AI. Document at least one case per month where the human ruled against the model; a monthly-check sketch follows this card. If there are zero, you don't have oversight, you have rubber-stamping.
Don't re-label emotion AI as "stress detection" or "wellness analytics". The Commission's February 2025 guidelines make clear that commercial relabeling doesn't exit the ban. The test is whether the system infers emotion from biometric data, not what the marketing page calls it.
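One way to operationalise the override evidence in step 5: a monthly pass over the review log that flags zero-override months. A minimal sketch; the log shape is assumed, not prescribed:

```python
from collections import Counter

# Each entry: (month, ai_recommendation, human_decision). Illustrative format.
review_log = [
    ("2026-05", "reject", "reject"),
    ("2026-05", "reject", "advance"),    # human overruled the model
    ("2026-06", "advance", "advance"),
]

reviews = Counter(month for month, _, _ in review_log)
overrides = Counter(m for m, ai, human in review_log if ai != human)

for month in sorted(reviews):
    if overrides[month] == 0:
        print(f"{month}: 0 overrides in {reviews[month]} reviews -- "
              "possible rubber-stamping; investigate before a regulator does.")
    else:
        print(f"{month}: {overrides[month]}/{reviews[month]} decisions overridden")
```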
4 · Banks, lenders & insurers
Credit scoring · AML monitoring · insurance pricing · transaction risk · onboarding
High-risk · Annex III·5
Credit scoring and life/health insurance pricing are named high-risk. So is access to essential public services. Fraud detection is carved out, but only when it's solely for fraud, not when the same model also triages access. A mature model-risk-management programme covers 60–70% of what the Act requires; the documentation burden is the gap.
→ What to ship
1
Map every model to a risk tier and a role. Bank-built credit scorecard: provider, high-risk. Bank using a vendor scoring API: deployer of high-risk. Both routes carry obligations; the documentation responsibilities differ sharply. The Harvard Data Science Review piece (Summer 2025) is the best peer-reviewed walk-through.
2
Run Art. 10 data-governance evidence on every training set. "Relevant, representative, free of errors to the best extent possible". Credit data that historically under-represents women or minorities is a documented Art. 10 problem, regardless of current performance.
3
Generate decision rationales at the moment of decision. Affected individuals have a right to explanation under Art. 86 and, separately, under GDPR Art. 22. Retrofitted explainability for opaque models has been ruled inadequate in multiple GDPR cases; expect the same here. (A decision-record sketch follows this card.)
4
Build one control matrix for AI Act + DORA + GDPR + CRD/CRR. Risk management, incident reporting, third-party oversight, data governance, recordkeeping: the four regimes share 70%+ of controls. Don't run four parallel programmes.
5
Run a Fundamental Rights Impact Assessment. Art. 27 applies to private deployers providing essential services; retail banks and insurers fall in scope. The assessment must precede deployment, not follow it.
Don't lean on the fraud-detection carve-out without scoping it tightly. If your "fraud" model also influences whether a customer keeps their account, gets a card-limit increase, or qualifies for a product, it's no longer solely fraud detection. That dual use is what pulls it back into high-risk credit scoring.
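Step 3's "rationale at the moment of decision" is easiest to see as code. A sketch, assuming your explainability tooling can surface per-feature attributions; the names, fields and figures are illustrative:

```python
import json
import uuid
from datetime import datetime, timezone

def record_decision(applicant_id: str, decision: str,
                    top_factors: list[tuple[str, float]],
                    model_version: str) -> dict:
    """Persist the explanation when the decision is made, not on request.
    `top_factors` would come from your explainability tooling; the values
    below are illustrative attributions, not a real scorecard."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "decision": decision,
        "model_version": model_version,   # which model version actually decided
        "top_factors": top_factors,       # the basis for the Art. 86 explanation
    }
    # In production: write to append-only storage with Art. 12-style retention.
    print(json.dumps(record, indent=2))
    return record

record_decision("app-4711", "declined",
                [("debt_to_income", 0.41), ("credit_history_length", -0.22)],
                model_version="scorecard-2026.06")
```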
5 · Healthcare & medical devices
Diagnostic AI · clinical decision support · imaging · digital therapeutics · LLM clinical reasoning
High-risk · Annex I + III
Healthcare AI sits under two EU frameworks simultaneously: MDR/IVDR and the AI Act. Compliance with one creates no presumption of compliance with the other. Fragmented compliance is now the single largest source of avoidable liability exposure in European healthcare, per the peer-reviewed literature.
→ What to ship (manufacturers)
1
Build a unified technical file satisfying MDR Annex II and AI Act Article 11 in one document, indexed for both notified-body reviewers and AI Office requests. Nature Medicine 2024 and BMJ Health 2025 both describe this as the cheapest single compliance investment available to MedTech.
2
Extend your ISO 13485 QMS with AI-specific risk management per ISO 14971. The harmonised AI standards from CEN-CENELEC will plug in here; build the QMS to accept them when published.
3
Route the AI-specific conformity assessment through the same notified body handling MDR review where possible. Single-notified-body assessment is explicitly contemplated and materially reduces documentation burden.
4
Design system-level explainability into the architecture. Hospitals owe patients meaningful explanations under GDPR Art. 22, and that explanation has to come from your model design, not a post-hoc PDF generated by the hospital.
→ What to ship (hospitals as deployers)
1
Run local clinical validation on your actual patient population. The absence of documented local validation is the most exploited litigation gap in EU medical-AI cases (Nagel / Tactical Management, 2026).
2
Train physicians on each system's known limits. Required by Art. 26, not optional. Document attendance.
3
Set up dual-channel incident reporting to both the manufacturer and the national competent authority. Same event, two reports.
Don't assume GPAI in clinical reasoning is exempt. An LLM used for clinical decision-making must satisfy both Chapter V GPAI rules and high-risk medical-device requirements. The fact that the underlying model is general-purpose doesn't remove the medical-device classification of your deployment.
6 · Chatbots, voice agents & AI-generated content
Customer support · AI companions · voice assistants · ad creative · synthetic media · deepfakes
Limited risk · Art. 50
Article 50 is the rule almost every business will hit. Four duties: chatbot disclosure, synthetic-content marking, emotion/biometric notice, deepfake labelling. Fines reach €15M / 3%; not trivial for the "lightest" tier.
→ What to ship
1
Map every AI-facing surface and pick a disclosure pattern for each. Chatbots, voice IVR, AI agents in apps, AI replies in support tooling, AI-generated ad creative. Document the pattern per surface; Art. 50 enforcement will inspect the surface, not the policy.
2
Ship a visible, persistent chatbot disclosure. "You're chatting with an AI assistant. I can connect you to a person at any time." Air Canada lost a 2024 case where its chatbot hallucinated a policy; European courts are likely to take a similar line. (A handler sketch follows this card.)
3
Adopt C2PA-style provenance plus visible deepfake labels. The Article 50 Code of Practice (second draft March 2026; final expected June 2026) endorses a multi-layered approach: machine-readable metadata and visible labels. Open-source watermarking (Resemble PerTH and similar) demonstrates 98%+ recovery rates; the technical bar is achievable.
4
Build a human escalation path with logged decisions. Legally or commercially sensitive replies must be reviewable by a person. Retain logs for at least six months: required by Art. 26(6) for high-risk integrations, and good practice everywhere.
5
Update Terms of Service and pass-through vendor contracts. Customer agreements should reflect AI use; vendor agreements should pass through watermarking and disclosure obligations from your foundation-model provider.
Don't rely casually on the "obvious from context" exemption. The Act allows skipping disclosure when AI use is obvious to a "reasonably well-informed, observant and circumspect natural person". If your chatbot has a human name, a friendly avatar, and writes in the first person, "obvious" is hard to defend. JIPITEC's peer-reviewed Article 50 analysis (2025) makes this point in detail.
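A minimal sketch of the disclosure-plus-escalation pattern from steps 2 and 4; the wording, trigger words and stand-in functions are all illustrative, not prescribed:

```python
DISCLOSURE = ("You're chatting with an AI assistant. "
              "I can connect you to a person at any time.")

ESCALATION_TRIGGERS = {"human", "person", "agent", "complaint", "legal"}

def call_model(msg: str) -> str:          # stand-in for the LLM API you use
    return "(model answer)"

def log_escalation(msg: str) -> None:     # stand-in for your retained audit log
    print(f"[escalation logged] {msg!r}")

def reply(user_message: str, is_first_message: bool) -> str:
    """Art. 50 disclosure on the first message, plus an always-available
    human path. Trigger matching this naive is a sketch, not a product."""
    if any(word in user_message.lower().split() for word in ESCALATION_TRIGGERS):
        log_escalation(user_message)
        return "Connecting you to a person now."
    answer = call_model(user_message)
    return f"{DISCLOSURE}\n\n{answer}" if is_first_message else answer

print(reply("What's your refund policy?", is_first_message=True))
print(reply("I want to talk to a human", is_first_message=False))
```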
7 · Marketing, advertising & personalisation
Programmatic ad tech · personalisation engines · generative ad creative · automated bidding · attribution
Mostly limited risk
Three trapdoors
Marketing AI sits mostly outside high-risk territory, but three trapdoors pull specific use cases up. Subliminal manipulation (Art. 5(1)(a)) is banned. Exploiting vulnerabilities based on age or socio-economic status (Art. 5(1)(b)) is banned. Personalisation feeding Annex III decisions (e.g. risk-based pricing of insurance, eligibility for services) is high-risk.
→ What to ship
1
Tier your marketing AI by impact on individuals: high (decision-influencing), medium (personalising), low (analytics). Bid optimisation, audience expansion, creative generation, brand safety, attribution: each gets its own row of the matrix.
2
Set vulnerability red lines in your media-plan templates. No targeting based on financial distress, addiction recovery, age vulnerability, comparable conditions. Encode the rules where media planners actually work, not only in legal review.
3
Label AI-generated creative. Small "AI-generated" disclosure on the asset, plus C2PA-style provenance metadata. TikTok, Meta and YouTube already require the equivalent. Don't wait for the platform to demand it after launch.
4
Vendor-vet every ad-tech partner. If your DSP retrains models on your data, your data is feeding a provider's high-risk system, and the contract needs to address that. Procurement owns this, not just legal.
5
Pre-write an incident playbook for generative-creative failures. When (not if) your generative ad tool produces something harmful, you need a "pause campaign / preserve logs / notify stakeholders" workflow that runs in hours, not weeks.
Don't deploy deepfake endorsements without explicit labelling and rights clearance. Even satirical AI-generated celebrity content triggers Art. 50(4) disclosure. The proposed Digital Omnibus (May 2026) signals additional restrictions on nudification and identity-impersonation apps; the regulatory direction is one-way.
8 · Education & EdTech
Admissions AI · proctoring · adaptive learning · automated grading · vocational assessment
Emotion AI banned
High-risk · Annex III·3
The second sector alongside workplaces where emotion recognition is banned outright. Also a named high-risk domain for admissions, evaluation and assessment. Several major proctoring vendors have already restructured their EU offerings to comply.
→ What to ship
1
Distinguish admissions and assessment AI from learning AI. Annex III·3 is explicit: admissions decisions and learning-outcome evaluation are high-risk. A study-planner that doesn't gate access or grade work is typically minimal risk.
2
Replace facial-emotion proctoring with conduct-based detection. Eye-tracking for "engagement" is dead under EU law. Behaviour-based proctoring (looking-away patterns, multi-person detection) remains possible, but is high-risk and demands the full Art. 9–15 programme.
3
Inform students before each high-risk AI is deployed. Document the disclosure. Right to explanation under Art. 86 applies; affected students will exercise it.
4
Build meaningful human override into automated grading. Auto-grading cannot be final without a real review path. Art. 14 oversight applies.
Don't claim the "safety" exemption for general engagement monitoring. The Art. 5(1)(f) carve-out for medical or safety reasons is narrow: pilot fatigue, accessibility tools for special-needs therapy. It does not cover "we want to know if students are paying attention".
9 · Legal, consulting & professional services
Contract review AI · research copilots · client deliverables · white-labelled AI tools
Mostly limited risk
Provider trap on white-labels
Most professional-services AI is limited-risk productivity tooling. Two scenarios pull it higher: AI used in the administration of justice (Annex III·8) is high-risk, and consultancies and firms that white-label third-party AI tools become providers under Art. 25.
→ What to ship
1
Adopt a firm-wide AI usage policy: allowed tools, data handling, disclosure to clients, citation discipline for AI research output. Most bar and professional bodies expect this by 2026.
2
Treat engagement letters as AI Act artefacts. Document the AI you'll use, data flows, disclosure terms, and the provider/deployer allocation if you're customising tools for a client.
3
Ship role-based AI literacy training. The Commission's May 2025 Q&A treats contractors and clients as "other persons on whose behalf" the firm operates AI; proportionate training is expected for all of them.
4
If you white-label an AI compliance, research or drafting tool to clients, accept that you are now the provider under Art. 25(1)(a). Full Art. 16 obligations follow. Most firms haven't yet realised this; the white-labelled "Firm Name AI Assistant" is the textbook trap.
Don't cite hallucinated case law. Multiple US courts have sanctioned firms for unverified AI-generated legal citations (2023–2025). EU courts are watching. Build mandatory verification into the workflow, not a CLE module.
10 · Public sector & government agencies
Benefit eligibility · service triage · law enforcement · migration · border control · justice
High-risk across the board
FRIA mandatory
Public bodies face the strictest obligations. Almost every consequential AI use case sits in Annex III's high-risk list, and public deployers have a unique duty: a Fundamental Rights Impact Assessment under Article 27, performed before deployment.
→ What to ship
1
Inventory before procurement closes another deal. Most agencies are deploying AI through SaaS and don't know it. The procurement file becomes the Art. 27 starting evidence.
2
Build the FRIA into the procurement template. Tendering authorities can require providers to supply Annex IV technical documentation and bias-audit evidence; use the procurement leverage while you have it.
3
Register every high-risk system in the EU database before use (Art. 49, 71). The database opened in stages through 2025–2026; the registration record is itself audit evidence.
4
Join the AI Pact. The Commission's voluntary pre-compliance pathway costs nothing, demonstrates good faith, and opens a line to the AI Office.
Don't defer the FRIA to "when we have time". Article 27 makes the FRIA a precondition: no FRIA, no lawful deployment. The first high-profile public-sector enforcement actions in 2027 will almost certainly turn on a missing or perfunctory FRIA.
11 · Critical infrastructure & utilities
Energy grid · water · transport routing · digital infrastructure · telecoms
High-risk · Annex III·2
NIS2 + CER + AI Act
AI as a safety component in critical digital infrastructure, road traffic, or water/gas/electricity supply is named high-risk. The cybersecurity bar (Art. 15) is calibrated to safety of persons. NIS2, CER and the AI Act point at the same set of controls; run them as one programme.
→ What to ship
1
Unify AI Act, NIS2 and CER programmes. Treat the regulators as overlapping audiences for the same control set. Don't run three documentation stacks.
2
Adversarial-test what could fail visibly. Power-grid AI under data poisoning is a safety incident, not a privacy incident. Testing budget here should be 10× the documentation budget.
3
Build human-on-the-loop with hardware override separation. Art. 14 oversight is meaningful only if the human can stop the system independently of the AI itself.
4
Set up dual-channel incident reporting: AI Office (Art. 73) and national CSIRT (NIS2). Same event, two notifications, both within tight windows, as sketched below.
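A deadline sketch for that dual-channel workflow. The windows reflect Art. 73 (15 days from establishing the causal link) and NIS2's 24-hour early warning and 72-hour incident notification; the incident and dates are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    summary: str
    causal_link_confirmed: datetime   # starts the Art. 73 clock
    detected: datetime                # starts the NIS2 clocks

def notification_deadlines(inc: Incident) -> dict[str, datetime]:
    """One event, two regimes, three clocks."""
    return {
        "AI Office report (Art. 73)": inc.causal_link_confirmed + timedelta(days=15),
        "CSIRT early warning (NIS2)": inc.detected + timedelta(hours=24),
        "CSIRT incident notification (NIS2)": inc.detected + timedelta(hours=72),
    }

inc = Incident("Grid-dispatch model mis-forecast under anomalous input",
               causal_link_confirmed=datetime(2026, 9, 3, 9, 0),
               detected=datetime(2026, 9, 1, 14, 30))
for channel, deadline in notification_deadlines(inc).items():
    print(f"{channel}: by {deadline:%Y-%m-%d %H:%M}")
```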
12 · Agentic AI & automated workflows
Multi-step agents · RPA + LLM hybrids · autonomous business-process AI · agent frameworks
Multi-classification
Law has gaps here
The Act regulates AI by intended purpose and deployment context, not by autonomy level. A single agent taking actions across credit, HR and customer-service hits multiple Annex III categories at once. Most agents in production today have not been classified.
→ What to ship
1
Classify by action, not by agent. If your agent can take a high-risk action, even rarely, the whole flow inherits high-risk obligations. Map each tool the agent can call and tag the high-risk ones, as in the sketch after this list.
2
Hard-gate consequential actions behind human confirmation. Termination, denial, financial commitments above a threshold, customer-data deletion, escalations to law enforcement: each gets an explicit human click.
3
Log the full reasoning trace. Tool calls, inputs, outputs, intermediate reasoning. Art. 12 requires records sufficient to reconstruct decisions; for an agent, that means the chain, not just the final answer.
4
Test the kill switch. A single hallucinated tool call can trigger thousands of downstream actions. The kill switch has to be tested under load, not theoretical.
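Steps 1–3 compress into one pattern: tag every tool with a risk tier, gate the high-risk ones behind a human click, and log the chain. A minimal sketch; the tool names, tiers and approval UI are all illustrative:

```python
import json
from datetime import datetime, timezone

# Step 1: classify by action -- every tool the agent can call gets a tag.
TOOL_RISK = {
    "search_docs": "minimal",
    "draft_email": "limited",
    "update_credit_limit": "high",    # Annex III.5 territory
    "reject_application": "high",
}

TRACE: list[dict] = []   # Art. 12: records sufficient to reconstruct the decision

def confirm_human(tool: str, args: dict) -> bool:
    """Step 2: the hard gate. Stand-in for your real review UI."""
    return input(f"Approve {tool}({args})? [y/N] ").strip().lower() == "y"

def call_tool(tool: str, args: dict, reasoning: str) -> str:
    risk = TOOL_RISK.get(tool, "high")       # unknown tools fail closed to high
    approved = risk != "high" or confirm_human(tool, args)
    TRACE.append({                           # Step 3: log the full chain
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool, "args": args, "risk": risk,
        "reasoning": reasoning, "human_approved": approved,
    })
    if not approved:
        return "blocked: awaiting human confirmation"
    return f"executed {tool}"                # stand-in for the real side effect

call_tool("search_docs", {"q": "refund policy"}, reasoning="look up policy first")
call_tool("reject_application", {"id": 42}, reasoning="score below threshold")
print(json.dumps(TRACE, indent=2))
```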
Don't assume the Digital Omnibus will clarify your liability. The proposed Omnibus does touch agentic AI but is still in trilogue. Plan for the current law as written; treat any future clarification as upside, not relief.
13 · Open-source AI & model weights
Llama-style releases · Mistral · open-weight foundation models · research-licence models
GPAI carve-out
Disappears at systemic-risk threshold
Genuinely open AI gets a meaningful carve-out, but with two conditions. First, the carve-out is narrow: it relieves you of the technical-documentation and downstream-provider information duties, not of the copyright policy and training-data summary. Second, the carve-out disappears entirely if your model poses systemic risk.
→ What to ship
1
Pick a truly open licence (Apache 2.0, MIT, or equivalent). Custom commercial-use restrictions may forfeit the carve-out. "Open weights with non-compete clauses" probably doesn't qualify.
2
Publish the copyright policy and training-data summary anyway. No carve-out applies to Art. 53(1)(c) and (d). The training-data summary template is the operational standard.
3
If you're approaching 10²⁵ FLOPs, plan the systemic-risk obligations early. Notification to the AI Office, a frontier safety framework, adversarial testing: none of this is last-minute work.
14 · Startups & SMEs
Pre-revenue · seed-stage · sub-50 employees · regulatory sandbox candidates
Inverted fines
Simplified documentation
The Act has more genuine SME relief than headlines suggest, but it's procedural rather than substantive. Obligations are the same; how documentation is structured, how fines are calculated, and what support is available change. The most valuable single provision is Art. 99(6), which inverts the fine calculation: SMEs pay the lower of the fixed fee or the percentage of turnover, not the higher.
→ What to ship
1
Apply to a regulatory sandbox now. Spain (AESIA), France (CNIL/ARCEP), Netherlands (RDI), Finland (Traficom), Germany (BNetzA from mid-2026). Priority SME access, free participation, protection from enforcement during good-faith compliance work.
2
Build compliance into product from v0.1. Logging, transparency UI, human-override hooks are product features that double as Articles 12, 14 and 50 evidence. Cheaper than retrofitting at Series B.
3
Use the simplified SME documentation forms. Designed to be filled in by a non-specialist; more SMEs should claim them.
4
Sign the AI Pact. Free, voluntary, public signal. Access to AI Office support and the Pact's literacy repository.
5
Treat compliance as sales enablement. Enterprise buyers in 2026 want to know you fulfil deployer obligations. Documented compliance shortens enterprise sales cycles materially.
Don't assume Article 5 prohibitions don't apply to startups. They do. The "lower of" fine inversion applies to high-risk and information-supply infringements, not to prohibited practices. A seed-stage company doing emotion AI in workplaces faces the same headline ban as a Fortune 500.
§ Artefacts · The documents you'll actually need
What goes in the files.
The Act's obligations land as documents: Annex IV technical files, FRIAs, training-data summaries, contract clauses. Most teams underestimate how concrete these are. Here's what each actually contains, in operational terms.
Provider · Art. 11 + Annex IV
The technical file for a high-risk AI system
The single document that demonstrates Article 9–15 compliance. Kept current; reviewed by your notified body (where applicable) and by market surveillance authorities. Annex IV is the table of contents:
- General description: intended purpose, version, provider, downstream user instructions
- Detailed system description: architecture, training methodology, computational resources, validation
- Risk management system: identified risks, mitigation, residual risk
- Data governance: data sources, labelling, cleaning, bias evaluation, gaps
- Human oversight measures: who, how, with what authority
- Accuracy & robustness metrics, including cybersecurity testing
- Conformity-assessment route taken: internal control (Annex VI) or notified body (Annex VII)
- Post-market monitoring plan: what you track and how you act on it
Deployer · Art. 27
Fundamental Rights Impact Assessment
Required before first use of certain high-risk systems by public bodies and by private deployers in essential services. Not a privacy DPIA; this is the rights assessment. Notify your market surveillance authority on completion.
- Deployment context: what, where, by whom, for what decisions
- Categories of natural persons likely affected, including vulnerable groups
- Specific risks to fundamental rights: not generic risk, but tied to actual use
- Human oversight measures: the people, training, override authority
- Measures if risks materialise: remediation, complaints handling
- Internal governance: who owns this, how it's reviewed, update triggers
GPAI · Art. 53(1)(d)
Training-data summary
Published using the AI Office template. Granular enough to enable rights-holders to exercise opt-outs; not so granular that it forces trade-secret disclosure. The AI Office has been clear that silence is the audit trigger, not partial disclosure.
- Categories of data: text, image, code, scientific, synthetic, etc.
- Source types: public web, licensed corpora, user-contributed, synthetic
- Scale (orders of magnitude) per category
- Copyright handling: Art. 4(3) DSM Directive opt-out compliance, licensing terms
- Data-quality measures: deduplication, filtering, safety classifiers
Both · Art. 25 + procurement
Vendor / model-provider contract terms
The pass-through clauses you need when you integrate someone else's AI into your product. Cheaper to negotiate at contract signing than to retrofit during enforcement.
- Compliance representation: vendor confirms the model meets applicable Art. 53/55 obligations
- Documentation access: you can obtain Annex IV-equivalent technical documentation
- Incident notification: vendor notifies you within agreed timeframes (synced to Art. 73's 15 days)
- Material change notice: vendor flags model swaps, fine-tunes, capability changes
- Data handling: whether your data trains the model, retention, residency
- Audit / inspection rights: at reasonable intervals, on reasonable notice
- Indemnification: for IP claims, fundamental-rights claims, regulatory findings caused by the vendor's system
Provider + Deployer · Art. 50
The disclosure pattern library
Each AI-facing surface needs a disclosure, but the disclosure looks different per surface. Build one pattern library, reuse it, document each instance, as in the sketch after this list.
- Chatbots: persistent banner + first-message disclosure + on-demand "talk to a person" path
- Voice agents: spoken disclosure on connect; explicit "this is an automated assistant" phrasing
- AI-generated text in public-interest content: visible label + machine-readable provenance metadata
- Deepfakes: visible label at first exposure + C2PA-style metadata + non-removable embedded mark
- Emotion / biometric categorisation systems: notice at the point the system runs, in plain language
- AI-generated marketing creative: small "AI-generated" mark on the asset + provenance metadata for platforms
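As code, the library above is just a mapping that fails closed; the keys and copy are illustrative:

```python
# One pattern library, reused across products; each instance documented.
DISCLOSURE_PATTERNS = {
    "chatbot": {
        "visible": "persistent banner + first-message disclosure",
        "escalation": "on-demand 'talk to a person' path",
    },
    "voice_agent": {
        "visible": "spoken disclosure on connect: 'this is an automated assistant'",
    },
    "deepfake": {
        "visible": "label at first exposure",
        "machine_readable": "C2PA-style metadata + embedded mark",
    },
    "ad_creative": {
        "visible": "small 'AI-generated' mark on the asset",
        "machine_readable": "provenance metadata for platforms",
    },
}

def disclosure_for(surface: str) -> dict:
    """Fail closed: an unmapped surface is a compliance gap, not a default."""
    if surface not in DISCLOSURE_PATTERNS:
        raise KeyError(f"no Art. 50 disclosure pattern documented for {surface!r}")
    return DISCLOSURE_PATTERNS[surface]
```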
All organisations · Art. 4
The AI literacy programme
No prescribed format. The Commission's May 2025 Q&A makes clear that proportionate, role-based and documented is what regulators expect. Internal record-keeping is enough; no certification is required.
- Base level (all staff): 4–6 hours covering what AI is, hallucinations, bias, escalation rules, your AI usage policy
- Functional level (AI users): training on the specific systems they operate, including known limits
- Technical level (devs, MLOps): Articles 9–15, conformity assessment, technical documentation
- Executive level (leadership): AI Act roles, fine exposure, governance accountability
- Refresh: annually, plus every new tool deployment
- Evidence: attendance, content version, role mapping, signed acknowledgement (one record shape is sketched below)
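One possible shape for that evidence record; the field names are illustrative, since no format is prescribed:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingEvidence:
    """One row per person per session: the Art. 4 paper trail."""
    person: str
    role_level: str                 # base / functional / technical / executive
    content_version: str            # which module version they actually saw
    completed: date
    acknowledged: bool              # signed acknowledgement on file
    systems_covered: list[str] = field(default_factory=list)

row = TrainingEvidence("j.doe", "functional", "ai-literacy-v3.2",
                       date(2026, 2, 10), True, ["CV screener"])
```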
§ Self-test · Five questions in order
The fastest way to know where you stand.
A field-ready flow for the question every team asks: "Are we in scope, at what tier, and how urgent is it?" Not a substitute for legal advice on edge cases, but enough to know whether you need to be acting this quarter. (The same flow is sketched as code after the questions.)
1
Do you develop, deploy, import or distribute AI in the EU or is the output of your AI used in the EU?
No on all four: out of scope (for now; enterprise customers may still apply EU standards by contract). Yes: proceed.
2
Does any use case match the eight prohibited practices in Article 5?
Yes: stop using the system today. No transition, no SME exemption. Live since Feb 2025.
3
Does your AI fall in any Annex III high-risk category? (Recruitment, credit, insurance pricing, education, biometrics, critical infrastructure, essential services, law enforcement, migration, justice.)
Yes: 2 Aug 2026 binding. Full Art. 9–15 + 26/27 programme required. Start now.
4
Is your system a chatbot, or does it generate content, recognise emotions, or create deepfakes?
Yes: Article 50 transparency by 2 Aug 2026. Ship disclosure UI and content marking this quarter.
5
Otherwise minimal risk. No specific obligations except Article 4 AI literacy for staff using the system (live since Feb 2025). Document the classification decision so you can defend it later.
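The same five questions, compressed into one decision flow. A simplification: real systems can sit in several tiers at once (a high-risk system with a chat interface owes Art. 50 disclosure too), so treat the output as a starting classification, not the final one:

```python
def ai_act_tier(in_eu_scope: bool, prohibited: bool,
                annex_iii_high_risk: bool, art50_surface: bool) -> str:
    """Questions 1-5 above, in order. Tiers can stack in practice."""
    if not in_eu_scope:
        return "out of scope (contractual EU standards may still apply)"
    if prohibited:
        return "PROHIBITED: stop use now (live since Feb 2025)"
    if annex_iii_high_risk:
        return "high risk: full Art. 9-15 + 26/27 programme by 2 Aug 2026"
    if art50_surface:
        return "limited risk: Art. 50 disclosure and marking by 2 Aug 2026"
    return "minimal risk: Art. 4 literacy only; document the classification"

print(ai_act_tier(True, False, True, True))
# -> "high risk: ..." -- and the chat surface still owes Art. 50 disclosure.
```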
Sources for this section: Regulation (EU) 2024/1689 (EUR-Lex, ELI: data.europa.eu/eli/reg/2024/1689/oj) · European Commission AI Act page, updated May 2026 · GPAI Code of Practice (final, 10 July 2025) · Article 50 Code of Practice (second draft, March 2026; final June 2026 expected) · Future of Life Institute AI Act Explorer · Future of Privacy Forum · Center for Security and Emerging Technology (Georgetown) · sector-specific peer-reviewed work cited in Research §.