EU AI Act: What UK Businesses Must Do Now
Brexit didn't get UK businesses off the hook. If your AI work touches the EU in any way, the EU AI Act applies to you, and parts of it have been enforceable since February 2025. The penalties are GDPR-grade: up to €35 million or 7% of worldwide turnover, whichever is higher.
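The "whichever is higher" mechanic is worth internalising: the €35 million figure is a floor, and for larger groups the 7% of worldwide turnover cap dominates. A minimal illustrative calculation (the turnover figures are hypothetical):

```python
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Upper bound of the AI Act's top penalty tier:
    EUR 35m or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# A hypothetical SME with EUR 20m turnover: the EUR 35m floor applies.
print(max_fine_eur(20_000_000))     # 35000000
# A hypothetical group with EUR 1bn turnover: 7% = EUR 70m dominates.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

In other words, the cap scales with the size of the business, which is why the regime is routinely described as GDPR-grade.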
Here's what UK SMEs need to know, what's actually in force right now, and what to do about it.
Who is in scope
The Act has deliberately broad reach. You are caught if any of these are true:
- You build an AI system that is sold or used in the EU.
- You use AI in your EU operations, including a UK firm with EU subsidiaries, EU clients, or EU staff.
- The output of your AI ends up in the EU. This is the trigger that catches most UK businesses by surprise.
That third trigger is the one to focus on. It doesn't matter where the system was built or where it is hosted. If the output reaches the EU, you are in scope.
Two examples:
- A UK consultancy uses AI to draft sections of an advice note for a client based in Germany. In scope.
- A UK marketing agency uses AI to generate ad copy for a campaign distributed to EU residents. In scope.
Many UK businesses end up in scope without intending to: cross-border clients, distributed content, downstream EU customers. Inadvertent scope is still scope.
The four risk tiers
The Act sorts AI systems into four tiers. Your obligations depend on which tier your system falls into.
Unacceptable risk: banned. Illegal since 2 February 2025. Examples include social scoring systems, untargeted scraping of facial images, emotion recognition in workplaces or schools, and AI that exploits the vulnerabilities of specific groups. There is no compliance path. If you are running one, it has to go.
High risk: heavily regulated. This is where most of the compliance burden sits. High-risk AI includes recruitment and HR systems (CV screening, performance evaluation, employee monitoring), credit scoring, education, critical infrastructure, biometric identification, law enforcement, and medical devices. Required: risk management, technical documentation, data governance evidence, human oversight, accuracy and robustness testing, post-market monitoring.
Limited risk: transparency obligations. Chatbots and AI-generated content sit here. Users must be told they are interacting with AI. Synthetic content must be labelled.
Minimal risk: largely unregulated. Spam filters, AI in video games, that sort of thing. Most AI systems sit here.
What is already in force
Many UK organisations are behind on this. Significant parts of the Act are already enforceable.
Since 2 February 2025. Bans on prohibited AI practices are live. AI literacy obligations apply to every in-scope organisation, regardless of risk level. You must ensure that staff and anyone using AI on your behalf are sufficiently AI-literate. This catches almost everyone.
Since 2 August 2025. Obligations on providers of General-Purpose AI models (the foundation models behind ChatGPT, Claude, and Gemini) are in force. The penalty regime is technically active. The European AI Office is operational and supervising compliance.
The 2 August 2026 deadline. High-risk system obligations were originally due to apply from this date. The European Commission's Digital Omnibus proposal would push the deadline to 2 December 2027 for stand-alone high-risk AI, and 2 August 2028 for high-risk AI embedded in regulated products such as medical devices and machinery.
As of late April 2026, this is still being negotiated. The second trilogue on 28 April ended without agreement, with another round scheduled for 13 May.
The practical position: continue planning against the original 2 August 2026 deadline. Even if the delay passes, it does not change what compliance actually looks like, and producing the required documentation takes months. Starting six weeks out is not realistic.
Six things to do now
1. Map your AI systems. Build an inventory of every AI system you use or provide. Include embedded AI in third-party tools, which is often missed. For each, note where outputs are used and whether any reach the EU.
2. Classify against the risk tiers. Most systems will be minimal or limited risk. Flag anything that touches recruitment, HR decisions, credit, education, biometrics, or critical infrastructure for high-risk assessment. Anything in the prohibited category needs to go now. Our AI readiness assessment includes a full Act inventory and risk classification.
3. Address AI literacy. This obligation already applies. It is the easiest win and the one most often overlooked. Roll out structured AI training for staff who use or operate AI systems, and document it.
4. Appoint a governance lead. Not necessarily a new hire, but someone with named authority to make decisions on risk classification, documentation standards, and compliance timelines. Fragmented ownership is one of the strongest predictors of compliance failure.
5. Start the documentation work. For high-risk systems: technical documentation, data governance evidence, human oversight plans, risk management procedures. Audit your training data sources. Update vendor and customer contracts to reflect AI use and responsibility allocation. This is the kind of work proper requirements engineering was built for.
6. Watch your supply chain. If you are a deployer using third-party AI, your provider's compliance status directly affects yours. EU clients are already asking for evidence of AI Act readiness in procurement. Slow movers will lose contracts to better-prepared competitors.
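Steps 1 and 2 above lend themselves to a simple structured register rather than an ad-hoc spreadsheet. A minimal sketch of what one inventory record might capture (the field names, tier labels, and flag keywords are illustrative choices, not terms prescribed by the Act):

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"  # banned since 2 February 2025
    HIGH = "high"                # full compliance burden
    LIMITED = "limited"          # transparency obligations
    MINIMAL = "minimal"          # largely unregulated

# Use-case keywords the article flags for high-risk assessment.
HIGH_RISK_FLAGS = {"recruitment", "hr", "credit", "education",
                   "biometrics", "critical-infrastructure"}

@dataclass
class AISystemRecord:
    name: str
    vendor: str                 # include embedded AI in third-party tools
    use_cases: set[str]
    output_reaches_eu: bool     # the trigger that catches UK firms
    tier: RiskTier = RiskTier.MINIMAL

    def needs_high_risk_assessment(self) -> bool:
        """Flag for manual review if any use case touches a high-risk area."""
        return bool(self.use_cases & HIGH_RISK_FLAGS)

# Example: a third-party CV-screening tool used for a German client.
record = AISystemRecord(
    name="CV screener",
    vendor="(third-party SaaS)",
    use_cases={"recruitment"},
    output_reaches_eu=True,
)
print(record.needs_high_risk_assessment())  # True
```

The point of the flag is triage, not classification: anything it catches goes to a proper high-risk assessment, and everything else still gets a documented tier.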
Why this matters beyond the EU
Two things to keep in mind.
First, the EU AI Act is shaping up to become a de facto global standard, in the way GDPR did. Future UK and US AI legislation is likely to draw heavily on its principles. Building EU AI Act compliance into your operations now is partly a hedge against future UK regulation.
Second, organisations already certified under ISO 42001, ISO 27001, NIS2, or GDPR are partially aligned with several AI Act obligations. If your governance maturity is already strong, the lift is smaller than it looks. If it isn't, this is the prompt to address it.
The UK government has taken a lighter, principles-based approach to AI regulation so far, but UK regulators are actively reviewing the gaps, and the government has signalled legislation is coming. The window for treating AI governance as optional is closing on both sides of the Channel.
How we can help you comply
Need help mapping your AI systems against the Act, rolling out an AI literacy programme, or producing the documentation required for high-risk systems? Our AI readiness assessment gives you a clear inventory and gap analysis. For ongoing governance support, our fractional AI officer service embeds senior expertise alongside your team. Book a free consultation to talk through your obligations.