ISO 42001: The First AI Management System Standard
If GDPR was the trust contract for personal data and ISO 27001 became the trust contract for information security, ISO/IEC 42001 is shaping up to be the trust contract for artificial intelligence. Published in December 2023, it is the world's first certifiable AI management system standard, and adoption is accelerating as procurement teams start asking for it.
Most UK SMEs we speak to have heard the number but not the substance. Here is what ISO 42001 actually is, what it asks of you, and why it matters even if you have no immediate plan to certify.
What ISO 42001 actually is
ISO/IEC 42001 specifies the requirements for an Artificial Intelligence Management System, or AIMS. It is not a technical standard for building models. It is a management system standard, in the same family as ISO 27001 (information security) and ISO 9001 (quality). It tells you how to govern AI across its lifecycle, from concept and procurement through development, deployment, monitoring, and decommissioning.
The structure follows the familiar Plan-Do-Check-Act methodology. You set policy and objectives, identify risks and opportunities, put controls in place, monitor performance, and improve continuously. None of that is novel if you have lived through ISO 27001. What is new is the subject matter: AI-specific risks like bias, opacity, drift, training data provenance, and human oversight.
Crucially, ISO 42001 is technology-agnostic and sector-agnostic. It works for a 50-person professional services firm using a third-party copilot just as well as it works for a regulated bank training its own models. The controls scale to your context.
How the standard is structured
ISO 42001 has ten clauses, of which clauses 4 to 10 are auditable: context of the organisation, leadership, planning, support, operation, performance evaluation, and improvement. If you have seen ISO 27001, this will look almost identical.
The interesting work sits in Annex A, which lists reference control objectives across nine domains:
- AI policies
- Internal organisation and accountability
- Resources for AI systems
- AI system impact assessment
- AI system lifecycle (development and operation)
- Data for AI systems
- Information for interested parties (transparency)
- Use of AI systems
- Third-party and customer relationships
Annex A controls are not mandatory line by line. Clause 6.1.3 requires you to compare your chosen risk treatment against Annex A to confirm nothing necessary has been omitted, but you select what applies to your context. This is the same Statement of Applicability mechanism that ISO 27001 practitioners will recognise.
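As a loose illustration of that Clause 6.1.3 comparison (the domain names below paraphrase Annex A, and the structure is our own sketch, not the standard's wording), you can think of the Statement of Applicability as a gap check between your chosen risk treatments and the Annex A domains:

```python
# Illustrative sketch only: domain names paraphrase Annex A; a real SoA
# references the exact control IDs from the published standard.
ANNEX_A_DOMAINS = [
    "AI policies",
    "Internal organisation and accountability",
    "Resources for AI systems",
    "AI system impact assessment",
    "AI system lifecycle",
    "Data for AI systems",
    "Information for interested parties",
    "Use of AI systems",
    "Third-party and customer relationships",
]

# Hypothetical risk treatment plan: the domains your current controls cover.
treated = {
    "AI policies": "Two-page AI policy, reviewed annually",
    "AI system impact assessment": "Impact assessment template per system",
    "Data for AI systems": "Training data provenance log",
}

def soa_gaps(domains, treatments):
    """Domains with no treatment: each needs either a new control
    or a documented justification for exclusion."""
    return [d for d in domains if d not in treatments]

print(soa_gaps(ANNEX_A_DOMAINS, treated))
```

The point is the mechanism, not the tooling: every domain ends up either treated or explicitly justified as out of scope, and that record is your Statement of Applicability.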
Who it is for
In practice, three groups of UK organisations are paying attention right now.
AI providers and developers. If you build AI products or features, certification is becoming a procurement filter. Enterprise buyers, especially in financial services, healthcare, and the public sector, are starting to require it the way they once started requiring ISO 27001.
AI deployers. If you use AI bought from someone else, including embedded AI in tools like Microsoft 365 Copilot or Salesforce Einstein, you still have governance obligations. ISO 42001 gives you a framework for managing third-party AI risk that maps neatly to what your auditors and clients want to see.
Organisations under regulatory pressure. If you are caught by the EU AI Act, FCA expectations on AI in financial services, or the ICO's evolving AI guidance, ISO 42001 gives you a recognised structure to demonstrate due diligence. It is not a compliance shortcut, but it is a credible artefact when a regulator asks how you govern AI.
Smaller organisations sometimes assume this is "enterprise stuff." It is not. The standard is explicitly designed to scale, and the early adopters in the UK SME space are using it as a competitive differentiator with enterprise clients.
Why this matters even without certification
Most of our SME clients will not certify in the next twelve months. The cost and effort of formal certification only make sense at a certain scale or with specific procurement triggers. So why does this standard still matter?
Because it is becoming the operating manual for AI governance. The structure of ISO 42001 is what enterprise procurement teams, regulators, and insurers are anchoring on. Even informal alignment (mapping your AI inventory, impact assessments, and policies to the Annex A domains) makes you measurably more credible than competitors who cannot answer basic governance questions.
It also gives you a head start on overlapping regimes. As we covered in our piece on the EU AI Act, organisations already aligned with ISO 42001, ISO 27001, or GDPR are partially aligned with several Act obligations. Build the management system once, satisfy multiple stakeholders.
Where ISO 42001 sits next to other standards
A common question: do I still need ISO 27001? Yes. ISO 42001 does not replace your information security management system; it sits alongside it. The two are designed to integrate. Most of the AI-specific data controls in ISO 42001 assume you already have basic information security hygiene in place. If you have ISO 27001, the lift to add ISO 42001 is significantly smaller.
Mapping to other regimes is straightforward in principle:
- EU AI Act. ISO 42001 covers many of the governance, documentation, and human oversight obligations the Act imposes on high-risk AI. It is not an automatic compliance pass, but it is the closest off-the-shelf framework.
- NIST AI Risk Management Framework. Voluntary US framework, broadly compatible. Many organisations align with both.
- GDPR / UK GDPR. ISO 42001's data controls reinforce existing data protection obligations, particularly around training data and DPIAs.
If you are starting from zero, ISO 27001 is still the foundation. ISO 42001 layers on top.
Practical first steps
Whether or not you intend to certify, four steps give you most of the value:
1. Build an AI inventory. Every AI system you build, use, or have embedded in third-party tools. This is the same inventory work the EU AI Act demands, and it is the foundation for everything else.
2. Run an AI impact assessment. For each system, what is the potential impact on users, employees, customers, and third parties? This becomes your risk register.
3. Define accountability. Name an AI governance lead. Document who decides on risk classification, vendor selection, and deployment approval. Fragmented ownership is the single biggest predictor of governance failure.
4. Document your AI policy. Even a two-page policy that sets out acceptable use, prohibited use, human oversight expectations, and incident response is a meaningful step. ISO 42001 expects this; so does any serious enterprise client.
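The first three steps above can be sketched as a minimal register (the field names and the low/medium/high impact scale are our own illustration, not prescribed by the standard):

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One row in the AI inventory (fields are illustrative, not mandated)."""
    name: str
    supplier: str        # "internal" for systems you build yourself
    purpose: str
    owner: str           # named accountability, per step 3
    impact: str          # e.g. "low" / "medium" / "high", per step 2
    human_oversight: bool = True

# Hypothetical inventory: every AI system built, bought, or embedded.
inventory = [
    AISystem("Copilot for drafting", "Microsoft", "document drafting",
             owner="Ops lead", impact="low"),
    AISystem("CV screening model", "internal", "recruitment triage",
             owner="HR director", impact="high"),
]

# Step 2 output: high-impact systems rise to the top of the risk register.
risk_register = sorted(inventory, key=lambda s: s.impact != "high")
print([s.name for s in risk_register])
```

Even a spreadsheet with these columns gets you most of the value; the discipline of naming an owner and an impact level per system is what auditors and procurement teams actually look for.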
This is the kind of work our AI readiness assessment is built for. We map your AI inventory against ISO 42001 domains, identify gaps, and produce the documentation a future auditor or procurement team will ask for, without the overhead of full certification before it is justified.
The window is now
The pattern with ISO 27001 was clear: organisations that adopted early gained a procurement advantage that lasted years. Late adopters scrambled. ISO 42001 is on the same trajectory, accelerated by the EU AI Act and the wider tightening of AI governance expectations.
You do not need to certify tomorrow. You do need to start aligning now, while it is still a competitive advantage rather than a baseline requirement.
How we can help you align
If you would like a structured assessment of where you stand against ISO 42001 and what realistic alignment looks like for your business, our AI readiness assessment gives you a clear gap analysis and prioritised next steps. For ongoing governance support, our fractional AI officer service embeds senior expertise alongside your team. Book a free consultation to talk through what's appropriate for your stage.