Key takeaways
  • NIST AI RMF is a voluntary vocabulary. ISO 42001 is a certifiable management system standard. The EU AI Act is binding law. The three operate at different levels of the stack and are not substitutes for one another.
  • The EU AI Act is the only instrument that can impose fines. The other two shape evidence, posture and procurement defensibility.
  • Operators already certified to ISO 42001 carry roughly sixty percent of the evidence needed for AI Act conformity assessment for high risk systems. NIST AI RMF alignment produces the technical artefacts both of the others ask for.
  • The biggest gap between the three is on autonomous agent behaviour. None of them fully addresses multi step, tool using agents. This is the gap Agent Certified is built to close.
  • The practical sequence for most operators is: align to NIST AI RMF for the technical layer, pursue or evidence ISO 42001 for the governance layer, and map both to EU AI Act articles for compliance purposes.

An operator asking the question "which AI standard do we need to follow" in 2026 will usually receive three answers at once, delivered with different urgency by different advisers. A law firm will say the EU AI Act. A Big Four firm will say ISO 42001. An engineering team will say NIST AI RMF. All three are correct in their own way, and the disagreement is not a turf war. It is a reflection of the fact that the three instruments operate at different levels of the stack, and a serious operator needs a working view of all three.

This article compares the three instruments from the perspective of a practical buyer. What each is. What it demands. Where it overlaps with the others. Where it stops. And where an operator has to go beyond the three to cover autonomous agent behaviour. The companion article on the seven dimensions of Agent Certified explains how the gap is closed.

What each instrument is

NIST AI RMF 1.0

The NIST AI Risk Management Framework is a voluntary framework published by the US National Institute of Standards and Technology in January 2023. It is organised around four functions: Govern, Map, Measure and Manage. It is explicit about being neither a compliance tool nor a certification, and it avoids prescribing specific controls. What it provides is a shared vocabulary and a shared set of risk questions that teams can use to structure their AI risk work.

NIST AI RMF is the most widely referenced technical vocabulary in the AI risk space. Engineering teams find it immediately useful because it describes the problem space at the level they think at. Regulators outside the United States, including European ones, reference it in their own guidance. Insurance underwriters use it as a reference point when constructing AI risk questionnaires.

ISO/IEC 42001:2023

ISO 42001 is the first international management system standard for artificial intelligence, published by the International Organization for Standardization and the International Electrotechnical Commission in December 2023. It follows the familiar structure of other ISO management system standards such as ISO 27001 for information security, with clauses covering context, leadership, planning, support, operation, performance evaluation and improvement.

Crucially, ISO 42001 is certifiable. Organisations can be audited against it and receive a formal certificate from an accredited body. That makes it qualitatively different from NIST AI RMF. The certificate is a durable artefact that procurement teams, regulators and insurers recognise.

EU AI Act

Regulation (EU) 2024/1689, known as the EU AI Act, is binding law inside the European Union. It entered into force on 1 August 2024 and phases in over a multi year implementation schedule, with provisions on general purpose AI and most other obligations taking effect through 2025 and 2026. High risk system obligations apply from 2 August 2026 for most use cases.

The Act classifies AI systems into risk categories: prohibited, high risk, limited risk and minimal risk. It imposes obligations on providers and deployers of high risk systems that include risk management (Article 9), data governance (Article 10), technical documentation (Article 11), record keeping (Article 12), transparency to users (Article 13), human oversight (Article 14), accuracy and robustness (Article 15), and specific deployer obligations (Article 26). It is enforced by national competent authorities with significant fining powers.

What each one demands

The three instruments ask for different things in different language, and they ask with different force.

NIST AI RMF asks operators to develop the capability to identify, analyse and communicate AI risk through its four functions. It is not prescriptive. It expects the operator to choose controls proportional to the risk. The artefact it asks for is, broadly, a working risk management practice that a reviewer would recognise.

ISO 42001 asks for a documented management system with policies, objectives, roles, controls, monitoring and continual improvement. It expects the operator to treat AI governance the way mature organisations treat information security or quality management: as a repeatable, auditable practice. The artefact it asks for is certification by an accredited body, or equivalent evidence of an audit ready management system.

The EU AI Act asks operators of in scope systems to meet a list of specific obligations, including conformity assessment before the system is placed on the market, technical documentation that matches the scope of Annex IV, and post market monitoring. It expects operators to demonstrate compliance when asked by competent authorities, with fines of up to thirty five million euro or seven percent of global turnover for the most serious breaches.

Where they overlap

The overlap between the three instruments is substantial, and it is deliberate.

All three address risk management as a governance function. NIST's Govern and Manage functions, ISO 42001's clauses 4 to 6, and AI Act Article 9 all ask for a documented, accountable process for identifying, treating and monitoring AI risk.

All three address data governance. ISO 42001 addresses it through its information assets control. NIST addresses it through its Map and Measure functions, particularly around data quality and context. The AI Act addresses it explicitly in Article 10, with specific requirements for training, validation and testing data quality.

All three address human oversight. NIST references it as part of the Manage function. ISO 42001 includes it in its operational controls. The AI Act makes it a specific obligation under Article 14 for high risk systems, requiring that systems be designed so that they can be effectively overseen by natural persons.

The practical implication is that an operator doing serious work against any one of the three is producing most of the evidence they need for the other two. This is not an accident. NIST engaged with the ISO working group during ISO 42001's development. The ISO working group engaged with European Commission staff during AI Act drafting. The three were built to be compatible, even where they differ on force and scope.
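The overlap described above can be captured in a simple crosswalk. The sketch below is illustrative only: the groupings follow this article's text, and the clause and control labels are shorthand, not an official mapping published by NIST, ISO or the European Commission.

```python
# Illustrative crosswalk between the three instruments, keyed by
# governance theme. Labels are shorthand from the article's text,
# not an official mapping.
CROSSWALK = {
    "risk management": {
        "nist_ai_rmf": ["Govern", "Manage"],
        "iso_42001": ["Clause 4", "Clause 5", "Clause 6"],
        "eu_ai_act": ["Article 9"],
    },
    "data governance": {
        "nist_ai_rmf": ["Map", "Measure"],
        "iso_42001": ["Information assets controls"],
        "eu_ai_act": ["Article 10"],
    },
    "human oversight": {
        "nist_ai_rmf": ["Manage"],
        "iso_42001": ["Operational controls"],
        "eu_ai_act": ["Article 14"],
    },
}

def references_for(theme: str) -> dict:
    """Return the clause/article references for a governance theme."""
    return CROSSWALK[theme]

print(references_for("human oversight")["eu_ai_act"])  # ['Article 14']
```

A team producing evidence under one column can use a structure like this to tag the same artefact against the other two, which is the practical payoff of the deliberate compatibility described above.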

Where they differ

The differences are equally important and tend to be where operators get into trouble.

NIST AI RMF is voluntary and vocabulary led. It tells an operator how to think about risk but does not tell them what they must do. For an engineering team it is liberating. For a board asked to evidence compliance to an insurer, it is insufficient on its own because there is no certificate to produce.

ISO 42001 is a management system standard, not a technical standard. It asks whether you have a policy, whether you have assigned roles, whether you monitor and improve. It does not ask whether your retrieval augmented generation pipeline validates its inputs or whether your agent's kill switch actually works. Operators who interpret ISO 42001 as sufficient technical due diligence are making a category error.

The EU AI Act is binding and specific, but it was largely drafted with classical AI systems in mind. It talks about systems that take inputs and produce outputs. It does not directly address AI agents that act over time, use tools, and interact with other agents. The Act is adaptable to agents, but the guidance is still catching up and many of the specific control expectations for agents are being filled in by national regulators, sector authorities and voluntary frameworks.

The autonomous agent gap

None of the three instruments fully addresses autonomous agent behaviour. An AI agent is not a single inference. It is a sequence of decisions, tool invocations and state changes taken over time, often with long lived side effects. The conceptual model of "input goes in, output comes out, check the output" does not match what an agent actually is.

Specific gaps show up across the three instruments when applied to agents. None of them fully specifies how to evaluate a kill switch for a multi step agent. None of them produces a clear framework for the Autonomy Envelope: the explicit boundary between what the agent may do without human confirmation and what requires a human in the loop. None of them addresses multi agent systems where one agent invokes another in a chain that crosses trust boundaries. None of them addresses the specific failure modes of retrieval augmented generation against poisoned context.
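One way to make the Autonomy Envelope and kill switch concrete is as an explicit gate in the agent's action loop. The sketch below is a hypothetical illustration: the action names, the `KillSwitch` class and the `gate_action` function are invented for this example and are not prescribed by any of the three instruments, or by Agent Certified.

```python
class KillSwitch:
    """A shared flag an operator can trip to halt a multi-step agent."""

    def __init__(self) -> None:
        self._tripped = False

    def trip(self) -> None:
        self._tripped = True

    @property
    def tripped(self) -> bool:
        return self._tripped


# The Autonomy Envelope: actions the agent may take without a human.
# Everything outside this set requires explicit human confirmation.
AUTONOMOUS_ACTIONS = {"read_document", "search_index", "draft_reply"}


def gate_action(action: str, kill_switch: KillSwitch,
                human_approved: bool = False) -> bool:
    """Return True if the agent may execute this action now."""
    if kill_switch.tripped:
        return False           # the hard stop overrides everything
    if action in AUTONOMOUS_ACTIONS:
        return True            # inside the envelope: proceed
    return human_approved      # outside the envelope: human in the loop


ks = KillSwitch()
assert gate_action("draft_reply", ks)                      # inside envelope
assert not gate_action("send_payment", ks)                 # needs a human
assert gate_action("send_payment", ks, human_approved=True)
ks.trip()
assert not gate_action("draft_reply", ks)                  # kill switch wins
```

The point of the sketch is that the envelope is an explicit, testable artefact rather than an implicit property of prompts, which is exactly the kind of evidence the three instruments do not currently ask for.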

These are not theoretical gaps. They are the dimensions on which real agents are actually failing in production, and the dimensions on which insurers are explicitly saying they cannot price risk until there is a clearer signal. This is the gap that Agent Certified was designed to close. The framework is built on top of the three reference instruments and maps each of its seven dimensions back to at least one primary reference, but it adds the operational detail that an agent specifically needs. The full methodology is published on the methodology page.

A practical sequence for operators

Most European operators are not starting from zero on all three instruments at the same time. They usually have some alignment with one of the three and limited exposure to the others. The following sequence works for most situations.

Step one: use NIST AI RMF to structure the technical risk layer. Its vocabulary is the cleanest, its functions map to real engineering practice, and it is free. Teams that have never done AI risk work will find NIST the least intimidating starting point.

Step two: run the governance layer through ISO 42001, even if formal certification is not the near term goal. The clauses are familiar if the operator already runs ISO 27001. The evidence the clauses ask for is durable and reusable across audits, insurers and procurement teams. Operators pursuing formal certification should budget eight to eighteen months depending on organisational maturity.

Step three: map the output of steps one and two to the specific articles of the EU AI Act that apply to your in scope systems. For most operators this is Articles 9, 10, 14, 15 and 26. The mapping exercise itself is valuable because it surfaces gaps in ways that a generic gap analysis does not. Legal counsel should be involved, but the mapping is primarily an evidence exercise.
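The mapping exercise in step three can be run as a simple gap report: list the articles in scope, attach the evidence artefacts produced in steps one and two, and flag what is missing. The article numbers below come from the Act; the artefact names are invented placeholders for this sketch, not a recommended evidence set.

```python
# Articles most operators are in scope for, per the article's text.
IN_SCOPE_ARTICLES = {
    "Article 9":  "risk management system",
    "Article 10": "data and data governance",
    "Article 14": "human oversight",
    "Article 15": "accuracy and robustness",
    "Article 26": "deployer obligations",
}

# Artefacts from the NIST/ISO work. Names are illustrative placeholders.
evidence = {
    "Article 9":  ["risk register", "ISO 42001 clause 6 risk plan"],
    "Article 10": ["training data lineage report"],
    "Article 14": [],   # nothing mapped yet: a surfaced gap
    "Article 15": ["robustness test results"],
    "Article 26": [],
}

def gap_report(evidence: dict) -> list[str]:
    """Return the in-scope articles with no supporting evidence yet."""
    return [art for art in IN_SCOPE_ARTICLES if not evidence.get(art)]

print(gap_report(evidence))  # ['Article 14', 'Article 26']
```

Even this trivial structure surfaces gaps article by article, which is the property that makes the mapping exercise more useful than a generic gap analysis.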

Step four: where autonomous agents are in production, close the agent specific gap through a framework that was built for agents. Agent Certified is one such framework. There will be others. What matters is that the agent specific evidence exists, because the three primary instruments alone do not produce it.

What this means for counterparty conversations

An insurer, procurement lead or regulator asking a European operator for their AI posture in 2026 is usually asking for evidence on all three instruments at once, whether or not they say so explicitly. The operator who can produce a single readable signal that maps back to each instrument saves themselves a cycle of clarifying questions and produces a stronger posture overall.

The right way to read this article is not as a ranking of the three instruments. It is as a description of three complementary tools, each answering a different question, that a serious operator needs to use together. NIST for vocabulary. ISO 42001 for governance. The AI Act for legal obligation. And something agent specific to cover the part none of them fully reach.

Further reading

Readers who want to see how Agent Certified maps to each of these instruments in detail should start with the methodology reference table. Readers comparing voluntary certification to mandatory conformity assessment under the AI Act should read the companion article on compliance versus certification under the EU AI Act. Operators considering a formal assessment can start with the assessment request page.