Key takeaways
  • Compliance, certification and conformity assessment are three different things under the EU AI Act, and treating them as interchangeable is how operators get into trouble.
  • Conformity assessment is the legal obligation for providers of high risk AI systems under Articles 43 to 48 of the AI Act. It is a regulatory act, not a marketing claim.
  • Voluntary certification is a private, market driven signal that can strengthen an evidence file but cannot replace the legal obligation to complete conformity assessment for in scope systems.
  • For most standalone high risk AI systems, the provider may self assess against harmonised standards. For some, particularly those that are safety components of products covered by existing product safety regulation, notified body involvement is required.
  • The right way to think about voluntary certification such as Agent Certified is that it sits alongside conformity assessment, producing evidence the conformity assessment can rely on, without replacing the formal legal act.

A common mistake in AI Act conversations is treating three different things as if they are the same thing. Compliance. Certification. Conformity assessment. The words are used interchangeably in marketing material, in procurement questionnaires, and sometimes even in internal counsel memos. They should not be. Each one means something specific, each one has a different legal weight, and the confusion between them is the kind of confusion that surfaces painfully during an enforcement action.

This article is a short briefing on what each term actually means under the EU AI Act, how they relate to each other, and where a voluntary framework such as Agent Certified fits in. It is not legal advice. It is an operational sketch for readers trying to build a defensible posture before the high risk obligations of the Act take effect from 2 August 2026.

The three terms, defined

Compliance

Compliance is the general state of meeting a legal obligation. An organisation is compliant with the EU AI Act when it has met all the obligations that apply to it for the systems it operates. Compliance is binary in principle but messy in practice, because there are many obligations and they apply differentially based on the risk classification of each system, the role of the organisation (provider, deployer, importer, distributor), and the timing of each obligation within the phased implementation schedule.

Compliance is not something an organisation achieves once and moves on from. It is a continuing state. The Act imposes post market monitoring obligations (Article 72) and serious incident reporting obligations (Article 73), and it requires that providers continue to evaluate whether their high risk systems remain in conformity over their full lifecycle.

Conformity assessment

Conformity assessment is a specific regulatory act defined in EU law. It is the formal process by which a provider of a high risk AI system demonstrates that the system meets the requirements set out in Section 2 of Chapter III of the AI Act, covering risk management, data governance, technical documentation, record keeping, transparency, human oversight, accuracy, robustness and cybersecurity. Articles 43 to 48 of the AI Act describe the conformity assessment procedures in detail.

For most standalone high risk AI systems, the provider may conduct conformity assessment based on internal control, meaning a self assessment against harmonised standards or common specifications, followed by a declaration of conformity and affixing of the CE marking. For high risk AI systems that are safety components of products covered by existing Union harmonisation legislation, such as medical devices or machinery, conformity assessment follows the procedures of that existing legislation, often involving a notified body.

Conformity assessment is not optional. It is a prerequisite for placing a high risk AI system on the Union market or putting it into service. Operators who treat it as a later stage concern are misreading the Act.

Certification

Certification, in the general sense, is the formal recognition by a third party that an organisation, product or process meets a defined set of criteria. It can be statutory or voluntary, accredited or unaccredited, legally required or commercially elective. ISO 42001 certification by an accredited certification body is an example of formal voluntary certification with widely recognised international weight. A certification issued under a private framework such as Agent Certified is an example of voluntary certification with market recognition but without the machinery of national accreditation bodies behind it.

Certification is not the same thing as conformity assessment. Holding a certificate does not automatically mean a system has undergone conformity assessment, and conformity assessment does not require the organisation to hold any particular certificate. The two are related but distinct legal acts.

Where the three interact

In practice, operators use all three together. A well run operator pursues compliance as a continuing state, undergoes conformity assessment as a specific regulatory act for each in scope system, and uses certification as a voluntary signal that both strengthens the conformity assessment evidence and produces a durable artefact for counterparties.

The interaction works approximately as follows. An ISO 42001 certificate produces evidence on governance, management system structure, roles, and continual improvement. That evidence maps cleanly to AI Act Article 9 on risk management and to the technical documentation obligations under Annex IV. A framework certification such as Agent Certified produces evidence on technical controls, the Autonomy Envelope, Trust and Safety, and Context Integrity, which maps to AI Act Articles 10, 13, 14 and 15. Neither certification replaces conformity assessment. Both strengthen the evidence on which conformity assessment depends.
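
One way to keep that mapping honest is to hold it as a simple, machine readable crosswalk that both the conformity assessment file and the certification evidence can reference. The sketch below is illustrative only: the artefact names and field layout are hypothetical, and the article references simply restate the mapping described above.

    # Illustrative crosswalk from certification artefacts to the AI Act
    # requirements they support. Artefact names and structure are hypothetical.
    CROSSWALK = {
        "ISO 42001 certificate": {
            "evidence_type": "management system",
            "covers": [
                "Article 9 (risk management)",
                "Annex IV (technical documentation, governance elements)",
            ],
        },
        "Agent Certified assessment": {
            "evidence_type": "agent specific technical controls",
            "covers": [
                "Article 10 (data and data governance)",
                "Article 13 (transparency)",
                "Article 14 (human oversight)",
                "Article 15 (accuracy, robustness, cybersecurity)",
            ],
        },
    }

    def articles_supported_by(artefact: str) -> list[str]:
        """Return the AI Act requirements a given artefact is mapped to."""
        return CROSSWALK.get(artefact, {}).get("covers", [])

    for artefact, detail in CROSSWALK.items():
        print(f"{artefact}: covers {len(detail['covers'])} requirements")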

The legal primacy runs in one direction. Conformity assessment is a regulatory obligation that cannot be discharged by private certification. Private certification is useful evidence that cannot stand in place of conformity assessment.

Where operators get it wrong

There are three common errors.

The first is treating a private certification as if it were a CE marking equivalent. A certificate under a voluntary framework, whether formal or informal, is not a regulatory clearance to place a high risk AI system on the Union market. Presenting it as if it were is a marketing error that can become a legal problem during enforcement.

The second is treating ISO 42001 certification as sufficient for AI Act compliance. ISO 42001 is a management system standard. It governs how the organisation runs its AI programme. It does not assess any specific AI system against the specific technical requirements of the AI Act's high risk regime. Operators who stop at ISO 42001 and believe they are done are missing a substantial part of the obligation, especially the technical documentation requirements of Annex IV and the specific risk management requirements of Article 9.

The third is the opposite error: treating conformity assessment as the only thing that matters and ignoring voluntary certification entirely. This error usually surfaces during an insurer or counterparty conversation. The operator has technically met their conformity assessment obligation but has no durable, legible artefact to present to an external reader, and spends weeks producing ad hoc documentation under pressure. Voluntary certification, done well, produces that artefact in advance.

Harmonised standards and the role of CEN-CENELEC

The European standardisation organisations CEN and CENELEC, working under a mandate from the European Commission, are developing harmonised standards to support the AI Act. When a harmonised standard is published in the Official Journal of the European Union, compliance with that standard creates a presumption of conformity with the corresponding AI Act requirements. This is a significant practical mechanism because it allows providers to self assess against a specific, testable technical standard rather than against the open language of the Act itself.

As of early 2026, several harmonised standards are in preparation under the CEN-CENELEC Joint Technical Committee 21 on Artificial Intelligence. The final set is expected to become available through 2026 and into 2027. Until they are formally listed in the Official Journal, the presumption of conformity mechanism is not yet operational, and providers must demonstrate compliance with the underlying requirements directly.

Voluntary frameworks such as Agent Certified are not harmonised standards in this legal sense. They are private instruments that can help operators produce the evidence their conformity assessment relies on, but they do not by themselves trigger the presumption of conformity under Article 40 of the Act.

Notified bodies and third party conformity assessment

For some categories of high risk AI system, the AI Act requires conformity assessment with the involvement of a notified body. A notified body is an organisation designated by a member state to perform conformity assessment tasks under specific regulations. The list of notified bodies for the AI Act is maintained by the European Commission and is published in the NANDO database.

Most standalone AI systems classified as high risk under Annex III of the AI Act are eligible for conformity assessment based on internal control, meaning the provider self assesses without notified body involvement. The exceptions include biometric systems under Annex III point 1 where the provider has not applied harmonised standards or common specifications in full, and high risk AI systems that are also covered by specific sectoral product safety legislation, where the notified body involvement is driven by that legislation rather than by the AI Act directly.

The practical implication is that most operators of autonomous agents under the AI Act will not be working with a notified body on the AI specific conformity assessment. They will be self assessing against harmonised standards (once available) or against the underlying Article requirements. This makes the quality of their own evidence file the single most important factor in the defensibility of their position.

Why the distinction matters for agents specifically

Autonomous AI agents create a specific pressure on the compliance, certification and conformity assessment distinction. The AI Act's technical requirements were drafted with classical AI systems in mind, and the mapping from Article language to agent specific evidence is not always obvious. An agent's "accuracy and robustness" under Article 15 is not the same kind of measurement as a classification model's accuracy score. Its "human oversight" under Article 14 is not a simple human in the loop toggle. Its "technical documentation" under Annex IV needs to cover tool invocation, multi step reasoning, and the agent's autonomy envelope, none of which are items the Annex explicitly names.

This is why a framework built specifically for agents becomes useful. Agent Certified does not replace conformity assessment. It provides a structured way to produce the technical, governance and oversight evidence that conformity assessment needs when the system in question is an agent. Readers who want the full picture of how that works should read the article on the seven dimensions and the full methodology.

A pragmatic posture for operators

The posture that holds up in practice is the one that treats the three concepts as complementary layers, not as competing labels.

At the bottom layer, compliance as a continuing state. A documented inventory of AI systems. A living risk register. A clear view on which systems are in scope of the AI Act and which are not. Assigned ownership. A working relationship with competent authorities where relevant.

In the middle layer, formal conformity assessment for in scope high risk systems, against harmonised standards where available, with a technical documentation file that matches Annex IV and a declaration of conformity on record before the system is placed on the market or put into service.

At the top layer, voluntary certification that produces legible, durable artefacts counterparties can read and rely on without having to reconstruct the evidence themselves. ISO 42001 for the management system layer. Agent Certified or equivalent for the agent specific technical and oversight layer. Sector specific assurance where relevant.

Operators who build posture in this way find that each layer reinforces the others. The compliance layer produces the evidence the conformity assessment needs. The conformity assessment produces the artefacts the certification layer can point to. The certification layer produces the signal that procurement teams and insurers can read in minutes instead of weeks.
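
A minimal sketch of how that chain can be recorded is below, assuming a hypothetical per system register; the field names and the gap checks are illustrative, not a prescribed schema.

    # Illustrative register entry linking the three layers for one system.
    # All field names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class SystemRecord:
        name: str
        role: str                     # provider, deployer, importer or distributor
        risk_class: str               # e.g. "high risk (Annex III)" or "out of scope"
        owner: str                    # accountable owner in the compliance layer
        conformity_assessed: bool     # declaration of conformity on record
        certifications: list[str] = field(default_factory=list)  # voluntary signals

        def gaps(self) -> list[str]:
            """Flag obvious gaps between the layers for this system."""
            issues = []
            if self.risk_class.startswith("high risk") and not self.conformity_assessed:
                issues.append("high risk system without a completed conformity assessment")
            if self.conformity_assessed and not self.certifications:
                issues.append("no durable certification artefact for counterparties")
            return issues

    record = SystemRecord(
        name="support-triage-agent",
        role="provider",
        risk_class="high risk (Annex III)",
        owner="head of AI governance",
        conformity_assessed=False,
        certifications=["ISO 42001"],
    )
    print(record.gaps())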

What to do now

The most useful exercise for most operators is the evidence mapping exercise. Take each of the specific obligations that apply to your in scope systems under the AI Act. For each one, identify the evidence you currently have, where it lives, and who owns it. Then identify the gaps. Most operators find they are stronger on governance than on technical documentation, weaker on human oversight design than they thought, and weaker on post market monitoring than the Act expects.
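
As a rough sketch, the exercise can be run as a flat register of obligations with the evidence, location and owner recorded against each one; the obligation rows and field values below are hypothetical examples, not a complete list.

    # Illustrative evidence mapping: one row per obligation, then list the gaps.
    # Obligation names, evidence and owners are hypothetical examples.
    obligations = [
        {"obligation": "Article 9 risk management system",
         "evidence": "risk register v3", "location": "GRC tool", "owner": "risk lead"},
        {"obligation": "Annex IV technical documentation",
         "evidence": None, "location": None, "owner": "engineering lead"},
        {"obligation": "Article 14 human oversight design",
         "evidence": "oversight runbook (draft)", "location": "internal wiki", "owner": "product lead"},
        {"obligation": "Article 72 post market monitoring plan",
         "evidence": None, "location": None, "owner": None},
    ]

    gaps = [row["obligation"] for row in obligations
            if not row["evidence"] or not row["owner"]]
    print("Gaps to remediate:", gaps)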

The remediation plan that comes out of that exercise is the compliance plan. The evidence that comes out of closing the gaps is the input to conformity assessment. The durable artefacts that can be produced once the evidence exists are the input to certification. All three layers then move forward in parallel.

Readers interested in starting the technical layer of that exercise against an agent specific framework can read the full methodology, review the certification levels, or request a formal assessment.