AI Governance · 29 March 2026 · 15 min read · Attila Bognar

EU AI Act High-Risk AI Systems: Building Conformity Assessment Readiness Before August 2026

The EU AI Act's high-risk AI system obligations apply from August 2, 2026. Deployers and providers have four months to build conformity assessment capability, technical documentation, risk management systems, and post-market monitoring. This is the operational preparation guide.

Tags: EU AI Act · High-risk AI · Conformity assessment · AI governance · Risk management · Technical documentation

On August 2, 2026, the obligations for high-risk AI systems under the EU AI Act (Regulation 2024/1689) become applicable. The prohibited practices have been enforceable since February 2, 2025. The GPAI obligations took effect on August 2, 2025. But the high-risk regime — the most operationally demanding tier of the AI Act — has its own timeline. Four months remain.

The risk classification decision tree determines whether your AI systems fall into the high-risk category. This post assumes you have done that classification work and identified one or more systems that are high-risk under Annex I or Annex III. The question now is: what must you build before August 2?

The answer is a compliance infrastructure. Not a policy document — an operational system encompassing risk management, data governance, technical documentation, human oversight mechanisms, accuracy monitoring, and post-market surveillance. For providers, this infrastructure must be in place before the system is placed on the market. For deployers, it must be in place before the system is put into service. The August 2 date means neither providers nor deployers can afford to treat this as a future problem.

Provider Obligations vs Deployer Obligations

The AI Act distinguishes sharply between providers (who develop or have an AI system developed and place it on the market or put it into service under their own name) and deployers (who use an AI system under their authority). Most enterprises are deployers of AI systems that others provide. But the boundary is not always clean.

You are a provider if you develop an AI system in-house and deploy it in a high-risk context. You are also a provider if you take an existing AI system, substantially modify it, and deploy the modified version. You are also treated as a provider if you put your name or trademark on a high-risk AI system, even if someone else developed it.

Provider obligations, set out in Art. 16 and spanning both the Chapter 3, Section 2 requirements (Art. 8-15) and the related conformity and post-market duties, include:

  • Establishing and maintaining a risk management system (Art. 9)
  • Ensuring data governance and management practices (Art. 10)
  • Drawing up technical documentation (Art. 11)
  • Designing for record-keeping (logging) (Art. 12)
  • Ensuring transparency and provision of information to deployers (Art. 13)
  • Designing for human oversight (Art. 14)
  • Ensuring appropriate accuracy, robustness, and cybersecurity (Art. 15)
  • Conducting conformity assessment before placing on the market (Art. 43)
  • Drawing up an EU declaration of conformity (Art. 47)
  • Affixing CE marking (Art. 48)
  • Establishing a post-market monitoring system (Art. 72)
  • Reporting serious incidents (Art. 73)

Deployer obligations under Chapter 3, Section 3 (Art. 26) include:

  • Using the AI system in accordance with the instructions of use accompanying it
  • Ensuring human oversight by natural persons who have the necessary competence, training, and authority
  • Monitoring the operation of the AI system based on the instructions of use
  • Suspending or discontinuing use if the system presents a risk
  • Keeping logs automatically generated by the AI system, where under their control
  • Conducting a fundamental rights impact assessment for certain high-risk systems (Art. 27)
  • Informing affected persons when the high-risk AI system makes decisions about them

The operational burden differs substantially. Providers must build the compliance infrastructure from the ground up. Deployers must build the governance, monitoring, and oversight infrastructure that operationalises the provider's instructions. Neither is trivial.

Risk Management System: Article 9

Article 9 requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system. This system must be a continuous iterative process planned and run throughout the entire lifecycle of the AI system.

The risk management system is not a risk register or a periodic assessment. It is an ongoing process that must:

Identify and analyse known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety, or fundamental rights. This analysis must consider the intended purpose, conditions of reasonably foreseeable misuse, and the specific context in which the system is deployed.

Evaluate risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse. The evaluation must consider the severity and probability of harm, the extent to which individuals are exposed to the risk, and the vulnerability of affected groups.

Adopt appropriate and targeted risk management measures addressing identified risks. These measures must consider the state of the art, the available technical means, and the overall expected benefit of the system weighed against the residual risks.

Test the system to ensure risk management measures are effective and appropriate. Testing must include, where appropriate, testing in real-world conditions.

Practical implementation structure:

  1. Conduct an initial risk identification covering the intended purpose, foreseeable misuse, training data biases, and failure modes
  2. For each identified risk, assess severity (potential harm), probability (likelihood given controls), and exposure (who and how many people are affected)
  3. Define risk mitigation measures — technical (constraints, guardrails, input validation), operational (human oversight procedures, deployment restrictions), and organisational (training, monitoring, escalation)
  4. Test each mitigation measure for effectiveness — can the system still produce harmful outputs despite the mitigation? Under what conditions?
  5. Document residual risks — risks that remain after mitigation — and ensure these are communicated to deployers through the instructions of use
  6. Establish monitoring triggers for risk re-assessment: significant changes to the system, new evidence of harm, changes in deployment context, or adverse incident reports

The risk management system must be updated throughout the lifecycle. When the system is modified, when new risks are identified through post-market monitoring, or when the deployment context changes, the risk assessment must be revisited.
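
To make the six steps above concrete, here is a minimal sketch of an Art. 9 risk register kept as code rather than a spreadsheet, with severity/probability/exposure scoring and re-assessment triggers. The names (Risk, Mitigation, the multiplicative score) are hypothetical illustrations of one possible structure, not terms from the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Mitigation:
    description: str          # e.g. "input validation on free-text fields"
    kind: str                 # "technical" | "operational" | "organisational"
    tested: bool = False      # step 4: effectiveness verified under adverse conditions

@dataclass
class Risk:
    risk_id: str
    description: str          # step 1: identified risk (misuse, bias, failure mode)
    severity: Level           # step 2: potential harm
    probability: Level        # step 2: likelihood given current controls
    exposure: Level           # step 2: who and how many people are affected
    mitigations: list[Mitigation] = field(default_factory=list)
    residual_accepted: bool = False  # step 5: residual risk documented for deployers

    def inherent_score(self) -> int:
        # Simple multiplicative scoring; real schemes vary by organisation.
        return self.severity.value * self.probability.value * self.exposure.value

    def needs_reassessment(self, system_modified: bool, new_harm_evidence: bool,
                           context_changed: bool) -> bool:
        # Step 6: monitoring triggers that reopen the assessment.
        return system_modified or new_harm_evidence or context_changed

# Illustrative example: a bias risk in a CV-screening system
r = Risk("R-001", "Ranking model penalises employment gaps",
         severity=Level.HIGH, probability=Level.MEDIUM, exposure=Level.HIGH)
r.mitigations.append(Mitigation("Group-wise outcome monitoring", "technical"))
print(r.inherent_score())  # 3 * 2 * 3 = 18
```

The value of a structured register over a document is that the re-assessment triggers can be wired to actual events: a deployment change, an incident report, or a post-market monitoring alert.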

Data Governance: Article 10

Article 10 addresses the quality of training, validation, and testing data for high-risk AI systems. This is a provider obligation, but deployers who fine-tune or retrain models become subject to these requirements for the data they introduce.

The data governance requirements include:

Data quality criteria: Training, validation, and testing datasets must be relevant, sufficiently representative, and to the best extent possible free of errors and complete in view of the intended purpose. The "to the best extent possible" qualifier is not an escape clause — it means the provider must demonstrate what efforts were made to ensure data quality.

Bias examination: Datasets must be examined in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights, or lead to discrimination. Where bias is identified, appropriate measures to detect, prevent, and mitigate it must be applied.
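
The Act does not prescribe a bias metric, so any concrete check is an implementation choice. As one illustration, a sketch of a simple selection-rate comparison across a protected attribute (a demographic-parity-style check); the column names and the 0.2 threshold are hypothetical, not legal standards.

```python
import pandas as pd

def selection_rate_gap(df: pd.DataFrame, group_col: str, label_col: str) -> dict:
    """Compare positive-outcome rates across groups in a labelled dataset.

    A large gap does not prove unlawful discrimination, but it is the kind
    of signal Art. 10 expects providers to examine, document, and where
    relevant mitigate.
    """
    rates = df.groupby(group_col)[label_col].mean()
    gap = float(rates.max() - rates.min())
    return {"rates": rates.to_dict(), "max_gap": gap}

# Illustrative usage with hypothetical column names
df = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "m", "f"],
    "hired":  [1, 0, 1, 1, 1, 0],
})
result = selection_rate_gap(df, group_col="gender", label_col="hired")
if result["max_gap"] > 0.2:  # threshold is a policy choice, not a legal rule
    print("Examine and document:", result)
```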

Data documentation: The datasets must be described in the technical documentation, including their characteristics, the collection methodology, preprocessing steps, labelling procedures, data cleaning operations, and the rationale for the choice of data.

Special categories of personal data: Processing of special categories of personal data (Art. 10(5)) for bias detection and correction is permitted under strict conditions — it must be strictly necessary, subject to appropriate safeguards, and proportionate. This is a carefully bounded exception to GDPR Art. 9 restrictions.

For deployers, the practical implication is ensuring that your use of the AI system does not introduce data that degrades the quality guarantees the provider established. If you feed the system domain-specific data that was not part of the provider's training or validation scope, you are altering the risk profile and may trigger re-assessment obligations.
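
One way a deployer can detect that production inputs have drifted outside the provider's validated scope is a population stability index (PSI) comparison against a reference sample. A minimal sketch follows; the bucket count and the 0.25 rule of thumb are conventional industry choices, not requirements of the Act.

```python
import numpy as np

def psi(reference: np.ndarray, production: np.ndarray, buckets: int = 10) -> float:
    """Population Stability Index between a reference (validation-time)
    feature distribution and the same feature observed in production."""
    edges = np.quantile(reference, np.linspace(0, 1, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range values
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    ref_pct = np.clip(ref_pct, 1e-6, None)         # avoid log/division by zero
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(0)
ref = rng.normal(0, 1, 5_000)       # provider's validation distribution
prod = rng.normal(0.5, 1.2, 5_000)  # shifted production data
score = psi(ref, prod)
# Common rule of thumb: PSI > 0.25 signals material drift worth escalating
print(f"PSI = {score:.3f}")
```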

Technical Documentation: Article 11

Article 11 requires providers to draw up technical documentation before the system is placed on the market and to keep it up to date. Annex IV specifies the minimum content of this documentation.

The technical documentation must include:

  • A general description of the AI system: its intended purpose, the provider's identity, the version, and how the system interacts with hardware or software
  • A detailed description of the elements and development process: design specifications, system architecture, computational resources, algorithms used, data requirements, training methodology, and validation and testing procedures
  • Information about the monitoring, functioning, and control of the system: instructions for deployers, human oversight measures, and technical measures for interpretation of outputs
  • A description of the risk management system and the measures adopted
  • A description of changes made during the system's lifecycle
  • The conformity assessment documentation

For providers building systems in-house, this documentation must be created as part of the development process — not retrofitted after the system is built. Retrofitting technical documentation for a complex AI system is an exercise in archaeology: reverse-engineering design decisions, reconstructing training data provenance, and documenting architecture choices that were never recorded. Starting documentation during development is significantly less expensive than reconstructing it afterward.
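
Annex IV prescribes content, not a file format. One way to keep documentation growing with development is a machine-checkable manifest in version control next to the model code; a sketch follows, with field names that map loosely to the Annex IV headings and file paths that are entirely hypothetical.

```python
# Hypothetical Annex IV-aligned manifest kept in version control.
TECH_DOC_MANIFEST = {
    "general_description": {
        "intended_purpose": "docs/intended_purpose.md",
        "provider_identity": "docs/provider.md",
        "version": "1.4.0",
        "hardware_software_interactions": "docs/interfaces.md",
    },
    "development_process": {
        "design_specifications": "docs/design.md",
        "architecture": "docs/architecture.md",
        "training_methodology": "docs/training.md",
        "data_requirements": "docs/data.md",
        "validation_and_testing": "docs/validation.md",
    },
    "monitoring_and_control": {
        "instructions_for_deployers": "docs/instructions_of_use.md",
        "human_oversight_measures": "docs/oversight.md",
        "output_interpretation": "docs/interpretation.md",
    },
    "risk_management": "docs/risk_management.md",
    "lifecycle_changes": "CHANGELOG.md",
    "conformity_assessment": "docs/conformity.md",
}

def missing_entries(manifest: dict, prefix: str = "") -> list[str]:
    """Walk the manifest and report sections still left empty."""
    gaps = []
    for key, value in manifest.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            gaps.extend(missing_entries(value, prefix=path + "."))
        elif not value:
            gaps.append(path)
    return gaps

print(missing_entries(TECH_DOC_MANIFEST))  # [] once every section points somewhere
```

A check like this can run in CI, so a release that leaves an Annex IV section empty fails the build rather than surfacing during conformity assessment.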

For deployers procuring high-risk AI systems, the provider must furnish this documentation (or relevant portions) as part of the instructions of use. If your vendor cannot provide adequate technical documentation, that is a procurement red flag — it suggests the system may not be ready for the August 2026 compliance date.

Human Oversight: Article 14

Article 14 requires that high-risk AI systems are designed and developed so as to be effectively overseen by natural persons during their use. The oversight must enable the persons to whom human oversight is assigned to:

  • Properly understand the relevant capacities and limitations of the system and be able to duly monitor its operation
  • Remain aware of the possible tendency of automatically relying on the output of the system (automation bias)
  • Be able to correctly interpret the system's output, taking into account the characteristics of the system and the interpretation tools available
  • Be able to decide, in any particular situation, not to use the system or to disregard, override, or reverse the output
  • Be able to intervene in the operation or interrupt the system through a "stop" button or similar procedure

This is where the provider-deployer relationship becomes operationally critical. The provider must design the system to enable human oversight. The deployer must implement human oversight in practice — assigning competent persons, providing them with training, establishing procedures for when and how to override system outputs, and ensuring the organisational environment allows overseers to exercise genuine judgement without pressure to simply accept the system's outputs.
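
A sketch of what "designed for oversight" can mean at the code level: the system surfaces its output together with interpretation aids, and nothing takes effect without an explicit human decision that can accept, override, or halt. The workflow and names are hypothetical, not an Art. 14 template.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    ACCEPT = auto()    # overseer adopts the system output
    OVERRIDE = auto()  # overseer substitutes their own judgement
    HALT = auto()      # the Art. 14 "stop button": interrupt operation

@dataclass
class Recommendation:
    output: str
    confidence: float  # interpretation aid: how sure the model is
    rationale: str     # interpretation aid: why the model produced this

def oversight_gate(rec: Recommendation, decision: Decision,
                   override_value: str | None = None) -> str | None:
    """No recommendation takes effect without an explicit human decision.

    Every call should also be logged (Art. 12 record-keeping) so override
    and halt rates can be monitored: a near-100% accept rate is itself a
    signal of automation bias worth investigating.
    """
    if decision is Decision.HALT:
        raise RuntimeError("System interrupted by overseer")
    if decision is Decision.OVERRIDE:
        return override_value
    return rec.output

rec = Recommendation("shortlist", confidence=0.61, rationale="matched 4/6 criteria")
final = oversight_gate(rec, Decision.OVERRIDE, override_value="manual review")
```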

Common implementation failure: The deployer assigns human oversight to the same person who operates the system, under the same performance metrics that reward throughput. An HR specialist tasked with reviewing AI-recommended candidate shortlists while also being measured on time-to-fill is structurally incentivised to accept the AI's recommendation rather than exercise independent judgement. Human oversight requires not just assignment but organisational design that makes oversight genuine.

Accuracy, Robustness, and Cybersecurity: Article 15

Article 15 requires that high-risk AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity. These properties must be maintained throughout the system's lifecycle.

Accuracy must be declared in the instructions of use. Providers must specify the levels of accuracy — and the relevant accuracy metrics — that are appropriate for the intended purpose. Deployers must monitor whether the system continues to perform at the declared accuracy levels in their specific deployment context.

Robustness means the system must be resilient to errors, faults, or inconsistencies in the environment in which it operates. Technical redundancy solutions, including backup or fail-safe plans, may be required. The system must also be resilient to attempts by unauthorised third parties to alter its use or performance by exploiting vulnerabilities.

Cybersecurity measures must protect the system against attempts to alter its behaviour, outputs, or performance through adversarial attacks — data poisoning, model manipulation, and exploitation of vulnerabilities in the system's technical infrastructure.

For deployers, the practical obligation is monitoring. If the system's accuracy degrades in your deployment context — because your data distribution differs from the training data, because the real-world environment has changed, or because the system is being used at the boundaries of its intended purpose — you must detect that degradation and act on it. This requires baseline accuracy measurement at deployment and ongoing monitoring against that baseline.
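
A sketch of the baseline-plus-monitoring idea: record the accuracy measured at deployment and alert when a rolling window falls materially below it. The window size and tolerance are deployment-specific choices, and the sketch assumes labelled outcomes arrive after the fact (a resolved loan, a completed hire).

```python
from collections import deque

class AccuracyMonitor:
    """Tracks rolling accuracy against the provider's declared baseline."""

    def __init__(self, baseline: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline    # accuracy declared in the instructions of use
        self.tolerance = tolerance  # how much degradation triggers escalation
        self.outcomes: deque[bool] = deque(maxlen=window)

    def record(self, prediction, ground_truth) -> None:
        # Call this when the real outcome becomes known, however late.
        self.outcomes.append(prediction == ground_truth)

    def degraded(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False            # not enough evidence yet
        current = sum(self.outcomes) / len(self.outcomes)
        return current < self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.92)
# ... monitor.record(pred, truth) on each resolved case; escalate when degraded()
```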

Conformity Assessment: Article 43

Before a high-risk AI system is placed on the market, it must undergo a conformity assessment to verify compliance with the requirements in Chapter 3, Section 2. The conformity assessment procedure depends on the pathway through which the system was classified as high-risk.

For Annex III systems (use-case-based classification): Most Annex III categories allow the provider to perform conformity assessment based on internal control (Annex VI). This is a self-assessment — the provider verifies its own compliance with the requirements. The exception is biometric systems under Annex III, point 1, which require third-party assessment by a notified body (Annex VII) unless the provider has fully applied harmonised standards or common specifications covering the relevant requirements.

For Annex I systems (safety component classification): The conformity assessment follows the procedure established by the relevant Union harmonisation legislation listed in Annex I. This may require third-party assessment depending on the sector-specific rules.

Internal control conformity assessment under Annex VI requires the provider to:

  1. Verify that the quality management system complies with the requirements of Art. 17
  2. Examine the information in the technical documentation to assess whether the system complies with the relevant requirements
  3. Verify that the design and development process of the system and its post-market monitoring are consistent with the technical documentation

The conformity assessment results in an EU declaration of conformity (Art. 47) and the affixing of the CE marking (Art. 48). The declaration of conformity must be kept for ten years after the system is placed on the market.
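
The three internal-control steps lend themselves to a tracked checklist with evidence attached to each verification. A minimal sketch, assuming a hypothetical repository layout for the evidence paths:

```python
# Hypothetical Annex VI internal-control tracker: each verification step
# links to the evidence a market surveillance authority could request.
ANNEX_VI_STEPS = [
    {
        "step": "QMS complies with Art. 17",
        "evidence": ["qms/policy.md", "qms/audit-2026-05.pdf"],
        "verified": True,
    },
    {
        "step": "Technical documentation demonstrates compliance",
        "evidence": ["docs/annex-iv-manifest.json"],
        "verified": False,
    },
    {
        "step": "Design/development and post-market monitoring consistent with documentation",
        "evidence": [],
        "verified": False,
    },
]

def ready_for_declaration(steps: list[dict]) -> bool:
    """The Art. 47 declaration of conformity should only be drawn up once
    every step is verified and backed by evidence."""
    return all(s["verified"] and s["evidence"] for s in steps)

print(ready_for_declaration(ANNEX_VI_STEPS))  # False until all steps close
```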

Post-Market Monitoring: Article 72

Providers must establish and document a post-market monitoring system proportionate to the nature of the AI technology and the risks of the high-risk AI system. The system must actively and systematically collect, document, and analyse relevant data on the performance of the system throughout its lifetime.

Post-market monitoring is not an optional enhancement — it is a compliance obligation that begins at deployment and continues for the system's entire operational life. The monitoring system must be designed to:

  • Continuously evaluate the system's compliance with the requirements
  • Detect risks to health, safety, and fundamental rights that emerge during operation
  • Assess whether the system continues to perform within its declared accuracy, robustness, and safety parameters
  • Identify patterns that suggest the system's behaviour has changed in ways not anticipated during development

When post-market monitoring identifies a risk or non-compliance, the provider must take corrective action — which may range from updating the system to withdrawing it from the market.

For deployers, the practical implication is data sharing. Providers will need performance data from deployers to fulfil their post-market monitoring obligations. Deployers should expect contractual requirements to share monitoring data with providers and should build the infrastructure to collect and transmit that data.
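
A sketch of the kind of structured record a deployer might transmit under such a data-sharing clause. The schema is hypothetical: the Act requires the monitoring system but does not standardise an exchange format, so the fields and names below are illustrative choices.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class MonitoringReport:
    """Periodic performance snapshot a deployer shares with the provider."""
    system_id: str
    period_start: str
    period_end: str
    decisions_made: int
    human_overrides: int      # oversight signal: how often outputs were rejected
    measured_accuracy: float  # against the baseline declared under Art. 15
    suspected_incidents: int  # candidates for Art. 73 serious-incident review
    notes: str = ""

report = MonitoringReport(
    system_id="cv-screener-1.4.0",
    period_start="2026-08-01",
    period_end="2026-08-31",
    decisions_made=1_842,
    human_overrides=97,
    measured_accuracy=0.90,
    suspected_incidents=0,
    notes="Accuracy 2pp below baseline; drift check scheduled.",
)
payload = json.dumps(asdict(report))  # transmit via whatever channel the contract defines
```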

Four-Month Preparation Roadmap

With August 2, 2026 as the compliance date, preparation must be structured to deliver results, not just activity.

Weeks 1-4: Inventory and Classification Confirmation

  • Confirm the classification of all AI systems — verify that the classification analysis is current and accounts for any changes in deployment context since the initial assessment
  • For each high-risk system, confirm whether you are the provider, the deployer, or both
  • For procured systems, assess whether the vendor is prepared to deliver the documentation and support required under the AI Act — request evidence of their conformity assessment preparation
  • Identify gaps in your current governance infrastructure against the obligations for your role (provider or deployer)

Weeks 5-8: Infrastructure Build

  • Providers: Draft or complete technical documentation (Art. 11, Annex IV). Establish or formalise the risk management system (Art. 9). Implement data governance documentation (Art. 10). Design human oversight mechanisms (Art. 14).
  • Deployers: Establish human oversight procedures and assign competent persons. Define monitoring procedures for accuracy and performance. Conduct fundamental rights impact assessments where required (Art. 27). Build or configure logging infrastructure to capture system-generated records.
  • Both: Establish the quality management system (Art. 17) if not already in place. Define roles and responsibilities for AI Act compliance within the organisation.

Weeks 9-12: Assessment and Testing

  • Providers: Conduct the conformity assessment (Art. 43). Prepare the EU declaration of conformity (Art. 47). Establish the post-market monitoring system (Art. 72).
  • Deployers: Test human oversight procedures — can the assigned persons actually interpret outputs, identify errors, and override the system? Test monitoring infrastructure — does it detect accuracy degradation?
  • Both: Conduct a readiness review against all applicable obligations. Identify and address remaining gaps.

Weeks 13-16: Finalisation and Operational Readiness

  • Finalise all documentation: technical documentation, instructions of use, declaration of conformity, fundamental rights impact assessments
  • Ensure all organisational measures are operational: oversight persons are trained, monitoring is active, incident reporting procedures are defined
  • Conduct a final readiness assessment — can you demonstrate compliance if a market surveillance authority requests evidence on August 3?

The Vendor Readiness Question

For deployers, a significant portion of compliance readiness depends on vendors. If your AI system provider is not prepared for August 2 — if they cannot provide adequate technical documentation, if they have not conducted a conformity assessment, if they cannot demonstrate that the system meets Art. 9-15 requirements — your deployment is at risk.

This is a procurement and contract management issue as much as a compliance issue. Deployers should be engaging with their AI system vendors now — not after August 2 — to assess vendor readiness and establish contractual obligations for compliance support. The AI Act Code of Practice status of your vendors is one indicator of their preparedness, but it is not sufficient on its own. Request specific evidence: technical documentation drafts, conformity assessment timelines, and post-market monitoring system descriptions.

FortisEU's compliance automation and risk management platform supports the structured tracking of AI Act obligations across provider and deployer roles, generating the evidence architecture that conformity assessment and market surveillance interactions require.

Key Takeaways

  • August 2, 2026 is four months away. High-risk AI system obligations under the EU AI Act become applicable. Both providers and deployers must have compliance infrastructure operational — not documented as future plans.

  • Provider obligations are substantially heavier than deployer obligations. If you develop AI systems in-house or substantially modify procured systems, you bear the full conformity assessment burden: risk management, data governance, technical documentation, human oversight design, and post-market monitoring.

  • Human oversight requires organisational design, not just assignment. Assigning a person to oversee an AI system is insufficient if that person lacks competence, training, authority to override, and an organisational environment that supports genuine judgement over rubber-stamping.

  • Post-market monitoring is a continuous obligation, not a post-launch add-on. Both providers and deployers must build monitoring infrastructure that detects performance degradation, emerging risks, and accuracy drift throughout the system's operational life.

  • Vendor readiness is a deployer risk. If your AI system vendor is not prepared for August 2, your deployment is non-compliant. Engage vendors now to assess conformity assessment status, technical documentation availability, and post-market monitoring commitments.

  • Start with the risk management system (Art. 9) — everything else builds on it. The risk identification, assessment, and mitigation process under Art. 9 informs data governance decisions, shapes technical documentation, defines human oversight requirements, and drives post-market monitoring priorities.

Next Step

Turn guidance into evidence.

If procurement is involved, start with the Trust Center. If you want to see the product, create an account or launch a live demo.