The compliance deadline for high-risk AI systems under the EU AI Act is August 2026. For most financial services providers, that is one quarter away. The question is no longer whether you are in scope. It is whether your model inventory is complete enough to prove it.
What the Act Actually Requires
Annex III of the EU AI Act explicitly classifies two categories of financial services AI as high-risk: systems used to evaluate the creditworthiness of natural persons or establish their credit score, and systems used for risk assessment and pricing in relation to natural persons in the case of life and health insurance. Fraud detection systems are expressly carved out. Everything else that touches a consumer lending decision or a life and health insurance pricing decision is in.
The obligations that follow are specific and auditable. Providers of high-risk AI systems must implement a risk management system that runs throughout the system's lifecycle. They must maintain technical documentation sufficient for a third party to assess compliance. They must ensure data governance standards that address training, validation, and testing datasets. They must build in human oversight mechanisms. And they must meet transparency requirements that allow deployers and, where relevant, affected individuals to understand what the system does and how.
For financial services firms, "provider" status is the critical classification. If you developed the model, or had it developed and put it into service under your own name, you are a provider. If you fine-tuned a foundation model for credit decisioning, you are a provider. The obligations are yours.
The Model Inventory Problem
In my work with financial services clients, the first and most revealing exercise is always the model inventory. Most institutions have more AI systems in production than their compliance or risk teams are aware of. The proliferation of departmental models, vendor-embedded AI, and spreadsheet-based scoring tools that technically qualify as AI systems under the Act's broad definition creates a gap between what the firm thinks it runs and what it actually runs.
A complete inventory requires three things. First, a sweep across all business units, not just those with formal "AI" labels. Second, classification of each system against Annex III criteria, with a documented decision either way. Third, an ownership assignment: who is the provider, who is the deployer, and who holds accountability under the Senior Managers and Certification Regime (SMCR) for each system's compliance.
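The three requirements above translate naturally into a per-system inventory record. The sketch below is illustrative only: the field names, enum values, and the sample entry are assumptions about how a firm might structure the register, not anything prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"


class RiskTier(Enum):
    HIGH = "high-risk (Annex III)"
    NOT_HIGH = "not high-risk"
    UNDER_REVIEW = "classification pending"


@dataclass
class InventoryEntry:
    system_name: str
    business_unit: str                # swept across all units, not just "AI" teams
    risk_tier: RiskTier
    classification_rationale: str     # documented decision either way
    role: Role                        # provider vs deployer under the Act
    smcr_owner: str                   # named accountable individual under SMCR


# Hypothetical example entry for a retail credit model developed in-house.
entry = InventoryEntry(
    system_name="retail_credit_scorecard_v4",
    business_unit="Retail Lending",
    risk_tier=RiskTier.HIGH,
    classification_rationale=(
        "Evaluates creditworthiness of natural persons: Annex III, point 5."
    ),
    role=Role.PROVIDER,
    smcr_owner="SMF4 - Chief Risk Officer",
)
```

The point of the structure is that no field is optional: a record with a risk tier but no rationale, or a rationale but no named owner, is an incomplete classification decision.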
The firms I see making progress started this work in early 2025. Those beginning now face a compressed timeline, but the work is not optional.
The Digital Omnibus Uncertainty
The European Commission's Digital Omnibus package, proposed in late 2025, includes a provision to postpone Annex III high-risk obligations to December 2027. This has created a temptation to slow down. That would be a mistake for two reasons.
First, the postponement is not confirmed. It requires European Parliament and Council approval, and the legislative process is not guaranteed to conclude before August 2026. Planning against an uncertain extension is a governance risk in itself.
Second, the UK's regulatory trajectory is running on a separate clock. The FCA's Mills Review, launched in January 2026, is examining how AI systems, including agentic systems, interact with the existing regulatory perimeter. The PRA's expectations around model risk management (SS1/23) already apply to AI models in regulated banking. Firms operating across the UK and EU cannot afford to wait for either jurisdiction to finalise its approach. The practical minimum is to build the inventory and classification now, regardless of which deadline lands first.
Risk Classification in Practice
The classification exercise is where most firms struggle. The Act's definition of an AI system is broad: a machine-based system designed to operate with varying levels of autonomy, that infers from the inputs it receives how to generate outputs such as predictions, recommendations, or decisions that can influence physical or virtual environments. This captures not only deep learning models but also statistical models, decision trees, and hybrid systems that combine rules with learned components.
For a typical retail bank, the classification exercise surfaces systems in credit origination, collections scoring, pricing, customer segmentation, fraud detection (excluded from high-risk, but still requiring documentation), and customer-facing chatbots. For an insurer, add underwriting models, claims triage, and reserving.
Each system needs a risk tier assignment with documented reasoning. The documentation must be specific enough to survive regulatory scrutiny: not "this system is low-risk because it is internal" but a reasoned analysis against the Annex III criteria.
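The decision logic for the two financial-services Annex III categories and the fraud carve-out can be sketched as a screening function. The purpose tags and the function itself are hypothetical labels for illustration, not terms from the Act, and a real classification would rest on legal analysis, not string matching.

```python
def annex_iii_screen(purpose: str) -> tuple[str, str]:
    """Return a (tier, documented rationale) pair for a system's stated purpose."""
    if purpose == "fraud_detection":
        # Carved out of high-risk, but still needs a documented decision
        # and ordinary model risk management documentation.
        return ("not high-risk",
                "Fraud detection is expressly excluded from Annex III point 5.")
    if purpose in {"credit_scoring", "creditworthiness_assessment"}:
        return ("high-risk",
                "Evaluates creditworthiness of natural persons (Annex III point 5).")
    if purpose in {"life_insurance_pricing", "health_insurance_pricing"}:
        return ("high-risk",
                "Risk assessment and pricing for natural persons in life or "
                "health insurance (Annex III point 5).")
    # Everything else still needs a reasoned, recorded decision, not a default.
    return ("requires analysis",
            "No automatic Annex III match; document the reasoning either way.")


tier, rationale = annex_iii_screen("credit_scoring")
```

Note that every branch returns a rationale, including the carve-out and the residual case: "this system is low-risk because it is internal" would fail this structure by design.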
Transparency Obligations
The transparency requirements merit particular attention. High-risk AI systems must be designed to allow deployers to interpret outputs and use them appropriately. For credit scoring, this means explainability is not a nice-to-have; it is a legal requirement. For insurance pricing, the interaction between the AI Act's transparency obligations and existing conduct requirements under Solvency II and local conduct rules creates a layered compliance challenge that needs to be addressed as a single programme, not in silos.
What to Do in the Next 90 Days
The practical steps for Q2 2026 are clear, even if the timeline shifts.
First, complete the model inventory. Every AI system, every business unit, every vendor. Classify against Annex III with documented decisions. Do not assume your existing model risk management framework covers the full scope; the AI Act's definition of an AI system is broader than most MRM inventories.
Second, assign ownership. Each high-risk system needs a named individual accountable for compliance. Under SMCR, this accountability must be explicit and documented.
Third, begin the gap analysis between your current technical documentation and what the Act requires. The conformity assessment for high-risk systems is detailed. Starting the documentation now, even if the deadline shifts, de-risks the programme and surfaces issues while there is still time to address them.
Fourth, engage your board. This is not a technology project. It is a regulatory compliance programme that touches operating model, governance, and risk appetite. The audit committee should see a status report before the end of Q2.
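The gap analysis in the third step can be sketched as a checklist comparison. The topic list below is an illustrative summary of headline documentation themes drawn from the Act's high-risk requirements, not the full Annex IV text, and the function name is an assumption for the sketch.

```python
# Illustrative headline topics only; the Act's Annex IV sets the actual
# technical documentation requirements in far more detail.
DOCUMENTATION_TOPICS = [
    "general description of the system and its intended purpose",
    "design specifications and development process",
    "data governance: training, validation and testing datasets",
    "human oversight measures",
    "risk management system",
    "accuracy, robustness and cybersecurity",
    "post-market monitoring plan",
]


def documentation_gaps(existing_docs: set[str]) -> list[str]:
    """Return the checklist topics not yet covered by existing documentation."""
    return [t for t in DOCUMENTATION_TOPICS if t not in existing_docs]


# Hypothetical firm with only two topics already documented.
gaps = documentation_gaps({
    "general description of the system and its intended purpose",
    "design specifications and development process",
})
```

Running this per high-risk system turns the gap analysis into a concrete backlog per system owner, which is what the audit committee status report in the fourth step should summarise.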
The firms that treat August 2026 as the working deadline, regardless of the Omnibus outcome, will be the ones that are ready when the obligation crystallises. Those banking on a postponement are making a bet that no compliance officer should be comfortable with.
*To discuss how the 90-Day AI Acceleration programme can help your organisation prepare for the EU AI Act's high-risk obligations, contact the Value Institute.*
