REGULATORY · 7 min read

EIOPA & AI in Insurance Pricing

The fairness and explainability bar for personalised premiums.


Clint Sookermany

28 April 2026


EIOPA published its Opinion on AI Governance and Risk Management in August 2025. It is the clearest statement yet of what European insurance supervisors expect from firms using AI in pricing. The Opinion does not create new rules. It clarifies how existing insurance regulation (the Insurance Distribution Directive and Solvency II) and the EU AI Act intersect when an insurer uses an algorithm to set a premium.

For insurers already using AI in pricing, or planning to, the Opinion sets a bar that is higher than many compliance teams have assumed. Fairness and explainability are not aspirational principles. They are supervisory expectations backed by the regulatory framework that national supervisors will enforce.

What EIOPA Actually Requires

The Opinion follows a risk-based and proportionate approach, but the proportionality works in one direction: the more consequential the AI system, the more demanding the requirements. AI used for risk assessment and pricing in life and health insurance is explicitly classified as high-risk under the EU AI Act's Annex III. This classification triggers the Act's full obligation set: risk management systems, data governance, technical documentation, human oversight, and transparency.

Beyond the AI Act, EIOPA's Opinion layers insurance-specific expectations on top. Four stand out.

Data quality and bias. Training data must be complete, accurate, and free of bias. The outputs of AI systems must be meaningfully explainable to identify and mitigate potential bias. For pricing models, this means the insurer must be able to demonstrate that the data used to train the model does not systematically disadvantage any group of customers, and that the model's outputs do not produce unfair pricing outcomes, even indirectly.

This is more demanding than it sounds. A model trained on historical claims data will reflect historical patterns, including patterns of exclusion, under-reporting, and structural disadvantage. An insurer that deploys such a model without testing for proxy discrimination (where a permitted rating factor correlates with a protected characteristic) is taking a conduct risk that EIOPA expects supervisors to examine.
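A proxy screen of this kind can be sketched in a few lines. The example below is illustrative only: the data, the factor names, and the 0.3 review threshold are all hypothetical assumptions, and a production screen would use proper statistical tests across the full rating structure, not a single pairwise correlation.

```python
# Illustrative proxy-discrimination screen using only the standard library.
# Hypothetical inputs: `factor` is a permitted rating factor; `protected`
# is a binary protected characteristic (0/1). Threshold is an assumption.
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def proxy_screen(factor, protected, threshold=0.3):
    """Flag a rating factor whose correlation with a protected
    characteristic exceeds the review threshold."""
    r = pearson(factor, protected)
    return {"correlation": round(r, 3), "needs_review": abs(r) > threshold}

# Toy portfolio in which the rating factor closely tracks group membership.
factor    = [0.9, 0.8, 0.85, 0.2, 0.15, 0.25, 0.95, 0.1]
protected = [1,   1,   1,    0,   0,    0,    1,    0]
print(proxy_screen(factor, protected))  # flags the factor for review
```

A factor that fails a screen like this is not automatically impermissible, but it is exactly the kind of correlation EIOPA expects the insurer to have found, examined, and documented before a supervisor asks.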

Fairness in pricing practices. EIOPA's 2023 Supervisory Statement on Differential Pricing Practices already established that certain pricing strategies, such as price optimisation based on customer inertia rather than risk, are not compliant with the requirement to treat customers fairly under Article 17 of the IDD. AI amplifies this risk. A model that identifies which customers are least likely to switch and prices them higher is doing exactly what the Supervisory Statement prohibits, just more efficiently.

The fairness bar for AI pricing is therefore: does the model price on risk, or does it price on something else? If the answer includes factors that correlate with customer vulnerability, loyalty, or switching propensity rather than actuarial risk, the model has a conduct problem.

Explainability. The Opinion requires that AI outputs be "meaningfully explainable." For pricing, this means the insurer must be able to explain, to the customer, to the supervisor, and to an internal audit function, why a specific premium was set at the level it was. "The model produced this number" is not an explanation. "The premium reflects the customer's risk profile based on factors X, Y, and Z, weighted in the following proportions" is closer to what supervisors expect.

The challenge for complex pricing models, particularly those using gradient-boosted trees or neural networks, is that global explainability (how the model works in general) is achievable, but local explainability (why this specific customer received this specific price) requires additional tooling. SHAP values, LIME, or equivalent techniques are not optional; they are the mechanism through which the insurer meets the explainability obligation.
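For an additive, GLM-style model the local explanation is trivial to produce, because each factor's contribution can be read off directly (for linear models, these contributions coincide with exact SHAP values). The sketch below assumes a hypothetical linear loading structure; all factor names, weights, and baselines are invented for illustration. Gradient-boosted or neural models need dedicated tooling instead.

```python
# Minimal local-explanation sketch for an additive (GLM-style) pricing model.
# All names, weights, and baselines are hypothetical.

BASE_PREMIUM = 400.0  # assumed portfolio-average premium

# Hypothetical linear loadings per unit deviation from the baseline profile.
WEIGHTS = {"vehicle_power": 2.5, "driver_age": -1.8, "annual_mileage_k": 6.0}
BASELINE = {"vehicle_power": 100, "driver_age": 40, "annual_mileage_k": 12}

def explain_premium(customer):
    """Return the quoted premium plus a per-factor breakdown of how each
    rating factor moved the price away from the portfolio baseline."""
    contributions = {
        f: WEIGHTS[f] * (customer[f] - BASELINE[f]) for f in WEIGHTS
    }
    premium = BASE_PREMIUM + sum(contributions.values())
    return premium, contributions

premium, parts = explain_premium(
    {"vehicle_power": 120, "driver_age": 30, "annual_mileage_k": 15}
)
# 400 + 2.5*20 + (-1.8)*(-10) + 6.0*3 = 486: each term is a customer-level
# answer to "why this price", the form of explanation supervisors expect.
```

The point of the sketch is the output shape, not the model: whatever the underlying architecture, the insurer needs to be able to produce this per-customer, per-factor decomposition on demand.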

Human oversight. The Opinion expects human oversight proportionate to the risk. For pricing decisions that affect access to insurance or the affordability of cover, this means a human must be in a position to understand, challenge, and override the model's output. Automated pricing pipelines that run from quote request to price without meaningful human intervention will need to demonstrate that the oversight mechanism is genuine, not nominal.

The Practical Gap

In my work with European insurers, the gap between current practice and EIOPA's expectations typically shows up in three places.

First, bias testing. Most insurers test their pricing models for accuracy (does the model predict claims cost correctly?) but fewer test systematically for fairness (does the model produce systematically different outcomes for different groups of customers, controlling for risk?). The testing methodology matters: a model can be accurate on average while being unfair at the margins.
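One way to control for risk when testing fairness is to compare each group's average markup (premium over expected claims cost) rather than raw premiums, so that genuine risk differences are not flagged as unfairness. The sketch below uses invented data and an invented metric name; a real test suite would add significance testing and cover many group definitions.

```python
# Sketch of a group-fairness check that controls for risk: compare each
# group's average markup (premium / expected cost), not raw premiums.
# Data and group labels are hypothetical.
import statistics

def markup_by_group(records):
    """records: iterable of (group, premium, expected_cost) tuples."""
    groups = {}
    for group, premium, cost in records:
        groups.setdefault(group, []).append(premium / cost)
    return {g: round(statistics.fmean(r), 3) for g, r in groups.items()}

def max_markup_gap(records):
    """Largest spread in average markup across groups."""
    m = markup_by_group(records)
    return round(max(m.values()) - min(m.values()), 3)

# Toy book: overall pricing looks "accurate", but group B pays a
# systematically higher markup over its expected cost than group A.
book = [
    ("A", 110, 100), ("A", 220, 200), ("A", 330, 300),
    ("B", 125, 100), ("B", 250, 200), ("B", 375, 300),
]
print(markup_by_group(book))  # A averages ~1.10, B averages ~1.25
print(max_markup_gap(book))
```

This is the "accurate on average, unfair at the margins" case in miniature: total premium roughly matches total expected cost, yet one group is consistently loaded more heavily.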

Second, documentation. The AI Act requires technical documentation sufficient for a third party to assess compliance. For a pricing model, this means documenting not just the model architecture and training process but also the feature selection rationale, the bias testing results, the explainability approach, and the human oversight mechanism. Most insurers I work with have some of this. Few have all of it, and fewer still have it in a format that would satisfy a regulatory review.

Third, governance. EIOPA expects AI governance to sit within the insurer's existing risk management framework, not in a separate AI governance silo. This means the Chief Actuary, the risk function, and the compliance function all need to engage with AI pricing models, not just the data science team that built them. In practice, many insurers have a data science team that builds the models and a compliance team that reviews them after the fact. EIOPA expects these functions to be integrated, not sequential.

What "Personalised Premium" Now Means

The combination of AI capability and regulatory constraint is reshaping what personalised pricing can look like in European insurance. The technology makes hyper-personalisation possible: pricing every customer individually based on hundreds of variables. The regulation constrains how that personalisation can work.

The permissible model is one that personalises on actuarially justified risk factors, tested for fairness, explained to the customer, documented for the supervisor, and subject to meaningful human oversight. This is a higher bar than "we use more data to price more accurately." It requires the insurer to demonstrate that the personalisation serves the customer's interest, not just the insurer's margin.

Guidewire's 2026 analysis argues that European and London Market insurers must modernise their pricing for the AI era. This is correct, but modernisation must include the governance infrastructure, not just the modelling capability. An insurer that builds a sophisticated AI pricing model without the fairness testing, explainability tooling, and oversight mechanisms that EIOPA expects has built half a system.

Steps for the Next Quarter

First, map your current pricing models against EIOPA's Opinion. Identify which systems are in scope, what documentation exists, and where the gaps are. Treat this as a gap analysis, not a compliance checklist.

Second, implement systematic fairness testing for all AI-driven pricing models. This means testing for both direct and proxy discrimination, using appropriate statistical techniques, and documenting the results.
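Documenting the results matters as much as running the tests. A minimal sketch of an auditable test record is below; the check names, metric values, and thresholds are all hypothetical, and the structure is one possible shape, not a regulatory template.

```python
# Sketch of recording fairness-test results in an auditable form suitable
# for a model's technical file. Check names and thresholds are illustrative.
import datetime
import json

def fairness_report(model_id, checks):
    """checks: mapping of check name -> (metric_value, threshold).
    Returns a JSON-serialisable record of each check and an overall verdict."""
    results = {
        name: {"value": value, "threshold": limit, "passed": value <= limit}
        for name, (value, limit) in checks.items()
    }
    return {
        "model_id": model_id,
        "run_date": datetime.date.today().isoformat(),
        "checks": results,
        "overall_pass": all(r["passed"] for r in results.values()),
    }

report = fairness_report(
    "motor-pricing-v4",  # hypothetical model identifier
    {
        "direct_discrimination_gap": (0.02, 0.05),  # within tolerance
        "proxy_correlation_max": (0.41, 0.30),      # breaches threshold
    },
)
print(json.dumps(report, indent=2))  # overall_pass is False: proxy check fails
```

A record like this, versioned alongside the model, is what turns a one-off analysis into the kind of documented, repeatable testing the AI Act's technical-documentation obligation contemplates.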

Third, deploy explainability tooling. If your pricing model cannot produce a customer-level explanation of why a premium was set at a particular level, this is a gap that needs closing before supervisory expectations harden into enforcement.

Fourth, integrate AI pricing governance into the existing risk management framework. The Chief Actuary and the risk function should be reviewing AI pricing models with the same rigour they apply to traditional actuarial models. If AI pricing sits outside the standard governance framework, bring it in.

The insurers that treat EIOPA's Opinion as a compliance exercise will meet the minimum bar. Those that treat it as a design specification for how AI pricing should work will build better products and stronger regulatory relationships. The second approach takes more effort. It also produces pricing systems that are genuinely defensible when a supervisor asks how they work.

*To discuss how the 90-Day AI Acceleration programme can help your organisation align AI pricing with EIOPA expectations, contact the Value Institute.*


Clint Sookermany

Founder, The AI Value Institute by Regenvita

25 years of enterprise transformation experience across financial services, healthcare, technology, and government. Helping senior leaders turn AI ambition into measurable business value.
