REGULATORY · 6 min read

Consumer Duty & AI Advice

The bar for "good outcomes" when AI is in the loop.


Clint Sookermany

28 April 2026


The FCA's Targeted Support regime launches in April 2026. For the first time, regulated firms can offer tailored guidance to groups of consumers without crossing the regulatory boundary into full financial advice. This changes the calculus for wealth managers deploying AI. The gap between "guidance" and "advice" has always been the constraint. Targeted Support narrows that gap, and AI is the delivery mechanism that makes it economically viable.

But narrowing the gap does not eliminate it. The Consumer Duty still requires evidence of good outcomes from every customer interaction. When AI is involved in generating, filtering, or presenting investment guidance to clients, the standard is not whether the AI performed well on average. It is whether individual customers received information that was fair, clear, and not misleading, and whether the outcomes they experienced were consistent with the product's value proposition.

What "Good Outcomes" Means for AI-Assisted Advice

The FCA's shift from implementation to continuous monitoring means firms must now prove outcomes on an ongoing basis. For wealth managers, four outcome areas apply directly to AI-assisted client interactions.

Products and services. Are the products recommended (or guided towards) appropriate for the customer's circumstances? When AI pre-filters a product set or suggests a portfolio allocation, the firm must demonstrate that the filtering criteria align with the customer's risk profile, investment horizon, and financial objectives. A model that consistently guides mass-affluent clients towards the firm's own products, ignoring better alternatives, fails the products and services outcome regardless of the AI's technical performance.

Price and value. Does the customer receive fair value? AI can improve value delivery by reducing the cost of serving clients who would otherwise be uneconomical for human advisers. But if the cost saving accrues entirely to the firm while the client pays the same fee for an AI-assisted service as they would for a human-delivered one, the value proposition is questionable. The FCA will examine whether AI-driven efficiencies are shared with clients, particularly in the mass-affluent segment where fee sensitivity is highest.

Consumer understanding. Does the customer understand what they are receiving? AI-generated communications, whether chatbot interactions, portfolio summaries, or risk warnings, must be comprehensible to the specific customer, not to a hypothetical reasonable person. This means testing comprehension with real users, monitoring for confusion signals (repeated questions, early exits, complaint patterns), and adapting the communication style to the customer's demonstrated level of financial literacy.

Consumer support. Can the customer get help when they need it? An AI-assisted wealth management service that provides excellent automated guidance but makes it difficult to reach a human when things go wrong fails the consumer support outcome. The design must include clear escalation paths, and those paths must be genuinely accessible, not buried in menus or gated by chatbot loops.

The Suitability Question

The most consequential regulatory question for AI in wealth management is whether an AI system that assesses a customer's circumstances and recommends a course of action is providing regulated advice. The FCA has not yet given a definitive answer. The Mills Review, launched in January 2026, is examining whether AI systems could provide services functionally equivalent to regulated activities while remaining outside the regulatory perimeter.

The Data (Use and Access) Act 2025, effective from February 2026, adds another layer. If a decision is based solely on automated processing and has a significant effect on an individual, the controller must provide safeguards including the right to meaningful human intervention. A human merely rubber-stamping a machine's output does not meet this threshold.

For wealth managers, this creates a practical constraint: any AI system that generates personalised investment recommendations must either be supervised by a human who genuinely reviews and can override the recommendation, or the firm must be prepared to argue that the system is providing guidance (not advice) and that the customer understands the distinction.

In my experience, the firms navigating this most effectively are the ones that design the AI's role explicitly. The AI gathers data, performs analysis, and presents options. The human adviser (or the customer, in a self-directed model) makes the decision. The boundary between AI contribution and human decision is documented, auditable, and enforced in the system architecture, not just in the compliance manual.
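One way to make that boundary enforceable rather than aspirational is to require a single audit record for every advised action, capturing what the AI contributed and what the human (or client) decided. The sketch below is illustrative only: the field names, the model version string, and the `AdviceAuditRecord` type are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # immutable once written, supporting the audit trail
class AdviceAuditRecord:
    """One auditable unit: what the AI contributed, what the human decided."""
    client_id: str
    ai_recommendation: str   # the analysis/options the AI presented
    ai_model_version: str    # pin the model that generated the output
    human_decision: str      # the action actually taken
    decided_by: str          # adviser ID, or "client" in a self-directed model
    overridden: bool         # True when the decision departed from the AI output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AdviceAuditRecord(
    client_id="C-1042",
    ai_recommendation="rebalance 60/40 -> 55/45",
    ai_model_version="advice-model-2026.03",
    human_decision="rebalance 60/40 -> 55/45",
    decided_by="adviser-A17",
    overridden=False,
)
```

Making the record frozen means the decision trail cannot be edited after the fact, which is the architectural (not policy) enforcement the paragraph above describes.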

The Emerging Agent-to-Agent Model

A structural shift is emerging in how AI interacts with wealth management clients. General-purpose AI assistants (ChatGPT, Gemini, and their successors) are becoming the interface through which consumers interact with financial services. But general-purpose AI is structurally unsuited to regulated financial services: it lacks the licensing, the product knowledge, the compliance controls, and the audit trails.

The model that is emerging, and that the FCA acknowledged in the Mills Review, is agent-to-agent: general-purpose platforms act as interfaces, routing financial queries to specialist regulated agents that handle assessments and recommendations within the regulatory perimeter. The wealth management firm operates the regulated agent. The general-purpose platform provides the customer interface.

This model has implications for Consumer Duty compliance. The wealth management firm remains responsible for the outcomes delivered by its agent, regardless of the interface through which the customer accessed the service. The firm must ensure that the information passed from the general-purpose platform to its agent is sufficient for suitability assessment, and that the recommendation passed back is appropriate for the customer's circumstances as the firm understands them.
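In practice that means the regulated agent should refuse to assess suitability on an under-specified handoff. A minimal sketch of that gate, assuming a hypothetical set of required fields (the field names here are illustrative, not an FCA-specified schema):

```python
# Fields the regulated agent needs before a suitability assessment can start.
# Illustrative only; a real firm would derive these from its advice process.
REQUIRED_FIELDS = {"age", "risk_tolerance", "investment_horizon_years", "objectives"}

def validate_handoff(payload: dict) -> list[str]:
    """Return the fields missing from an inbound agent-to-agent payload.

    An empty list means the regulated agent has enough information to begin;
    otherwise the request is bounced back to the general-purpose platform.
    """
    return sorted(REQUIRED_FIELDS - payload.keys())

incomplete = {"age": 44, "objectives": "retirement income"}
missing = validate_handoff(incomplete)
# missing lists risk_tolerance and investment_horizon_years
```

The design point is that insufficiency is detected and rejected at the perimeter, rather than the agent guessing missing circumstances.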

Practical Design Principles

For wealth managers building AI-assisted advice or guidance services, four design principles align with Consumer Duty requirements.

First, separate the AI's analytical role from the decision-making role. The AI can assess, model, and recommend. The decision to act must involve either a human adviser or a customer who has been given sufficient information to decide. This separation must be enforced in the system architecture, not just in policy.

Second, build outcome measurement into the system from day one. Track what the AI recommended, what the customer did, and what outcome resulted. Measure this at the individual customer level, not just in aggregate. The FCA will ask for evidence of good outcomes for specific customer segments. Aggregate statistics that mask poor outcomes for a subset of customers will not suffice.
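A simple way to avoid aggregate statistics masking poor outcomes is to compute outcome rates per segment from individual-level records. This is a minimal sketch, assuming outcomes have already been classified good/poor per customer; the segment labels are hypothetical.

```python
from collections import defaultdict

def outcomes_by_segment(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (segment, good_outcome) pairs, one per customer interaction.

    Returns the good-outcome rate per segment, so a poorly served segment
    cannot hide inside a healthy overall average.
    """
    totals: dict[str, int] = defaultdict(int)
    good: dict[str, int] = defaultdict(int)
    for segment, ok in records:
        totals[segment] += 1
        good[segment] += int(ok)
    return {s: good[s] / totals[s] for s in totals}

data = [
    ("mass-affluent", True), ("mass-affluent", False),
    ("hnw", True), ("hnw", True),
]
rates = outcomes_by_segment(data)
# overall rate is 75%, but the mass-affluent segment sits at 50%
```

Here the overall good-outcome rate (3 of 4) looks healthy, while the segment view exposes exactly the kind of disparity the FCA will ask about.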

Third, design escalation paths that work. When a customer's situation is complex, when vulnerability indicators are present, or when the AI's confidence in its analysis is low, the system must escalate to a human. The escalation must be seamless from the customer's perspective and documented for audit.
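The three triggers above can be expressed as an explicit escalation rule so the behaviour is testable rather than buried in prompt logic. The thresholds and signal names below are illustrative assumptions, not regulatory guidance.

```python
def should_escalate(complexity_score: float,
                    vulnerability_flags: list[str],
                    model_confidence: float,
                    confidence_floor: float = 0.8) -> bool:
    """Escalate to a human adviser when the case is complex, the customer
    shows vulnerability indicators, or the model is unsure of its analysis.

    Any single trigger is sufficient; thresholds are illustrative.
    """
    return (
        complexity_score > 0.7
        or bool(vulnerability_flags)
        or model_confidence < confidence_floor
    )

should_escalate(0.2, ["recent bereavement"], 0.95)  # vulnerability alone triggers
should_escalate(0.2, [], 0.6)                       # low confidence triggers
should_escalate(0.2, [], 0.9)                       # routine case stays automated
```

Keeping the rule as a pure function also makes the escalation decision itself auditable: the inputs and the result can be logged with each interaction.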

Fourth, test comprehension, not just compliance. A risk warning that is technically compliant but incomprehensible to the customer does not deliver a good outcome. Test AI-generated communications with real users from the target segments and iterate until comprehension is demonstrated.

The wealth managers that build AI services around Consumer Duty as a design specification, rather than as a compliance constraint, will deliver better client outcomes and stronger regulatory positions. The FCA has been clear: the Duty applies to outcomes, not intentions. When AI is in the loop, the outcomes must be evidenced.

*To discuss how the 90-Day AI Acceleration programme can help your wealth management firm build Consumer Duty-compliant AI services, contact the Value Institute.*


Clint Sookermany

Founder, The AI Value Institute by Regenvita

25 years of enterprise transformation experience across financial services, healthcare, technology, and government. Helping senior leaders turn AI ambition into measurable business value.
