Three in four financial services boards have approved major AI investments. Fewer than half have set governance expectations around those investments. Fewer than half have made AI risk a standing agenda item. This gap between capital commitment and oversight is where pilot fatigue takes root. It is also the gap the audit committee is best placed to close.
The Pilot Fatigue Pattern
Gartner's 2025 AI Maturity Curve found that only 11% of financial firms report measurable ROI from AI initiatives. The rest are in various stages of what the industry has started calling "pilot purgatory": cycling through proofs of concept that demonstrate technical feasibility but never convert to business value at scale.
The pattern is consistent. A firm launches 15 to 20 AI pilots across different business units. Each pilot has its own sponsor, its own data, its own success criteria. A handful succeed. Most produce ambiguous results. The board asks for an update. The CTO presents a portfolio of green-status projects, none of which are generating measurable returns. The board loses confidence. Funding tightens. The next round of pilots is smaller and more cautious. The cycle repeats.
In the cases I have been involved in, the root cause is almost never the technology. It is governance. Specifically, it is the absence of the same portfolio discipline that the firm applies to every other category of strategic investment.
Why the Audit Committee, Not the Technology Committee
Most firms route AI governance through the technology committee or the risk committee. Both have a role. But the audit committee brings something neither of those bodies can offer: a mandate to ask whether the firm's investments are being managed with adequate controls, proper measurement, and honest reporting.
The audit committee's existing remit covers exactly the gaps that produce pilot fatigue:
Measurement integrity. Are the success metrics for AI initiatives defined before the work starts, or retrofitted to make results look better? The audit committee is accustomed to interrogating measurement frameworks. Applying that same scrutiny to AI business cases would surface the optimistic assumptions that allow weak pilots to persist.
Portfolio visibility. Does the board have a consolidated view of all AI initiatives, their status, their spend, and their outcomes? In most firms, it does not. AI projects are scattered across business units, reported through different channels, with no single view of the portfolio. The audit committee can mandate the same consolidated reporting it expects for other material investment categories.
Exit discipline. When should a pilot be killed? This is the question most AI governance frameworks avoid. A portfolio approach requires explicit criteria for continuing, scaling, or stopping each initiative. The organisations pulling ahead, according to PwC's 2026 AI Performance Study, are not scaling more pilots. They are scaling fewer, with better measurement and clearer exit criteria.
Risk appetite alignment. PwC's study found that leading AI performers build an assumed failure rate of 40 to 50% into their portfolio planning. This is healthy: innovation requires failed experiments, and on a portfolio of 20 pilots it means budgeting for 8 to 10 to stop short of scale. But that failure rate needs to be explicit, budgeted, and approved, not discovered after the fact. The audit committee is the right body to validate that the firm's AI risk appetite is articulated and that actual performance is tracking against it.
The Governance Confidence Gap
Grant Thornton's 2026 AI Impact Survey revealed a striking gap: among organisations still piloting AI, only 7% are very confident they could pass an independent AI governance audit within 90 days. Among those with fully integrated AI programmes, 74% are very confident.
The difference is not that the leaders have more sophisticated technology. It is that they have built governance into the programme from the start. PwC's data reinforces this: AI leaders are 1.7 times more likely to have a Responsible AI framework and 1.5 times more likely to have a cross-functional AI governance board. Their employees are twice as likely to trust AI outputs. Trust is a function of governance, not technology.
For audit committees, this metric is a useful diagnostic. Ask your CIO or Chief AI Officer: "If we were subject to an independent AI governance audit in 90 days, how confident are you that we would pass?" The answer tells you more about your AI programme's health than any portfolio dashboard.
From Oversight to Enablement
The audit committee's involvement should not slow AI programmes down. Done well, it accelerates them. The reason is simple: the biggest drag on AI programmes is not regulatory caution or technical complexity. It is uncertainty about whether the firm's governance structures will support or block the work.
When the audit committee sets clear expectations, three things happen. First, project teams know what "good" looks like before they start. They build measurement, documentation, and controls into the design rather than bolting them on later. Second, the board gets honest reporting rather than optimistic narratives, which means better capital allocation decisions and faster kills on underperforming initiatives. Third, regulators see a firm that is governing AI through its existing control framework, which is exactly what the FCA, PRA, and Bank of England have asked for.
The FCA's approach to AI oversight has been consistent: existing frameworks apply. The Senior Managers and Certification Regime, systems and controls requirements, and conduct obligations already reach AI programmes; no AI-specific rulebook is needed. What is needed is that those frameworks are applied to AI with the same rigour as to every other activity. The audit committee is the body best positioned to ensure that happens.
Practical Steps for the Next Quarter
For boards and audit committees looking to close the governance gap:
First, request a consolidated AI portfolio view. Every initiative, its business case, its current status, its spend to date, and its measured outcomes against the original case. If this view does not exist, that is the first finding.
Second, establish AI as a standing agenda item for the audit committee. Not quarterly technology updates from the CTO, but structured reporting on AI investment performance, risk metrics, and governance maturity.
Third, set explicit portfolio criteria: what threshold of evidence is required to move from pilot to scale, and what triggers an exit? Apply these consistently. The audit committee should see the same discipline applied to AI as to any other material investment programme.
Fourth, test governance readiness. Commission a gap analysis covering the frameworks the firm will be measured against: the EU AI Act (if in scope), FCA expectations, and the PRA's model risk management principles (SS1/23). The 90-day confidence question is a useful starting point.
The audit committee that engages with AI governance now becomes the programme's strongest enabler. The one that defers to the technology committee becomes a bottleneck that surfaces only when something goes wrong. In 2026, with regulatory expectations tightening and AI investment accelerating, deferral is the riskier choice.
*To discuss how the 90-Day AI Acceleration programme can help your board build AI governance that enables rather than blocks, contact the Value Institute.*
