
AI Capital Case for 2026

Board-ready model: capacity, capability, capital, and conduct.


Clint Sookermany

28 April 2026


Forty-six percent of financial services firms have now implemented AI to a "high degree" across their operations, according to EY's 2026 research, compared with 28% of corporations generally. Banking is moving faster than most sectors, but faster adoption without a structured capital case creates a different problem: spend without governance, and governance without measurement.

The boards I advise are asking for a model that brings AI investment into the same framework they use for every other capital decision. What follows is the four-lens model that I find works: capacity, capability, capital, and conduct. Each lens asks a different question, and together they produce a case that finance committees, risk committees, and audit committees can engage with.

Lens 1: Capacity

The capacity question is: what can we do with AI that we cannot do today, or cannot do at the scale we need?

This is not about efficiency savings. Efficiency is a byproduct. The capacity lens asks whether AI creates new operating capacity that changes the firm's competitive position. Examples in banking:

A mid-tier UK bank I advised in late 2025 deployed AI-driven credit decisioning that reduced time-to-decision from 48 hours to under 4 minutes for 60% of applications. The efficiency gain was real, but the strategic value was capacity: the bank could process three times its previous application volume without adding underwriters. That is a market share argument, not a cost argument.

Accenture's 2026 research on agentic AI in financial services finds that banks deploying AI agents in operations are not just reducing headcount; they are creating capacity to handle workload volumes that would be physically impossible with human teams alone. The capacity lens captures this: not "we save X FTEs" but "we can now do Y, which we could not do before."

For the board case, capacity should be expressed as: new volume the bank can handle, new products it can offer, new markets it can serve, or new speed at which it can operate. These are growth arguments, and they justify investment differently from cost reduction.
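The capacity argument can be made quantitative. The sketch below models the credit-decisioning example under a simplifying assumption (hypothetical, for illustration): underwriter time is the binding constraint, so automating 60% of decisions lets the same team absorb 1 / (1 - 0.60) = 2.5x the volume, in the same range as the threefold gain cited above.

```python
# Sketch: the capacity lens as a throughput argument, not a cost argument.
# The bottleneck model and baseline volume are hypothetical simplifications.

automated_share = 0.60      # share of applications now decided by AI
baseline_volume = 10_000    # monthly applications (hypothetical)

# If underwriter time is the binding constraint and AI removes 60% of the
# manual load, the same team can handle 1 / (1 - 0.60) = 2.5x the volume.
volume_multiplier = 1 / (1 - automated_share)
new_capacity = baseline_volume * volume_multiplier

print(f"Volume multiplier: {volume_multiplier:.1f}x")
print(f"Monthly capacity:  {baseline_volume:,} -> {new_capacity:,.0f}")
```

Framed this way, the number in the board paper is new addressable volume, not saved hours.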

Lens 2: Capability

The capability question is: what can AI do better than our current approach?

This is where most AI business cases start, and where many of them stop. Better fraud detection, better risk scoring, better customer segmentation. These are real capabilities, and they have measurable value. But a capability argument alone is insufficient for a board case because it does not address scale, governance, or risk.

The capability lens should be structured as a maturity assessment. Where is the bank today on each critical AI capability, where does it need to be in 12 and 24 months, and what investment is required to close the gap?

EY's research finds that 61% of banking respondents report substantial impacts from generative AI deployments. But "substantial impact" without a capability roadmap is anecdotal evidence, not a capital case. The board needs to see: these are the five capabilities we are building, this is the current maturity of each, this is the target state, and this is the investment required to get there.

The capability assessment should include build-versus-buy decisions. Banks are spending heavily on foundation model access, but many have not resolved whether their AI capabilities should be built on proprietary models, fine-tuned vendor models, or API-based consumption. Each has different cost structures, risk profiles, and capability ceilings. The board needs this analysis before approving the capital.
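One way to make the maturity assessment concrete is a simple gap table, ranked by how far each capability must move. A minimal sketch, with entirely hypothetical capability names, maturity levels (1 to 5), and figures:

```python
# Sketch: the capability lens as a maturity gap assessment.
# All capability names, levels, and investment figures are hypothetical.

capabilities = {
    "credit_decisioning": {"current": 3, "target_24m": 5, "invest_gbp_m": 2.0},
    "fraud_detection":    {"current": 4, "target_24m": 5, "invest_gbp_m": 0.8},
    "customer_service":   {"current": 2, "target_24m": 4, "invest_gbp_m": 1.5},
    "doc_processing":     {"current": 2, "target_24m": 3, "invest_gbp_m": 0.7},
    "model_monitoring":   {"current": 1, "target_24m": 4, "invest_gbp_m": 1.0},
}

total_invest = sum(c["invest_gbp_m"] for c in capabilities.values())

# Rank by gap so the board sees where the roadmap is steepest.
by_gap = sorted(capabilities.items(),
                key=lambda kv: kv[1]["target_24m"] - kv[1]["current"],
                reverse=True)

for name, c in by_gap:
    gap = c["target_24m"] - c["current"]
    print(f"{name:<20} gap {gap}  invest £{c['invest_gbp_m']}m")
print(f"Total investment to close gaps: £{total_invest:.1f}m")
```

The same structure carries the build-versus-buy decision: each row can record whether the target state is reached through a proprietary model, a fine-tuned vendor model, or API consumption, each with its own cost line.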

Lens 3: Capital

The capital question is: what does this cost, what does it return, and how do we measure it?

This is where most AI business cases are weakest. PwC's 2026 AI Performance Study found that 74% of AI's economic value is captured by just 20% of organisations. The differentiator is not how much they spend but how they govern the spend. Leading firms build in a failure rate assumption of 40 to 50% and manage AI as a portfolio, not a set of individual projects.
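The failure-rate assumption is easy to build into the portfolio arithmetic. A minimal sketch, with hypothetical project names and figures, showing how a 45% failure assumption (the mid-point of the 40 to 50% range) discounts the projected return:

```python
# Sketch: expected value of an AI portfolio under an explicit failure-rate
# assumption. All project names and figures are hypothetical.

projects = [
    # (name, investment £m, projected annual return £m)
    ("credit_decisioning", 4.0, 6.5),
    ("fraud_detection",    3.0, 4.0),
    ("agentic_ops",        5.0, 3.5),
    ("doc_processing",     2.0, 2.5),
]

failure_rate = 0.45  # mid-point of the 40-50% assumption

total_invest = sum(invest for _, invest, _ in projects)
gross_return = sum(ret for _, _, ret in projects)
# Expected return discounts the gross projection for the assumed failure rate.
expected_return = gross_return * (1 - failure_rate)

print(f"Total investment:      £{total_invest:.1f}m")
print(f"Gross projection:      £{gross_return:.1f}m")
print(f"Expected at 45% fail:  £{expected_return:.3f}m")
```

A portfolio that only works if every project succeeds is not a governed portfolio; the expected-value line is the one the finance committee should be approving against.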

The capital lens requires:

Total cost of ownership. Not just the model development cost but the full lifecycle: data engineering, infrastructure, validation, monitoring, governance, and the ongoing MRM burden under SS1/23. In my experience, banks underestimate the ongoing cost by 30 to 50%, because they budget for development without budgeting for the regulatory compliance tail.

Return measurement. Define the metrics before deployment. Revenue generated, cost avoided, risk reduced, capacity created. Each metric needs a baseline and a target. "AI improved our fraud detection" is not a return metric. "AI reduced fraud losses by 2.3 million pounds in the first six months against a baseline of 8.1 million pounds" is.

Portfolio structure. The board should see the AI investment portfolio as a whole: total spend, allocation across capability areas, expected return profile, actual return to date, and failure rate. This is how every other investment portfolio is governed. AI should not be an exception.

Funding model. Is AI funded from a central innovation budget, from business unit P&Ls, or from a dedicated AI capital allocation? Each model has different incentive structures. Central funding encourages experimentation but can disconnect AI investment from business outcomes. Business unit funding ties investment to outcomes but can under-invest in foundational capabilities that serve multiple units. The right answer depends on the bank's operating model, but the board needs to make a deliberate choice rather than inheriting whatever funding model emerged during the experimentation phase.
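The TCO point above can be illustrated with a simple multi-year comparison. The sketch uses hypothetical line items and applies a 40% uplift to the planned running costs, the mid-point of the 30 to 50% underestimate noted earlier:

```python
# Sketch: budgeted versus realistic lifecycle TCO for one AI system.
# All figures are hypothetical; the uplift reflects the 30-50% range above.

development = 3.0             # model build cost, £m
budgeted_annual_run = 1.8     # planned annual running cost, £m (data
                              # engineering, infra, validation, governance)
underestimate = 0.40          # mid-point of the 30-50% underestimate
years = 3

# The compliance tail (SS1/23 validation, monitoring, documentation) is
# what typically drives actual running costs above the plan.
actual_annual_run = budgeted_annual_run * (1 + underestimate)

budgeted_tco = development + budgeted_annual_run * years
actual_tco = development + actual_annual_run * years

print(f"Budgeted 3-year TCO: £{budgeted_tco:.2f}m")
print(f"Realistic 3-year TCO: £{actual_tco:.2f}m")
```

The gap between the two lines is the number that surfaces in year two as an unbudgeted compliance cost, which is why the board should see the full lifecycle figure at approval, not the development figure.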

Lens 4: Conduct

The conduct question is: can we deploy this AI in a way that meets our regulatory obligations and protects our customers?

This lens is unique to regulated industries, and in banking it is non-negotiable. The FCA's approach to AI is principles-based and outcomes-focused, relying on Consumer Duty, SMCR, and operational resilience requirements. The PRA's SS1/23 adds model risk management obligations. The EU AI Act (for firms with EU operations) adds high-risk classification requirements for credit scoring and insurance pricing systems.

The conduct lens requires:

Consumer Duty alignment. Every customer-facing AI system must demonstrate good outcomes. This is not a compliance checkbox; it is a measurable standard. The board should see outcome metrics for every AI system that touches customers.

SMCR accountability. Every AI system must have a named senior manager who is accountable for its performance and compliance. The board should see the accountability map.

Model risk compliance. Every AI system must be in the SS1/23 model inventory, with documentation, validation, and ongoing monitoring in place. The board should see the MRM coverage report.

Ethical risk. Bias testing, fairness assessment, and explainability review for every model that makes decisions about customers. The board should see the results, not just an assurance that they were done.
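The four conduct requirements lend themselves to a mechanical coverage check over the AI system inventory. A minimal sketch, with hypothetical system names and fields, mirroring the requirements above:

```python
# Sketch: a conduct coverage check over the AI system inventory.
# System names and record fields are hypothetical; the four checks mirror
# Consumer Duty, SMCR, SS1/23 inventory, and ethics review requirements.

systems = [
    {"name": "credit_scoring", "smcr_owner": "SMF4", "in_mrm_inventory": True,
     "outcome_metrics": True, "bias_tested": True},
    {"name": "chat_assistant", "smcr_owner": None, "in_mrm_inventory": True,
     "outcome_metrics": True, "bias_tested": False},
]

def conduct_gaps(system):
    """Return the list of unmet conduct requirements for one system."""
    gaps = []
    if not system["smcr_owner"]:
        gaps.append("no named SMCR owner")
    if not system["in_mrm_inventory"]:
        gaps.append("missing from SS1/23 model inventory")
    if not system["outcome_metrics"]:
        gaps.append("no Consumer Duty outcome metrics")
    if not system["bias_tested"]:
        gaps.append("no bias/fairness test results")
    return gaps

for s in systems:
    gaps = conduct_gaps(s)
    print(f"{s['name']:<16} {'OK' if not gaps else '; '.join(gaps)}")
```

A report in this shape gives the board the coverage view directly: every system, every requirement, and every gap with a named owner to close it.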

Assembling the Case

The four-lens model produces a board paper that covers strategy (capacity), operations (capability), finance (capital), and risk (conduct) in a single framework. This is important because AI investment decisions that are made through one lens alone tend to fail:

A pure capability case gets approved but under-funded because the capital analysis was not rigorous. A pure capital case gets approved but creates conduct risk because the regulatory dimension was not addressed. A pure conduct case never gets approved because it frames AI as a risk to be managed rather than an opportunity to be captured.

The board needs all four lenses in a single view, with clear interdependencies. The capacity argument justifies the ambition. The capability assessment defines the roadmap. The capital analysis ensures the investment is governed. The conduct review ensures it is safe.

For banks still presenting AI investment as a technology initiative with a spreadsheet of use cases, the shift to a four-lens capital case is significant. It is also necessary. AI is now a board-level investment category, and it needs board-level rigour to match.

*To discuss how the 90-Day AI Acceleration programme can help your bank build a board-ready AI capital case, contact the Value Institute.*


Clint Sookermany

Founder, The AI Value Institute by Regenvita

25 years of enterprise transformation experience across financial services, healthcare, technology, and government. Helping senior leaders turn AI ambition into measurable business value.
