
AI in CDD Scopes

What buy-side diligence looks like when AI is part of the thesis.


Clint Sookermany

28 April 2026


AI has moved from a footnote in commercial due diligence reports to a dedicated workstream. In 2026, deal teams are incorporating AI risk alongside GDPR readiness, cybersecurity maturity, and ESG compliance in every serious diligence scope. The reason is straightforward: if the investment thesis depends on operational improvement through AI, the diligence must validate whether that improvement is achievable.

This is a material shift. Two years ago, AI diligence in PE meant checking whether the target used any machine learning models and whether those models carried obvious risks. In 2026, it means assessing whether the target's AI capabilities are a source of value, a source of risk, or both, and pricing the answer into the deal.

AI as a Diligence Dimension

RSM's framework for AI due diligence assessment identifies three questions that buy-side teams now ask systematically.

First: where can AI drive cost efficiency, improve throughput, and enhance decision-making across the target's operations? This is the value creation question. The diligence team maps the target's operations against the fund's AI playbook and identifies which use cases are deployable, what the expected EBITDA impact is, and what investment is required to realise it. A target with clean data, modern infrastructure, and operations suited to AI-driven improvement is worth more than an otherwise identical target without those characteristics.

Second: what is the AI execution risk? A target that claims to be "AI-enabled" but has no production models, no measurement framework, and no technical talent presents execution risk that must be factored into the price. Conversely, a target with mature AI capabilities, demonstrated returns, and a scalable architecture presents less execution risk and may justify a premium.

Third: what are the AI-specific liabilities? Under the EU AI Act, companies deploying high-risk AI systems face compliance obligations, with penalties for prohibited-use violations of up to 35 million euros or 7% of global annual turnover, whichever is higher. A target with AI systems that have not been classified against the Act's risk tiers, or that are non-compliant with high-risk obligations, carries a regulatory liability that must be quantified.
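The penalty cap is the greater of the two figures, which matters for large targets. A minimal sketch of the exposure calculation (figures in euros; the function name is illustrative):

```python
def max_ai_act_penalty(global_turnover_eur: float) -> float:
    """Upper bound on an EU AI Act fine for prohibited-use violations:
    the greater of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a target with EUR 1bn global turnover, the 7% prong dominates;
# for a EUR 100m target, the EUR 35m floor applies.
large_cap = max_ai_act_penalty(1_000_000_000)   # ~EUR 70m
small_cap = max_ai_act_penalty(100_000_000)     # EUR 35m
```

The point for the deal model is that exposure scales with the target's size, not with the size of its AI estate.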

In the diligence engagements I have worked on, this is where the most significant surprises emerge. Management teams that describe their AI as "internal tools" or "decision support" often have not assessed whether those tools fall within the AI Act's scope.

The CDD AI Workstream

A structured AI diligence workstream covers five areas.

AI inventory and classification. What AI systems does the target operate? This includes not only models the target has built but also AI embedded in vendor products, third-party APIs, and SaaS platforms. Each system is classified by function (customer-facing, operational, analytical), by risk tier under the AI Act, and by criticality to the business.
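One way to make the inventory concrete is a record per system with the three classification axes above. The schema and field values below are illustrative, not a standard taxonomy:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in the target's AI inventory (illustrative schema)."""
    name: str
    origin: str          # "built", "vendor-embedded", "third-party API", "SaaS"
    function: str        # "customer-facing", "operational", "analytical"
    risk_tier: str       # AI Act tier: "prohibited", "high", "limited", "minimal"
    business_critical: bool

inventory = [
    AISystem("credit scoring model", "built", "customer-facing", "high", True),
    AISystem("CRM lead scoring", "SaaS", "analytical", "limited", False),
]

# Systems that warrant the deepest diligence: high-risk tier or business-critical.
priority = [s.name for s in inventory if s.risk_tier == "high" or s.business_critical]
```

Even a spreadsheet version of this record forces management to surface vendor-embedded AI that would otherwise never appear in a data room.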

Data assets and readiness. AI performance depends on data quality. The diligence team assesses the target's data architecture, data governance, and the availability of training data for future AI deployments. Proprietary data assets that could power AI-driven value creation (customer transaction data, operational data, sensor data) are valued as strategic assets. Fragmented, poorly governed data is flagged as a remediation cost.

Technical capability. Does the target have the technical infrastructure and talent to build, deploy, and maintain AI systems? Or will the fund need to invest in infrastructure, hire data scientists, and build the capability from scratch? The answer determines the investment required post-acquisition and the timeline for AI-driven value creation.

In my experience, the capability assessment is where the gap between management's narrative and operational reality is widest. A management team that says "we use AI extensively" may mean they have a handful of pilots, no production models, and no internal capability to scale.

Governance and compliance. Is the target's AI governance adequate for its current deployments and for the planned post-acquisition scaling? This includes model risk management, bias testing, explainability, audit trails, and compliance with the AI Act and any sector-specific regulation. Governance gaps are remediation costs that should be factored into the deal model.

Value creation roadmap. Based on the assessment above, what is the realistic AI value creation opportunity? This is expressed as a prioritised set of use cases, each with an expected EBITDA impact, an investment requirement, a timeline, and a confidence level. The roadmap feeds directly into the 100-day plan and the value creation bridge.
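The roadmap can be expressed as a ranked table. The scoring rule below, confidence-weighted EBITDA impact net of the upfront investment, is one plausible heuristic rather than a standard method, and every figure is hypothetical:

```python
# Each use case: (name, expected annual EBITDA impact EUR, upfront
# investment EUR, confidence 0-1). All figures hypothetical.
use_cases = [
    ("demand forecasting", 4_000_000, 1_500_000, 0.7),
    ("invoice automation", 1_200_000,   300_000, 0.9),
    ("dynamic pricing",    6_000_000, 4_000_000, 0.4),
]

def expected_net_value(case):
    name, ebitda, invest, confidence = case
    # Confidence-weighted impact, net of the investment required to realise it.
    return confidence * ebitda - invest

roadmap = sorted(use_cases, key=expected_net_value, reverse=True)
```

Note how the ranking penalises the headline-grabbing use case: dynamic pricing has the largest gross impact but, at 40% confidence against a EUR 4m build, it lands last.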

Pricing AI into the Deal

The diligence findings affect deal pricing in three ways.

Value creation upside. If the diligence identifies validated AI use cases with a credible EBITDA impact, this supports the investment thesis and may justify a higher entry multiple. But the key word is "validated." A list of theoretical use cases with no supporting evidence is not a basis for pricing upside.

Remediation costs. Data infrastructure deficiencies, governance gaps, AI Act compliance requirements, and talent shortfalls are all costs that the fund will bear post-acquisition. These should be quantified and deducted from the value creation estimate, not ignored because they are difficult to measure precisely.

Risk adjustment. AI-specific liabilities (regulatory non-compliance, bias in customer-facing models, dependence on a single vendor's AI platform) are risk factors that affect the deal's risk profile. In severe cases, they may warrant specific indemnities or warranty provisions.
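Putting the three effects together, a minimal sketch of the net AI contribution to the value creation bridge. The formula and figures are hypothetical; a real deal model would handle risk through scenarios or indemnities rather than a single discount:

```python
def ai_value_adjustment(validated_upside, remediation_costs, risk_discount):
    """Net AI contribution to the deal model: validated upside minus
    quantified remediation costs, scaled by a risk discount (0-1)."""
    return (validated_upside - sum(remediation_costs)) * (1 - risk_discount)

# Hypothetical: EUR 10m validated upside, data-infrastructure and AI Act
# remediation costs, 10% discount for AI-specific risk factors.
net = ai_value_adjustment(10_000_000, [2_000_000, 1_500_000], 0.10)
# (10.0m - 3.5m) * 0.9 -> roughly EUR 5.85m
```

The discipline the sketch enforces is the one described above: remediation costs are deducted, not ignored because they are hard to measure.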

The deal teams that run AI diligence most effectively are the ones that integrate it with the commercial and operational diligence rather than treating it as a separate technology workstream. AI value creation is commercial value creation. AI risk is operational risk. The diligence should assess them in context, not in isolation.

What LPs Are Asking

The LP community is beginning to ask GPs about AI strategy, both at the fund level and at the portfolio level. The questions are not yet as structured as ESG reporting, but the direction is clear.

LPs want to understand: does the fund have an AI value creation thesis? Is AI diligence a standard part of the deal process? What AI-driven returns has the fund demonstrated?

For GPs, this means the AI diligence capability is becoming a fundraising asset. A GP that can demonstrate a systematic approach to AI value creation, with case studies, measured returns, and a repeatable playbook, will find it easier to raise the next fund than one that treats AI as an ad hoc initiative.

The firms that build AI into their CDD scope now are investing in a capability that pays dividends across the deal cycle: better entry pricing, faster value creation, stronger exits, and a more compelling fundraising narrative. The cost of not doing so is increasing with every deal cycle.

*To discuss how the 90-Day AI Acceleration programme can help your fund build AI into its CDD scope, contact the Value Institute.*


Clint Sookermany

Founder, The AI Value Institute by Regenvita

25 years of enterprise transformation experience across financial services, healthcare, technology, and government. Helping senior leaders turn AI ambition into measurable business value.
