TECHNICAL · 6 min read

Cross-Portfolio AI Infrastructure

Shared services patterns that survive PortCo independence.


Clint Sookermany

28 April 2026


The central tension in private equity AI strategy is scale versus independence. The fund wants to deploy AI capabilities across the portfolio efficiently: shared infrastructure, shared playbooks, shared vendor relationships, economies of scale. But every portfolio company must be independently viable at exit. A PortCo that depends on fund-level AI infrastructure for its core operations has a problem that buy-side diligence will find and that exit multiples will reflect.

The firms navigating this tension well are the ones that have thought clearly about which layers of the AI stack belong at the fund level and which belong at the PortCo level. Bain & Company's *Global Private Equity Report 2025*, based on a survey of investors representing $3.2 trillion in assets under management, found that leading firms including Apollo and Hg are building fund-level AI centres of excellence and shared architectures to deploy generative AI systematically across portfolio companies. Getting this architecture right is a technical decision with direct financial consequences.

The Three-Layer Model

The cross-portfolio AI architecture that I have found works in practice separates into three layers, each with a different ownership model; a short sketch after the third layer shows one way to record these boundaries.

Layer 1: Governance and standards (fund-owned). This layer includes the AI governance framework, the risk classification methodology, the model validation standards, the bias testing protocols, and the compliance playbook for the EU AI Act and other relevant regulation. These are intellectual assets that the fund develops once and deploys across the portfolio. They do not create operational dependencies.

When a PortCo exits, it takes a copy of the standards and continues to operate them independently. The cost of maintaining these standards at the PortCo level post-exit is minimal.

This layer also includes the use case library and the deployment methodology. These are knowledge assets, not infrastructure assets. They transfer cleanly at exit because they are documentation and process, not running systems.

Layer 2: Shared enablement (fund-facilitated, PortCo-owned). This layer includes capabilities that the fund negotiates or builds at scale but that are instantiated separately for each PortCo. Examples: enterprise agreements with cloud providers (negotiated at the fund level for volume pricing, but each PortCo has its own account and its own data), shared vendor relationships for AI tooling (the fund negotiates the master agreement, but each PortCo has its own instance), and a shared talent pool (fractional data science resources that the fund provides to PortCos that cannot justify a full-time hire).

The key design principle: each PortCo's instance is separable. The cloud account, the data, the models, and the integrations all belong to the PortCo. The fund provides the negotiating leverage and the expertise, but the operational capability lives within the PortCo.

At exit, the PortCo continues with its own accounts and its own vendor relationships. The only thing that changes is the pricing, which may increase once the fund's volume discount no longer applies.
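
That pricing delta is worth quantifying before exit rather than discovering in diligence. A minimal sketch, using entirely hypothetical figures, of how the loss of a fund-level volume discount flows through to the PortCo's standalone cost base:

```python
# Hypothetical illustration: what happens to a PortCo's cloud bill when the
# fund's volume discount falls away at exit. Figures are assumptions, not benchmarks.

annual_spend_discounted = 480_000  # assumed spend under the fund's enterprise agreement
fund_volume_discount = 0.25        # assumed discount negotiated at fund level

standalone_spend = annual_spend_discounted / (1 - fund_volume_discount)
uplift = standalone_spend - annual_spend_discounted

print(f"Standalone (list-price) spend: ${standalone_spend:,.0f}")
print(f"Annual uplift to model in the exit case: ${uplift:,.0f} "
      f"({uplift / annual_spend_discounted:.0%})")
```

The point is not the arithmetic but the habit: the uplift belongs in the exit model from day one, not as a surprise in the data room.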

Layer 3: PortCo-native capabilities (PortCo-owned and operated). This layer includes the AI models, the data pipelines, the integrations with the PortCo's operational systems, and the in-house or contracted technical talent that builds and maintains them. These capabilities are fully owned by the PortCo and are not shared across the portfolio.

This is where the value creation happens and where the exit narrative is built. A PortCo with its own pricing model, its own customer analytics capability, and its own AI-driven operational improvements is demonstrably more valuable than one whose AI capability is a plugin to the fund's shared platform.
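
To make the ownership boundaries auditable, it can help to keep a capability register tagged by layer. Here is a minimal sketch of such a register; the schema and the entries are my own illustration, not a standard taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    GOVERNANCE = "fund-owned standards; transfers at exit as a copied document set"
    ENABLEMENT = "fund-facilitated, PortCo-owned instance; continues at exit, repriced"
    NATIVE = "PortCo-owned and operated; continues at exit unchanged"

@dataclass
class Capability:
    name: str
    layer: Layer
    separable: bool  # can the PortCo's instance be carved out cleanly?

# Illustrative register for one PortCo (example entries, not prescriptions)
register = [
    Capability("EU AI Act compliance playbook", Layer.GOVERNANCE, separable=True),
    Capability("Cloud account under fund enterprise agreement", Layer.ENABLEMENT, separable=True),
    Capability("Dynamic pricing model and data pipelines", Layer.NATIVE, separable=True),
]

for cap in register:
    print(f"{cap.name} [{cap.layer.name}]: {cap.layer.value}")
```

Anything that resists a clean tag into one of the three layers is usually the dependency that buy-side diligence finds later.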

Patterns That Fail at Exit

Three common cross-portfolio AI architectures create exit problems.

The centralised data lake. Some funds build a shared data platform that aggregates data from multiple PortCos for cross-portfolio analytics. This is useful for fund-level reporting and portfolio optimisation, but it creates a data governance problem (whose data is it?) and an exit problem (the PortCo's data must be cleanly extractable). In two of the PE technology diligence engagements I have worked on, centralised data lakes delayed exit timelines because extracting and migrating the PortCo's data was more complex than anticipated.

The shared model platform. A fund that builds AI models at the portfolio level and deploys them across PortCos creates a dependency. If the PortCo's pricing model runs on the fund's platform, the PortCo needs to migrate that model to its own infrastructure at exit. This is technically feasible but operationally disruptive and often more expensive than building the model within the PortCo from the start.

The fund's data science team as the PortCo's AI capability. When the fund provides data scientists who build and maintain models for the PortCo, the capability leaves when the fund does. This is the most common and most damaging pattern. The PortCo has AI-driven operations but no internal capability to maintain or evolve them. Buy-side diligence will identify this as a key person risk (where the key person is the fund's team, not an individual) and price it accordingly.

Patterns That Work

Embedded-then-transferred. The fund provides a data scientist or AI engineer to the PortCo for 6 to 12 months. During that period, the external resource builds the capability and trains an internal team. At the end of the engagement, the PortCo has its own models, its own pipelines, and an internal team that can maintain them.

The fund's resource moves to the next PortCo. This pattern builds genuine capability within the PortCo while leveraging the fund's talent pool.

Playbook-driven deployment. The fund provides the methodology, templates, and tools for deploying standard AI use cases. The PortCo's team (internal or contracted) executes the deployment using the playbook. The playbook accelerates deployment and reduces risk, but the resulting capability is fully owned by the PortCo.

Vendor-managed, PortCo-contracted. For PortCos that need AI capabilities but cannot build internally, the fund identifies and vets a vendor, negotiates the contract terms, and helps the PortCo implement. The contract is between the PortCo and the vendor, not between the fund and the vendor. At exit, the vendor relationship continues with the PortCo. The fund's role was matchmaking and quality assurance, not operational provision.

The Exit Readiness Test

For every cross-portfolio AI investment, the operating partner should apply a simple test: if this PortCo were sold tomorrow, could its AI capabilities continue to operate without any involvement from the fund? If the answer is no, the architecture needs to change.
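
The test is mechanical enough to run against the same kind of capability register sketched earlier. A minimal sketch, with hypothetical field names and entries:

```python
# A minimal sketch of the exit readiness test. Field names and entries are
# hypothetical; the rule is the article's test: no fund involvement required.

capabilities = [
    {"name": "Dynamic pricing model", "runs_on": "portco", "maintained_by": "portco"},
    {"name": "Customer churn model", "runs_on": "fund_platform", "maintained_by": "portco"},
    {"name": "Data pipelines", "runs_on": "portco", "maintained_by": "fund_team"},
]

def exit_ready(cap: dict) -> bool:
    # Could this capability operate tomorrow with no involvement from the fund?
    return cap["runs_on"] == "portco" and cap["maintained_by"] == "portco"

blockers = [c["name"] for c in capabilities if not exit_ready(c)]
if blockers:
    print("Architecture change needed before exit:", ", ".join(blockers))
else:
    print("All AI capabilities pass the exit readiness test.")
```

Run against the illustrative entries, the second and third capabilities fail: one runs on the fund's platform, the other depends on the fund's team, mapping directly onto the failure patterns above.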

This test should be applied at the point of investment, not at the point of exit. Retrofitting PortCo independence into a fund-dependent architecture is significantly more expensive than designing for independence from the start. The operational partnership should begin with exit in mind, ensuring that every AI deployment builds value within the PortCo, not dependency on the fund.

The funds that get this right will sell portfolio companies with genuine, self-contained AI capabilities. Those that build centralised platforms for short-term efficiency will spend the last 12 months before exit untangling dependencies. The architecture decision is a value creation decision, and it should be made with the same rigour as any other.

*To discuss how the 90-Day AI Acceleration programme can help your fund design exit-ready cross-portfolio AI infrastructure, contact the Value Institute.*


Clint Sookermany

Founder, The AI Value Institute by Regenvita

25 years of enterprise transformation experience across financial services, healthcare, technology, and government. Helping senior leaders turn AI ambition into measurable business value.
