REGULATORY · 6 min read

AI Act & GP/LP Disclosure

What LPs are starting to expect in AI risk reporting.


Clint Sookermany

28 April 2026


The EU AI Act has entered the deal room. Deal teams that once treated AI governance as a formality are now incorporating a dedicated AI risk workstream alongside GDPR, cybersecurity, and ESG diligence. The penalties make this inevitable: up to 35 million euros or 7% of global turnover for prohibited-use violations, 15 million euros or 3% for high-risk non-compliance. For a private equity fund with portfolio companies deploying AI across customer-facing and operational functions, the aggregate regulatory exposure is material.

But the disclosure challenge extends beyond regulatory compliance. LPs are beginning to ask about AI risk in the same structured way they now ask about ESG. The questions are less formalised, but the direction is clear, and the GPs that build disclosure capability now will have an advantage when LP expectations harden.

The Regulatory Exposure

The AI Act's obligations fall on the providers and deployers of AI systems, not on the investors who own those entities. But private equity ownership creates a chain of accountability that LPs are right to scrutinise.

If a portfolio company deploys a high-risk AI system (credit scoring, insurance pricing, recruitment filtering, biometric identification) without meeting the Act's requirements for risk management, data governance, technical documentation, human oversight, and transparency, the regulatory liability sits with the PortCo. The fine is calculated against the PortCo's turnover. But the economic impact falls on the fund and, ultimately, on the LP.

The Act also elevates AI governance to board-level responsibility. Directors face potential personal liability under corporate law fiduciary duties if they consciously disregard significant regulatory risks. For a PE-backed company, the board typically includes fund representatives. The governance obligation is not theoretical.

In the PE advisory work I have done, the most common gap is basic: portfolio companies have not inventoried their AI systems against the Act's classification framework. They do not know how many of their systems are in scope, let alone whether those systems comply. The August 2026 compliance deadline for high-risk systems is one quarter away. A portfolio company that cannot tell its board which of its AI systems are high-risk under the Act has a governance problem that is also an LP disclosure problem.

What LPs Are Asking

The LP questions about AI are evolving through three phases.

Phase 1 (current for most LPs): Strategy and thesis. Does the fund have an AI value creation strategy? Is AI a component of the investment thesis? How does the fund assess AI capability and risk in diligence?

These are qualitative questions. They appear in due diligence questionnaires and in annual meetings. A GP that can articulate a clear AI strategy, with examples and measured returns, satisfies this phase.

Phase 2 (emerging among sophisticated LPs): Portfolio exposure. What AI systems are deployed across the portfolio? What is the aggregate regulatory exposure under the AI Act? What is the fund's governance framework for AI risk?

These are more structured questions. They require the GP to have portfolio-level visibility into AI deployments, which most funds do not yet have.

Phase 3 (anticipated within 2 to 3 years): Standardised reporting. Standardised AI risk metrics in the LP report, analogous to ESG reporting frameworks. This phase has not yet arrived, but the trajectory from ESG suggests it will. The GPs that build the reporting infrastructure now will transition smoothly. Those that wait will scramble to retrofit it, just as many did when ESG reporting standards crystallised.

Building the Disclosure Framework

A GP building an AI disclosure framework for LP reporting needs four components.

Portfolio AI inventory. A fund-level register of all AI systems deployed across portfolio companies, classified by risk tier under the AI Act (prohibited, high-risk, limited risk, minimal risk). This inventory should be updated annually at minimum, and whenever a PortCo makes a significant AI deployment. The inventory is the foundation: without it, no meaningful disclosure is possible.
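A fund-level register like this needs very little machinery to be useful. The sketch below uses the Act's four risk tiers, which are real; the schema, field names, and company names are illustrative assumptions, not a prescribed format:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # The four risk tiers defined by the EU AI Act
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"

@dataclass
class AISystem:
    portco: str      # portfolio company deploying the system
    name: str        # internal system identifier
    use_case: str    # e.g. "credit scoring", "recruitment filtering"
    tier: RiskTier   # classification under the AI Act

# Fund-level register: one flat list, filterable by tier
inventory = [
    AISystem("ExampleCo", "score-v2", "credit scoring", RiskTier.HIGH_RISK),
    AISystem("ExampleCo", "support-bot", "customer support", RiskTier.LIMITED_RISK),
]

# The question a GP must be able to answer: which systems are high-risk?
high_risk = [s for s in inventory if s.tier is RiskTier.HIGH_RISK]
```

Even a spreadsheet with these four columns, refreshed annually and on every significant deployment, answers the board-level question most PortCos currently cannot.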

Compliance status tracking. For each high-risk AI system in the portfolio, the GP should track compliance status against the Act's requirements: risk management system in place, technical documentation complete, data governance adequate, human oversight mechanisms operational, transparency obligations met, CE marking and EU database registration where required. This is a material undertaking for a diversified portfolio, but it is the same discipline that funds already apply to GDPR and financial regulation.
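Tracked per system, this reduces to a checklist against the requirements listed above. A minimal sketch, with requirement names paraphrasing the Act's high-risk obligations (the field names are illustrative, not the Act's wording):

```python
# High-risk requirements from the Act, tracked as a per-system checklist
REQUIREMENTS = [
    "risk_management_system",
    "technical_documentation",
    "data_governance",
    "human_oversight",
    "transparency",
    "ce_marking_and_registration",
]

def compliance_gaps(status: dict) -> list:
    """Return the requirements a high-risk system has not yet met.
    Any requirement absent from the status dict counts as unmet."""
    return [r for r in REQUIREMENTS if not status.get(r, False)]

# Example: a system with documentation done but oversight gaps open
status = {"risk_management_system": True, "technical_documentation": True}
gaps = compliance_gaps(status)
```

The output of this check, per system per quarter, is exactly the evidence a fund wants on file if a regulator or an LP asks how governance was exercised.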

Risk quantification. The GP should be able to estimate the aggregate regulatory exposure across the portfolio. This is not a precise calculation (the Act has not yet been enforced, and penalty calibration is uncertain), but a range estimate based on the number of high-risk systems, their compliance status, and the applicable penalty thresholds gives LPs a meaningful view of the risk.
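One way to produce that range, sketched below. The EUR 15 million / 3%-of-turnover ceiling is the Act's stated cap for high-risk non-compliance; the 25% low-bound factor is purely an illustrative assumption, since no enforcement track record exists yet to calibrate it:

```python
def exposure_range(portcos):
    """Rough aggregate regulatory exposure across the portfolio.

    portcos: list of (global_turnover_eur, noncompliant_high_risk_count).
    Per PortCo, the penalty cap for high-risk non-compliance is the
    greater of EUR 15m or 3% of global turnover (the Act's ceiling,
    not a prediction of actual fines). The low bound assumes a fine
    at 25% of the cap -- an illustrative assumption, not a forecast.
    """
    low = high = 0.0
    for turnover_eur, noncompliant in portcos:
        if noncompliant == 0:
            continue  # no non-compliant high-risk systems, no exposure
        cap = max(15_000_000, 0.03 * turnover_eur)
        low += 0.25 * cap
        high += cap
    return low, high

# Two PortCos with open gaps: EUR 200m and EUR 800m global turnover
low, high = exposure_range([(200e6, 2), (800e6, 1)])
```

Presented as a range with its assumptions stated, this is defensible in an LP report; presented as a point estimate, it would overstate the precision the Act's enforcement history can currently support.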

Value creation attribution. On the positive side, the disclosure should quantify AI-driven value creation across the portfolio: EBITDA impact, efficiency gains, revenue uplift. This balances the risk narrative with the return narrative and demonstrates that the fund's AI strategy is generating value, not just managing liability.

The Governance Architecture

The disclosure framework requires a governance architecture that flows from the fund level to the PortCo level and back.

At the fund level, the AI operating partner (or equivalent) maintains the portfolio AI inventory, sets governance standards, and produces the LP report. At the PortCo level, a designated board member or executive is responsible for maintaining the PortCo's AI inventory, ensuring compliance, and reporting to the fund.

The information flow must be bidirectional. The fund pushes governance standards and compliance requirements to the PortCos. The PortCos push inventory data, compliance status, and value creation metrics to the fund. The LP report aggregates the portfolio-level view.

This architecture does not need to be complex. A quarterly reporting template, a standardised risk classification methodology, and a clear accountability structure at each level are sufficient. The cost of building this is modest relative to the cost of discovering a compliance gap during diligence or defending a regulatory action without adequate governance evidence.
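The quarterly rollup itself is trivial once the template is standardised. A minimal sketch of the fund-level aggregation, assuming each PortCo submits its system counts by risk tier (the submission shape and names are illustrative):

```python
from collections import Counter

def aggregate(submissions):
    """Roll quarterly PortCo submissions up to a fund-level view.

    submissions: list of (portco_name, {tier_label: system_count}).
    Returns total system counts per risk tier across the portfolio.
    """
    totals = Counter()
    for _portco, tiers in submissions:
        totals.update(tiers)  # Counter adds counts per tier label
    return dict(totals)

report = aggregate([
    ("AlphaCo", {"high-risk": 2, "limited-risk": 5}),
    ("BetaCo", {"high-risk": 1, "minimal-risk": 9}),
])
```

The hard part is not the aggregation but the discipline of getting consistent, classified submissions from every PortCo each quarter, which is why the standardised classification methodology matters more than the tooling.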

Timing

The LP community's AI risk expectations will formalise faster than ESG did, because the regulatory framework (the AI Act) already exists and includes specific penalties. ESG reporting evolved over a decade from informal expectations to standardised frameworks. AI risk reporting will compress that timeline because the regulatory catalyst is already in place.

GPs that build the disclosure capability in 2026 will find it easier to raise their next fund, easier to demonstrate governance to LPs, and better positioned to manage the regulatory exposure across their portfolio. Those that treat AI disclosure as a future problem will find it arriving faster than they expected.

*To discuss how the 90-Day AI Acceleration programme can help your fund build an AI risk disclosure framework for LP reporting, contact the Value Institute.*


Clint Sookermany

Founder, The AI Value Institute by Regenvita

25 years of enterprise transformation experience across financial services, healthcare, technology, and government. Helping senior leaders turn AI ambition into measurable business value.
