REGULATORY · 7 min read

Loyalty Data & GDPR

Loyalty datasets under the new EU AI Act regime.

Clint Sookermany

28 April 2026

Loyalty programmes are the richest source of customer data in retail: transaction history, purchase frequency, product preferences, price sensitivity, channel behaviour, redemption patterns. For a major retailer, the loyalty dataset contains years of behavioural data on millions of customers. This data is the foundation of AI-driven personalisation, demand forecasting, and pricing optimisation.

It is also, as of August 2026, subject to a regulatory framework that most retail compliance teams have not fully mapped.

The EU AI Act does not replace GDPR. It supplements it. Loyalty data that is used to train or operate AI systems must now comply with both frameworks simultaneously. Where the AI Act's requirements overlap with GDPR, the more demanding standard applies. Where they diverge, both must be met. For retail organisations that have built their AI capabilities on loyalty data, this creates a compliance exercise that is broader and more urgent than many have anticipated.

The Dual Compliance Challenge

GDPR governs the collection, processing, and storage of personal data. It requires a legal basis for processing (typically legitimate interest or consent for loyalty programme data), purpose limitation, data minimisation, and individual rights including access, rectification, and erasure.

The AI Act adds requirements that are specific to how the data is used in AI systems. Three obligations are particularly relevant for loyalty data.

Data governance for AI training. The AI Act requires that training, validation, and testing data for high-risk AI systems meet specific quality standards: it must be relevant, sufficiently representative, and as free of errors as possible. For loyalty data used to train personalisation or pricing models, this means the retailer must demonstrate that the dataset is representative of the customer population it serves, that it has been cleaned of errors and biases, and that its use for AI training is documented.

This is a higher standard than GDPR's general data quality obligation. GDPR requires that personal data be accurate and kept up to date. The AI Act requires that training data be specifically assessed for representativeness and bias. A loyalty dataset that is GDPR-compliant (accurate, lawfully collected, properly consented) may still fail the AI Act's training data standard if it under-represents certain customer segments or contains systematic biases in purchasing patterns.

Fundamental Rights Impact Assessment. For certain deployers of high-risk AI systems, the AI Act requires a Fundamental Rights Impact Assessment (FRIA) under Article 27, in addition to the Data Protection Impact Assessment (DPIA) already required by GDPR Article 35. The two assessments have different scopes: the DPIA focuses on risks to data subjects from the processing of their personal data; the FRIA assesses broader risks to fundamental rights, including non-discrimination, consumer protection, and the right to an effective remedy.

For a retailer using loyalty data to power AI-driven personalisation, both assessments are likely required. The DPIA assesses the data protection risks (is the personalisation processing lawful? are individual rights preserved?). The FRIA assesses the fundamental rights risks (does the personalisation system discriminate? does it affect consumer choice in ways that are disproportionate?). Running both assessments in parallel, with a consistent methodology and a single governance structure, is the practical approach.

Transparency obligations. The AI Act requires that individuals be informed when they are interacting with an AI system or when AI is being used to make decisions that affect them. For loyalty-powered personalisation, this means customers must know that their loyalty data is being used by an AI system to personalise their experience. This goes beyond GDPR's existing transparency requirements (which focus on what data is collected and how it is processed) to include how the AI system uses that data and what effect it has on the customer's experience.

Where Retail Compliance Gaps Typically Appear

In the retail organisations I have worked with, three compliance gaps are most common.

Consent architecture does not cover AI use. The loyalty programme's terms and conditions, drafted before the AI Act, typically cover data collection and basic personalisation. They do not explicitly cover the use of loyalty data to train AI models, to power real-time pricing decisions, or to generate AI-driven communications. Under GDPR, this may be addressed through legitimate interest (depending on the jurisdiction and the specific processing). Under the AI Act, the transparency obligation requires explicit disclosure of AI use, which the existing consent architecture may not provide.

The practical fix is a consent audit: review every AI system that uses loyalty data, map the data flows, and verify that the existing consent or legal basis covers the specific AI processing. Where gaps exist, update the terms, the privacy notice, and the consent management platform. This is a significant but bounded exercise for most retailers.
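
In code, the coverage check at the heart of that audit reduces to a set comparison. Below is a minimal sketch, assuming a hypothetical inventory format: the system names and purpose labels are illustrative, and a real audit would run against the consent management platform's own records.

```python
# Hypothetical inventory: each AI system that touches loyalty data,
# mapped to the processing purposes it actually performs.
ai_systems = {
    "personalisation-engine": {"basic_personalisation", "ai_model_training"},
    "dynamic-pricing": {"real_time_pricing"},
    "campaign-generator": {"ai_generated_communications"},
}

# Purposes the current terms, privacy notice, and consent records cover.
covered_purposes = {"data_collection", "basic_personalisation"}

# Flag every system whose processing exceeds the documented legal basis.
for system, purposes in ai_systems.items():
    gaps = purposes - covered_purposes
    if gaps:
        print(f"{system}: uncovered purposes -> {sorted(gaps)}")
```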

Data quality has not been assessed against AI Act standards. Most retailers have data quality programmes focused on operational accuracy: is the customer's address correct? is the transaction record complete? The AI Act's data quality standard is different: is the dataset representative? does it contain biases that could lead to discriminatory outcomes? has it been tested for these issues?

A loyalty dataset that accurately records every transaction is operationally clean. But if it under-represents customers who shop less frequently, who use cash instead of loyalty cards, or who belong to demographic groups with lower programme participation, the AI models trained on it will produce systematically biased outputs. The bias assessment is not something most retail data teams have done.
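
A first-pass representativeness check is easy to sketch, even though a full bias assessment goes considerably further. The segment names, shares, and the five-percentage-point threshold below are illustrative assumptions, not figures prescribed by the Act.

```python
# Illustrative segment shares; a real assessment would derive the
# population benchmark from census or market-research data.
population_share = {
    "frequent_shoppers": 0.30,
    "occasional_shoppers": 0.45,
    "cash_preferring": 0.25,
}
loyalty_share = {
    "frequent_shoppers": 0.55,
    "occasional_shoppers": 0.35,
    "cash_preferring": 0.10,
}

THRESHOLD = 0.05  # flag deviations above five percentage points (assumed)

for segment, expected in population_share.items():
    observed = loyalty_share.get(segment, 0.0)
    gap = observed - expected
    if abs(gap) > THRESHOLD:
        print(f"{segment}: {gap:+.0%} vs population -> investigate")
```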

No single view of AI systems using loyalty data. Most retailers I work with cannot produce a complete inventory of the AI systems that access their loyalty dataset. Marketing has personalisation models. Commercial has pricing models. Supply chain has demand forecasting models. Each team built their models independently, often with different vendors, and the loyalty data flows into all of them through various pipelines. Without a consolidated inventory, the retailer cannot assess compliance at the system level.
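
The inventory itself does not need to be sophisticated to be useful. Below is one possible record structure; the fields mirror what needs to be catalogued (data inputs, outputs, risk classification), and every name and value is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the consolidated AI inventory (fields illustrative)."""
    name: str
    owner_team: str                 # e.g. marketing, commercial, supply chain
    vendor: str | None              # None if built in-house
    data_inputs: list[str] = field(default_factory=list)
    outputs: str = ""
    uses_loyalty_data: bool = False
    ai_act_risk_class: str = "unclassified"

inventory = [
    AISystemRecord(
        name="personalisation-engine",
        owner_team="marketing",
        vendor=None,
        data_inputs=["loyalty_transactions", "web_clickstream"],
        outputs="ranked product recommendations",
        uses_loyalty_data=True,
        ai_act_risk_class="to be assessed",
    ),
]

# The question the inventory must answer: which systems touch loyalty
# data, and what is each one's AI Act risk classification?
for record in inventory:
    if record.uses_loyalty_data:
        print(record.name, "->", record.ai_act_risk_class)
```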

The Master Customer ID Prerequisite

The technical prerequisite for AI Act compliance is the same capability that powers effective AI personalisation: a master customer ID that links online and offline transactions, loyalty activity, and service interactions into a single customer view. Without it, the retailer cannot trace how a specific customer's data flows through AI systems, cannot assess whether the AI's treatment of that customer is fair and transparent, and cannot fulfil individual rights requests that span multiple AI applications.
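
To make that prerequisite concrete, here is the simplest form of identity resolution: deterministic matching, in which records from different channels are linked under one master ID whenever they share an exact identifier. Production systems layer probabilistic matching and survivorship rules on top; the records and identifiers below are invented.

```python
from collections import defaultdict

# Invented records from three channels; any shared email or loyalty
# card number links two records to the same customer.
records = [
    {"id": "web-001", "email": "a@example.com", "loyalty_card": None},
    {"id": "pos-104", "email": None, "loyalty_card": "LC-9"},
    {"id": "app-073", "email": "a@example.com", "loyalty_card": "LC-9"},
]

# Union-find over record ids.
parent = {r["id"]: r["id"] for r in records}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Link any two records that share an identifier value.
by_identifier = defaultdict(list)
for r in records:
    for key in ("email", "loyalty_card"):
        if r[key]:
            by_identifier[(key, r[key])].append(r["id"])
for ids in by_identifier.values():
    for other in ids[1:]:
        union(ids[0], other)

# Group records under one master customer ID per cluster.
clusters = defaultdict(list)
for r in records:
    clusters[find(r["id"])].append(r["id"])
print(dict(clusters))  # all three records resolve to one master ID
```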

Retailers that have invested in customer data platforms and identity resolution are better positioned for AI Act compliance than those whose customer data remains siloed by channel. The investment in data infrastructure pays a regulatory dividend as well as a commercial one.

Steps for the Next Quarter

First, complete the AI inventory. Every system that uses loyalty data, including vendor-provided systems, needs to be catalogued with its data inputs, its outputs, and its risk classification under the AI Act.

Second, run the consent audit. Does the existing legal basis cover the specific AI processing? Update where it does not.

Third, commission the bias assessment. Is the loyalty dataset representative? Where are the gaps? What are the implications for the AI models trained on it?

Fourth, integrate the FRIA with the existing DPIA process. Run both assessments for every high-risk AI system that processes loyalty data, using a consistent methodology.

The August 2026 deadline for high-risk AI system compliance is one quarter away. Retailers that have not started this work face a compressed timeline. But the work is not optional, and the penalty framework (up to 35 million euros or 7% of global turnover for the most serious violations) provides the urgency that compliance budgets sometimes need.

*To discuss how the 90-Day AI Acceleration programme can help your retail organisation align loyalty data AI with the new regulatory regime, contact the Value Institute.*

Clint Sookermany

Founder, The AI Value Institute by Regenvita

25 years of enterprise transformation experience across financial services, healthcare, technology, and government. Helping senior leaders turn AI ambition into measurable business value.
