Most manufacturers assume the EU AI Act is a software regulation that stops at the IT department. It does not. The Act's high-risk obligations begin to apply in August 2026, and from August 2027 any AI system that performs a safety function on the plant floor will be subject to conformity assessment rigour equal to that of the machinery it is embedded in. Manufacturers that treat this as a documentation exercise will discover too late that it is an engineering one.
The Act classifies AI systems as high-risk through two pathways. The first, under Article 6(1), captures AI systems used as safety components in products that are already regulated by EU product safety legislation (the Machinery Regulation, the Medical Devices Regulation, the Radio Equipment Directive, and others). The second, under Article 6(2) and Annex III, captures AI systems in eight specific use case categories regardless of the product context.
For manufacturers, both pathways are relevant. An AI system that controls a robotic welding cell is a safety component in machinery and is high-risk under Article 6(1). An AI system used for worker monitoring, shift scheduling based on performance metrics, or recruitment filtering is high-risk under Annex III. Many manufacturers will have AI systems in both categories and must meet both sets of obligations: Annex III systems from August 2026, and Article 6(1) safety components from August 2027 under the Act's transitional provisions.
The difficulty is that most plant-floor AI was deployed before the Act was drafted. These systems were built to operational specifications, not regulatory ones. Retrofitting compliance is harder and more expensive than building it in, and the manufacturers that have not started the classification exercise are running out of time.
The Commission was required to provide guidelines with practical examples of high-risk and not-high-risk use cases by February 2026. These guidelines are now available and should be the starting point for any manufacturer's classification exercise.
What "Safety Component" Means in Practice
The Article 6(1) pathway is the one most relevant to plant-floor AI. An AI system is a safety component if it performs a safety function for a product covered by EU product safety legislation, or if the AI system is itself such a product. For manufacturing, this captures:
Predictive maintenance systems that determine when equipment should be taken out of service. If the AI's recommendation directly affects the safety of the machinery (by allowing it to continue operating when it should be stopped, or by triggering unnecessary shutdowns that create other hazards), it is a safety component.
Autonomous mobile robots (AMRs) that navigate factory floors. These are covered by the Machinery Regulation, and the AI navigation system is a safety component because it determines how the robot moves in proximity to human workers.
Quality inspection systems where the inspection result has safety implications. An AI vision system that inspects safety-critical components (brake assemblies, structural welds, pressure vessels) is performing a safety function. If the system misclassifies a defective component as acceptable, the safety consequence is direct.
Process control systems where the AI adjusts manufacturing parameters that affect product safety or worker safety. An AI system that controls temperature, pressure, or chemical composition in a process where deviation could create a hazard is a safety component.
Not every AI system on the plant floor is a safety component. An AI system that optimises production scheduling, predicts demand, or analyses energy consumption is not performing a safety function and is not high-risk under Article 6(1).
But the boundary can be ambiguous. An energy optimisation system that adjusts ventilation rates in a facility where air quality is a safety concern may cross the threshold. The classification requires a case-by-case assessment.
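For teams preparing the inventory discussed later in this piece, the two-pathway test can be expressed as a simple decision routine. The sketch below is illustrative only: the field names and the Annex III category labels are our own shorthand, and the real classification turns on legal analysis and the Commission's guidelines, not on a script.

```python
from dataclasses import dataclass

# Illustrative shorthand for the Annex III areas most relevant to
# manufacturers (not the Act's own wording, and not exhaustive).
ANNEX_III_AREAS = {
    "employment_and_worker_management",
    "critical_infrastructure_safety_component",
    "creditworthiness_assessment",
}

@dataclass
class AISystem:
    name: str
    performs_safety_function: bool   # does it perform a safety function?
    in_regulated_product: bool       # embedded in a product covered by
                                     # EU product safety legislation?
    annex_iii_area: str | None = None

def classify(system: AISystem) -> list[str]:
    """Return the high-risk pathways a system falls under, if any."""
    pathways = []
    # Article 6(1): safety component in (or of) a regulated product.
    if system.performs_safety_function and system.in_regulated_product:
        pathways.append("high-risk under Article 6(1)")
    # Article 6(2) / Annex III: listed use case, regardless of product context.
    if system.annex_iii_area in ANNEX_III_AREAS:
        pathways.append("high-risk under Annex III")
    return pathways or ["not high-risk on either pathway (document the reasoning)"]

# A welding cell controller is caught by Article 6(1); a performance-based
# shift scheduler is caught by Annex III.
print(classify(AISystem("welding cell controller", True, True)))
print(classify(AISystem("shift scheduler", False, False,
                        "employment_and_worker_management")))
```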
Annex III High-Risk Categories for Manufacturers
Three Annex III categories are particularly relevant to manufacturers.
Employment and workforce management. AI systems used in recruitment, performance evaluation, task allocation based on individual behaviour, or monitoring and evaluation of workers in employment relationships are high-risk. For manufacturers, this captures AI-driven shift scheduling based on individual performance data, productivity monitoring systems that track individual workers, and automated screening of job applicants.
Access to essential services. AI systems used to evaluate creditworthiness or establish credit scores are high-risk. For manufacturers with financial services operations (trade credit, leasing, equipment financing), this may be relevant.
Critical infrastructure safety components. Annex III also captures AI systems intended to be used as safety components in the management and operation of critical infrastructure, but the listed areas are critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity. A general industrial facility does not fall under this heading; plant-floor safety components are instead captured by the Article 6(1) pathway described above.
Compliance on the Plant Floor
The compliance requirements for high-risk AI systems are specific and auditable. For plant-floor AI, five obligations require particular attention.
Risk management. A risk management system must be established, implemented, documented, and maintained throughout the AI system's lifecycle. For a predictive maintenance system, this means identifying the risks of both false negatives (failing to predict a failure that then occurs) and false positives (triggering unnecessary shutdowns that create other hazards), and demonstrating that the system is designed to minimise both.
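One way to make that obligation auditable is to quantify both failure modes from validation data and record them against documented acceptance criteria. A minimal sketch, with hypothetical numbers and thresholds of our own choosing:

```python
def maintenance_risk_metrics(predictions, outcomes):
    """Quantify both failure modes of a predictive maintenance model.

    predictions: iterable of bool, True = model flagged impending failure
    outcomes:    iterable of bool, True = equipment actually failed
    """
    fn = sum(1 for p, o in zip(predictions, outcomes) if not p and o)  # missed failures
    fp = sum(1 for p, o in zip(predictions, outcomes) if p and not o)  # unnecessary shutdowns
    failures = sum(outcomes)
    healthy = len(outcomes) - failures
    return {
        "false_negative_rate": fn / failures if failures else 0.0,
        "false_positive_rate": fp / healthy if healthy else 0.0,
    }

# Hypothetical acceptance criterion from the risk management file.
metrics = maintenance_risk_metrics(
    predictions=[True, False, True, True, False, False],
    outcomes=[True, True, False, True, False, False],
)
assert metrics["false_negative_rate"] <= 0.35, "missed-failure risk exceeds documented limit"
print(metrics)
```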
Data governance. Training, validation, and testing data must meet quality criteria: it must be relevant, representative, and as free of errors as possible. For a vision QC system trained on images from a specific production line, this means demonstrating that the training data is representative of the full range of products, defect types, and production conditions the system will encounter. A model trained only on daytime images may not perform adequately on the night shift if lighting conditions differ.
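A first-pass representativeness check can be as simple as comparing the mix of operating conditions in the training set against the mix the system encounters in production. The sketch below is an illustration under our own assumptions (the condition labels and the 0.5 tolerance are arbitrary), not a substitute for a full data governance assessment.

```python
def coverage_gaps(training_counts: dict[str, int],
                  production_share: dict[str, float],
                  tolerance: float = 0.5) -> list[str]:
    """Flag operating conditions under-represented in training data.

    training_counts:  samples per condition in the training set
    production_share: fraction of production time spent in each condition
    tolerance:        minimum acceptable ratio of training share to production share
    """
    total = sum(training_counts.values())
    gaps = []
    for condition, share in production_share.items():
        train_share = training_counts.get(condition, 0) / total
        if train_share < tolerance * share:
            gaps.append(f"{condition}: {train_share:.1%} of training data "
                        f"vs {share:.0%} of production")
    return gaps

# A vision QC model trained mostly on day-shift images.
print(coverage_gaps(
    training_counts={"day_shift": 9200, "night_shift": 800},
    production_share={"day_shift": 0.67, "night_shift": 0.33},
))
```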
Technical documentation. The provider must maintain documentation sufficient for a third party to assess the system's compliance with the Act's requirements. For manufacturing AI, this is often the largest compliance burden. The documentation must cover the system's intended purpose, its design and development process, the data used for training and validation, its performance metrics, and its limitations.
Human oversight. The system must be designed to allow effective human oversight. For a safety-critical AI system on the plant floor, this means: a human can understand the system's outputs, a human can decide not to use the system or override its output, and a human can interrupt the system's operation.
The challenge for real-time process control systems is that human intervention introduces latency. The system design must balance the need for human oversight with the need for timely control action.
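One design pattern that addresses both requirements is a supervisory wrapper: the AI proposes setpoints in real time, a human override takes precedence when asserted, and a halt command drops the loop to a documented safe state rather than blocking on operator response. The sketch below is our own illustration, not a reference implementation; the safe setpoint is an assumed value.

```python
SAFE_SETPOINT = 180.0   # documented fail-safe value (illustrative)

class SupervisedController:
    """Wraps an AI controller so a human can override or halt it."""

    def __init__(self, ai_model):
        self.ai_model = ai_model
        self.override_value = None   # human-commanded setpoint, if any
        self.halted = False

    def operator_override(self, value: float) -> None:
        self.override_value = value           # human decision takes precedence

    def operator_halt(self) -> None:
        self.halted = True                    # human interrupts operation

    def next_setpoint(self, sensor_reading: float) -> float:
        # Called every control cycle: the loop never blocks on a human,
        # but a human input is applied on the very next cycle.
        if self.halted:
            return SAFE_SETPOINT              # fail to the documented safe state
        if self.override_value is not None:
            return self.override_value
        return self.ai_model(sensor_reading)  # normal AI control action

controller = SupervisedController(ai_model=lambda reading: reading * 1.05)
print(controller.next_setpoint(200.0))   # AI in control: 210.0
controller.operator_override(190.0)
print(controller.next_setpoint(200.0))   # human override: 190.0
```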
Conformity assessment. High-risk AI systems under Article 6(1) must undergo the conformity assessment procedure that applies to the product in which they are embedded. For machinery, this means the AI system is assessed as part of the machinery's CE marking process. For manufacturers that already have conformity assessment processes for their machinery, the AI system must be integrated into that process rather than assessed separately.
Practical Steps
First, inventory every AI system on the plant floor and in the back office. Classify each against Article 6(1) (safety component in a regulated product) and Annex III (specific use case categories). Document the classification reasoning.
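That classification reasoning is easiest to audit when the inventory is kept as structured records rather than free text in a spreadsheet. A minimal sketch, with field names of our own choosing:

```python
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    """One plant-floor or back-office AI system, with classification reasoning."""
    system_name: str
    intended_purpose: str
    article_6_1: bool              # safety component in a regulated product?
    annex_iii_category: str | None
    reasoning: str                 # why the classification was reached
    evidence: list[str] = field(default_factory=list)  # documents reviewed

    @property
    def high_risk(self) -> bool:
        return self.article_6_1 or self.annex_iii_category is not None

entry = AIInventoryEntry(
    system_name="AMR fleet navigation",
    intended_purpose="Route autonomous mobile robots across the factory floor",
    article_6_1=True,
    annex_iii_category=None,
    reasoning="Navigation determines robot movement near workers; "
              "safety component in machinery",
    evidence=["machinery risk assessment rev. 4", "vendor technical file"],
)
print(entry.high_risk)  # True
```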
Second, for each high-risk system, conduct a gap analysis against the Act's requirements. The gaps will typically be in documentation (most plant-floor AI was deployed without the level of documentation the Act requires), data governance (training data was not assessed for representativeness and bias), and human oversight (real-time systems may not have adequate override mechanisms).
In a recent engagement with an automotive components manufacturer, we found that 11 of their 14 plant-floor AI systems had no documented training data lineage. Seven would classify as high-risk under Article 6(1). The documentation gap alone required a four-month remediation programme. This is not unusual. Most manufacturers that deployed AI before 2024 will find similar gaps when they conduct their first classification exercise.
Third, integrate AI Act compliance into the existing CE marking and quality management processes. The Act's requirements are most efficiently met when they are built into the processes the manufacturer already operates, rather than treated as a separate compliance programme.
Fourth, plan for post-market monitoring. High-risk AI systems require ongoing performance monitoring, and serious incidents must be reported to the relevant market surveillance authority. For plant-floor AI, this means automated performance monitoring, regular validation against known standards, and a clear escalation process when the system's performance degrades.
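In its simplest form, that monitoring is a rolling performance window with a documented escalation threshold. The sketch below is illustrative; the window size and accuracy limit are assumptions a real monitoring plan would have to justify.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling accuracy check with escalation when performance degrades."""

    def __init__(self, window: int = 500, min_accuracy: float = 0.98):
        self.results = deque(maxlen=window)   # recent validated predictions
        self.min_accuracy = min_accuracy

    def record(self, prediction_correct: bool) -> None:
        self.results.append(prediction_correct)
        if (len(self.results) == self.results.maxlen
                and self.accuracy() < self.min_accuracy):
            self.escalate()

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results)

    def escalate(self) -> None:
        # Hook for the documented escalation process: alert the process
        # owner, quarantine outputs, open an incident record.
        print(f"escalation: rolling accuracy {self.accuracy():.2%} "
              f"below limit {self.min_accuracy:.0%}")

monitor = PerformanceMonitor(window=200, min_accuracy=0.95)
for outcome in [True] * 180 + [False] * 20:   # simulated validation stream
    monitor.record(outcome)
```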
The first of those deadlines, August 2026, is one quarter away. Manufacturers that have not started the classification exercise should begin immediately. The penalty framework (up to 15 million euros or 3% of global annual turnover, whichever is higher, for high-risk non-compliance) provides the urgency. But the real motivation should be that a properly classified and compliant AI system is a better-governed, better-documented, and ultimately safer system.
*To discuss how the 90-Day AI Acceleration programme can help your manufacturing organisation comply with the AI Act's requirements for industrial AI systems, contact the Value Institute.*
