EU AI Act Amendments Explained: New High‑Risk AI Compliance Rules and Machinery Regulation Changes

Key Dates and Compliance Milestones

2 December 2026: Compliance deadline for prohibitions on AI systems used to generate non-consensual intimate content and AI-generated child sexual abuse material, as well as revised transparency and watermarking obligations for AI-generated content.

2 December 2027: Compliance deadline for stand-alone high-risk AI systems, including systems used in biometrics, critical infrastructure, education, employment, law enforcement, migration, asylum, and border management.

2 August 2028: Compliance deadline for AI systems used as safety components of products, or AI systems that are themselves products covered by EU sector-specific product safety legislation, including machinery-related systems.

These revised implementation dates are intended to give regulators and market participants additional time to develop the harmonized standards, conformity assessment procedures, technical guidance, and compliance infrastructure necessary to implement the AI Act.

EU Reaches Provisional Agreement to Amend the AI Act

The European Parliament and the Council of the European Union have reached a provisional political agreement to amend portions of the EU Artificial Intelligence Act (“AI Act”) as part of the European Commission’s Digital Omnibus simplification initiative.

The amendments are intended to address concerns regarding regulatory overlap, implementation uncertainty, and compliance burdens associated with high-risk AI systems. While the agreement preserves the AI Act’s risk-based framework, it introduces targeted revisions affecting implementation timelines, sectoral applicability, enforcement mechanisms, and prohibited AI practices.

The agreement remains subject to formal adoption by the European Parliament and the Council before entering into force.

Revised Compliance Deadlines for High-Risk AI Systems

The agreement postpones the application of certain obligations for high-risk AI systems.

Under the revised framework, obligations for stand-alone high-risk AI systems, including systems used in biometrics, critical infrastructure, education, employment, law enforcement, migration, asylum, and border management, will apply beginning 2 December 2027.

Obligations applicable to AI systems used as safety components of products, or AI systems that are themselves products covered by Union harmonisation legislation concerning product safety and market surveillance, will apply beginning 2 August 2028.

EU institutions stated that the revised timelines are intended to ensure that harmonized standards, conformity assessment procedures, technical guidance, and implementation tools are available prior to enforcement.

The amendments also postpone transparency obligations applicable to AI-generated content. The deadline for implementation of watermarking and transparency solutions has been revised to 2 December 2026.

Clarification of the AI Act’s Relationship with the Machinery Regulation

A principal issue during negotiations concerned the interaction between the AI Act and existing EU product safety legislation, particularly Regulation (EU) 2023/1230, commonly referred to as the Machinery Regulation. The Machinery Regulation establishes harmonized health and safety requirements applicable to machinery products placed on the EU market, including certain AI-enabled machinery systems.

The provisional agreement clarifies that machinery products already subject to AI-related health and safety obligations under sector-specific legislation generally will not also be subject to duplicative requirements under the AI Act. Instead, compliance obligations relating to health and safety risks associated with AI-enabled machinery will primarily be governed through the Machinery Regulation framework.

The agreement further authorizes the European Commission to adopt delegated acts under the Machinery Regulation where additional AI-specific health and safety requirements are necessary for AI systems classified as high-risk pursuant to the AI Act.

The amendments are intended to reduce duplicative compliance obligations for manufacturers and providers of AI-enabled industrial products while preserving equivalent levels of health and safety protection. The revised framework may also influence future interpretation of overlaps between the AI Act and other sector-specific legislation governing medical devices, connected vehicles, toys, lifts, and watercraft.

Narrowing the Definition of “Safety Component”

The amendments narrow the definition of “safety component” for purposes of determining whether AI systems qualify as high risk under the AI Act.

Under the revised language, products with AI functions that “only assist users or optimise performance” will not automatically be treated as high-risk AI systems where “their failure or malfunction does not create health or safety risks.” This language reflects the compromise text agreed upon by EU co-legislators during trilogue negotiations.

The clarification is significant because the AI Act’s high-risk obligations apply to AI systems intended to be used as safety components of products covered by Union harmonisation legislation. By narrowing the interpretation of what constitutes a “safety component,” the amendments may reduce the number of industrial and enterprise AI systems subject to obligations relating to conformity assessments, technical documentation, risk management systems, registration in the EU database for high-risk AI systems, and post-market monitoring requirements.

The revisions are intended to distinguish between AI systems that materially affect product safety and AI functionalities that are primarily operational, assistive, or performance-related.

Prohibition on AI-Generated Non-Consensual Intimate Content

The agreement expands the AI Act’s prohibited AI practices to include systems designed to generate non-consensual sexually explicit content.

The prohibition applies to AI systems used to create child sexual abuse material or sexually explicit depictions of identifiable individuals without consent, including so-called “nudifier” applications.

The restrictions apply to providers placing such systems on the EU market, organizations deploying the systems for prohibited purposes, and providers that fail to implement reasonable safeguards against misuse.

The prohibition covers AI systems that generate images, audio, or video depicting intimate parts of an identifiable person or depicting an identifiable person engaged in sexually explicit conduct without consent.

Compliance with the prohibition will become mandatory on 2 December 2026.

Expanded Flexibility for Bias Detection and Mitigation

The amendments permit organizations to process personal data where “strictly necessary to detect and correct biases,” provided appropriate safeguards are implemented.

The provision applies to both high-risk and non-high-risk AI systems and is intended to facilitate fairness testing, bias detection, algorithmic auditing, and corrective model validation procedures.
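As an illustrative sketch only (the AI Act does not prescribe any particular fairness metric or testing methodology), one widely used bias check compares positive-outcome rates across protected groups; computing it is the kind of activity that may require processing the special-category personal data this provision addresses:

```python
# Illustrative sketch only: the AI Act does not mandate this (or any) metric.
# Demographic parity difference = the largest gap in positive-outcome rates
# between groups defined by a protected attribute.

def selection_rate(outcomes):
    """Share of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Maximum gap in selection rates across protected groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit data: model decisions grouped by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # selection rate 0.25
}

gap = demographic_parity_difference(decisions)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

In practice such checks are one input among many; whether any given processing of special-category data is “strictly necessary” remains a legal question under the amended provision and the GDPR.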

The revised language reinstates the “strict necessity” standard for processing special categories of personal data in connection with bias detection and mitigation activities.

Organizations processing personal data for these purposes will remain subject to obligations under the General Data Protection Regulation (“GDPR”), applicable national privacy laws, and existing proportionality and data minimisation requirements under EU law.

Regulatory Relief for SMEs and Small Mid-Cap Companies

The reforms extend certain regulatory accommodations previously available to small and medium-sized enterprises (“SMEs”) to small mid-cap companies (“SMCs”).

EU policymakers stated that the amendments are intended to reduce compliance costs for scaling technology companies, support innovation within the European AI market, and improve competitiveness within the EU technology sector.

The agreement also extends access to certain regulatory support mechanisms, including AI regulatory sandboxes designed to facilitate testing and development of AI systems under regulatory supervision.

Centralized Enforcement and AI Office Authority

The agreement expands the supervisory authority of the EU AI Office with respect to certain general-purpose AI systems, particularly where the provider develops both the underlying AI model and the downstream system.

National authorities will retain jurisdiction over specified sectors, including law enforcement, border management, judicial systems, and financial services.

The amendments also reinstate the obligation for providers to register certain exempted high-risk AI systems within the EU database for high-risk systems.

In addition, the agreement streamlines enforcement responsibilities for certain general-purpose AI systems embedded within very large online platforms and very large online search engines.

Industry and Stakeholder Responses

Industry and consumer responses to the agreement have been divided.

Industry organizations, including DIGITALEUROPE, supported efforts to reduce overlap between the AI Act and sector-specific legislation but criticized the absence of broader simplification measures for additional regulated sectors, including medical devices.

The Computer & Communications Industry Association (“CCIA Europe”) stated that the amendments did not sufficiently reduce overall compliance complexity under the AI Act.

Consumer organizations, including BEUC, argued that certain amendments could weaken existing safeguards and create regulatory gaps for industrial AI systems.

EU officials characterized the agreement as an effort to balance innovation, competitiveness, and protection of fundamental rights.

Considerations for Businesses

Organizations developing, deploying, importing, distributing, or integrating AI systems within the European market should assess how the proposed amendments affect existing compliance programs, contractual frameworks, product governance procedures, and risk allocation strategies.

Legal and compliance teams should evaluate whether AI-enabled products fall primarily within the scope of sector-specific legislation rather than direct AI Act obligations, particularly in industries subject to Union harmonisation legislation concerning product safety and market surveillance.

Businesses should also reassess whether AI functionalities previously expected to qualify as “high risk” continue to meet the revised threshold applicable to “safety components.” This assessment may affect conformity assessment obligations, technical documentation requirements, post-market monitoring procedures, and registration obligations within the EU database for high-risk AI systems.

Organizations deploying AI systems should review governance procedures relating to:

• Risk classification methodologies;

• Documentation and recordkeeping obligations;

• Vendor and third-party AI procurement terms;

• Product liability allocation;

• Human oversight procedures;

• Transparency and watermarking requirements for AI-generated content;

• Bias detection and algorithmic auditing procedures; and

• Data governance controls involving special categories of personal data.

Manufacturers and providers operating across multiple regulated sectors should also monitor forthcoming delegated acts, implementing acts, harmonized standards, and European Commission guidance clarifying the interaction between the AI Act and sector-specific legislation.

Because the amendments remain subject to formal adoption and additional implementing measures, organizations should avoid assuming that existing compliance obligations have been eliminated entirely. Instead, businesses should continue preparing for implementation while reassessing timelines, scope determinations, and sector specific applicability.

Conclusion

The provisional agreement reflects a targeted effort by EU lawmakers to reduce implementation burdens associated with the AI Act while preserving its core risk based regulatory structure.

The amendments postpone key compliance deadlines, clarify the relationship between the AI Act and sector-specific legislation, narrow the interpretation of certain high-risk classifications, and expand regulatory accommodations for SMEs and small mid-cap companies. At the same time, the reforms expand prohibited AI practices involving non-consensual intimate content and AI-generated child sexual abuse material.

The agreement also signals a broader shift toward harmonizing the AI Act with existing EU product safety legislation, particularly in heavily regulated industrial sectors. For manufacturers, AI developers, deployers, and compliance professionals, the revisions may materially affect risk classification analyses, product governance obligations, and implementation timelines.

The amendments remain subject to formal adoption by the European Parliament and the Council of the European Union, as well as additional delegated acts, implementing measures, and regulatory guidance. Organizations operating within the EU market should continue monitoring legislative developments and reassess existing AI governance frameworks in advance of the revised compliance deadlines.
