Brussels (Brussels Morning Newspaper) January 12, 2026 – EU member states aim to finalise their positions on proposed amendments to the AI Act by April 2026. The regulation, in force since August 2024, is under review to address enforcement issues and technological developments. Coordination involves the Commission, Council, and Parliament, with discussions focusing on high-risk systems and general-purpose AI models.
- AI Act Implementation Timeline and Phased Rollout
- Key Amendment Proposals Under Member State Review
- Coordination Structures Among Member States
- European Commission Oversight of Amendment Drive
- Global Standards Influenced by EU Approach
- Consensus Challenges Facing April Deadline
- Stakeholder Input Shaping Revisions
The AI Act implements a risk-based approach to regulating artificial intelligence across the European Union. It prohibits certain practices, mandates conformity assessments for high-risk applications in areas such as hiring and medical devices, and imposes transparency obligations on generative models. Member states are working through Council working parties to develop amendment proposals ahead of the April target.
AI Act Implementation Timeline and Phased Rollout

The Artificial Intelligence Act entered into force on August 1, 2024, with prohibitions on practices such as government social scoring applying from February 2025. High-risk system requirements begin in August 2026, while general-purpose AI obligations took effect in August 2025. Codes of Practice for general-purpose models completed public consultation in late 2025.
National authorities enforce the rules, supported by the European AI Office. Member states submit yearly progress reports. Stakeholder input gathered in 2025 identified gaps in high-risk definitions and model evaluation criteria.
Key Amendment Proposals Under Member State Review
Discussions centre on refining the high-risk system lists in Annex III, which cover areas such as biometric identification and education tools. Proposals would adjust the compute-based thresholds that trigger systemic-risk obligations for general-purpose models. Enforcement updates may harmonise penalties, which can reach €35 million or 7% of global annual turnover, whichever is higher.
Council working parties under successive presidencies facilitate technical exchanges. France, Germany, and the Netherlands have contributed leading input on balancing innovation and safety. Results from the 2025 regulatory sandbox pilots inform evidence-based changes.
| Amendment Area | Existing Provision | Potential Revision |
| --- | --- | --- |
| High-Risk AI | Fixed Annex III | Comitology updates |
| GPAI Models | Risk assessments | Capability scaling |
| SME Support | Basic exemptions | Raised thresholds |
| Penalties | National variation | EU coordination |
Privacy enforcement intersects with AI rules, as data protection authorities scrutinise training practices. AI ethics commentator Tim Green (@humanin_theloop) said in an X post:
“Meta pauses rollout of its Meta AI in Europe following a request from the Irish Data Protection Commission citing privacy concerns over use of Facebook and Instagram data for AI training. A setback for EU AI innovation, but critical for user consent standards.”
Coordination Structures Among Member States
The AI Technical Working Party convenes regularly, with COREPER ambassadors reviewing progress before April. The Commission supplies impact assessments, and regulatory sandboxes operational in multiple member states since 2025 test possible flexibilities.
Italy and Spain have documented sandbox successes in voluntary compliance, while Germany aligns its proposals with GDPR frameworks. Over 200 organisations provided feedback during the December 2025 dialogues.
The AI Pact counts 400 voluntary signatories committed to early adoption.
European Commission Oversight of Amendment Drive

The Commission exercises delegated authority for annex updates, subject to scrutiny by Parliament and the Council. Substantive changes follow the ordinary legislative procedure. The Executive Vice-President for tech sovereignty oversees AI Office operations.
DG Connect drafts texts with input from national experts on the AI Board. The GPAI codes of practice are due to be finalised in the first quarter of 2026. The Act integrates with provisions of the Digital Services Act and the Data Act. Annual enforcement costs are projected at €4-10 billion Union-wide.
Global Standards Influenced by EU Approach

EU rules shape international frameworks, with references appearing in Japanese and South Korean legislation. United States bills in 2026 consider risk tiers, and G7 Hiroshima Process commitments address generative-AI risks.
China prioritises security in its AI governance, while Singapore and Canada implement tiered systems modelled on the EU design. Member states pursue cross-border harmonisation.
Consensus Challenges Facing April Deadline
Positions vary on expanding prohibitions and on fine-tuning allowances. Smaller member states seek support for enforcement capacity, while the larger economies advocate flexibility.
France wants stronger biometric safeguards, while the Nordic countries push to expand research carve-outs. The March Competitiveness Council will review the state of play. The timeline coincides with the Digital Decade assessment, which targets 75% of EU enterprises adopting AI by 2030.
Stakeholder Input Shaping Revisions
BusinessEurope endorses burden reductions for startups, while Parliament’s internal market committee tracks progress. Rights groups demand non-regression protections, and national parliaments evaluate subsidiarity. The EESC delivered opinions on SME impacts in 2025, and sandboxes in 15 member states supply deployment data.