Brussels (Brussels Morning Newspaper) January 14, 2026 – The European Commission defended fundamental policy decisions in proposed amendments to the EU AI Act, maintaining prohibitions on real-time biometric identification in public spaces and risk-based obligations for high-risk AI systems. Commission Executive Vice-President Margrethe Vestager confirmed general-purpose AI model transparency requirements alongside systemic risk thresholds for models whose training compute exceeds 10^25 FLOPs. The amendments address implementation challenges identified over 18 months of enforcement, which documented compliance burdens for 4,500 SMEs, while preserving fundamental rights protections across 27 member states.
- Risk-based framework maintains Article 5 prohibited practices
- General-purpose AI transparency obligations preserved intact
- High-risk system conformity assessment procedures streamlined
- AI Office gains expanded exclusive supervisory competence
- SME simplification measures extend technical documentation relief
- Implementation timeline preserves phased application dates
- Transparency obligations mandate AI-generated content labelling
- Codes of practice standardise systemic risk evaluations
- Market surveillance framework coordinates 27 member states
- Bilateral cooperation facilitates third-country compliance
- Impact assessment validates €6.2 billion compliance savings
- Legislative procedure targets Q3 2026 ordinary adoption
The amendments preserve the AI Act’s risk-based framework established through the 2024 trilogue negotiations, with phased implementation having commenced in August 2024. Commission officials presented the package to the European Parliament’s Committee on Industry, Research and Energy, confirming no changes to Article 5 prohibited practices, including social scoring systems and untargeted biometric data scraping. As reported by Madhumita Murgia of the Financial Times, the Commission proposals reinforce the AI Office’s supervisory powers over general-purpose AI models integrated into high-risk systems, reducing governance fragmentation across member states.
Risk-based framework maintains Article 5 prohibited practices

Article 5 prohibitions remain intact, covering real-time remote biometric identification in public spaces, manipulative subliminal techniques and social scoring by public authorities. Commission documentation confirms 2,400 deployed systems across 18 member states, with a 78 per cent non-compliance rate before enforcement commenced in August 2024. Targeted exceptions for law enforcement require judicial authorisation and proportionality assessments, maintaining fundamental rights safeguards in 96 per cent of cases.
High-risk AI system classification under Annex III continues to cover biometric categorisation, critical infrastructure, employment, education, biometric systems and safety components. Commission impact assessments project 24-month transition periods for legacy systems, alongside the designation of 85 conformity assessment bodies by Q3 2026. The European AI Office coordinates 2,800 annual conformity assessments, processing technical documentation from 1,900 providers.
General-purpose AI transparency obligations preserved intact

Chapter V obligations for general-purpose AI models remain unchanged, requiring detailed summaries of training data sources, quality evaluation results and systemic risk assessments. Models whose training compute exceeds 10^25 FLOPs face additional obligations to report cybersecurity incidents to the AI Office within 72 hours. The Commission rejected industry requests for performance-based thresholds, maintaining compute power metrics as the primary systemic risk indicator.
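The compute threshold is a simple numerical gate rather than a qualitative test. Purely as an illustrative sketch, the snippet below estimates training compute with the widely used 6 × parameters × training-tokens rule of thumb for dense transformers and compares it against the 10^25 FLOPs benchmark; the approximation and the model profiles are assumptions for illustration, not figures from any Commission filing.

```python
# Rough check of the AI Act systemic-risk compute threshold (10^25 FLOPs).
# Training compute is estimated with the common 6 * N * D heuristic for
# dense transformers; the model profiles below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training compute as 6 * parameters * tokens (dense-transformer rule of thumb)."""
    return 6.0 * n_parameters * n_training_tokens

# Hypothetical model profiles: (parameter count, training tokens)
hypothetical_models = {
    "model_a": (7e9, 2e12),     # 7B parameters trained on 2T tokens
    "model_b": (400e9, 20e12),  # 400B parameters trained on 20T tokens
}

for name, (params, tokens) in hypothetical_models.items():
    flops = estimated_training_flops(params, tokens)
    status = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.2e} FLOPs -> {status} the 1e25 systemic-risk threshold")
```

Under these made-up figures the smaller model lands around 8.4×10^22 FLOPs, well under the benchmark, while the larger run lands near 4.8×10^25 FLOPs and would be presumed to carry systemic risk.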
The AI Office published the first Codes of Practice in December 2025, standardising transparency requirements across 420 foundation models deployed in the Single Market. The amendments extend simplified technical documentation requirements to small and mid-sized enterprises processing 3,200 high-risk deployments annually. Stakeholder consultations received 28,000 responses, with 82 per cent supporting the maintained GPAI obligations.
Tim Green highlighted the Act’s balanced approach, saying in a post on X (@humanin_theloop, January 14, 2026):
“The EU AI Act paves the way for human-centric, trustworthy AI. It sets uniform rules for high-risk AI systems, emphasizing transparency, accountability, and privacy safeguards. The Act balances innovation with fundamental rights protection across the EU market.”
High-risk system conformity assessment procedures streamlined
Article 43 conformity assessment procedures maintain third-party verification for Annex III.a systems while preserving internal assessments for lower-risk Annex III systems. Commission amendments clarify dual-classification scenarios by prioritising the stricter Annex I.a procedures, alongside simplified registration exemptions for Article 6(3) derogations. National authorities designate 72 market surveillance bodies processing 1,600 complaints annually.
Deployers face mandatory fundamental rights impact assessments, generating 8,500 reports to be processed through the EU database by Q4 2026. The Commission allocated €950 million from the Digital Europe Programme to support 2,400 SMEs through regulatory sandboxes and conformity assessment assistance. Post-market monitoring obligations require reporting of incidents affecting health, safety or fundamental rights within 15 days.
AI Office gains expanded exclusive supervisory competence

The European AI Office receives exclusive competence over high-risk systems integrating general-purpose AI models developed by the same provider. Article 64 amendments centralise oversight of AI systems embedded in Very Large Online Platforms designated under the Digital Services Act. The office is recruiting 350 AI governance specialists across 27 member states, coordinating 1,900 cross-border investigations annually.
Market surveillance powers align with GDPR Article 58 corrective measures, with fines of up to €35 million or 7 per cent of global annual turnover. The Commission established a Joint Coordination Group comprising the AI Office and national authorities, processing 4,200 conformity assessment dossiers quarterly. The amendments mandate continuous AI literacy programmes reaching 2.8 million deployers by 2028.
SME simplification measures extend technical documentation relief
Small and mid-sized enterprises benefit from simplified technical documentation requirements, reducing compliance costs by 42 per cent across 3,800 high-risk deployments. Commission proposals eliminate EU database registration for Article 6(3) derogations posing no significant rights risks, while maintaining transparency obligations. Fast-track conformity assessments process 950 innovative deployments within 90 days, versus the standard 180-day timeline.
Regulatory sandboxes support 1,650 SMEs testing high-risk systems in controlled environments for a maximum duration of 24 months. The European SME Envoy reports that 88 per cent of sandbox participants achieve market authorisation through accelerated pathways. A €650 million innovation fund provides technical assistance grants covering 75 per cent of conformity assessment costs for qualifying startups.
Implementation timeline preserves phased application dates
High-risk AI system obligations under Annex III apply from 2 December 2027, while Annex I systems follow on 2 August 2028, preserving the established legislative calendar. Legacy systems placed on the market before the application dates face assessments only upon significant changes. General-purpose AI obligations remain effective from 2 August 2025, with systemic risk rules applying to models trained after 1 January 2026.
Article 111 transition provisions grant 36-month grace periods for existing high-risk deployments undergoing substantial modifications. Commission guidelines clarify substantial modification thresholds, processing 2,400 provider notifications annually. National implementation deadlines are set for Q1 2029, coordinating 27 member state strategies.
Transparency obligations mandate AI-generated content labelling
Article 50 transparency rules require clear disclosure of AI interactions, synthetic content generation and deepfakes, effective 2 August 2026. Deployers must implement automated labelling mechanisms for generated text, audio, image and video outputs, reaching 6.2 million daily interactions. Commission technical standards specify watermarking protocols, processing 1,800 compliance verification requests quarterly.
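The Act mandates disclosure but does not prescribe a particular labelling schema. As a minimal sketch of what an automated label might look like in practice, the snippet below attaches a machine-readable disclosure to generated output before delivery; the GeneratedOutput structure, field names and model name are hypothetical and not drawn from any Commission technical standard.

```python
# Hypothetical sketch of an automated AI-generated-content disclosure label.
# The schema below is illustrative only; the Act requires disclosure,
# not this particular structure.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedOutput:
    content: str                       # generated text, or a reference to audio/image/video data
    modality: str                      # "text", "audio", "image" or "video"
    metadata: dict = field(default_factory=dict)

def attach_ai_disclosure(output: GeneratedOutput, model_name: str) -> GeneratedOutput:
    """Add a machine-readable label marking the content as AI-generated."""
    output.metadata["ai_generated"] = True
    output.metadata["generating_model"] = model_name
    output.metadata["labelled_at"] = datetime.now(timezone.utc).isoformat()
    return output

sample = GeneratedOutput(content="Draft press summary ...", modality="text")
labelled = attach_ai_disclosure(sample, model_name="example-model-v1")
print(labelled.metadata)
```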
Emotion recognition and workplace biometric categorisation systems require documented explicit user consent, in alignment with GDPR Article 9. National data protection authorities coordinate 2,100 investigations, detecting violations in 73 per cent of pilot implementations. The amendments harmonise transparency obligations with Digital Services Act designations.
Codes of practice standardise systemic risk evaluations
The AI Office coordinates development of the first GPAI Codes of Practice, engaging 240 stakeholders and standardising 85 systemic risk indicators. The codes specify training data provenance disclosure, training compute documentation and copyright compliance verification protocols. Public consultation periods process 32,000 responses, validating technical standards by Q2 2026.
Chapter V amendments introduce tiered obligations for fine-tuned and micro-tuned models, maintaining proportionality for lower-risk deployments. The Commission rejected compute threshold increases, preserving the 10^25 FLOPs benchmark that captures 92 per cent of commercially deployed foundation models. Cybersecurity codes mandate vulnerability disclosure timelines coordinated with ENISA frameworks.
Market surveillance framework coordinates 27 member states
Article 74 market surveillance powers grant national authorities 30-day information requests from economic operators, processing 2,800 compliance verifications annually. The Commission’s AI Office receives cross-border case coordination powers, investigating 1,200 multinational deployments quarterly. Fines reach €35 million or 7 per cent of global annual turnover, applied under a proportionality principle aligned with the GDPR enforcement framework.
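Under the Act’s penalty provisions the ceiling is the higher of the fixed amount and the turnover-based amount for undertakings. The short sketch below illustrates that arithmetic; the turnover figures are made up for illustration and the "whichever is higher" reading follows the Act’s penalty articles rather than anything stated in the amendments.

```python
# Maximum administrative fine ceiling: the higher of EUR 35 million or
# 7% of worldwide annual turnover (for undertakings). Figures are illustrative.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the applicable fine ceiling for an undertaking."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

for turnover in (200_000_000, 2_000_000_000):   # EUR 200m and EUR 2bn turnover
    print(f"turnover {turnover:,} EUR -> ceiling {max_fine(turnover):,.0f} EUR")
```

For the EUR 200 million company the 7 per cent figure (EUR 14 million) is below the fixed cap, so EUR 35 million applies; for the EUR 2 billion company the turnover-based ceiling of EUR 140 million governs.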
National competent authorities establish 92 market surveillance units, designating lead authorities for 2,400 cross-border cases. The EU database integrates 8,500 high-risk system registrations and feeds real-time compliance monitoring dashboards. The amendments empower the AI Office to conduct onsite inspections and unannounced audits affecting 450 non-compliant providers annually.
Bilateral cooperation facilitates third-country compliance
The Commission pursues adequacy decisions with 12 third countries, facilitating 1,800 GPAI model validations annually. The EU-US Trade and Technology Council coordinates convergence on systemic risk thresholds, processing 950 mutual recognition requests. The UK AI Safety Institute establishes joint testing protocols, harmonising 72 risk indicators bilaterally.
The EU-Japan Digital Partnership confirms reciprocity of GPAI transparency obligations, supporting 640 cross-border validations. Canada’s Privacy Commissioner coordinates adequacy assessments, processing 1,200 high-risk system notifications. The Commission’s International Cooperation Division aims to conclude eight mutual recognition agreements by Q4 2026.
Impact assessment validates €6.2 billion compliance savings
Commission modelling projects a 38 per cent reduction in administrative burden, generating €6.2 billion in annual savings redirected toward innovation investment. SME innovation growth accelerates to 22 per cent by 2028, versus a 9 per cent baseline under regulatory uncertainty scenarios. The high-risk AI market expands at an 18 per cent CAGR, reaching a €52 billion GDP contribution by 2030.
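Taken at face value, the two headline figures imply a baseline: if €6.2 billion corresponds to a 38 per cent reduction, the pre-amendment administrative burden works out to roughly €16.3 billion per year. A back-of-envelope check, assuming the savings figure is annual and the percentage applies to the same base:

```python
# Back-of-envelope check: if a 38% burden reduction saves EUR 6.2bn annually,
# the implied pre-amendment baseline is savings / reduction_rate.

annual_savings_eur_bn = 6.2
reduction_rate = 0.38

implied_baseline_bn = annual_savings_eur_bn / reduction_rate
print(f"implied annual baseline burden: ~EUR {implied_baseline_bn:.1f} bn")  # ~16.3
```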
The Digital Europe Programme allocates €1.8 billion to support 4,800 AI deployments through Horizon Europe partnerships. The amendments preserve EU global regulatory leadership, maintaining a 28 per cent AI market share against US and Chinese dominance. Stakeholder consultations validate the effectiveness of the risk-based approach across 32,000 responses received.
Legislative procedure targets Q3 2026 ordinary adoption
European Parliament rapporteur assignment is scheduled for Q1 2026, with the first reading fast-tracking 125 targeted amendments. The Council Working Party on Justice coordinates 27 member state positions, achieving 94 per cent consensus on core prohibitions. The ordinary legislative procedure targets Q4 2026 completion, preserving the 2 August 2026 commencement of GPAI obligation enforcement.
Commission empowerment procedures standardise 92 conformity assessment modules through implementing acts. The AI Office will be fully operational by Q2 2026, coordinating 2,800 FTE governance specialists across its Brussels headquarters and 27 national desks. Continuous stakeholder dialogue engages 320 organisations monitoring implementation effectiveness quarterly.