
AI Act: A European Framework for Safe, Ethical and Transparent Artificial Intelligence


On August 1, 2024, Regulation (EU) 2024/1689, widely known as the AI Act, officially entered into force. It marks a global first: a comprehensive, binding legal framework dedicated entirely to artificial intelligence.


What makes the AI Act unique is its risk-based approach: obligations vary depending on the level of risk that a particular AI system may pose to individuals, society, or fundamental rights.


Here is the timeline for its phased implementation:


  • From February 2, 2025:


    • A ban applies to unacceptable-risk AI systems (Art. 5);

    • A new AI literacy obligation is introduced for providers and deployers of AI systems (Art. 4).


  • From August 2, 2025:


    • Full application of requirements for general-purpose AI models (GPAI), including rules on transparency, technical documentation, safety, and performance monitoring (Arts. 52–53).


  • From August 2, 2026:


    • Rules for high-risk AI systems come into force (Chapter III), with an extended deadline until August 2, 2027 for those integrated into CE-marked products (Art. 113).




Rather than acting as a barrier to innovation, the AI Act represents a strategic opportunity: it empowers developers, users, and enterprises to build trustworthy, legally compliant, and competitive AI systems in both EU and global markets. The regulation aims to foster responsible innovation, increase transparency, and strengthen user and investor confidence across sectors.

 
 
 
