Artificial Intelligence Act
Ch. 1 of 7: The Legislative Timeline
On the 21st of April 2021, the European Commission formally proposed the Artificial Intelligence Act, marking the start of a legislative journey that would span nearly three years. The Council of the EU adopted its general approach on the 6th of December 2022, allowing formal negotiations with the European Parliament to begin. After three days of marathon talks concluding on the 9th of December 2023, the Council and Parliament reached a provisional agreement.
The law passed the European Parliament in plenary session on the 13th of March 2024, with 523 votes in favor, 46 against, and 49 abstentions. The Council approved the final version unanimously on the 21st of May 2024. Published in the Official Journal on the 12th of July 2024, the regulation entered into force twenty days later, on the 1st of August 2024.
Its provisions apply gradually over the following six to thirty-six months. Bans on unacceptable-risk systems take effect after six months. Codes of practice become applicable after nine months. Rules for general-purpose AI systems apply after twelve months. Certain obligations for high-risk AI systems have a thirty-six-month implementation window.
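As a rough illustration, the staggered deadlines above can be computed from the entry-into-force date. This is a minimal sketch that assumes each milestone falls exactly N calendar months after entry into force; the Act itself pins deadlines to specific dates, so treat the results as approximations, and the milestone labels as paraphrases rather than the regulation's own terms.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Naive month addition; assumes the day of month exists in the target month."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    return d.replace(year=year, month=month)

# Entry into force: twenty days after Official Journal publication.
entry_into_force = date(2024, 8, 1)

# Offsets in months, from the phased schedule described above (illustrative).
milestones = {
    "prohibitions on unacceptable-risk systems": 6,
    "codes of practice": 9,
    "general-purpose AI obligations": 12,
    "extended high-risk obligations": 36,
}

for label, months in milestones.items():
    print(f"{label}: around {add_months(entry_into_force, months)}")
```

Running the sketch places the first milestone in early February 2025 and the last in 2027, matching the six-to-thirty-six-month window the text describes.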
Four Tiers of Risk
The Act classifies non-exempt AI applications into four distinct levels according to their potential to cause harm. Applications posing unacceptable risk are banned outright unless specific exemptions apply. This category includes systems that manipulate human behavior and real-time remote biometric identification in publicly accessible spaces. Social scoring systems that rank individuals based on personal characteristics also fall under this prohibition.
High-risk applications must comply with strict security, transparency, and quality obligations, and undergo conformity assessments before entering the market and throughout their life cycle. Notable examples include AI used in health, education, recruitment, critical-infrastructure management, law enforcement, and justice. Certain deployments also require a Fundamental Rights Impact Assessment to identify potential harms before the system is put into use.
Limited-risk applications carry only transparency obligations: users must be informed that they are interacting with an AI system so they can make informed choices. This category covers tools that generate or manipulate images, audio, or video, such as deepfakes. Minimal-risk applications, such as video games and spam filters, remain unregulated; most AI applications are expected to fall into this category.
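The four-tier scheme above can be summarized as a small lookup, purely for illustration. The tier names, obligation summaries, and example mappings below are paraphrased from this chapter, not taken from the Act's own legal terminology.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels for the Act's four risk tiers."""
    UNACCEPTABLE = "banned outright, barring specific exemptions"
    HIGH = "conformity assessments plus security, transparency, quality duties"
    LIMITED = "transparency obligations only"
    MINIMAL = "unregulated"

# Example applications mentioned in the text, mapped to their tier.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "deepfake generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for app, tier in EXAMPLES.items():
    print(f"{app}: {tier.name} -> {tier.value}")
```

The mapping makes the regulatory logic concrete: the same classification question (how much harm can this application cause?) determines which set of obligations attaches to it.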