Regulation of artificial intelligence | HearLore
— Ch. 1 · Global Legislative Surge —
Since 2016, the number of countries introducing AI-related laws has risen sharply. Stanford University's 2025 AI Index reports a ninefold increase in legislative mentions of AI across 75 nations compared with 2016 levels. In 2024 alone, U.S. federal agencies introduced 59 new regulations on artificial intelligence, more than double the previous year's count. State-level activity surged even faster: nearly 700 bills came before legislatures in 45 American states during 2024, up from just 191 bills in 2023. The pace suggests governments worldwide are racing to establish control over rapidly evolving technologies.

Public opinion varies widely by region. A 2022 Ipsos survey found that 78% of Chinese respondents believed AI products offered more benefits than drawbacks, while only 35% of Americans held the same view. These divergent attitudes complicate efforts to create unified global standards. Prominent industry leaders have also weighed in: Elon Musk and other tech figures signed an open letter in 2023 calling for a pause on training more powerful AI systems, while Mark Zuckerberg warned that premature regulation could stifle innovation. The tension between safety concerns and economic growth drives much of this legislative activity.
Hard Versus Soft Law Debates
Legal scholars distinguish between binding hard law and flexible soft law when regulating artificial intelligence. Hard law refers to statutes with enforceable penalties and clear jurisdictional authority, but traditional legislation often struggles to keep pace with rapid technological change. This pacing problem leaves regulators unable to address emerging risks effectively, and some experts argue that existing agencies lack the scope to oversee diverse AI applications.

Soft law offers an alternative path through voluntary guidelines and ethical principles, which organizations deploying AI can adopt without immediate legal mandates. Cason Schmit, Megan Doerr, and Jennifer Wagner proposed using intellectual property rights to create quasi-governmental oversight: their model licenses AI models under terms that require adherence to specific ethical practices. Such soft mechanisms provide flexibility but often lack meaningful enforcement. A 2020 meta-review by the Berkman Klein Center identified eight core principles, including privacy, accountability, and transparency, which form the basis for many current guidelines. Public administration strategies now link AI law directly to workforce transformation and social trust. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. The debate continues over whether strict rules or adaptable guidance better serves the public interest.
When did the European Union adopt its Artificial Intelligence Act and when did it enter into force?
The European Union adopted its Artificial Intelligence Act in May 2024. This legislation entered into force on the 1st of August 2024.
How many countries introduced AI-related laws by 2025 according to Stanford University's report?
According to Stanford University's 2025 AI Index, legislative mentions of AI across 75 nations increased ninefold between 2016 and the report's publication. The number of countries introducing AI-related laws has risen sharply during this period.
What are the four risk levels defined in the European Union Artificial Intelligence Act?
The act categorizes AI applications into four levels: minimal, limited, high, and unacceptable risk. High-risk systems operating in sectors like healthcare, education, and public safety face stringent requirements under these rules.
Which countries were first signatories to the Framework Convention on Artificial Intelligence and Human Rights adopted on the 17th of May 2024?
First signatories included Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union. These entities joined the Council of Europe initiative following negotiations that began in September 2022.
When did the Global Partnership on Artificial Intelligence launch and how many members does it have as of 2023?
The Global Partnership on Artificial Intelligence launched in June 2020 with 15 founding members. By 2023, membership expanded to 29 countries including India, Japan, Mexico, and Singapore.
The European Union adopted its Artificial Intelligence Act in May 2024 after years of negotiation, and the legislation entered into force on the 1st of August 2024, creating a risk-based framework for AI systems placed on the EU market. The act categorizes AI applications into four levels: minimal, limited, high, and unacceptable risk. High-risk systems operating in sectors like healthcare, education, and public safety face stringent requirements: organizations must ensure data governance, human oversight, and algorithmic robustness before deployment. General-purpose AI models trained with more than 10^25 floating-point operations (FLOP) undergo additional evaluation. Specific prohibitions include real-time remote biometric identification, with narrow exemptions for law enforcement, and emotion recognition also faces restrictions under the new rules.

Critics argue that compliance costs may delay certain designs and increase overhead for developers. The European Court of Auditors released a report on the 29th of May 2024 noting poor coordination between EU measures and national implementations. Despite these concerns, the act establishes a precedent for other jurisdictions. Its special provisions for general-purpose AI become enforceable on the 2nd of August 2025. The framework aims to foster ethical use while maintaining strategic autonomy within Europe.
National Strategies Compared
Major economies have adopted distinct approaches to artificial intelligence regulation. The United States follows a market-driven model emphasizing sector-specific guidelines rather than comprehensive federal mandates. China advances a state-driven strategy that keeps government control over data and company operations. The European Union pursues a rights-based approach prioritizing human rights and democratic values.

Canada launched its Pan-Canadian Artificial Intelligence Strategy in 2017 with CA$125 million in funding, and by November 2024 it had announced a CA$2.4 billion investment package including sovereign computing infrastructure. Australia introduced voluntary safety standards in August 2024, followed by proposals for mandatory guardrails. Brazil passed a revised bill in May 2023 requiring risk assessments before deployment. Colombia issued CONPES 4144 in 2025 as part of its national policy on AI adoption.

These varied strategies reflect differing cultural values and economic priorities: some nations prioritize innovation while others emphasize social protection, and the lack of consensus complicates international cooperation. In November 2023, the UK hosted an AI Safety Summit at Bletchley Park, aiming to position itself as a global leader. Yet both the UK and the US declined to sign an international agreement at the Paris summit in 2025, with their governments citing insufficient practical clarity and unresolved national security questions.
International Governance Bodies
Global organizations play critical roles in shaping artificial intelligence governance frameworks. The Global Partnership on Artificial Intelligence (GPAI) launched in June 2020 with 15 founding members, and its secretariat resides within the OECD in Paris, France. By 2023, membership had expanded to 29 countries including India, Japan, Mexico, and Singapore. UNESCO commenced a two-year process in November 2019 to develop a global standard-setting instrument on AI ethics, an effort that culminated in adoption by the General Conference in November 2021.

The Council of Europe initiated treaty negotiations in September 2022 involving its 46 member states plus additional partners. On the 17th of May 2024, it adopted the Framework Convention on Artificial Intelligence and Human Rights. First signatories included Andorra, Georgia, Iceland, Norway, Moldova, San Marino, the United Kingdom, Israel, the United States, and the European Union. These bodies aim to establish common ground across regional approaches. Academic initiatives like the Munich Convention on AI, Data and Human Rights call for binding international agreements protecting human rights, and UNICRI's Centre for AI and Robotics issued reports on law enforcement applications in April 2019 and May 2020. While progress exists, institutional capabilities remain limited regarding existential risks from advanced systems.
Autonomous Weapons Regulation
Legal questions surrounding lethal autonomous weapons systems (LAWS) have been debated at the United Nations since 2013, within the context of the Convention on Certain Conventional Weapons. Informal expert meetings took place in 2014, 2015, and 2016, leading to the appointment of a Group of Governmental Experts in 2016; guiding principles affirmed by this group were adopted in 2018. China published a position paper in 2016 questioning the adequacy of existing international law, marking the first time a permanent Security Council member had broached the issue directly.

The Campaign to Stop Killer Robots advocates moratoriums or preemptive bans on development, while the U.S. government maintains that current humanitarian law suffices for regulation. Congressional Research Service data from 2023 indicates that no LAWS exist in American inventories, yet policy does not prohibit their creation. Academics urge nations to establish regulations similar to those governing other military industries, and recent research highlights AI's role as a new factor in cyber defense strategies. Initiatives emphasize human rights compliance alongside technological advancement, but the tension between innovation and ethical constraints remains unresolved globally.