September 5, 2025 — A year after the world’s first international treaty on artificial intelligence opened for signature, policymakers, legal experts, and civil society leaders gathered in Madrid to reflect on progress and chart the next phase of global AI governance.
Hosted by the Council of Europe, the conference marked the first anniversary of the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, a pioneering legal instrument designed to ensure that AI development aligns with fundamental rights and democratic values.
Delegates from over 50 countries discussed implementation strategies, ethical oversight, and the role of national legislation in complementing the treaty’s principles. Special focus was given to transparency in algorithmic decision-making, cross-border accountability, and AI’s impact on electoral integrity.
Speakers emphasized the need for inclusive governance, with civil society and private sector voices shaping how AI is regulated across jurisdictions. The event also previewed upcoming guidance on AI risk classification and human oversight standards, expected to be released later this year.
The Madrid conference reaffirmed Europe’s leadership in setting global norms for responsible AI—and its commitment to keeping human dignity at the center of technological progress.
What Is the AI Treaty?
The Council of Europe’s AI treaty, formally the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, is the world’s first legally binding international agreement on artificial intelligence.
- Adopted in May 2024 and opened for signature in September 2024, the treaty sets out rules to ensure that AI systems are developed and used in ways that respect human rights, democratic principles, and the rule of law.
- It covers the entire lifecycle of AI systems—from design and development to deployment and oversight.
- The treaty is technology-neutral: it does not regulate specific tools or platforms but sets out principles and safeguards that apply across all AI applications.
Who’s Involved?
- Drafted by the 46 member states of the Council of Europe, with input from observer states such as Canada, Japan, and the United States, as well as the European Union.
- Civil society, academia, and industry experts also contributed, making it a multi-stakeholder effort.
What Does It Require?
- Governments must ensure transparency, accountability, and human oversight in AI systems.
- It takes a risk-based approach, similar to the EU AI Act, and allows parties to impose bans or moratoria on AI uses deemed incompatible with human rights.
- Countries are expected to align their national laws with the treaty’s principles and report on implementation progress.
This treaty is a landmark step in shaping global AI governance, aiming to balance innovation with ethical responsibility.
Source: Council of Europe’s official AI treaty page.