Several global powers, including the United Kingdom, the United States, and the European Union, signed the world’s first artificial intelligence (AI) treaty in Vilnius, Lithuania on Thursday, under the auspices of the Council of Europe.

First adopted in May, the Framework Convention on Artificial Intelligence, or simply the ‘AI Convention,’ comes after intense discussions across 57 nation-states to explore potential risks, best practices, and other concerns regarding the emerging technology.

The Convention will also aim to protect the human rights of citizens as AI systems increasingly come into use.

However, it remains separate from the EU AI Act, which encompasses sweeping regulatory stipulations on AI within the EU’s borders, and it serves a broader range of markets, reports show.

This includes the way that AI systems are created, rolled out, and implemented across the single market in the European Union.

The Committee on Artificial Intelligence, formed in 2022 after three years of preliminary examination, drafted the document and, following its enactment, can adjust or amend its measures as needed.

The landmark document comes as the Council of Europe celebrates its 75th year of operations after its founding in 1949.

The Convention’s website also notes that Japan, Canada, Mexico, and the Holy See are Convention members. Countries such as Israel, Costa Rica, Australia, Argentina, Peru, and Uruguay are non-member observer states.

Sixty-eight global representatives from across “civil society, academia and industry,” along with numerous international organisations, have participated in developing the Convention’s regulatory framework.

It also establishes the Conference of the Parties as a protective mechanism for monitoring member-state compliance with the Convention. This is intended to “guarantee its long-term effectiveness” via public hearings, the Council of Europe added.

D×M also understands it is separate from the US Executive Order on Artificial Intelligence, which came into force in October last year.

Comments on the AI Convention

Shabana Mahmood, Justice Minister, United Kingdom, said in a statement quoted by Reuters that the Convention was a “major step” towards safely harnessing the emerging technologies.

Continuing, she said,

“This Convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.”

However, Francesca Fanucci, Legal Expert, European Center for Not-for-Profit Law Stichting, disagreed and told Reuters,

“The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability.”

She continued that the Convention exempts AI systems used for national security purposes and creates an imbalance between public and private sector monitoring, adding, “This double standard is disappointing.”

Thoughts on Europe’s AI Regulatory Future

The AI Convention is arguably the first major step towards getting like-minded nations on the same page. It will likely set the tone for a global AI legal framework, much as the Geneva Conventions did for Western human rights.

Among emerging technologies such as extended reality (XR) and robotics, AI remains a key component of many technology stacks and will continue to dominate conversations across markets.

Others have created monitoring instruments at the business and non-profit level, including the XR Association (XRA), the XR Safety Initiative (XRSI), the IBM and Meta-led AI Alliance, and the Google-led Secure AI Framework, among others.

Whether through Tesla’s xAI-backed Optimus robots or OpenAI’s venture into for-profit AI business models, governments now face the immense task of regulating an increasingly competitive landscape of AI solutions. The framework is a long time coming, but it will by no means be perfect in its initial phases.

It will require simplicity, oversight, balance, fairness, and enforcement.

Meta, Spotify CEOs Slam ‘Fragmented’ EU AI Policy

Fanucci’s statements are nothing new and come just weeks after Mark Zuckerberg, Founder and CEO, Meta, and Daniel Ek, Founder and CEO, Spotify, voiced similar concerns about Europe’s approach to AI innovation.

Both addressed the issues in a joint open letter on the alleged red tape created by Europe’s regulatory environment, which, from the two executives’ perspective, stifles innovation through excessive and unclear scrutiny.

The two said in a critical statement,

“[Europe’s] fragmented regulatory structure, riddled with inconsistent implementation, is hampering innovation and holding back developers. Instead of clear rules that inform and guide how companies do business across the continent, our industry faces overlapping regulations and inconsistent guidance on how to comply with them. Without urgent changes, European businesses, academics and others risk missing out on the next wave of technology investment and economic-growth opportunities.”

The letter also addressed issues with the General Data Protection Regulation (GDPR), stating EU privacy regulators were “creating delays and uncertainty” by failing to agree on how to apply laws.

The executives also explained that Meta had been instructed to delay training its Llama large language models (LLMs) on public Facebook and Instagram data due to disagreements among regulators.

The letter urged Europe to adopt a “new approach with clearer policies and more consistent enforcement.”

The NATO of Artificial Intelligence?

Hopefully, with the new Convention entering force, the EU can sit at the table with the United States and other stakeholders to address their most critical concerns on AI, particularly for their respective markets.

This will also see a globalisation of AI ethics and best practices, which may run contrary to the positions of some nation-states in the international community.

It is understood that all parties to the AI Convention will use it to regulate activities among individuals, non-state actors, businesses, and private and public sector entities. However, one can also expect a policy of ‘Westernised’ views on AI implementation, with the Convention acting as a gatekeeper or accusatory body against competitor and rival countries such as Russia, China, Iran, and North Korea, among others.

A great point of reference for further study on the matter is Abishur Prakash’s “next geopolitics” concept and his analysis of global tech power. Other crucial concepts include the D10 Strategic Forum, which has, in effect, passed the torch of tech ethics to the current AI Convention.

In conclusion, one must remain optimistic, if cautiously so, as the Convention enters force. Every action it takes will reveal whether global AI policy works cooperatively or fragments further, leading to greater breaks in continuity, as seen with the ongoing US-China trade and tech war.
