The Case for Much More Inclusive and Democratic Treaty-Making for AI
The current dominant path toward treaty-making and global governance of AI centers on a tight, US-led coalition of "liberal democracies" and rich autocracies, pitted against dozens of "undemocratic" states, including China. A more inclusive, broader-scoped and globally democratic approach may be required to reliably tackle the immense risks to human safety, prevent undemocratic concentrations of power, and realize the astounding opportunities for all.
It took only eighteen months after the release of ChatGPT for a wide majority of experts, world leaders and citizens to become cognizant of the immense risks AI poses to human safety, the global concentration of power, and the potential distortion of human nature.
At the same time, the benefits of AI are becoming increasingly apparent, offering extraordinary opportunities to accelerate scientific advancement, generate abundance, eradicate poverty, and improve education and the quality of life itself.
Reflecting the urgency and scale of this issue, hundreds of AI experts and influential figures have called for a robust and inclusive international treaty for AI, and so has the Pope. Such an idea is even shared by 77 percent of U.S. citizens.
Shortcomings of Current Global Governance Initiatives for AI
Historically, the security agencies of a handful of countries have safeguarded humanity against the dangers and misuse of powerful technologies. Although major catastrophes have so far been averted, the risks associated with nuclear and biological weapons are greater today than ever before.
While the recent Council of Europe treaty on AI and other initiatives for international coordination, such as the AI Summit series, are praiseworthy, they fall far short of addressing the principal risks to safety and of concentration of power, and they lack mechanisms for future improvement.
This dominant path, again, centers on a tight, US-led coalition of "liberal democracies" and rich autocracies pitted against dozens of "undemocratic" states, including China.
It is inadequate for three reasons: (1) reliably preventing dangerous AI proliferation will require unprecedented global compliance; (2) if successful, it would likely amount to an immense, undemocratic concentration of power and wealth in very few states, firms and/or agencies; (3) the so-called "undemocratic" states represent a rich variety of entrenched, legitimate human political and cultural models, often comparable in their degree of democraticness to the Western one.
Hence, we may need to consider a much more inclusive and globally democratic approach, one that faces head-on the unprecedented levels of global coordination required.
The Harnessing AI Risk Initiative
In March 2023, we launched a groundbreaking initiative designed to facilitate a comprehensive, binding and far-reaching treaty for artificial intelligence and digital communications.
Unlike the prevailing approaches, which rely heavily on a sequence of unstructured summits, unanimous declarations of intent and vetoes, we are assembling an open coalition of NGOs and states committed to a treaty-making process that is far more efficient, democratic, and directly participatory for world citizens.
We draw direct inspiration from the intergovernmental constituent assembly model, pioneered when a handful of US states convened the Annapolis Convention of 1786, a process that culminated in the drafting of the US federal constitution in 1787 and its ratification in 1788.
The mandate of the assembly will draw on the Baruch Plan, proposed by the USA to the UN in 1946 to address the emerging risks and opportunities of nuclear technology through the creation of a new, powerful, federal global organization.
Strategy
Our strategy is centered on broadening and strengthening our coalition by actively engaging with states, NGOs, industry experts and companies.
This will be achieved through a series of events and summits in Geneva and other locations, beginning with the 1st Harnessing AI Risk Summit in November 2024.
The first seven states and private AI labs to join the initiative will enjoy substantial but temporary economic and political advantages, and will agree on an initial version of the rules for the constituent assembly.
Considering the vast disparity in power between states, particularly in AI and more broadly, and recognising that three billion people are illiterate or lack internet access, we foresee that voting power in such an assembly will initially be weighted by population and GDP.
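To make the weighting idea concrete, here is a minimal sketch in Python. The initiative does not specify a formula, so this assumes one plausible scheme: each participant's weight is the average of its share of the participants' total population and its share of their total GDP. The function name, the 50/50 blend, and the figures are all hypothetical illustrations, not the initiative's actual rules.

```python
def voting_weights(states):
    """states: dict mapping state name -> (population, gdp_in_usd).

    Returns a dict of voting weights that sum to 1.0, assuming
    (hypothetically) an equal blend of population share and GDP share.
    """
    total_pop = sum(pop for pop, _ in states.values())
    total_gdp = sum(gdp for _, gdp in states.values())
    return {
        name: 0.5 * (pop / total_pop) + 0.5 * (gdp / total_gdp)
        for name, (pop, gdp) in states.items()
    }

# Purely illustrative figures for three hypothetical participants.
example = {
    "A": (1_400_000_000, 18e12),
    "B": (60_000_000, 2e12),
    "C": (330_000_000, 25e12),
}
w = voting_weights(example)
assert abs(sum(w.values()) - 1.0) < 1e-9  # weights are normalized
```

Any other blend (e.g. weighting population more heavily, or adding a per-state floor) fits the same structure; the choice of blend is exactly the kind of rule the constituent assembly would have to negotiate.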
AI Superpowers' Veto?
While the participation of the US and China is crucial for achieving the initiative's AI safety goals, other states will have very strong incentives to join beforehand.
By incorporating in the assembly's mandate the creation of a state-of-the-art, public-private, $15+ billion Global Public Benefit AI Lab, along with a "mutually dependent" and eventually autonomous supply chain, participating states and AI labs will secure vast economic and political benefits.
They will gain cutting-edge industrial AI capabilities, digital sovereignty, political leadership, and enhanced negotiating power vis-à-vis other, less inclusive global governance initiatives.
Participation will remain open to all states at all stages of the process, including during the mandatory, periodic statutory reviews of the treaty's charter.
Partners, Advisors, Coalition and Movement
We have built and are extending a vast network of advisors and partners around our coalition to attract a critical mass of states to co-promote the initiative.
A wide and expanding number of world-renowned experts, distinguished individuals, diplomats and NGOs will participate as speakers at our 1st Harnessing AI Risk Summit this November in Geneva, as speakers at its hybrid Pre-Summit held alongside the G7 Summit in Italy this June, or as members of an emerging coalition around the Open Call for the Harnessing AI Risk Initiative.
Over the last three months, we have held bilateral and multilateral meetings with several interested states, especially from Africa, including a number of ambassadors to the UN in Geneva.
Additionally, we have received formal interest from the UN mission of a regional intergovernmental organization comprising over 30 member states.
Since December, we have held discussions with three of the top five AI labs regarding their participation in the Global Public Benefit AI Lab.
Call to Action
If you are convinced of the merit of our initiative, then I'd love to explore ways you could participate or help. As we are gaining considerable traction and have just started formally pursuing funding, we are at a pivotal moment, and your support could make a significant difference:
Join our Coalition: Read and sign our Open Call for the Harnessing AI Risk Initiative, and/or sign up to our newsletter and follow us on LinkedIn.
Participate in our Events: Apply to participate as a partner, speaker or sponsor in our 1st Harnessing AI Risk Summit, this November in Geneva, or in our Pre-Summit held alongside the G7 Summit next week.
Donate: Your financial contribution, no matter the size, will directly support our efforts to build a safer future with AI.
Introduce Us to Donors: If you know individuals or organizations passionate about AI, ethics, and global governance, please introduce us. Personal recommendations can open doors to valuable partnerships.
Advise Us: Apply to join our advisory boards if you have relevant skills, and help shape and drive the initiative.