Towards an Open Transnational Constituent Assembly for AI and Digital Communications
We are at the dawn of a new Era. A consensus is fast emerging that AI poses immense risks to human safety and of concentration of power and wealth, and that only powerful new global intergovernmental organizations can enable us to manage those risks, and so benefit from AI's great opportunities.
Current proposals, however, fall short in scope and detail, and their associated "constituent" initiatives lack inclusivity, participation, neutrality, detail and urgency.
The historical success of constituent assemblies in forming enduring and trusted governance structures suggests their potential utility here, and possibly their inescapable necessity.
The mandate, composition, and communications contexts of such assemblies are paramount in maximizing their success and mitigating the great risks. A key role awaits a few NGOs and courageous nations in properly catalyzing and jump-starting such processes.
In recent months, top AI scientists and CEOs have shocked the world with calls for new global institutions to tackle what they believe to be immense safety risks posed by AI.
While most heads of state have been hesitant, and the two AI superpowers are just starting to hint at their strategies in adversarial terms, the UN Secretary-General's Tech Envoy is convening a forum for member states to create new intergovernmental organizations to tackle the challenge, framed within a wider governance of digital communications.
In a shocking reversal of roles, OpenAI's CEO Sam Altman called last March for the convening of a global democratic constituent assembly for the governance of AI, akin to the U.S. Constitutional Convention of 1787. He followed up with calls for participatory democratic global governance of AI in one interview, and then another. Echoing similar suggestions by Anthropic's CEO Dario Amodei, he even pledged to convert OpenAI's current governance structure into a participatory democratic global body.
Is he crazy, joking or lying?
Most Western media seem to think he is being disingenuous. We believe the contrary may be true: Altman may be the sanest and most conscientious among us. His calls have similarities to those of a top US business leader in 1946, Bernard Baruch, who put forth a plan for the creation of a new intergovernmental organization to assume global federal control of all nuclear weapons and energy capabilities, which US President Truman adopted as the formal US proposal to the UN: the Baruch Plan.
Five days later, Russia proposed an even more ambitious Gromyko Plan, for total nuclear disarmament and the centralization of nuclear energy. No agreement ensued. Coordination among intelligence agencies tried to fill the gap, and weaker institutions followed only 10 years later, with the IAEA and CERN. We have managed to avoid nuclear war so far, but we also came perilously close to it several times through accidents and near-misses, and the nuclear risk is higher today than it ever was.
While those plans failed, the fact that the US and Russia, given the size of the risk, negotiated proposals for structural global cooperation that appear exceedingly radical today should raise our hopes for AI.
China is today considered a few years behind the US in AI, much as Russia lagged behind in nuclear weapons in 1946. Highly developed nations like Russia, Japan or Israel, and even large private groups, could develop globally dangerous AIs in just a few years, leveraging open-source AI science with or without access to US or Chinese supply chains. Given that Putin stated that the nation that leads in AI "will rule the World," even he may agree to a "Baruch or Gromyko Plan for AI," under which all nations would come to enjoy AI's national security and economic benefits.
We may have a second opportunity, this time for AI. But this time around, success by any measure will require the wide participation of non-superpowers and world citizens, because the power to define the safety controls of AI - unlike in the case of nuclear technologies - is deeply intertwined with competitive advantage in its economic exploitation and with shaping the future of humanity.
If we succeed, the same governance model could be extended to other dangerous technologies, like nuclear and bioweapons. Even if the chances of success were just 1%, trying would still be very much worth it. In fact, it may be our greatest duty.
A Pivotal Crossroads
Today, the stakes are even higher than in 1946. We are at the dawn of a new Era for humanity, and fast approaching the brink of a precipice. The recent shocking acceleration of AI has turned the next few years into the most consequential in the history of humanity and of life on Earth. It's a "make it or break it" moment for humanity, in which middle-ground outcomes are highly unlikely.
Frontier AIs are expected to keep expanding their capabilities tenfold annually, which, if sustained, compounds to a thousandfold increase in just three years. And that is based on growth in investments and computing power alone, without accounting for AI's increasing ability to self-improve and to multiply the productivity of its developers.
Most experts agree AI poses enormous risks of concentration of power and wealth, disastrous misuse and accidents, and even loss of human control and extinction, over a time horizon of just a few years.
We must succeed in creating new global institutions that can reliably ensure that all AIs worldwide are both human-controllable, to radically mitigate the risks of misuse, accidents and loss of human control, and humanity-controlled, to ensure their accountability to humanity, nations and individuals, according to the subsidiarity principle.
If we succeed, the opportunities are similarly gigantic. New global governance models for dangerous technologies, combined with scientific progress turbo-charged by human-controllable AI, could easily result in unimagined improvements in safety and wellbeing, peace and stability, the eradication of scarcity and hunger, great advances in physical and mental health, solutions to climate change, breakthroughs such as nuclear fusion, and much more for generations to come.
It's a huge challenge, and time is scarce. But we are not starting from scratch, as we can learn from the history of the Baruch Plan, as analyzed by the current Head of AGI Governance and Strategy of Google DeepMind, Allan Dafoe.
We can do it. But what is our best shot? We need a plan!
The Roadmap is Clear, but No One is Leading
Everyone agrees the challenge is global, affecting nearly all aspects of human life, and it is widely accepted that only robust new international institutions can effectively address it. But the initiatives and plans put forth by the AI superpowers are lacking in many respects.
While global experts in AI, industry leaders and the UN Secretary-General have loudly called for new global bodies, similar to those for nuclear weapons and energy, to master AI's risks and rewards, our political leaders seem paralyzed: divided, hesitant, unable or unwilling to lead a truly global response.
This is highlighted by the United States-backed AI Safety Summit, slated for next November in the United Kingdom and reserved for "key" nations and Western AI firms. The Summit must be hugely praised for placing "loss of control risk" front and center on the agenda, at a time when most voters and heads of state have yet to grasp how serious the safety threat is.
It rightly emphasizes the rare expertise and agility needed for effective, urgent action on the safety front, but it is not even pretending to be an inclusive global leadership initiative, appearing aimed instead at expanding US geopolitical power and propping up the UK economy.
As it stands, it appears set to exacerbate rather than mitigate the concentration of power and wealth, deepening the mistrust of world nations and citizens, and so making compliance by many nations with the likely upcoming worldwide AI safety and non-proliferation rules much less likely.
Meanwhile, the 136 member states of the G77 stated that "all countries should be able to participate extensively in the global governance of AI." China, while providing no details or safeguards, launched its Global AI Governance Initiative with similarly praiseworthy rhetoric, mentioning safety risks but saying nothing of loss-of-control risks.
Contrary to what is widely reported, it was a loose coordination among intelligence agencies, and not the International Atomic Energy Agency (IAEA), that was primarily responsible for countering nuclear proliferation and other safety hazards, especially in the first decades. So it is concerning that, to this day, intelligence agencies are not part of the discussion, and that the first public appearance of the Five Eyes, the leading Western intelligence agencies' club, this week in Silicon Valley, was focused on a supposed surge in Chinese espionage rather than on AI and other shared concerns.
In a reversal of roles, as we’ve seen above, leading AI firms like OpenAI and Anthropic called for democratic and participatory global governance of AI, while Google DeepMind and prominent AI researchers have published highly detailed analyses of the possible creation of four new intergovernmental organizations to regulate and govern AI globally.
Building, Fast and Together
We need to break through this gridlock, to shape the unstoppable dawn of AI in humanity’s best image.
The AI superpowers, China and the US, appear locked in an economic and military AI arms race, driven by narrow interests and entrenched in a rhetoric of mistrust. They show no serious shared plans to avert AI safety risks, nor do they provide concrete guarantees about the equitable sharing of the huge benefits and power that will derive from AI.
Intergovernmental organizations like the EU, UN, G7, G20 and G77 have issued statements of principle, but have once again proven to be mere forums, unable even to propose what is needed, due to their unanimity- and veto-based decision-making.
Meanwhile, other nations individually lack the political strength and strategic autonomy to table alternative proposals in such all-important domains.
Hence, there is a historic role that a few neutral NGOs and conscientious nations must play as convenors of all nations, IGOs and other legitimate, vision-aligned stakeholders, taking the initiative to build the intergovernmental organizations that we need, just as they did in the 1990s for the creation of the International Criminal Court.
An Open Transnational Constituent Assembly
for AI and Digital Communications
Taking OpenAI's CEO Sam Altman at his word, we are launching a series of conferences in Geneva and online forums to jump-start constituent processes for new intergovernmental organizations that can be expected to succeed in steering AI and digital communications away from their immense risks, and in durably advancing human safety and wellbeing for generations to come.
Our inaugural events will take place on March 20-21, 2024: the first edition of the Harnessing AI Risk Conference and the 12th edition of the Free and Safe in Cyberspace conference series. These gatherings will bring together a diverse group of participants, including pioneering states, global citizens' assemblies, and other entities that share our vision.
Our primary goal will be for a critical mass of globally diverse nations to agree, by the end of 2024, on Rules for the Election of an Open Transnational Constituent Assembly and Process for AI and Digital Communications, to be convened in Geneva in 2025 for a duration of two months.
Our secondary goal will be to foster the deliberative discussion of highly comprehensive and detailed proposals for new IGOs for the global governance of AI, and of the constituent processes leading up to them (such as Google DeepMind's analysis mentioned above, our Harnessing AI Risk Proposal, and others that will be publicly presented), so that negotiations will be efficient, effective, substantive, comprehensive, coherent and transparent.
While standalone, our initiative seeks, along with the UN Secretary-General's initiative, to promote the convergence of the AI superpowers' initiatives. Accordingly, the participation of either China or the US will require that both accept, and the same will apply to the G7 and the G77.
Given what's at stake, it will need to be akin to the 1945 United Nations Conference on International Organization, when 850 delegates from 50 states, along with 1,800 civil society and media members, gathered for two months in San Francisco to discuss and approve the UN Charter.
This time, however, in order to succeed and to last, it will need to be a much more participatory process, conducted remotely and digitally, and not largely a ratification of a draft previously agreed among the victors of World War II.
The Harnessing AI Risk Proposal
Our Harnessing AI Risk Proposal was originally presented last June 28th at the UN, at a public event of the Community of Democracies and its 40 member states.
The Proposal centers on fostering constituent processes that reconcile the maximization of urgency, expertise and agility, with that of global representation, wide participation, and multilateralism. The latter are, in fact, inescapable in order to:
Facilitate broad adoption and compliance with stringent global safety measures.
Enhance safety and accountability through global diversity and transparency.
Ensure a more equitable distribution of power and wealth.
Reduce the risks of global military instability due to uneven control over AI.
The Proposal sketches a comprehensive proof-of-concept design of such IGOs:
A Global AI Lab IGO, to achieve global leadership or co-leadership in human-controllable AI, alignment research and AI safety. It would pool the capabilities of member states and distribute dividends and controls to member states and directly to their citizens.
An AI Safety IGO, to enforce a worldwide prohibition on hazardous development and use of AI, managing oversight and compliance systems, and coordinating intelligence agencies to prevent misuse of AI.
An IT Security IGO, to develop and certify radically more trustworthy and widely trusted IT systems, both for the control subsystems of frontier AIs and of critical societal infrastructure like social media, and for confidential and diplomatic communications.
Far from being a strict blueprint, our proposal will be discussed alongside similarly comprehensive ones, and is primarily aimed at making negotiations more concrete, comprehensive, transparent, effective and fast-tracked, in order to move soon to a single-text negotiation phase.
While our initiative aims to lead global AI and IT governance, it is meant to be complementary rather than an alternative to current initiatives by the AI superpowers, and in fact a point of convergence between them as they increasingly shape up into two opposing AI camps.
The urgent need for solutions to the AI proliferation and safety problem perhaps calls for just a few key nations, companies and intelligence agencies to move ahead temporarily as a smaller, more agile and expert group. That is vital in order to speedily start enforcing, globally, the drastic short-term public safety measures needed to counter the increasingly urgent and enormous safety risks posed by unregulated frontier AIs and by the proliferation of dangerous AI systems.
The Need for Trustless Organizations and Technologies
A majority of citizens in nearly all nations support new global democratic federal organizations, with the notable exceptions of the UK and the US, whose skepticism is echoed by their mainstream media. While the benefits are clear, many harbor legitimate fears that such institutions may fail to be, and to durably remain, accountable to world citizens and beneficial overall to their wellbeing and liberty.
In addition to proper constituent processes and statutory clauses, and to defining extremely complex safety requirements for the development of frontier AI, the key to alleviating those concerns is, we believe, ensuring that AIs' control and compliance subsystems, and human digital communications, are much more trustworthy and widely trusted in their safety, security, privacy and democratic accountability than they are today.
This mainly requires, we believe, exceedingly "trustless" (i.e., transparent, democratic and decentralized) approaches to the engineering and assessment of their critical compliance and control components, both organizational and technical.
This is needed to instil the necessary trust, to prevent abuses and accidents, to avoid the lack of indisputable mechanisms for assessing breaches that contributed to the failure of many nuclear treaties, and to curb the power of state and non-state actors to exploit such weaknesses to sway public opinion, political leaders and diplomats via their control of digital communications.
The availability of digital tools for sensitive communications and deliberations that are truly trustworthy and trusted in their integrity and confidentiality (including off-the-record, pseudonymous and anonymous communications), together with tremendous advances in real-time translation and decentralized trust technologies, would be a game changer in enabling and sustaining accountable digital diplomatic negotiations, global democratic constituent processes, and global governance.
Hence, the IT Security IGO envisioned in our Harnessing AI Risk Proposal will be the Trustless Computing Certification Body and Seevik Net Initiative, which will certify and govern IT and AI systems that substantially or radically exceed the state of the art in the security, privacy and democratic accountability of critical IT systems for:
(A) sensitive and diplomatic digital communications, to enable high-bandwidth and fair remote diplomatic negotiations and deliberations.
(B) control subsystems for Frontier AIs, and other society-critical systems like social media feeds, to adequately raise their safety and auditability.
The TCCB was launched in Geneva in June 2021, with over 15 states and IGOs having expressed interest so far.
Conclusions
Today, almost eighty years after the Baruch and Gromyko Plans, and in the face of the acceleration and proliferation of a new, catastrophically dangerous technology, we have a second chance to tame and steer powerful technologies for the benefit of humanity.
Seizing the opportunity requires a long-overdue extension of the democratic principle to the global level - starting from the all-important domains of Artificial Intelligence and human communications - to establish a solid foundation for long-term human safety, dramatically reduce wealth and power disparities, and firmly steer scientific progress towards the safety and wellbeing of all of humanity.