Open Call to OpenAI to join a Global Constituent Assembly for AI and Digital Communications
To OpenAI,
Dear Mr. Samuel Altman and OpenAI Team,
We are reaching out to invite OpenAI, along with other leading AI labs and all states, to join our Harnessing AI Risk Initiative. Our goal is to convene a global, open constituent assembly for AI in a neutral nation, with the aim of establishing the intergovernmental organizations we need to firmly tackle AI’s immense risks and opportunities.
We are taking you at your word when, in this 1-minute clip last March, you called for a global constituent assembly akin to the U.S. Constitutional Convention of 1787 to establish suitable intergovernmental organizations for AI, and to do so in a highly federal, decentralized and participatory way, according to the subsidiarity principle.
We are taking you at your word when you called for global participatory, democratic and decentralized governance of AI in a 1-minute clip from a recent interview, and then again in another.
We are taking you at your word when, along with similar suggestions by Anthropic's CEO Dario Amodei, you stated last June that people should not trust OpenAI unless its power was eventually "democratized to all of humanity", adding: "If we are years down the road and we have not figured out how to start democratizing control, then I think you shouldn't".
We invite you to recognize that the speed of change, and OpenAI's recent governance troubles, strongly suggest that "years down the road" may have become months. The immense economic, geopolitical and psychological pressures, together with the speed of change, require that OpenAI move on to the third phase of its envisioned governance, bringing the world along with it.
We agree with you that such a global constituent assembly is vital to tackle the immense risks to human safety and the immense risks of unaccountable concentrations of power and wealth, and to realize the unimaginable positive potential of a human-controllable and humanity-controlled AGI, or, failing that, to secure a positive outcome for humanity should an AI takeover scenario somehow occur.
Getting from Here to There
But the road from here to there is fraught with actual and perceived risks.
Reflecting on past attempts to create highly participatory new governance structures through non-participatory methods and processes, the political philosopher Martin Buber warned: "One cannot in the nature of things expect a little tree that has been turned into a club to put forth leaves."
To maximize the chance that the extremely powerful organizations resulting from such a global constituent assembly will be as effective, democratic, decentralized and resilient as possible, our Initiative emphasizes the need for careful planning. This includes designing the steps leading to the assembly and setting a clear scope and rules for its election, designed to maximize expertise, timeliness and agility, on the one hand, and participation, subsidiarity, democratic process, neutrality and inclusivity, on the other.
To promote a truly democratic process and reduce obstacles, participating states will agree in advance on the scope, functions, and approximate budgets of the organization formed from the assembly. Additionally, they will commit to a constituent process that seeks wide consensus but will ultimately be decided by supermajority vote. This process will also have a set time limit for ratification.
The Required Scope
Given the inherently global nature of these threats and opportunities, the scope of the resulting organizations will, we believe, necessarily need to include: the setting of globally-trusted AI safety standards; the development of world-leading safe AI and AI safety capabilities; the enforcement of global bans on unsafe AI development and use; and the development of globally-trusted governance-support systems. These could be divided into three IGOs under a unified governance structure:
(1) As you suggested, along with the UN Secretary-General and many others, there is a need for a global regulatory and oversight body similar to the IAEA.
Hence, an AI Safety IGO would define and update descriptions of hazardous AI development and use, and enforce their prohibition worldwide. It would manage its oversight and compliance systems in coordination with the intelligence agencies of member and non-member nations.
(2) As your co-founder Ilya Sutskever highlighted in 2019: "If you have an arms race dynamic between multiple kings trying to build the AGI first, they will have less time to make sure that the AGI that they will build will care deeply for humans. [...] Given these kinds of concerns it will be important that AGI is somehow built as a cooperation between multiple countries." Many other AI leaders have suggested the need for a globally-accountable multi-national initiative to develop the most capable safe AI we can, along with the best AI safety, alignment and control technologies and socio-technical systems. This would constitute the necessary "off-ramp" for the frontier AI states and firms currently engaged in a break-neck arms race, provide incentives for all states to comply with safety bans, and ensure that AI benefits and control are widely shared. Google DeepMind, for example, together with top AI researchers, published on July 11th a very detailed "exploration" of the feasibility of creating four new IGOs, one of which, the Frontier AI Collaborative, is an "international public-private partnership" to "develop and distribute cutting-edge AI systems, or to ensure such technologies are accessible to a broad international coalition".
Hence, a Global Public Interest AI Lab IGO: a joint venture to achieve and sustain a solid global supremacy or co-supremacy in human-controllable AI, technical alignment research and AI safety, funded via $10-20 billion in project financing. It would pool the capabilities, talent and resources of member states and distribute dividends and control among them and their citizens, all the while stimulating and safeguarding private initiative for innovation and oversight.
(3) Last but not least, as stated by China's Global AI Governance Initiative: "We should actively develop and apply technologies for AI governance, encourage the use of AI technologies to prevent AI risks, and enhance our technological capacity for AI governance."
Hence, an IT Security IGO, to develop and certify radically more trustworthy and widely trusted IT systems, both for the control and compliance subsystems of frontier AIs (and of critical societal infrastructure like social media) and for confidential and diplomatic communications.
Mission and Business Case for Leading AI Labs
We therefore propose that OpenAI merge the entirety of its foundational capabilities, alignment and research activities into a global, democratically-controlled public-private consortium.
If the Initiative succeeds in attracting a critical mass of states and leading AI labs, OpenAI, by joining as a Founding Partner, would benefit as technical contractor and go-to-market partner of the Global Public Interest AI Lab.
As a contractor and IP provider, OpenAI would be compensated via a revenue share, secured via long-term pre-licensing agreements with member states, resulting in a substantial increase in its market valuation.
As a go-to-market partner, OpenAI would gain permanent co-exclusive access to AI capabilities, infrastructure, services and IP that will near-certainly far outcompete all others, both in capabilities and safety and in global acceptance of their safety and accountability.
This setup would enable OpenAI to continue to innovate in capabilities and safety, at both the base and application layers, but outside a "Wild West" race to the bottom among states and labs, advancing both its mission and its market valuation.
Conclusions
For the reasons above, we urge you and OpenAI to charge ahead with your vision, joining us in calling on a critical mass of nations, both superpowers, the UN High-Level Advisory Board on AI, vision-aligned AI labs across the world, neutral AI safety experts and NGOs, and other like-minded entities, to participate in the global democratic assembly for AI that you envisaged.
We also call on neutral states and cities, such as Geneva, Singapore or Vienna, to host and support such a participatory, effective and timely constituent process, and to commit matching funds to host the resulting IGOs and global AI Lab.